DOI: 10.1515/crll.2003.065 | arXiv: math/0211120 | https://arxiv.org/pdf/math/0211120v1.pdf
Quaternions, polarizations and class numbers

V. Rotger (Barcelona)

6 Nov 2002

We study abelian varieties A with multiplication by a totally indefinite quaternion algebra over a totally real number field and give a criterion for the existence of principal polarizations on them in pure arithmetic terms. Moreover, we give an expression for the number π 0 (A) of isomorphism classes of principal polarizations on A in terms of relative class numbers of CM fields by means of Eichler's theory of optimal embeddings. As a consequence, we exhibit simple abelian varieties of any even dimension admitting arbitrarily many non-isomorphic principal polarizations. On the other hand, we prove that π 0 (A) is uniformly bounded for simple abelian varieties of odd square-free dimension.
Introduction
It is well-known that every elliptic curve E over an arbitrary algebraically closed field admits a unique principal polarization up to translations. This property is in general no longer shared by higher-dimensional abelian varieties, and it is a delicate question to decide whether a given abelian variety A is principally polarizable. Even if this is the case, it is an interesting problem to investigate the set Π 0 (A) of isomorphism classes of principal polarizations on A. By a theorem of Narasimhan and Nori (cf. [22]), Π 0 (A) is a finite set. We shall denote its cardinality by π 0 (A).
The aim of this paper is to study these questions on abelian varieties with quaternionic multiplication. It will be made apparent that the geometric properties of these abelian varieties are encoded in the arithmetic of their ring of endomorphisms. The results of this paper shed some light on the geometry and arithmetic of the Shimura varieties that occur as moduli spaces of abelian varieties with quaternionic multiplication and their groups of automorphisms. In this regard, we refer the reader to [27] and [28]. Our results are also the basis of a study of the diophantine properties of abelian surfaces with quaternionic multiplication over number fields carried out by Dieulefait and the author in [4].
Let us remark that a generic principally polarizable abelian variety admits a single class of principal polarizations. In [14], Humbert was the first to exhibit simple complex abelian surfaces with two non-isomorphic principal polarizations on them. Later, Hayashida and Nishi ([9] and [8]) computed π 0 (E 1 × E 2 ) for isogenous elliptic curves E 1 /C and E 2 /C with complex multiplication. In positive characteristic, Ibukiyama, Katsura and Oort ([15]) related the number of principal polarizations on the power E n of a supersingular elliptic curve to the class number of certain hermitian forms. With similar methods, Lange ([17]) produced examples of simple abelian varieties of high dimension with several principal polarizations. However, he showed that for an abelian variety with endomorphism algebra End(A) ⊗ Q = F , a totally real number field, the number π 0 (A) is uniformly bounded in terms of the dimension of A: π 0 (A) ≤ 2^(dim(A)−1). That is, abelian varieties whose ring of endomorphisms is an order in a totally real number field admit several but not arbitrarily many principal polarizations.

¹ Partially supported by a grant FPI from Ministerio de Ciencia y Tecnología, by MCYT BFM2000-0627 and by DGCYT PB97-0893.
It could be expected that Lange's or some other bound for π 0 (A) held for any simple abelian variety. Hence the question: given g ≥ 1, are there simple abelian varieties of dimension g with arbitrarily many non-isomorphic principal polarizations?
As was already observed, this is not the case in dimension 1. In dimension g = 2, only simple abelian surfaces with π 0 (A) ≤ 2 were known. One of our main results, stated here in a particular case, is the following.

Theorem 1.1. Let F be a totally real number field of degree [F : Q] = n, let R F denote its ring of integers and ϑ F/Q the different of F over Q. Let A be a complex abelian variety of dimension 2n whose ring of endomorphisms End(A) ≃ O is a maximal order in a totally indefinite quaternion division algebra B over F .
Assume that the narrow class number h + (F ) of F is 1 and that ϑ F/Q and disc(O) are coprime ideals. Then,
(1) A is principally polarizable.
(2) The number of isomorphism classes of principal polarizations on A is
\[ \pi_0(A) = \frac{1}{2} \sum_{S} h(S), \]

where S runs over the set of orders in the CM-field F (√−D) that contain R F [√−D], the element D ∈ F * + is taken to be a totally positive generator of the reduced discriminant ideal D O of O, and h(S) denotes the class number of S.
In particular, if A is an abelian surface,
\[ \pi_0(A) = \begin{cases} \dfrac{h(-4D) + h(-D)}{2} & \text{if } D \equiv 3 \pmod 4,\\[4pt] \dfrac{h(-4D)}{2} & \text{otherwise.} \end{cases} \]
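These class numbers of imaginary quadratic orders can be computed elementarily by counting reduced primitive binary quadratic forms. The following sketch (our own illustration, not part of the paper; the function names `class_number` and `pi0_surface` are ours) evaluates the surface formula above, with the convention that h(−4D) denotes the form class number of discriminant −4D:

```python
from math import gcd

def class_number(disc):
    """Form class number h(disc) of an imaginary quadratic discriminant
    disc < 0 (disc congruent to 0 or 1 mod 4), computed by counting
    reduced primitive binary quadratic forms a*x^2 + b*x*y + c*y^2
    with b^2 - 4*a*c = disc (the classical counting algorithm)."""
    assert disc < 0 and disc % 4 in (0, 1)
    h = 0
    b = disc % 2                      # b must have the parity of disc
    while 3 * b * b <= -disc:
        ac = (b * b - disc) // 4      # product a*c for this middle coefficient
        a = max(b, 1)
        while a * a <= ac:
            if ac % a == 0:
                c = ac // a
                if gcd(gcd(a, b), c) == 1:   # primitive form
                    # (a, b, c) and (a, -b, c) give distinct classes
                    # unless b == 0, a == b or a == c
                    h += 1 if (b == 0 or a == b or a == c) else 2
            a += 1
        b += 2
    return h

def pi0_surface(D):
    """pi_0(A) for an abelian surface with quaternionic multiplication by a
    maximal order of discriminant D, following the displayed formula."""
    if D % 4 == 3:
        return (class_number(-4 * D) + class_number(-D)) // 2
    return class_number(-4 * D) // 2
```

For instance, the smallest quaternion discriminant over Q, D = 6, gives `pi0_surface(6) == 1`, while D = 15 gives two classes of principal polarizations; applied to the discriminant D = 2 · 3 · 5 · 7 · 11 · 13 · 17 · 19 considered later in the paper, the same routine should recover π 0 (A) = 1040 after a few seconds of computation.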
We prove Theorem 1.1 in the more general form of Proposition 6.2 and our main Theorem 7.1. In order to accomplish it, we present an approach to the problem which stems from Shimura's classical work [29] on analytic families of abelian varieties with prescribed endomorphism ring.
Our approach is essentially different from Lange's in [17] or Ibukiyama-Katsura-Oort's in [15]. Indeed, whereas in [17] and [15] the (noncanonical) interpretation of line bundles as symmetric endomorphisms is exploited, we translate the questions we are concerned with to Eichler's language of optimal embeddings. This leads us to solve a problem that has its roots in the work of O'Connor, Pall and Pollack (cf. [25]) and that has its own interest: see Section 4 for details.
In regard to the question above, the second main result of this article is the following.
Theorem 1.2. Let g be a positive integer. Then
(1) If g is even, there exist simple abelian varieties A of dimension g such that π 0 (A) is arbitrarily large.

(2) If g is odd and square-free, π 0 (A) ≤ 2^(g−1) for any simple abelian variety A of dimension g over C.
The boundless growth of π 0 (A) when g is even follows from our main Theorem 7.1 combined with analytical results on the asymptotic behaviour and explicit bounds for relative class numbers of CM-fields due to Horie-Horie ( [12]) and Louboutin ([19], [20]). The second part of Theorem 1.2 follows from the ideas of Lange in [17]. See Section 8 for details.
The following corollary follows from Theorem 1.2 and the fact that any simple principally polarized abelian surface is the Jacobian of a smooth curve of genus 2 which, by Torelli's Theorem, is unique up to isomorphism.

Corollary 1.3. There are arbitrarily large sets C 1 , ..., C N of pairwise nonisomorphic genus 2 curves with isomorphic simple unpolarized Jacobian varieties J(C 1 ) ≅ J(C 2 ) ≅ ... ≅ J(C N ).
In view of Theorem 1.2, it is natural to wonder whether there exist arbitrarily large sets of pairwise nonisomorphic curves of given even genus g ≥ 4 with isomorphic unpolarized Jacobian varieties. In this direction, Ciliberto and van der Geer ([3]) proved the existence of two nonisomorphic curves of genus 4 with isomorphic Jacobian varieties. Explicit examples of curves with isomorphic (nonsimple) Jacobians have been constructed by Howe ([13]), while examples of pairs of distinct modular curves of genus 2 defined over Q with isomorphic unpolarized absolutely simple Jacobian varieties have been obtained by González, Guàrdia and the author in [16].
Finally, let us note that the statement of Theorem 1.2 does not cover odd non-square-free dimensions.

Conjecture 1.4. Let g be an odd non-square-free positive integer. Then there exist simple abelian varieties A of dimension g such that π 0 (A) is arbitrarily large.
The conjecture is motivated by the fact that, when g is odd and non-square-free, there exist abelian varieties whose ring of endomorphisms is an order in a noncommutative division algebra over a CM-field, and there is a strong similarity between the arithmetic of the Néron-Severi groups of these abelian varieties and that of those in the quaternion case.
Acknowledgements. I am indebted to Pilar Bayer for her assistance throughout the elaboration of this work. I also express my gratitude to J. C. Naranjo, S. Louboutin, J. Brzezinski and G. van der Geer for some helpful conversations. I thank N. Schappacher and the Institut de Recherche Mathématique Avancée at Strasbourg for their warm hospitality in March 2001. Finally, I thank the referee for the valuable help in improving the exposition.
Abelian varieties with quaternionic multiplication and their Néron-Severi group
Let F be a totally real number field of degree [F : Q] = n and let R F be its ring of integers. Let B denote a totally indefinite division quaternion algebra over F and let D = disc(B) = ∏_{i=1}^{2r} ℘ i , where the ℘ i are finite prime ideals of F and r ≥ 1, be its (reduced) discriminant ideal. We shall denote by n = n B/F and tr = tr B/F the reduced norm and the reduced trace of B over F , respectively. Since B is totally indefinite, the Hasse-Schilling-Maass Norm Theorem asserts that n(B) = F (cf. [10] and [31], p. 80). We fix an isomorphism of F -algebras
\[ (\eta_\sigma) : B \otimes_{\mathbb Q} \mathbb R \simeq \prod_{\sigma} M_2(\mathbb R_\sigma), \]
where σ : F ֒→ R runs through the set of embeddings of F into R and R σ denotes R regarded as an F -vector space via the embedding σ. For any β ∈ B, we will often abbreviate β σ = η σ (β) ∈ M 2 (R).
Let O be a hereditary order of B, that is, an order of B all of whose one-sided ideals are projective. The discriminant ideal D O = disc(O) of O is square-free and can be written as D · N for some ideal N coprime to D, called the level of O ([26], Chapter 9). For the rest of this section, let A/C denote a complex abelian variety with quaternionic multiplication by the hereditary order O. We will identify End(A) = O and End(A) ⊗ Q = B. Since B is a division algebra, A is simple, that is, it contains no proper abelian subvarieties.
As a complex manifold, A(C) = V /Λ for V a complex vector space of dimension g and Λ ⊂ V a co-compact lattice that can be identified with the first group of integral singular homology H 1 (A, Z). The lattice Λ is naturally a left O-module and Λ ⊗ Q is a left B-module of the same rank over Q as B. Since every left B-module is free (cf. [33], Chapter 9), there is an element v 0 ∈ V such that Λ ⊗ Q = B · v 0 and therefore Λ = I · v 0 for some left O-ideal I ⊂ B.
Let Pic ℓ (O) be the pointed set of left (projective) ideals of O up to principal ideals. By a theorem of Eichler ([5], [6]), the reduced norm on B induces a bijection of sets n : Pic ℓ (O) → Pic(F ) onto the class group of F . Note that the left ideal I is determined by A up to principal ideals and we can choose (and fix) a representative of I in its class in Pic ℓ (O) such that n(I) ⊂ F is coprime to D O . This is indeed possible because B is totally indefinite: it is a consequence of the Hasse-Schilling-Maass Norm Theorem, Eichler's Theorem quoted above and the natural epimorphism of ray class groups Cl_{D O}(F ) → Cl(F ) of ideals of F ([23], Chapter VI, Section 6).
Let ρ a : B ֒→ End(V ) ≃ M 2n (C) and ρ r : O ֒→ End(Λ) ≃ M 4n (Z) denote the analytic and rational representations of B and O on V and Λ, respectively. It is well known that ρ r ∼ ρ a ⊕ ρ̄ a and it follows that, in an appropriate basis,

\[ \rho_a(\beta) = diag(\eta_{\sigma_i}(\beta)) \]
for any β ∈ B (cf. [18], Chapter 9, Lemma 1.1). Moreover, this basis can be chosen so that the coordinates of v 0 are (τ 1 , 1, ..., τ n , 1) for certain τ i ∈ C with Im(τ i ) ≠ 0. The choice of the element v 0 fixes an isomorphism of real vector spaces B ⊗ R ≃ V . Conversely, for any choice of a vector v 0 = (τ 1 , 1, ..., τ n , 1) ∈ V with Im(τ i ) ≠ 0 and a left O-ideal I in B, we can consider the complex torus V /Λ with Λ = I · v 0 and B acting on V via the fixed diagonal analytic representation ρ a . The torus V /Λ admits a polarization and can be embedded in a projective space. In consequence, it is the set of complex points of an abelian variety A such that End(A) ⊇ O (cf. [29] and [18], Chapter 9, Section 4).
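To make this construction concrete, the following numerical sketch (our own illustration, not from the paper; the algebra (−1, 3 | Q), the order Z⟨1, i, j, ij⟩ and the point τ are convenient choices, and the order need not be maximal) builds the lattice Λ = O · v 0 ⊂ C² for F = Q, n = 1, using the splitting η(i)² = −Id, η(j)² = 3·Id, and checks that Λ has full rank 4 over Z, so that V /Λ is a complex torus:

```python
# Period lattice of an abelian surface with quaternionic multiplication:
# F = Q, B = (-1, 3 | Q) (indefinite), order O = Z<1, i, j, ij>, I = O.

SQ3 = 3 ** 0.5

I2 = [[0.0, 1.0], [-1.0, 0.0]]     # eta(i): squares to -Id
J2 = [[SQ3, 0.0], [0.0, -SQ3]]     # eta(j): squares to 3*Id, anticommutes with eta(i)
ONE = [[1.0, 0.0], [0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def apply(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

K2 = matmul(I2, J2)                # eta(k), k = ij

tau = 0.5 + 1.0j                   # any tau with Im(tau) != 0
v0 = [tau, 1.0]

# Z-basis of Lambda = O . v0 inside V = C^2
basis = [apply(M, v0) for M in (ONE, I2, J2, K2)]

# Lambda is co-compact iff its four vectors are R-linearly independent in C^2 = R^4
rows = [[complex(z).real for z in v] + [complex(z).imag for z in v] for v in basis]

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

assert abs(det(rows)) > 1e-9       # full rank: V / Lambda is a complex torus
```

Replacing v 0 by a vector with Im(τ) = 0 makes the determinant vanish, which is why the nondegeneracy condition on τ is needed.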
The following theorem describes NS(A) intrinsically in terms of the arithmetic of B. In addition, it establishes when two line bundles on A are isomorphic and translates this into a certain conjugation relation in B. We keep the notations as above.
Theorem 2.2. There is a natural isomorphism

\[ NS(A) \xrightarrow{\ \sim\ } N(I)^{\sharp}_0, \qquad L \mapsto \mu = \mu(L), \]

between the Néron-Severi group of A and the group of pure quaternions of the codifferent of the two-sided ideal N (I). Moreover, for any two line bundles L 1 , L 2 ∈ NS(A), we have that L 1 ≃ L 2 if and only if there exists α ∈ O * such that µ(L 1 ) = ᾱ µ(L 2 ) α.
Proof.
By the Appell-Humbert Theorem, the first Chern class allows us to interpret a line bundle L ∈ NS(A) as a Riemann form: an alternating R-bilinear form E = c 1 (L) : V × V → R such that E(Λ × Λ) ⊂ Z and E(√−1 u, √−1 v) = E(u, v) for all u, v ∈ V .
Fix a line bundle L on A and let E = c 1 (L) be the corresponding Riemann form. The linear map B → Q, β ↦ E(βv 0 , v 0 ), is a trace form on B and hence, by the nondegeneracy of tr B/Q , there is a unique element µ ∈ B such that E(βv 0 , v 0 ) = tr B/Q (µβ) for any β ∈ B. Since E is alternating, E(av 0 , av 0 ) = tr F/Q (a² tr B/F (µ)) = 0 for any a ∈ F . It follows from the nondegeneracy of tr F/Q and the fact that the squares F *² span F as a Q-vector space that tr B/F (µ) = 0. Thus µ² + δ = 0 for some δ ∈ F . The line bundle L induces an anti-involution ̺ on B called the Rosati involution. It is characterized by the rule E(u, βv) = E(β ̺ u, v) for any β ∈ B and u, v ∈ V . From the discussion above, we must have β ̺ = µ⁻¹ β̄ µ, and we conclude that the Riemann form E = c 1 (L) attached to the line bundle L on A is

\[ E := E_\mu : V \times V \longrightarrow \mathbb R, \qquad (u, v) \mapsto tr_{B \otimes_{\mathbb Q} \mathbb R / \mathbb R}(\mu \gamma \bar\beta), \]

where µ ∈ B is determined as above and γ, β are elements of B ⊗ Q R ≃ M 2 (R)^n such that u = γv 0 and v = βv 0 . Since E(Λ × Λ) ⊂ Z and tr(µ) = 0, we deduce that µ ∈ N (I)♯ 0 . Conversely, one checks that any element µ ∈ N (I)♯ 0 defines a Riemann form E µ which is in turn the first Chern class of a line bundle L on A. Indeed, since µ ∈ N (I)♯ , the form E µ is integral on the lattice Λ = I · v 0 , and E µ is alternating because tr(µ) = 0. Moreover, let ι = diag(ι 1 , ..., ι n ) ∈ GL 2n (R), with ι i ∈ GL 2 (R) and ι i ² + 1 = 0, be a matrix such that ι · v 0 = √−1 v 0 . Then

\[ E_\mu(\sqrt{-1}\,u, \sqrt{-1}\,v) = E_\mu(\gamma \iota v_0, \beta \iota v_0) = tr(\mu \gamma \iota \bar\iota \bar\beta) = E_\mu(u, v) \]

for all u, v ∈ V .
This concludes the proof of the first part of the theorem.
As for the second, we note that the first Chern class of the pull-back α * L of a line bundle L on A by an automorphism α ∈ Aut(A) = O * is represented by the Riemann form α * E : V × V → R defined by (u, v) ↦ E(αu, αv), where E = c 1 (L) is the Riemann form associated to L. Hence, if L 1 = α * (L 2 ), then tr(µ 1 γ β̄) = tr(µ 2 αγ β̄ ᾱ) = tr(ᾱ µ 2 α γ β̄) for all γ, β ∈ B, and this holds if and only if µ 1 = ᾱ µ 2 α, by the nondegeneracy of the trace form. Reciprocally, one checks that if µ 1 = ᾱ µ 2 α for some α ∈ O * , then E µ 1 = α * E µ 2 and therefore L 1 = α * L 2 . ✷

In view of Theorem 2.2, we identify the first Chern class c 1 (L) of a line bundle L on A with the quaternion µ = µ(L) ∈ B 0 such that E µ is the Riemann form associated to L. We are led to introduce the following equivalence relation, which was studied (over B) by O'Connor and Pall in the 1930s and by Pollack in the 1960s (cf. [25]).
Definition 2.3. Two quaternions µ 1 , µ 2 ∈ B are Pollack conjugated over O if µ 1 = ᾱ µ 2 α for some unit α ∈ O * . We will denote this by µ 1 ∼ p µ 2 .
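Pollack conjugation can be experimented with directly. The sketch below (our own illustration; the algebra (−1, 3 | Q) and the unit 2 + j are merely convenient choices) implements quaternion arithmetic in a generic algebra (a, b | Q) and checks that µ ↦ ᾱ µ α with n(α) = 1 preserves both the reduced norm and the property of being a pure quaternion, as used throughout this section:

```python
from fractions import Fraction as Fr

A, B = Fr(-1), Fr(3)   # B = (-1, 3 | Q): i^2 = -1, j^2 = 3 (indefinite over Q)

def mul(q, r):
    """Product in the quaternion algebra (A, B | Q), basis 1, i, j, k = ij."""
    w, x, y, z = q
    W, X, Y, Z = r
    return (w*W + A*x*X + B*y*Y - A*B*z*Z,
            w*X + x*W - B*y*Z + B*z*Y,
            w*Y + y*W + A*x*Z - A*z*X,
            w*Z + z*W + x*Y - y*X)

def conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def nrd(q):
    """Reduced norm n(q) = q * conj(q)."""
    return mul(q, conj(q))[0]

def trd(q):
    """Reduced trace tr(q) = q + conj(q)."""
    return 2 * q[0]

alpha = (Fr(2), Fr(0), Fr(1), Fr(0))     # alpha = 2 + j: n(alpha) = 4 - 3 = 1, a unit
mu    = (Fr(0), Fr(1), Fr(0), Fr(1))     # mu = i + k, a pure quaternion

mu2 = mul(mul(conj(alpha), mu), alpha)   # Pollack conjugate conj(alpha) * mu * alpha

assert nrd(alpha) == 1
assert trd(mu2) == 0                     # still a pure quaternion
assert nrd(mu2) == nrd(mu)               # reduced norm preserved, since n(alpha)^2 = 1
```

The last two assertions reflect the identities n(ᾱ µ α) = n(α)² n(µ) and conj(ᾱ µ α) = ᾱ µ̄ α, which is −ᾱ µ α when tr(µ) = 0.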
Isomorphism classes of line bundles and Eichler theory on optimal embeddings
As in the previous section, let A denote an abelian variety with quaternionic multiplication by a hereditary order O in a totally indefinite division quaternion algebra B over a totally real number field F . As is well known, a line bundle L ∈ NS(A) induces a morphism ϕ L : A → Â defined by P ↦ t P * (L) ⊗ L⁻¹, where t P : A → A denotes the translation-by-P map. Since A is simple, any nontrivial line bundle L ∈ NS(A) is nondegenerate: ϕ L is an isogeny with finite kernel K(L). We say that L is principal if K(L) is trivial, that is, if ϕ L : A → Â is an isomorphism.

Proposition 3.1. Let L be a line bundle on A and let c 1 (L) = µ be its first Chern class, for some element µ ∈ B such that µ² + δ = 0 and δ ∈ F . Then

\[ \deg(\varphi_L) = N_{F/\mathbb Q}\bigl(\vartheta_{F/\mathbb Q}^{2} \cdot \mathfrak n(I)^{2} \cdot \mathcal D_{\mathcal O} \cdot \delta\bigr)^{2}, \]

where ϑ F/Q = (R F ♯)⁻¹ is the different of F over Q.

Proof. The degree |K(L)| of ϕ L can be computed in terms of the Riemann form as follows: deg(ϕ L ) = det(E µ (x i , x j )) = det(tr B/Q (µ β i β̄ j )), where x i = β i v 0 runs through a Z-basis of the lattice Λ. We have

\[ \det(tr_{B/\mathbb Q}(\mu \beta_i \bar\beta_j)) = n_{B/\mathbb Q}(\mu)^2 \det(tr_{B/\mathbb Q}(\beta_i \cdot \beta_j)) = n_{B/\mathbb Q}(\mu)^2\, disc_{B/\mathbb Q}(I)^2 = \bigl(N_{F/\mathbb Q}(\delta \cdot \mathfrak n(I)^2 \cdot \vartheta_{F/\mathbb Q}^2 \cdot \mathcal D_{\mathcal O})\bigr)^2. \]

✷

For the sake of simplicity, and unless otherwise stated, we assume for the rest of the article that ϑ F/Q and D O are coprime ideals in F . The general case can be dealt with by means of the remark below.

Theorem 3.2. The abelian variety A admits a principal line bundle if and only if D O and n(I) · ϑ F/Q are principal ideals of F .

Proof. Let L be a principal line bundle on A and let E µ = c 1 (L) be the associated Riemann form, for some µ ∈ N (I)♯ 0 such that µ² + δ = 0. Since L is principal, the induced Rosati involution ̺ on End(A) ⊗ Q = B must also restrict to End(A) = O, and we already observed that β ̺ = µ⁻¹ β̄ µ. Therefore µ belongs to the normaliser group Norm B * (O) of O in B. The quotient Norm B * (O)/O * F * ≅ W
is a finite abelian 2-torsion group, and representatives w of W in O can be chosen so that the reduced norms n(w) ∈ R F are divisible only by the prime ideals ℘ | D O (cf. [31], pp. 39, 99, and [2]). Hence, we can express µ = u · t · w⁻¹ for some u ∈ O * , t ∈ F * and w ∈ W .
Recall that (n(I), D O ) = 1 and (ϑ F/Q , D O ) = 1. Since, by Proposition 3.1, n(I)² · ϑ F/Q ² · D O = (δ⁻¹) = (t⁻² · n(w)), we conclude that n(I) · ϑ F/Q = (t⁻¹) and D O = (n(w)) are principal ideals.
Conversely, suppose that n(I) · ϑ F/Q = (t −1 ) and D O = (D) are principal ideals, generated by some elements t and D ∈ F * . Let S be the ring of integers in L = F ( √ −D). Since any prime ideal ℘|D ramifies in L, Eichler's theory of optimal embeddings guarantees the existence of an embedding ι : S ֒→ O of S into the quaternion order O (cf. [31], p. 45). Let w = ι( √ −D) ∈ O and let µ = t · w −1 . As one checks locally, µ ∈ Norm B * (O) ∩ N (I) ♯ 0 and, by Theorem 2.2 and Proposition 3.1, µ is the first Chern class of a principal line bundle on A. ✷
Corollary 3.3. If D O and ϑ F/Q · n(I) are principal ideals, then A is self-dual, that is, A ∼ =Â.
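To see Theorem 3.2 and its proof at work in the simplest case, consider the following worked specialization (our own, not displayed in the paper): take F = Q, so that ϑ F/Q = (1), h(Q) = 1 and every ideal of F is principal, and choose I with n(I) = (1). The computation of the proof then reads:

```latex
% Specialization F = \mathbb{Q}, \ \mathfrak{n}(I) = (1), \ \vartheta_{F/\mathbb{Q}} = (1):
% Proposition 3.1 gives
\[
  \deg(\varphi_L)
    = N_{F/\mathbb{Q}}\bigl(\vartheta_{F/\mathbb{Q}}^{2}\cdot\mathfrak{n}(I)^{2}
        \cdot\mathcal{D}_{\mathcal{O}}\cdot\delta\bigr)^{2}
    = (D\,\delta)^{2},
  \qquad \mathcal{D}_{\mathcal{O}} = (D),
\]
% so L is principal exactly when \delta = \pm 1/D, that is, \mu = t\, w^{-1}
% with t = \pm 1 and w \in \mathcal{O} a quaternion of reduced norm D:
\[
  \mu^{2} + \delta = 0, \qquad
  (\delta) = \bigl(t^{2}\,\mathfrak{n}(w)^{-1}\bigr) = \bigl(D^{-1}\bigr),
\]
% in agreement with the factorization \mu = u\,t\,w^{-1} used in the proof.
```

In particular, over Q the criterion of Theorem 3.2 is automatically satisfied, and the existence of a principal line bundle reduces to finding a quaternion w ∈ O of reduced norm D, which Eichler's theory of optimal embeddings provides.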
Remark 3.4. The case when ϑ F/Q and D O are not necessarily coprime is reformulated as follows: A admits a principal line bundle if and only if there is an integral ideal a = ℘ 1 ^{e 1} · ... · ℘ 2r ^{e 2r} | ϑ F/Q in F such that both D O · a² and n(I) · ϑ F/Q · a⁻¹ are principal ideals. In this case, A is also self-dual. The proof is mutatis mutandis the one given above.

Assume that there is a principal line bundle on A; otherwise π(A) = 0 and there is nothing to compute. We assume also that (ϑ F/Q , D O ) = 1. By Theorem 3.2, we know that D O = (D) for some D ∈ F * . We have

Theorem 3.8. Let A be an abelian variety with quaternionic multiplication by a maximal order O. Then
\[ \pi(A) = \frac{1}{2\,h(F)} \sum_{u} \sum_{S} 2^{e_S}\, h(S), \]
where u ∈ R * F /R *² F runs through a set of representatives of units in R F up to squares and S runs through the (finite) set of orders in F (√−uD) such that R F [√−uD] ⊆ S. Here, 2^{e S} = |R * F / N_{F(√−uD)/F}(S * )|.
The proof of Theorem 3.8 will be completed in Section 4. There are several remarks to be made for the sake of its practical applications.
Remark 3.9. Note that R * 2 F ⊆ N L/F (S * ) and hence R * F /N L/F (S * ) is naturally an F 2 -vector space. By Dirichlet's Unit Theorem, e S ≤ [F : Q] = n. The case F = Q is trivial since Z * /Z * 2 = {±1}.
In the case of real quadratic fields F , explicit fundamental units u ∈ R * F such that R * F /R * 2 F = {±1, ±u} are well known. For totally real number fields of higher degree there is abundant literature on systems of units. See [17] and [32], Chapter 8 for an account.
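Such fundamental units can be computed directly. The following sketch (our own illustration, not from the paper) finds the smallest unit x + y√d > 1 of the order Z[√d] from the continued-fraction expansion of √d, i.e. the classical solution of Pell's equation; note that for d ≡ 1 (mod 4) the maximal order of Q(√d) may contain a smaller unit, e.g. (1 + √5)/2 for d = 5:

```python
from math import isqrt

def fundamental_unit(d):
    """Smallest unit x + y*sqrt(d) > 1 of the order Z[sqrt(d)], for d > 1 not
    a perfect square, found via the continued-fraction expansion of sqrt(d)
    (classical solution of Pell's equation x^2 - d*y^2 = +-1)."""
    a0 = isqrt(d)
    assert a0 * a0 != d, "d must not be a perfect square"
    m, den, a = 0, 1, a0
    p_prev, p = 1, a0       # convergent numerators  p/q -> sqrt(d)
    q_prev, q = 0, 1        # convergent denominators
    while p * p - d * q * q not in (1, -1):
        # standard recurrences for the continued fraction of sqrt(d)
        m = den * a - m
        den = (d - m * m) // den
        a = (a0 + m) // den
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
    return p, q             # the unit p + q*sqrt(d), of norm p^2 - d*q^2

# sanity check: each returned element really is a unit of Z[sqrt(d)]
for d in (2, 3, 7, 10):
    x, y = fundamental_unit(d)
    assert x * x - d * y * y in (1, -1)
```

For instance `fundamental_unit(2)` returns `(1, 1)`, the unit 1 + √2 of norm −1, and `fundamental_unit(7)` returns `(8, 3)`, the unit 8 + 3√7 of norm 1.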
The conductor of R F [√−uD] in L over R F is

\[ \mathfrak f = \prod_{\mathfrak q \mid 2,\ \mathfrak q \nmid D} \mathfrak q^{a_{\mathfrak q}}, \qquad 0 \le a_{\mathfrak q} \le e_{\mathfrak q}. \]

Further, the conductor f can be completely determined in many cases as follows. For a prime ideal q | 2 such that q ∤ D, let π be a local uniformizer of the completion of F at q and let k = F_{2^f} be the residue field. Let e = e q ≥ 1. Since −uD ∈ R *_{F q}, we have −uD = x 0 + x k π^k + x k+1 π^{k+1} + ... for some 1 ≤ k ≤ ∞ and x i in a system of representatives of F_{2^f} in R_{F q} such that x̄ 0 , x̄ k ≠ 0. Here we agree to set k = ∞ if −uD = x 0 . Then min([k/2], e) ≤ a q ≤ e, and we have exactly a q = [k/2] if k ≤ e + 1, and a q = e if [k/2] ≥ e. Otherwise, if [k/2] < e < k − 1, ...

A pure quaternion µ ∈ O with µ² = −uD determines an embedding

\[ i_\mu : F(\sqrt{-uD}) \hookrightarrow B, \qquad a + b\sqrt{-uD} \mapsto a + b\mu, \]

for which i µ (R F [√−uD]) ⊂ O.
The following definition is due to Eichler.
Definition 3.11. Let S be an order over R F in a quadratic algebra L over F . An embedding i : S ֒→ O is optimal if i(S) = i(L) ∩ O.
For any µ ∈ P(u, O) there is a uniquely determined order S µ ⊇ R F [√−uD] such that i µ is optimal at S µ . Moreover, two equivalent quaternions µ 1 ∼ p µ 2 ∈ P(u, O) are optimal at the same order S. Indeed, if α ∈ O * is such that µ 1 = ᾱ µ 2 α, then α is forced to have reduced norm n(α) = ±1. Hence ᾱ = ±α⁻¹ ∈ O * and the observation follows, since α normalizes O. Conversely, any optimal embedding i : S ֒→ O with S ⊇ R F [√−uD] determines a quaternion µ = i(√−uD) ∈ P(u, O). Since this question is interesting on its own, we will make our statements in greater generality in the next section.
Pollack conjugation versus Eichler conjugation
Let F be a number field and let B be a division quaternion algebra over F . In [25], Pollack studied the obstruction for two pure quaternions µ 1 , µ 2 ∈ B with the same reduced norm to be conjugated over B * , that is, µ 1 = ᾱ µ 2 α with α ∈ B * , and expressed it in terms of the 2-torsion subgroup Br 2 (F ) of the Brauer group Br(F ) of F . He further investigated the solvability of the equation µ 1 = ᾱ µ 2 α over O * for quaternions µ 1 and µ 2 in a maximal order O of B.
As a refinement of his considerations, it is natural to consider the set of orbits of pure quaternions µ ∈ O of fixed reduced norm n(µ) = d ∈ F * under the action of the group of units O * by Pollack conjugation. As already mentioned, a necessary condition for µ 1 ∼ p µ 2 over O * is that µ 1 and µ 2 induce an optimal embedding at the same quadratic order F (√−d) ⊃ S ⊇ R F [√−d].
We will drop the restriction on O to be maximal in our statements.
The connection between optimal embedding orbits and class numbers is made possible by the theory of Eichler. However, in contrast to Pollack conjugation, two optimal embeddings i, j : S ֒→ O lie in the same conjugation class in the sense of Eichler, written i ∼ e j, if there exists α ∈ O * such that i = α⁻¹ j α.

Proof. Let us agree to say that two pure quaternions µ 1 and µ 2 ∈ O lie in the same ±Eichler conjugation class if there exists α ∈ O * such that µ 1 = ±α⁻¹ µ 2 α. See [31], p. 45, for Eichler orders, and [11] and [2] for Gorenstein and Bass orders.
Since it will be of use later, we will consider in Proposition 4.4 below a stronger form of Eichler's Theorem. We keep the notations as in Theorem 3.8. Let S ⊇ R F [√−uD], u ∈ R * F . The above action acquires a real arithmetic meaning and coincides with Shimura's reciprocity law in the particular case that L is a CM field over F . In this situation, E(S, O) can also be interpreted as the set of Heegner points on a Shimura variety X on which the Galois group ∆ acts (cf. [30], Section 9.10).

The index of a nondegenerate line bundle

Let A have quaternionic multiplication by a hereditary order O in B. It is our aim to compute the index of a line bundle L on A in terms of the quaternion µ = c 1 (L). For these purposes, we introduce the following notation. Let µ ∈ B 0 be a pure quaternion. It satisfies µ² + δ = 0 for some δ ∈ F * . For any immersion σ : F ֒→ R, let ν σ = ν σ (µ) ∈ GL 2 (R) be such that ν σ µ σ ν σ ⁻¹ = ω σ , where

\[ \omega_\sigma = \begin{cases} \begin{pmatrix} 0 & \sqrt{\sigma(\delta)} \\ -\sqrt{\sigma(\delta)} & 0 \end{pmatrix} & \text{if } \sigma(\delta) > 0, \\[8pt] \begin{pmatrix} \sqrt{\sigma(-\delta)} & 0 \\ 0 & -\sqrt{\sigma(-\delta)} \end{pmatrix} & \text{otherwise.} \end{cases} \]
We say that µ has positive or negative orientation at σ according to the sign of the real number det(ν σ ). Although ν σ is not uniquely determined by µ, the sign sgn(det(ν σ )) is. Thus, to any pure quaternion µ ∈ B 0 we can attach a signature sgn(µ) = (sgn(det(ν σ ))) σ ∈ {±1}^n. Motivated by the following theorem, we say that a pure quaternion µ is ample with respect to A if it has the same orientation as v 0 ∈ V = Lie(A): sgn(µ) = (sgn(Im(τ i ))) i . For any real immersion σ : F ֒→ R, define the local archimedean index i σ (µ) of µ by

\[ i_\sigma(\mu) = \begin{cases} 0 & \text{if } \sigma(\delta) > 0 \text{ and } \det(\nu_\sigma) \cdot Im(\tau_\sigma) > 0, \\ 1 & \text{if } \sigma(\delta) < 0, \\ 2 & \text{if } \sigma(\delta) > 0 \text{ and } \det(\nu_\sigma) \cdot Im(\tau_\sigma) < 0. \end{cases} \]

With these notations we have

Theorem 5.1. Let L be a nondegenerate line bundle on A with c 1 (L) = µ. Then the index of L is i(L) = Σ σ i σ (µ).
Proof. Choose an R-basis of V of the form {diag(β 1 , 0, ..., 0) · v 0 , ..., diag(0, ..., 0, γ n ) · v 0 } for β i , γ i ∈ M 2 (R). Let ι = diag(ι σ ) ∈ GL 2n (R) be such that ι · v 0 = √−1 v 0 . For any β = diag σ (β σ ) and γ = diag σ (γ σ ) ∈ M 2n (R), we have

\[ H_\mu(\beta v_0, \gamma v_0) = \sum_\sigma tr(\mu_\sigma \beta_\sigma \iota_\sigma \bar\gamma_\sigma) + \sqrt{-1}\, \sum_\sigma tr(\mu_\sigma \beta_\sigma \bar\gamma_\sigma). \]

Thus, if we let H σ ∈ GL 2 (C) denote the restriction of H µ to V σ = M 2 (R) · (τ σ , 1)^t, the matrix of H µ with respect to the chosen basis has diagonal form H µ = diag(H σ ). In order to prove Theorem 5.1, it suffices to show that the hermitian form H σ has i σ (µ) negative eigenvalues. Take β ∈ M 2 (R) and let v = β · (τ σ , 1)^t ∈ V σ . Then

\[ H_\sigma(v, v) = tr(\mu_\sigma \beta_\sigma \iota_\sigma \bar\beta_\sigma) = tr(\omega_\sigma \beta'_\sigma \iota'_\sigma \bar\beta'_\sigma), \]

where β′ σ = ν σ β σ ν σ ⁻¹ and ι′ σ = ν σ ι σ ν σ ⁻¹. Denote w σ = (w 1 , w 2 )^t = ν σ β σ · (τ σ , 1)^t ∈ C² and ‖w σ ‖² = w 1 w̄ 1 + w 2 w̄ 2 . Some computation yields that

\[ H_\sigma(v, v) = \frac{C_\sigma\, |\sigma(\delta)|}{\det(\nu_\sigma)\, Im(\tau_\sigma)}, \]

where C σ = ‖w σ ‖² if σ(δ) > 0 and C σ = 2 Re(w 1 w̄ 2 ) if σ(δ) < 0. From this, the result follows. ✷

Remark 5.2. From the above formula, the well-known relation i(L) + i(L⁻¹) = dim A ([21], Chapter III, Section 16, p. 150) is recovered.
Principal polarizations and self-duality
If an abelian variety A admits a principal line bundle, and hence is self-dual, it is natural to ask whether it admits a principal polarization. It is the purpose of this section to study this question under the assumption that A has quaternionic multiplication by a hereditary order O.
From Corollary 3.3, a sufficient condition for A to be self-dual when (ϑ F/Q , D O ) = 1 is that D O and n(I) · ϑ F/Q are principal ideals. By Theorem 5.1, a necessary condition for A to be principally polarizable is that D O be generated by a totally positive element D in F . However, in general this is not sufficient for the existence of a principal polarization on A.
Let F * + denote the subgroup of totally positive elements of F * , and let R * F+ = R * F ∩ F * + and O * + = {α ∈ O * : n(α) ∈ R * F+ }. Let Pic + (F ) be the narrow class group of F and let h + (F ) = |Pic + (F )|. We let Σ = Σ(R * F ) ⊆ {±1}^n be the F 2 -subspace of signatures of units in R * F . As F 2 -vector spaces, Σ ≃ R * F /R * F+ and, by Dirichlet's Unit Theorem, |Σ| = 2^n · h(F )/h + (F ). Since n(O * ) = R * F , the group Σ fits in the exact sequence
\[ 1 \to \mathcal O^*_+ \to \mathcal O^* \to \Sigma \to 1, \qquad \alpha \mapsto (sgn(\det \alpha_\sigma))_\sigma. \]
Definition 6.1. We denote by Ω ⊆ {±1}^n the set of signatures

Ω = {(sgn(det ν σ (µ))) σ : µ ∈ P(O)}.
The set Ω can be identified with a set of connected components of R^n \ ∪_{i=1}^n {x i = 0}. With the notations as above, we obtain the following corollary of Theorems 3.2 and 5.1. We note in passing that, as a consequence of the above corollary and an application of Čebotarev's Density Theorem ([23], Chapter VII, Section 13), self-dual but non principally polarizable abelian varieties A can be constructed. Examples of these abelian varieties are nontrivial since, in the generic case in which the ring of endomorphisms is End(A) = Z, the abelian variety A is principally polarizable if and only if it is self-dual.
Signature questions on number fields are delicate. In order to have a better understanding of Proposition 6.2, we describe Ω as a union (as sets) of linear varieties in the affine space A^n_{F 2} = {±1}^n as follows. Let {u k } be a set of representatives of units in R * F /R *² F and, for any order S ⊇ R F [√−u k D] in L = F (√−u k D), choose µ S ∈ P(S, O). We considered in Section 4 the Galois group ∆ = Ker(N : Pic(S) → Pic(F )). Naturally associated to it there is a subspace of signatures Σ(∆) in the quotient space A^n_{F 2}/Σ(R * F ): if b is an ideal of S such that N L/F (b) = (b) for some b ∈ F * , the signature of b does not depend on the choice of b in its class in Pic(S), but it does depend on the choice of the generator b up to signatures in Σ(R * F ). By an abuse of notation, we still denote by Σ(∆) the subspace of A^n_{F 2} generated by Σ(R * F ) and the signatures of the norms of ideals in ∆. Then, from Proposition 4.4 we obtain that Ω = ⊔ k,S Σ(∆) · sgn(µ S ), as a disjoint union. This allows us to compute the set Ω in many explicit examples and to show that, in many cases, it coincides with the whole space of signatures {±1}^n. The following corollary, which remains valid even if we remove the assumption (ϑ F/Q , D O ) = 1, illustrates this fact.

Formulas for π i (A), 0 ≤ i ≤ g, analogous to that of Theorem 3.8 can be derived. Due to its significance, we will concentrate only on the number π 0 (A) of classes of principal polarizations. The Galois action on the sets E(S, O) of Eichler classes of optimal embeddings and its behaviour with respect to the index of the associated line bundles will play an important role.
Assume then that Π 0 (A) = ∅. For simplicity, recall that we also assume that (ϑ F/Q , D O ) = 1. By Proposition 6.2, we can choose D ∈ F * + and t ∈ F * such that D O = (D) and n(I) · ϑ F/Q = (t −1 ).
Let u ∈ R * F+ be a totally positive unit. Let us agree to say that an order S ⊇ R F [√−uD] is ample with respect to O if there exists an optimal embedding i : S ֒→ O such that µ = i(√−uD) is ample (cf. the discussion preceding Theorem 5.1). Define S u to be the set of ample orders S ⊇ R F [√−uD] in F (√−uD). The existence of a principal polarization L on A implies that S u is nonempty for some u. With this notation, we obtain the following expression for π 0 (A) in terms of the narrow class number of F and the class numbers of certain CM-fields that embed in B.
Theorem 7.1. The number of isomorphism classes of principal polarizations on A is
\[ \pi_0(A) = \frac{1}{2\,h^+(F)} \sum_{u \in R^*_{F+}/R^{*2}_F} \sum_{S \in S_u} 2^{e^+_S}\, h(S), \]
where 2^{e + S} = |R * F+ /N(S * )|.

Proof. By the existing duality between Π 0 (A) and Π g (A), it is equivalent to show that π 0 (A) + π g (A) = Σ u Σ_{S ∈ S u} 2^{e + S} h(S)/h + (F ). Let us introduce the set P 0,g (O) = {µ ∈ O : sgn(µ) = ±(sgn(Im τ i )) i , n(µ) ∈ R * F+ · D}. By Theorems 2.2 and 5.1, the quotient of P 0,g (O) by sign change and Pollack conjugation, P̄ 0,g (O) = P 0,g (O)/± ∼ p , is in one-to-one correspondence with Π 0 (A) ∪ Π g (A), and we have a natural decomposition P 0,g (O) = ⊔ P 0,g (S, O) as S runs among ample orders in S u and u ∈ R * F+ /R *² F . Fix u ∈ R * F+ and S ∈ S u . In order to compute the cardinality of P̄ 0,g (S, O), we relate it to the set E 0,g (S, O) = P 0,g (S, O)/ ∼ e of O * ± -Eichler conjugacy classes of optimal embeddings i µ : S ֒→ O. Here, we agree to say that two quaternions µ 1 , µ 2 ∈ P 0,g (S, O) are Eichler conjugated by O * ± if there is a unit α ∈ O * ± of either totally positive or totally negative reduced norm such that µ 1 = α⁻¹ µ 2 α. Note that, by Theorem 5.1, the action of O * ± -conjugation on the line bundle L associated to an element of P(S, O) either preserves the index i(L) or switches it to g − i(L). This makes sense of the quotient E 0,g (S, O).
We have the following commutative diagram with exact rows, the vertical maps being the natural inclusion ∆^+ → ∆ and the projection Pic^+(F) → Pic(F):

0 → ∆^+ → Pic(S) --N_{L/F}--> Pic^+(F) → 0
      ↓                            ↓
0 → ∆  → Pic(S) --N_{L/F}--> Pic(F)  → 0
Indeed, there is a natural map Pic(S) → Pic^+(F) induced by the norm N_{L/F}, since the norm of an element a + b√−uD ∈ L, with a, b ∈ F, is a² + ub²D ∈ F^*_+. The surjectivity of the map Pic(S) → Pic^+(F) is argued as in Section 4, replacing the Hilbert class field H_F of F by the big Hilbert class field H^+_F, whose Galois group over F is Gal(H^+_F/F) ≃ Pic^+(F). By Proposition 4.4, ∆ acts freely and transitively on E(S, O). Therefore, by Theorem 5.1, there is also a free action of ∆^+ on E_{0,g}(S, O). Up to sign, the O^*_±-Eichler conjugacy class of an element µ ∈ 𝒫(S, O) has a well-defined orientation ±sgn(µ). Note also that two inequivalent O^*_±-Eichler classes that fall in the same O^*-conjugation class are never oriented in the same manner, not even up to sign. Taken together, this shows that ∆^+ also acts transitively on E_{0,g}(S, O). This means that
$$|E_{0,g}(S, O)| = \frac{h(S)}{h^+(F)}.$$
There is again a natural surjective map ρ : P_{0,g}(S, O) → E_{0,g}(S, O) and, arguing as in Section 3, Theorem 7.1 follows. ✷

Examples in low dimensions. When we particularize our results to dimension 2, we obtain an easy-to-apply expression for the number of principal polarizations on an abelian surface with maximal quaternionic multiplication, as stated in Theorem 1.1 in the introduction. As an example, the number of isomorphism classes of principal polarizations on an abelian surface A with quaternionic multiplication by a maximal order in a quaternion algebra of discriminant D = 2 · 3 · 5 · 7 · 11 · 13 · 17 · 19 is π_0(A) = 1040. This also implies the existence of 1040 pairwise nonisomorphic smooth algebraic curves C_1, ..., C_{1040} of genus 2 whose Jacobian varieties are all isomorphic as unpolarized abelian surfaces.
In addition, since π(A) = π_0(A) + π_1(A) + π_2(A) and π_0(A) = π_2(A), Theorems 3.8 and 7.1 yield the formula
$$\pi_1(A) = \begin{cases} \varepsilon_{4D}\, h(4D) + \varepsilon_D\, h(D) & \text{if } D \equiv 1 \bmod 4,\\ \varepsilon_{4D}\, h(4D) & \text{otherwise,} \end{cases}$$
where ε_D, ε_{4D} ∈ {1, 1/2} are computed from the formula for e_S in Theorem 3.8.

Let F be the real quadratic field Q(√2) and let B be the quaternion algebra over F that ramifies exactly at the two prime ideals (3 ± √2) above 7. By applying Theorems 3.8, 5.1 and 7.1, with the valuable help of the programming package PARI ([24]), we conclude that, for any abelian four-fold A such that End(A) is a maximal order in the quaternion algebra B/Q(√2) of discriminant 7, the numbers of isomorphism classes of principal line bundles of index 0, 1, 2, 3 and 4 are π_0(A) = π_4(A) = 6, π_1(A) = π_3(A) = 4 and π_2(A) = 4, respectively.
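Over F = Q the count of principal polarizations becomes completely explicit: h^+(Q) = 1, the only totally positive unit class is u = 1, the factors 2^{e^+_S} are trivial, and the orders containing Z[√−D] are Z[√−D] itself (discriminant −4D) together with the maximal order of discriminant −D when −D ≡ 1 (mod 4). The following Python sketch (an illustration under these simplifications, not the PARI code used by the author) computes class numbers of imaginary quadratic orders by counting reduced primitive binary quadratic forms, and then evaluates π_0:

```python
from math import gcd, isqrt

def class_number(disc):
    """Class number of the imaginary quadratic order of discriminant disc < 0,
    computed by counting reduced primitive binary quadratic forms (a, b, c)
    with b^2 - 4ac = disc, -a < b <= a <= c, and b >= 0 whenever a == c."""
    assert disc < 0 and disc % 4 in (0, 1)
    h = 0
    for a in range(1, isqrt(-disc // 3) + 1):
        for b in range(-a + 1, a + 1):
            if (b - disc) % 2:            # b and disc must have the same parity
                continue
            num = b * b - disc
            if num % (4 * a):
                continue
            c = num // (4 * a)
            if c < a or (a == c and b < 0):
                continue
            if gcd(gcd(a, abs(b)), c) == 1:
                h += 1
    return h

def pi0_over_Q(D):
    """Principal polarization classes on an abelian surface with quaternionic
    multiplication by a maximal order of discriminant D over Q, assuming that
    Theorem 7.1 specializes to (h(-4D) + h(-D))/2, the second term present
    only when -D = 1 (mod 4)."""
    total = class_number(-4 * D)
    if (-D) % 4 == 1:
        total += class_number(-D)
    return total // 2

print(pi0_over_Q(6))    # -> 1 (the classical discriminant-6 algebra)
print(pi0_over_Q(15))   # -> 2
```

Run on D = 2·3·5·7·11·13·17·19 (which takes noticeably longer), the same routine should recover the value π_0(A) = 1040 quoted above.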
8. Asymptotic behaviour of π 0 (A)
We can combine Theorem 7.1 with analytical tools to estimate the asymptotic behaviour of log(π_0(A)). This will yield a stronger version of part 1 of Theorem 1.2 in the introduction. For any number field L, we let D_L and Reg_L stand for the absolute value of the discriminant and the regulator, respectively.

By Čebotarev's Density Theorem, we can find infinitely many pairwise different totally positive principal prime ideals {℘_i}_{i≥1} in F. We can also choose them such that (℘_i, ϑ_{F/Q}) = 1. We then obtain principal ideals (D_j) = ℘_1 · ℘_2 · ... · ℘_{2j−1} · ℘_{2j} with D_j ∈ F^*_+ and (D_j, ϑ_{F/Q}) = 1. According to [31], p. 74, for every j ≥ 1 there exists a totally indefinite quaternion algebra B_j over F of discriminant D_j. Then, Proposition 6.2 asserts that there exists an abelian variety A_j of dimension 2n such that End(A_j) is a maximal order in B_j and Π_0(A_j) ≠ ∅.
Proof of Theorem 8.1. Let A be a principally polarizable abelian variety with quaternionic multiplication by a maximal order in a totally indefinite division quaternion algebra B over F of discriminant D ∈ F^*_+. For any totally positive unit u_k ∈ R^*_{F+}, let L_k = F(√−u_k D). For any order S ⊇ R_F[√−u_k D] in the CM-field L_k, it holds that h(S) = c_S h(L_k) for some positive integer c_S which is uniformly bounded by 2^n. The class number h(F) divides h(L_k), and the relative class number of L_k is defined to be h^−(L_k) = h(L_k)/h(F) (cf. [19]). Since h^+(F) = 2^m h(F) for m = n − dim_{F_2}(Σ(R^*_F)), Theorem 7.1 can be rephrased as π_0(A) = Σ 2^{e^+_S − 1 − m} c_S h^−(L_k), the sum running over the same pairs (u_k, S) as before.

In order to apply the Brauer-Siegel Theorem, the key point is to relate the absolute discriminants D_{L_k} and the regulators Reg_{L_k} as u_k varies among the totally positive units of F. Firstly, we have the relations D_{L_k} = |N_{F/Q}(D_{L_k/F}) · D_F^2| = 2^{p_k} |N_{F/Q}(D)| D_F^2 for some 0 ≤ p_k ≤ 2n. Secondly, by [32], p. 41, it holds that Reg_{L_k} = 2^c Reg_F with c = n − 1 or n − 2.
Let ε be a sufficiently small positive number. By the Brauer-Siegel Theorem, it holds that
$$D_{L_k}^{(1-\varepsilon)/2} \le h(L_k)\,\mathrm{Reg}_{L_k} \le D_{L_k}^{(1+\varepsilon)/2}$$
for D_{L_k} ≫ 1. Thus
$$\frac{D_F^{(1-\varepsilon)/2}}{h(F)\,\mathrm{Reg}_F}\Big(\frac{D_{L_k}}{D_F}\Big)^{(1-\varepsilon)/2} \le h^-(L_k) \le \frac{D_F^{(1+\varepsilon)/2}}{h(F)\,\mathrm{Reg}_F}\Big(\frac{D_{L_k}}{D_F}\Big)^{(1+\varepsilon)/2}.$$
Fixing an arbitrary CM-field L in the expression for π_0(A), this boils down to
$$C_- \cdot \frac{D_F^{(1-\varepsilon)/2}}{h(F)\,\mathrm{Reg}_F}\Big(\frac{D_L}{D_F}\Big)^{(1-\varepsilon)/2} \le \pi_0(A) \le C_+ \cdot \frac{D_F^{(1+\varepsilon)/2}}{h(F)\,\mathrm{Reg}_F}\Big(\frac{D_L}{D_F}\Big)^{(1+\varepsilon)/2}$$
for some positive constants C_- and C_+. Taking logarithms, these inequalities yield Theorem 8.1. ✷

Remark 8.2. The argument above is not effective since it relies on the classical Brauer-Siegel Theorem on class numbers. However, recent work of Louboutin ([19], [20]) on lower and upper bounds for relative class numbers of CM-fields, based upon estimates of residues at s = 1 of Dedekind zeta functions, could be used to obtain explicit lower and upper bounds for π_0(A).
Finally, we conclude this paper with the proof of the second main result quoted in the Introduction.
Proof of Theorem 1.2. Part 1 is an immediate consequence of Theorem 8.1. Let us explain how part 2 follows. Assume that A is a simple complex abelian variety of odd and square-free dimension g. Then, by Albert's classification of simple division algebras ([21], Chapter IV, Sections 19 and 21), End(A) ≃ S is an order in either a totally real number field F or a CM-field L over a totally real number field F. In either case, [F : Q] ≤ g. In the former case, by Theorem 3.1 of Lange in [17], π_0(A) = |S^*_+/S^{*2}| ≤ 2^{g−1}. In the latter, let S_0 ⊂ F be the subring of S fixed by complex conjugation. If L is a principal polarization on A, the Rosati involution induces precisely complex conjugation on End(A) ≃ S, and we have that π_0(A) = |S^*_{0+}/N_{L/F}(S^*)| ≤ |S^*_{0+}/S^{*2}_0| ≤ 2^{g−1}, by applying Theorem 1.5 of [17].
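The bound in part 2 is governed by totally positive units modulo squares. For a real quadratic order Z[√d] (take d ≡ 2, 3 mod 4, so that this is the maximal order of Q(√d)), the group S^*_+/S^{*2} is trivial precisely when the fundamental unit has norm −1, and has order 2 when it has norm +1. A small sketch (illustrative; the function names are ours):

```python
from math import isqrt

def fundamental_unit(d):
    """Smallest unit x + y*sqrt(d) > 1 in Z[sqrt(d)] (d squarefree, d > 1),
    found by brute force over the Pell-type equations x^2 - d y^2 = +-1."""
    y = 1
    while True:
        for s in (-1, 1):
            x2 = d * y * y + s
            x = isqrt(x2)
            if x * x == x2:
                return x, y, s            # s is the norm of the unit
        y += 1

def tot_pos_units_mod_squares(d):
    """Order of S*_+ / S*^2 for S = Z[sqrt(d)]: trivial when the fundamental
    unit has norm -1 (every totally positive unit is then a square), and of
    order 2 when it has norm +1."""
    _, _, norm = fundamental_unit(d)
    return 1 if norm == -1 else 2

print(tot_pos_units_mod_squares(2))  # -> 1: the unit 1 + sqrt(2) has norm -1
print(tot_pos_units_mod_squares(3))  # -> 2: the unit 2 + sqrt(3) has norm +1
```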
Definition 2.1. An abelian variety A has quaternionic multiplication by O if dim(A) = 2n and End(A) ≃ O.
If O = O_ℓ(I) = {β ∈ B : βI ⊆ I} is the left order of I in B, then for v_0 in a dense subset of V we exactly have End(A) = O. Besides, for v_0 in a subset of measure zero of V, A fails to be simple and it is isogenous to the square A_0^2 of an abelian variety of dimension n such that End(A_0) is an order in a purely imaginary quadratic extension of F ([30], Section 9.4).

Let NS(A) = Pic(A)/Pic^0(A) be the Néron-Severi group of line bundles on A up to algebraic equivalence. Two line bundles L_1, L_2 ∈ NS(A) are said to be isomorphic, denoted L_1 ≃ L_2, if there is an automorphism α ∈ Aut(A) such that L_1 = α^*(L_2). For an O-left ideal J, let J^♯ = {β ∈ B : tr_{B/Q}(Jβ) ⊆ Z} be the codifferent of J over Z. It is a right ideal of O, projective over R_F. If we let B_0 = {β ∈ B : tr(β) = 0} denote the additive subgroup of pure quaternions of B, we put J^♯_0 = J^♯ ∩ B_0. Finally, we define N(J) = n(J)O = J·J̄ to be the two-sided ideal of O generated by the ideal n(J) of F of reduced norms of elements in J.
✷ As a consequence, we obtain the following criterion that establishes whether the abelian variety A admits a principal line bundle in terms of the arithmetic of the hereditary order O = End(A) in B and the left ideal I ≅ H_1(A, Z). Crucial in the proof of the theorem below is the classical theory of Eichler optimal embeddings.
Theorem 3.2. The abelian variety A admits a principal line bundle if and only if the ideals D_O and ϑ_{F/Q} · n(I) of F are principal.
Definition 3.5. The set of isomorphism classes of principal line bundles on A is Π(A) = {L ∈ NS(A) : deg(L) = 1}/≃.
Definition 3.6. Assume that D_O is a principal ideal of F. Then, we let 𝒫(O) = {µ ∈ O : tr(µ) = 0, n(µ)R_F = D_O} and we define P(O) = 𝒫(O)/∼_p to be the corresponding set of Pollack conjugation classes.

The above proof, together with Theorem 2.2, yields

Corollary 3.7. Let A be an abelian variety with quaternionic multiplication by a maximal order O. If D_O = (D) and ϑ_{F/Q} · n(I) = (t^{-1}) are principal ideals, the assignment L → t · c_1(L)^{-1} induces a bijection of sets between Π(A) and P(O).

In view of Corollary 3.7, it is our aim to compute the cardinality π(A) = |Π(A)| of the set of isomorphism classes of principal line bundles on an abelian variety A with quaternionic multiplication by a maximal order O. Theorem 3.8 below exhibits a close relation between π(A) and the class numbers of F and of certain orders in quadratic extensions L/F that embed in B.
Remark 3.10. Let 2R_F = q_1^{e_1} · ... · q_m^{e_m} be the decomposition of 2 into prime ideals in F. Fix a unit u ∈ R^*_F. Then the conductor f [...] the determination of a_q depends on the choice of the system of representatives [...] in R_{F_q} and deserves a closer inspection. This gives an easy criterion for deciding whether R_F[√−uD] is the ring of integers of F(√−uD) (that is, f = 1).

In any case, the set of orders S in L = F(√−uD) that contain R_F[√−uD] can be described as follows. Any order S ⊇ R_F[√−uD] has conductor f_S | f, and for every ideal f' | f there is a unique order S ⊇ R_F[√−uD] of conductor f'. Further, f_S | f_T if and only if S ⊇ T. We omit the details of the proof of these facts.

In order to prove Theorem 3.8, we begin with an equivalent formulation of it. As was pointed out in Corollary 3.7, the first Chern class induces a bijection of sets between Π(A) and the set of Pollack conjugation classes P(O). For u ∈ R^*_F, let us write 𝒫(u, O) := {µ ∈ O : µ² + uD = 0}. Observe that 𝒫(O) is the disjoint union of the sets 𝒫(u_k, O) as u_k runs among units in any set of representatives of R^*_F/R^{*2}_F. Any quaternion µ ∈ 𝒫(u, O) induces an embedding
For any quadratic order S over R_F, let 𝒫(S, O) denote the set of optimal embeddings of S in O and P(S, O) = 𝒫(S, O)/∼_p. We obtain a natural identification of sets P(O) = ⊔_k ⊔_S P(S, O), where S runs through the set of quadratic orders S ⊇ R_F[√−u_k D] for any unit u_k in a set of representatives of R^*_F/R^{*2}_F. Hence, in order to prove Theorem 3.8, it is enough to show that, for any quadratic order S ⊇ R_F[√−uD], u ∈ R^*_F, it holds that
$$p(S, O) := |P(S, O)| = 2^{e_S - 1}\, \frac{h(S)}{h(F)}.$$
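The parametrization of the orders above R_F[√−uD] by the divisors of the conductor (Remark 3.10) is easy to picture over F = Q, where every ideal of R_F is generated by a positive integer and an order of conductor f' in a quadratic field of discriminant d_K has discriminant f'^2 · d_K. A toy enumeration (illustrative only):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def orders_above(f, dK):
    """Orders of a quadratic field K (of discriminant dK) containing the order
    of conductor f: there is exactly one per divisor f' | f, of discriminant
    f'^2 * dK.  Containment S >= T corresponds to f_S | f_T."""
    return {fp: fp * fp * dK for fp in divisors(f)}

# Orders of Q(i) (dK = -4) containing the order Z[12i] of conductor 12:
print(orders_above(12, -4))
# -> {1: -4, 2: -16, 3: -36, 4: -64, 6: -144, 12: -576}
```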
We let E(S, O) = 𝒫(S, O)/∼_e be the set of Eichler conjugation classes of optimal embeddings of S into an order O of B, and denote e(S, O) = |E(S, O)|.
Proposition 4.1. Let S be an order in a quadratic algebra L over F and let O be an order in a division quaternion algebra B over F. Then, the number of Pollack conjugation classes is
$$p(S, O) = |n(O^*)/N_{L/F}(S^*)| \cdot \frac{e(S, O)}{2}.$$
We shall denote it by µ_1 ∼_{±e} µ_2 and E_±(S, O) = 𝒫(S, O)/∼_{±e}. The identity map µ → µ descends to a natural surjective map ρ : P(S, O) → E_±(S, O), and the proposition follows from the following lemma.
Lemma 4.2. Let e_S = dim_{F_2}(n(O^*)/N_{L/F}(S^*)). Let µ ∈ 𝒫(S, O) and let ε_µ = 1 if µ ∼_e −µ and ε_µ = 2 otherwise. Then, in the ±Eichler conjugation class {±α^{-1}µα : α ∈ O^*} of µ, there are exactly ε_µ 2^{e_S − 1} Pollack conjugation classes of pure quaternions.

Proof of Lemma 4.2. Suppose first that ε_µ = 1. Then the ±Eichler conjugation class of µ ∈ 𝒫(S, O) is {α^{-1}µα : α ∈ O^*}. Let γ ∈ O^* be such that −µ = γµγ^{-1} = γ^{-1}µγ. We claim that, for any given α ∈ O^*, it holds that µ ∼_p α^{-1}µα if and only if n(α) ∈ N_{L/F}(S^*) ∪ (−n(γ)N_{L/F}(S^*)). Indeed, if µ ∼_p α^{-1}µα, let β ∈ O^* with n(β) = ±1 be such that β̄(α^{-1}µα)β = µ. If n(β) = 1, then αβµ = µαβ and hence αβ ∈ L ∩ O^* = S^*. Thus n(αβ) = n(α) ∈ N_{L/F}(S^*). If n(β) = −1, a similar argument shows that n(α) ∈ −n(γ) · N_{L/F}(S^*). Conversely, let n(α) = v ∈ N_{L/F}(S^*) ∪ (−n(γ)N_{L/F}(S^*)) and let s ∈ S^* be such that N_{L/F}(s) = v or −v n(γ)^{-1}. Since µ induces an embedding S ֒→ O, we can regard s as an element of O^* such that n(s) = v or −v n(γ)^{-1} and sµ = µs. Hence α^{-1}µα = α^{-1}sµs^{-1}α = (α^{-1}s)µ(α^{-1}s)̄ or (α^{-1}sγ^{-1})µ(α^{-1}sγ^{-1})̄. This proves the claim.

Since B is division, Pollack's Theorem on Pall's Conjecture ([25], Theorem 4) applies to show that −n(γ) ∉ N_{L/F}(S^*). We then conclude that the distinct Pollack conjugation orbits in {α^{-1}µα : α ∈ O^*} are exactly the classes
C_u = {α^{-1}µα : n(α) ∈ uN(S^*) ∪ (−n(γ)uN(S^*))}
as u ∈ n(O^*) runs through a set of representatives in n(O^*)/⟨−n(γ), N(S^*)⟩. There are 2^{e_S − 1} of them.

Assume now that ε_µ = 2, that is, µ ≁_e −µ. Then the ±Eichler conjugation class of µ ∈ 𝒫(S, O) is {α^{-1}µα : α ∈ O^*} ∪ {−α^{-1}µα : α ∈ O^*}. As in the previous case, one shows that µ ∼_p α^{-1}µα if and only if n(α) ∈ N_{L/F}(S^*), and µ ∼_p −α^{-1}µα if and only if n(α) ∈ −N_{L/F}(S^*).

We obtain that, as u ∈ n(O^*) runs through a set of representatives in n(O^*)/N(S^*), the 2^{e_S} distinct Pollack conjugation classes in the ±Eichler conjugation class of the quaternion µ ∈ 𝒫(S, O) are
C'_u = {α^{-1}µα : n(α) ∈ uN(S^*)} ∪ {−α^{-1}µα : n(α) ∈ −uN(S^*)}. ✷

Proof of Theorem 3.8. Firstly, under the assumptions of Theorem 3.8, the Hasse-Schilling-Maass Norm Theorem in its integral version ([31], p. 90) asserts that n(O^*) = R^*_F. Secondly, we have that
e(S, O) = h(S)/h(F)
for any S ⊇ R_F[√−uD], u ∈ R^*_F. This follows from a theorem of Eichler (cf. [31], p. 98) together with Remark 3.10. The combination of these facts with the discussion at the end of Section 3 and Proposition 4.1 yields the theorem. ✷
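The grouping of pure quaternions of fixed reduced norm into unit-conjugacy classes can be made concrete in a toy example. The sketch below works in the Hurwitz maximal order of the Hamilton quaternions over Q (a definite algebra, hence not the totally indefinite setting of the theorems above; this illustrates the counting only). It lists the eight pure quaternions µ with µ² + 3 = 0 and sorts them into conjugacy classes under the 24 Hurwitz units: two classes appear, swapped by µ → −µ, in the spirit of the dichotomy ε_µ = 1 or 2 in Lemma 4.2.

```python
from fractions import Fraction as Fr
from itertools import product

def qmul(p, q):
    # Hamilton product of quaternions written as (a, b, c, d) = a + bi + cj + dk
    a, b, c, d = p; e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

half = Fr(1, 2)
units = [tuple(Fr(x) for x in u)
         for u in [(1,0,0,0), (-1,0,0,0), (0,1,0,0), (0,-1,0,0),
                   (0,0,1,0), (0,0,-1,0), (0,0,0,1), (0,0,0,-1)]]
units += [tuple(s * half for s in signs) for signs in product((1, -1), repeat=4)]
assert len(units) == 24   # the unit group of the Hurwitz order

# Pure Hurwitz quaternions mu with mu^2 = -3, i.e. reduced norm 3:
mus = [tuple(Fr(x) for x in (0, b, c, d)) for b, c, d in product((-1, 1), repeat=3)]

classes = []
for mu in mus:
    # for units, n(u) = 1 and u^{-1} = conj(u), so this is u mu u^{-1}
    orbit = {qmul(qmul(u, mu), conj(u)) for u in units}
    if not any(mu in cl for cl in classes):
        classes.append(orbit)
print(len(mus), len(classes))   # 8 quaternions, 2 conjugacy classes
```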
Remark 4.3. In view of Proposition 4.1, the effective computation of the number of Pollack conjugation classes p(S, O) for arbitrary orders rests on the computability of the groups N_{L/F}(S^*) and n(O^*) and of the number e(S, O). The study of the former depends on the knowledge of the group of units S^*, and there is abundant literature on the subject. If O is an Eichler order, the Hasse-Schilling-Maass Norm Theorem in its integral version ([31], p. 90) describes n(O^*) in terms of the archimedean ramified places of B. Finally, there are several manuscripts which deal with the computation of the numbers e(S, O).
Let S ⊇ R_F[√−uD] be a quadratic order and let H_S be the ring class field of S over L = F(√−uD). The Galois group Gal(H_S/L) is isomorphic, via the Artin reciprocity map, to the Picard group Pic(S) of classes of locally invertible ideals of S. In the particular case that S is the ring of integers of L, H_S is the Hilbert class field of L. The quadratic extension L/F is ramified at the prime ideals ℘ | D, and recall that, by Remark 3.10, these prime ideals do not divide the conductor of S. Therefore L and H_F are linearly disjoint over F, that is, F = L ∩ H_F. The norm induces a map N_{L/F} : Pic(S) → Pic(R_F) that, by the reciprocity isomorphism, can be interpreted as the restriction map Gal(H_S/L) → Gal(L · H_F/L) ≃ Gal(H_F/F) (cf. [23], Chapter VI, Section 5). In particular, we have an exact sequence
$$0 \to \Delta \to \mathrm{Pic}(S) \xrightarrow{N_{L/F}} \mathrm{Pic}(R_F) \to 0.$$
Here, ∆ = Ker(N_{L/F}) can be viewed as the Galois group of H_S over the fixed field L^∆ of H_S by ∆. The group ∆ = Gal(H_S/L^∆) acts on E(S, O) by a reciprocity law as follows: let i : S ֒→ O be an optimal embedding and let τ ∈ Gal(H_S/L^∆). Let b = [τ, H_S/L] be the locally invertible ideal in S corresponding to τ by Artin's reciprocity map. Since the reduced norm on B induces a bijection of sets n : Pic_ℓ(O) ≃ Pic(F) and N_{L/F}(b) is a principal ideal in F, it follows that i(b)O = βO is a principal right ideal of O, and we can choose a generator β ∈ O. Then τ acts on i ∈ E(S, O) by i^τ = β^{-1}iβ. It can be checked that this action depends neither on the choice of the ideal b in its class in Pic(S) nor on the choice of the element β ∈ O. Moreover, a local argument shows that this action is free. Since |∆| = |E(S, O)|, we obtain
Proposition 4.4. The action of ∆ on the set of Eichler conjugacy classes of optimal embeddings E(S, O) is free and transitive.
Let L ∈ NS(A) be a nondegenerate line bundle on an abelian variety A/C. By Mumford's Vanishing Theorem, there is a unique integer i(L) such that H^{i(L)}(A, L) ≠ 0 and H^j(A, L) = 0 for all j ≠ i(L) (cf. [21], Chapter III, Section 16, p. 150). The so-called index i(L) depends only on the class of L in NS(A), and we have 0 ≤ i(L) ≤ g = dim(A). If H is the hermitian form associated to a line bundle L, the index i(L) agrees with the number of negative eigenvalues of H ([21], Chapter III, Section 16, p. 162). By the Riemann-Roch Theorem, |K(L)| = |Ker(ϕ_L : A → Â)| = dim(H^{i(L)}(A, L))². In particular, L is principal when dim(H^{i(L)}(A, L)) = 1. Finally, L is a polarization, i.e., an ample line bundle, if and only if i(L) = 0.
The index i(L) coincides with the number of negative eigenvalues of the hermitian form H_µ associated to the line bundle L. If we regard M_2(R) × ... × M_2(R) (n copies) embedded diagonally in M_{2n}(R), there is an isomorphism of real vector spaces between B ⊗_Q R and M_2(R) × ... × M_2(R), explicitly given by the map β → β · v_0. The complex structure that M_2(R)^n inherits from that of V is such that {0} × ... × M_2(R) × ... × {0} are complex vector subspaces of M_2(R)^n, and we may choose a C-basis of V of the form {diag(β_1, 0, ..., 0) · v_0, diag(γ_1, 0, ..., 0) · v_0, ..., diag(0, ..., 0, β_n) · v_0, diag(0, ..., 0, γ_n) · v_0}.
Proposition 6.2. Let I be an ideal of B and assume that its left order O is hereditary. Then there exist principally polarizable abelian varieties A with quaternionic multiplication by O and H_1(A, Z) ≅ I if and only if D_O and n(I) · ϑ_{F/Q} are principal ideals and D_O = (D) is generated by an element D ∈ F^*_+. If this is the case, an abelian variety A = V/I · (τ_1, 1, ..., τ_n, 1)^t admits a principal polarization if and only if (sgn(Im τ_i)) ∈ Ω.
Corollary 6.3. Let F be a totally real number field of degree [F : Q] = n. Let O be a hereditary order in a totally indefinite quaternion algebra B over F and let I be a left O-ideal such that D_O = (D) for D ∈ F^*_+ and n(I) · ϑ_{F/Q} = (t^{-1}) for t ∈ F^*. If the narrow class number h^+(F) of F equals the usual class number h(F), i.e. if Σ(R^*_F) = {±1}^n, then any abelian variety A with quaternionic multiplication by O and H_1(A, Z) ≅ I is principally polarizable. In particular, if h^+(F) = 1, then the above conditions on O and I are accomplished.

Proof. Since Σ(R^*_F) = {±1}^n, we have Ω = {±1}^n and the result follows from Proposition 6.2. ✷

This is highly relevant in the study of certain Shimura varieties. As was already known to the specialists in the case of maximal orders, we obtain that any abelian surface with quaternionic multiplication by a hereditary order in an indefinite quaternion algebra B/Q admits a principal polarization.

7. The number of isomorphism classes of principal polarizations

Let A = V/Λ with Λ = I · v_0 be an abelian variety with quaternionic multiplication by a maximal order O. For any integer 0 ≤ i ≤ g, let Π_i(A) denote the set of isomorphism classes of principal line bundles L ∈ NS(A) of index i(L) = i. The set Π(A) naturally splits as the disjoint union Π(A) = ⊔ Π_i(A). Moreover, due to the relation i(L) + i(L^{-1}) = g, the map L → L^{-1} induces a one-to-one correspondence between Π_i(A) and Π_{g−i}(A).
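Two facts used repeatedly above, namely that i(L) is the number of negative eigenvalues of the associated hermitian form H, and that replacing L by L^{-1} replaces H by −H (whence i(L) + i(L^{-1}) = g), can be checked on a toy form. The sketch below (our illustration; H is taken real symmetric for simplicity) reads off the number of negative eigenvalues from the sign changes in the sequence of leading principal minors (Jacobi's rule, valid when all leading minors are nonzero):

```python
from fractions import Fraction as Fr

def det(M):
    # cofactor expansion along the first row; fine for the tiny matrices here
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def index(M):
    """Number of negative eigenvalues of a real symmetric matrix M, read off
    from the sign changes in the sequence 1, D_1, ..., D_n of leading
    principal minors (assumes all leading minors are nonzero)."""
    minors = [Fr(1)] + [det([row[:k] for row in M[:k]]) for k in range(1, len(M) + 1)]
    assert all(m != 0 for m in minors[1:])
    return sum(1 for a, b in zip(minors, minors[1:]) if a * b < 0)

H = [[Fr(2), Fr(1)], [Fr(1), Fr(-3)]]       # a nondegenerate indefinite form
negH = [[-x for x in row] for row in H]     # the form of the inverse bundle
g = len(H)
print(index(H), index(negH), index(H) + index(negH) == g)   # 1 1 True
```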
Theorem 8.1. Let F be a totally real number field of degree n. Let A range over a sequence of principally polarizable abelian varieties with quaternionic multiplication by a maximal order in a totally indefinite quaternion algebra B over F of discriminant D ∈ F^*_+ with |N_{F/Q}(D)| → ∞. Then
$$\log \pi_0(A) \sim \log\big(|N_{F/Q}(D)|^{1/2} \cdot D_F\big).$$

The proof of Theorem 8.1 adapts an argument of Horie-Horie ([12]) on estimates of relative class numbers of CM-fields. We first show that there indeed exist families of abelian varieties satisfying the properties quoted in the theorem.
References

[1] J. Armitage, A. Fröhlich, Class numbers and unit signatures, Mathematika 14 (1967), 94-98.
[2] J. Brzezinski, On automorphisms of quaternion orders, J. reine angew. Math. 403 (1990), 166-186.
[3] C. Ciliberto, G. van der Geer, Non-isomorphic curves of genus four with isomorphic (non-polarized) jacobians, Contemp. Math. 162 (1992), 129-133.
[4] L. Dieulefait, V. Rotger, The arithmetic of abelian surfaces with QM through their Galois representations, submitted to publication.
[5] M. Eichler, Bestimmung der Idealklassenzahl in gewissen normalen einfachen Algebren, J. reine angew. Math. 176 (1937), 192-202.
[6] M. Eichler, Über die Idealklassenzahl hypercomplexer Systeme, Math. Z. 43 (1938), 481-494.
[7] M. Eichler, Zur Zahlentheorie der Quaternionen-Algebren, J. reine angew. Math. 195 (1955), 127-151.
[8] T. Hayashida, A class number associated with the product of an elliptic curve with itself, J. Math. Soc. Japan 20 (1968), 26-43.
[9] T. Hayashida, M. Nishi, Existence of curves of genus two on a product of two elliptic curves, J. Math. Soc. Japan 17 (1965), 1-16.
[10] H. Hasse, O. Schilling, Die Normen aus einer normalen Divisionsalgebra, J. reine angew. Math. 174 (1936), 248-252.
[11] H. Hijikata, A. Pizer, T. Shemanske, Orders in quaternion algebras, J. reine angew. Math. 394 (1989), 59-106.
[12] K. Horie, M. Horie, CM-fields and exponents of their ideal class groups, Acta Arith. 55 (1990), 157-170.
[13] E. Howe, Plane quartics with jacobians isomorphic to a hyperelliptic jacobian, Proc. Amer. Math. Soc. 129 (2000), 1647-1657.
[14] G. Humbert, Théorie générale des surfaces hyperelliptiques, J. Math. Sér. IV, 9 (1893), 29-170 and 361-475.
[15] T. Ibukiyama, T. Katsura, F. Oort, Supersingular curves of genus two and class numbers, Compos. Math. 57 (1986), 127-152.
[16] J. González, J. Guàrdia, V. Rotger, Abelian surfaces of GL_2-type as Jacobians of curves, Preprint, UPC Vilanova i la Geltrú, 2002.
[17] H. Lange, Abelian varieties with several principal polarizations, Duke Math. J. 55 (1988), 617-628.
[18] H. Lange, Ch. Birkenhake, Complex Abelian Varieties, Grundl. math. Wiss. 302, Springer, 1992.
[19] S. Louboutin, Explicit bounds for residues of Dedekind zeta functions, values of L-functions at s = 1, and relative class numbers, J. Number Th. 85 (2000), 263-282.
[20] S. Louboutin, Brauer-Siegel like results for relative class numbers of CM-fields, Preprint, Institut de Mathématiques de Luminy, 2002.
[21] D. Mumford, Abelian varieties, Oxford University Press, 1970.
[22] M. Narasimhan, M. Nori, Polarizations on an abelian variety, Proc. Indian Acad. Sci. Math. Sci. 90 (1981), 125-128.
[23] J. Neukirch, Algebraic Number Theory, Grundl. math. Wiss. 322, Springer, 1999.
[24] C. Batut, K. Belabas, D. Bernardi, H. Cohen, M. Olivier, User's Guide to PARI-GP, Université Bordeaux I.
[25] B. Pollack, The equation t̄at = b in a quaternion algebra, Duke Math. J. 27 (1960), 261-271.
[26] I. Reiner, Maximal orders, Academic Press, 1975.
[27] V. Rotger, On the group of automorphisms of Shimura curves and applications, Compos. Math. 132 (2002), 229-241.
[28] V. Rotger, Modular Shimura varieties and forgetful maps, submitted to publication.
[29] G. Shimura, On analytic families of polarized abelian varieties and automorphic functions, Ann. Math. 78 (1963), 149-192.
[30] G. Shimura, Construction of class fields and zeta functions of algebraic curves, Ann. Math. 85 (1967), 58-159.
[31] M.-F. Vignéras, Arithmétique des algèbres de quaternions, Lect. Notes Math. 800, Springer, 1980.
[32] L. Washington, Introduction to cyclotomic fields, Grad. Texts Math. 83, Springer, 1982.
[33] A. Weil, Basic Number Theory, Grundl. math. Wiss. 144, Springer, 1967.

Dpt. Àlgebra i Geometria, Universitat de Barcelona, Gran Via 585, E-08007 Barcelona.
E-mail address: [email protected]
Associated Higgs Production in CP-violating supersymmetry: probing the 'open hole' at the Large Hadron Collider

Priyotosh Bandyopadhyay (a), Amitava Datta (b), Aseshkrishna Datta (a), Biswarup Mukhopadhyaya (a)

(a) Regional Centre for Accelerator-based Particle Physics, Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019, India
(b) Department of Physics, Jadavpur University, Kolkata 700032, India

E-mail: [email protected], [email protected], [email protected], [email protected]

2 Nov 2007. arXiv:0710.3016, DOI: 10.1103/PhysRevD.78.015017.

Abstract. A benchmark CP-violating supersymmetric scenario (known in the literature as the 'CPX scenario') is studied in the context of the Large Hadron Collider (LHC). It is shown that the LHC, with low to moderate accumulated luminosity, will be able to probe the existing 'hole' in the m_{h_1}-tan β plane, which cannot be ruled out by the Large Electron Positron Collider data. This can be done through the associated production of Higgs bosons with top quark and top squark pairs, leading to the signal dilepton + ≤ 5 jets (including 3 b-jets) + missing p_T. Efficient discrimination of such a CP-violating supersymmetric scenario from other contending ones is also possible at the LHC with a moderate volume of data.
Introduction
One of the main motivations for suggesting supersymmetry (SUSY) is to remove the fine-tuning problem in the Higgs sector of the standard model. The condition of holomorphicity of the superpotential requires two Higgs doublets in the minimal SUSY extension of the standard model (SM). There the Higgs sector has a larger particle content than that of the SM, and the physical states in this sector comprise two neutral scalars, one pseudoscalar and one charged Higgs boson. Finding the signatures of these scalars is thus inseparably linked with the search for SUSY at the upcoming Large Hadron Collider (LHC).
Prior to the LHC, several Higgs search experiments have yielded negative results. The strongest lower bound on the lightest Higgs mass (m_h) from the Large Electron Positron Collider (LEP) is m_h > 114.4 GeV [1,2]. This limit is valid for a SM-like Higgs as well as for the lightest neutral Higgs boson in the minimal supersymmetric standard model (MSSM) in the decoupling limit, i.e. the limit in which the masses of all other scalars in the Higgs sector become very large. Although smaller values of m_h are allowed away from the decoupling limit, the lower bound on its mass is approximately the Z-mass. However, when the Higgs sector inherits some CP-violating phase through radiative corrections [3,4], the above limit ceases to be valid. Our discussion is centred around such situations.
It is well known by now that the LEP lower bound on the mass of the lightest Higgs boson of the CP-conserving MSSM [2] can be drastically reduced, or may even vanish entirely, if non-zero CP-violating phases are allowed [5]. This can happen through radiative corrections to the Higgs potential, whereby the phases, if any, of the Higgsino mass parameter µ and the trilinear soft SUSY-breaking parameter A enter the picture. As a result of the CP-violating phase, the neutral spinless states are no longer of definite parity, and their couplings to gauge bosons as well as fermions are modified, depending on the magnitude of the phases. There are thus three neutral states h_i (i = 1, 2, 3); the collider search limits for all of them are modified, since the squared amplitudes for production via the WW, ZZ and qq couplings now consist of more than one term. Mutual cancellation among such terms can take place in certain regions of the parameter space, resulting in reduced production rates and a consequent weakening of mass limits at collider experiments.
For example, in the context of a benchmark CP -violating scenario (often called the CPX scenario in the literature [5]), it has been found that m h 1 as low as 50 GeV or even smaller, cannot be ruled out by the final LEP data for low and moderate values of tan β, where h 1 is the lightest neutral Higgs, and tan β is the ratio of the vacuum expectation values of the two Higgs doublets. In other words, a 'hole' is found to exist in the m h 1 -tan β parameter space covered by the LEP searches, the underlying reason being the reduction in the coupling ZZh 1 due to the CP -violating phase(s), as mentioned above. Moreover, complementary channels such as e + e − → h 1 h 2 , suffer from coupling as well as phase-space suppression within this 'hole', thus making it inaccessible to LEP searches. The existence of this hole has been confirmed by the analyses of the LEP data by different experimental groups [2,5,6], although people are not unanimous about the exact span of the hole.
The next natural step is to assess the prospect of closing the hole at Tevatron Run II or the LHC. The existing analysis on this [7], however, focuses on the discovery channels based on the conventional Higgs production and decay mechanisms employed in the context of the SM. It has been noted that although the hadron colliders can probe most of the parameter space of the CPX scenario, and can indeed go beyond some regions of the parameter space scanned by the LEP searches, the lightest Higgs boson within the aforementioned hole may still escape detection. This is because not only the ZZh1 but also the WWh1 and tth1 couplings tend to be very small within this hole. On the other hand, the relatively heavy neutral Higgs bosons h2,3 couple to W, Z and t favourably, but they can decay in non-standard channels, thus requiring a modification of search strategies. The work [8], which has compiled possible signals of the CPX scenario at the LHC, is also restricted to the production of the h_i (i = 1, 2, 3) bosons in SM-like channels. However, it looked into more decay channels of the h_i bosons thus produced. It has been concluded there that parts of the hole in the m_H±-tan β or the m_h1-tan β parameter space can be plugged, although considerable portions of the hole, especially at low tan β, may escape detection at the LHC even after accumulating 300 fb⁻¹ of integrated luminosity.
Thus it is important to look for other production channels for the scalars in the CPX region, especially by making use of the couplings of h1 with the sparticles. It is gratifying to note in this context that the t̃1 t̃1* h1 coupling, where t̃1 is the lighter top squark, indeed leads to such a discovery channel in cases where the t-t-h1, W-W-h1 and Z-Z-h1 couplings are highly suppressed. In fact, it has been noted that in a general CP-violating MSSM the cross section of t̃1 t̃1* h1 production can be dramatically larger than that obtained by switching off the CP-violating phases [9]. Since the trilinear SUSY-breaking parameter A_t is necessarily large in the CPX scenario, t̃1 tends to be relatively light and may be produced at the LHC with a large cross section. As a bonus, both h2 and h3 also couple favourably to the tt pair and can add modestly to the signal, although by themselves they fail to produce a statistically significant signal. In this work we investigate the implications of these couplings at the LHC, concentrating on a specific signal arising from the associated production of the neutral Higgs bosons with a top pair or a pair of lighter stop squarks.
Our task, however, does not end here. While we wish to extract information on the neutral Higgs sector in the CPX scenario, other SUSY processes driven by other particles in the spectrum may yield the same final state. To make sure that one is indeed looking at the Higgs sector, one needs to isolate the Higgs-induced channels, and find event selection criteria to not only reduce the SM backgrounds but also ensure that the canonical SUSY channels do not overwhelm the Higgs signatures. In our analysis, we first introduce suitable criteria which will suppress the SM background compared to the total SUSY contribution in CPX. Next, we suggest additional discriminators for further filtering out the contributions of the lightest Higgs (h 1 ) from other SUSY channels. We finally show that if nature prefers the SM alone with m h ≥ 114.4 GeV, or, alternatively, CP -conserving SUSY, the proposed signal would indeed be much smaller if our selection criteria are imposed.
The paper is organised as follows. In Section 2 we discuss the basic inputs of the CPX scenario, the resulting mass spectrum and other features they lead to. All of our subsequent numerical analysis would be in this framework where we also use the alternative expression CPV-SUSY to mean the CPX-scenario. In section 3 we set out to define the proposed signal, devise the event selection criteria to reduce both SM and residual SUSY backgrounds and fake events, and present the final numerical results. We summarise and conclude in section 4.
The CPX Model: values of various parameters
As indicated in the Introduction, we adopt the so-called CPX scenario, in which the LEP analyses have been performed. It has been observed [3,4] that the CP-violating quantum effects on the Higgs potential are proportional to Im(µA_t)/M²_SUSY, where A_t is the trilinear soft SUSY-breaking parameter occurring in the top squark mass matrix, and M_SUSY is the characteristic SUSY-breaking scale, of the order of the third-generation squark masses. With this in mind, a benchmark scenario known as CPX was proposed [5] and its consequences were studied [10-23]; in some of these works, steps are suggested for closing the aforementioned 'hole' [24,25,26]. In this scenario the effects of CP violation are maximized. The corresponding inputs that we adopt here are compatible with the 'hole' left unprobed by the LEP analyses:
m_t̃ = m_b̃ = m_τ̃ = M_SUSY = 500 GeV,   µ = 4 M_SUSY = 2 TeV
|A_t| = |A_b| = 2 M_SUSY = 1 TeV,   arg(A_t,b) = 90°
|m_g̃| = 1 TeV,   arg(m_g̃) = 90°
M_2 = 2 M_1 = 200 GeV,   tan β = 5-10
where the only departure from reference [7] lies in a small tweaking of the mass ratio of the U(1) and SU(2) gaugino masses M_1 and M_2, aimed at ensuring gaugino mass unification at a high scale. It has been checked that this difference does not affect the Higgs production or decay rates [27]. The presence of a relatively large A_t ensures that one of the top squarks will be relatively light. The value of the top quark mass has been taken to be 175 GeV (footnote 5).
It is to be noted that the first two generations of sfermion masses must be kept sufficiently heavy so that the stringent experimental bounds (for example, on the electric dipole moment of the neutron) are satisfied. Here we have not considered possible ways of bypassing such bounds, and set the masses of the first two sfermion families at 10 TeV. Thus our analysis is based on the mass spectrum shown in Table 1.

Table 1: Physical masses (in GeV) of the neutral Higgs bosons, squarks and lighter gauginos in the CPX scenario with tan β = 5 and m_H± = 130 GeV.

[Table 1 columns: m_h1, m_h2, m_h3, m_t̃1, m_t̃2, m_b̃1, m_b̃2, m_χ01, m_χ02, m_χ± — numerical entries not recovered.]
The specific choice of m_H± is made to obtain the mass of the lightest Higgs boson within the LEP hole in the m_h1-tan β plane. It should be noted that such a choice makes the remaining two neutral Higgs bosons not so heavy either. This kind of situation has a special implication in the CPV-MSSM, namely that all the neutral Higgs bosons can be produced in association with a t̃1 pair. Such production is kinematically suppressed in the CP-conserving case due to the lower bound on m_h. The CPX set of parameters listed above constitutes our benchmark point number 1 (BP1) in the detailed analysis to be undertaken in the next section. At the end of that section we list the final results corresponding to six more benchmark points within the hole unprobed by current data. These points are denoted BP2-BP7.
Signals at the LHC
Since in CPX-SUSY the VVh1 (V = W, Z) and tth1 interactions are suppressed for the lightest neutral scalar h1, we have to think of alternative associated production mechanisms at the LHC. One possibility is the associated production of h1 with a pair of lighter stops. The large value of A_t is encouraging in this respect. In addition, since the CPX point yields a not-so-high value of the lighter stop mass, this production mechanism is kinematically quite viable.
The cross sections for the different supersymmetric associated production processes are computed with CalcHEP [28] (interfaced with the program CPSuperH [29,30]) and listed in Table 2. As one can see, while a substantial production rate is predicted for h1 in association with a pair of t̃1, the corresponding cross sections for h2 and h3 are smaller by two orders of magnitude. This is not only because of phase-space suppression for the latter at the CPX point, but also due to the conspiracy of a number of terms in the effective interaction involved. Table 2 also reveals a complementary feature in Higgs production in association with a pair of top quarks, the underlying reason again being the multitude of terms entering the squared amplitudes and the possibility of their mutual cancellation in the CPX scenario. Thus we can identify, for the given set of input parameters, t̃1 t̃1* h1 and tth2,3 as the production processes that can be potentially useful in closing the hole in the parameter space.
Also indicated in Table 2 is the gluino pair-production cross section in the CPX scenario for m_g̃ = 1 TeV, a CPX input indicated earlier in this section. Later in this section we shall explain how this process could affect our signal.

Table 2: Production cross sections (in fb) at lowest order, computed with CalcHEP interfaced with CPSuperH, for different signal processes at the LHC in the CPX scenario and for the spectrum of Table 1. CTEQ6L parton distribution functions are used and the renormalization/factorization scale is set to √ŝ.

[Table 2 columns: σ(t̃1 t̃1* h1), σ(t̃1 t̃1* h2), σ(t̃1 t̃1* h3), σ(tth1), σ(tth2), σ(tth3), σ(g̃g̃) — numerical entries not recovered.]
The branching fractions of the lighter scalar top and the lightest neutral Higgs boson play a crucial role in selecting the viable modes in which the signal for CPV-SUSY can be looked for. In Table 3 we present the relevant branching fractions, keeping in mind that new final states emerge whenever the branching fraction for a heavier neutral scalar decaying into two lighter ones is of sizable magnitude. In any case, it is interesting to note that not only the lightest Higgs h1 but also h2 and h3 could play significant roles in signals of the Higgs sector in the CPX scenario, given the possibility of all of them being rather light. Before we enter into the discussion of our specifically chosen signal, let us mention that, in this study, CalcHEP (interfaced to the program CPSuperH) has also been used for generating parton-level events for the relevant processes. The standard CalcHEP-PYTHIA interface [31], which uses the SLHA interface [32], was then used to pass the CalcHEP-generated events to PYTHIA [33]. Further, all relevant decay information is generated with CalcHEP and passed to PYTHIA through the same interface. All this is required since there is no public implementation of the CPV-MSSM in PYTHIA. Subsequent decays of the produced particles, hadronization and the collider analyses are done with PYTHIA (version 6.410).
Br(t̃1 → bχ+1) = 0.81    Br(t̃1 → tχ01) = 0.19    Br(h1 → bb) = 0.91
Br(h2 → h1h1) = 0.71    Br(h3 → h1h1) = 0.82    Br(g̃ → t t̃1*) = 0.16
We used the CTEQ6L parton distribution functions (PDF) [34,35]. In CalcHEP we opted for the lowest-order α_s evaluation, which is appropriate for a lowest-order PDF like CTEQ6L. The renormalization/factorization scale in CalcHEP is set at √ŝ. This choice of scale results in a somewhat conservative estimate of the event rates.
As discussed earlier, the processes of primary importance for the present study are pp → t̃1 t̃1* h1 and pp → tth2,3. At the parton level, the lightest Higgs and both top quarks (or top squarks) dominantly decay to b quarks. For our signal, the associated W's (or charginos) produced in the decay of t (or t̃1) are required to decay into leptons with known or calculable branching ratios. These decays lead to a final state with four b-quarks along with other SM particles. In addition, the large branching ratios for h2,3 → h1h1 can make the modest contributions from tth2,3 particularly rich in final-state b's, which, given a finite b-tagging efficiency, provides a combinatoric advantage.
However, although h1 decays dominantly into bb, our simulation reveals that in a fairly large fraction of events not all of the b-quarks lead to sufficiently hard jets with reasonable b-tagging efficiency. This is because of the lightness of h1 in this scenario. To illustrate this, we present in Figure 1 the ordered p_T distributions of the four parton-level b-quarks in the signal from t̃1 t̃1* h1. It is clear from this figure that the b-quark with the lowest p_T in a given event often falls below about 40 GeV, the threshold that would have ensured a moderate tagging efficiency (≥ 50%). This forces us to settle for three tagged b-jets in the final state, and look for 3 tagged b-jets + dilepton + other untagged jets + missing p_T.
Later in this section we will demonstrate that this feature is retained under a realistic situation, i.e. on inclusion of hadronization.
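The acceptance gained by demanding three rather than four b-tags can be estimated with simple binomial counting. The sketch below is an illustration, not part of the original analysis: it assumes, as in the text, a flat 50% per-jet tagging efficiency, treats the tags as independent, and optimistically takes all four b-jets to be above the p_T threshold.

```python
from math import comb

def prob_at_least(k, n, eff):
    """Probability of tagging at least k of n taggable b-jets,
    for independent tags with per-jet efficiency eff."""
    return sum(comb(n, m) * eff**m * (1 - eff)**(n - m) for m in range(k, n + 1))

# Four b-quarks in the final state, 50% per-jet tagging efficiency:
print(prob_at_least(4, 4, 0.5))  # 0.0625  (demand all four tags)
print(prob_at_least(3, 4, 0.5))  # 0.3125  (demand at least three tags)
```

Relaxing to three tags thus buys a factor of five in tagging acceptance even before accounting for the frequently soft fourth b-jet that motivates the choice in the first place.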
We have used PYCELL, the toy calorimeter simulation provided in PYTHIA, with the following criteria:
• the calorimeter coverage is |η| < 4.5, with segmentation ∆η × ∆φ = 0.09 × 0.09, which resembles a generic LHC detector;
• a cone algorithm with ∆R = √(∆η² + ∆φ²) = 0.5 has been used for jet finding;
• no jet should match with a hard lepton in the event.

In addition, the following set of basic (standard) kinematic cuts is incorporated throughout our analysis:
p_T^{j1,j2} ≥ 50 GeV,   p_T^{j3} ≥ 40 GeV,   |η|_{j,ℓ} ≤ 2.5,   ∆R_{ℓj} ≥ 0.4,   ∆R_{ℓℓ} ≥ 0.2
where ∆R_{ℓj} and ∆R_{ℓℓ} measure the lepton-jet and lepton-lepton isolations respectively, with ∆R = √(∆η² + ∆φ²), ∆η being the pseudo-rapidity difference and ∆φ the difference in azimuthal angle for the adjacent leptons and/or jets. Since efficient identification of the leptons is crucial for our study, we required, on top of the above set of cuts, that the hadronic activity within a cone of ∆R = 0.2 around each isolated lepton be minimal, with p_T^jet < 10 GeV in the specified cone. Throughout the analysis we have assumed that a b-jet with p_T^jet > 40 GeV can be tagged with 50% probability. In addition, as we shall see below, some further kinematic cuts are necessary to make the proposed signal stand out.
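The isolation variables above are simple functions of the pseudo-rapidity and azimuthal separations. A minimal sketch of the ∆R computation and the lepton-jet isolation requirement follows; the object records used here are purely illustrative, not the actual PYCELL output format.

```python
from math import pi, sqrt

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R separation; the azimuthal difference is wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + pi) % (2 * pi) - pi
    return sqrt((eta1 - eta2) ** 2 + dphi ** 2)

def lepton_isolated(lep, jets, dr_min=0.4):
    """Lepton-jet isolation cut: every jet must lie at least dr_min away."""
    return all(delta_r(lep["eta"], lep["phi"], j["eta"], j["phi"]) >= dr_min
               for j in jets)

lep = {"eta": 0.5, "phi": 1.0}
jets = [{"eta": 0.5, "phi": 1.2}, {"eta": -1.0, "phi": 2.5}]
print(lepton_isolated(lep, jets))  # False: the first jet is only dR = 0.2 away
```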
Below the contributions to the final state from different scenarios are discussed:
• Contributions coming from the CPV-SUSY scenario, comprising pp → t̃1 t̃1* h1, tth2,3 and pp → g̃g̃, where m_h1 could escape the LEP bound and can be as light as 50 GeV for low to moderate tan β.
• If nature is supersymmetric but conserves CP (CPC-SUSY), contributions could dominantly come from pp → tth and g̃g̃, where the appropriate LEP bound holds for m_h. Obviously, m_h now has to be much larger than in the CPV-SUSY case. For our study, this constitutes a crucial difference between the two scenarios for a given set of gluino and lighter top squark masses.
• If the SM is the only theory relevant for the LHC, then the dominant signal process is from pp → ttH, where H is the SM Higgs boson for which the LEP bound of m H > 114.4 GeV is valid.
• The SM contributions coming from pp → tt, ttZ, ttbb 6 etc., which appear as "common background" for all the above three situations.
Note that in the first three scenarios the contributing processes all involve characteristic masses and/or couplings, either in the production or in the subsequent cascades. Observations made there thus directly carry crucial information on the scenario involved and hence may help discriminate it from the others.
The SM contributions in the last item of the above list are not sensitive, in any relevant way, to the details of any new physics scenario. Thus they appear as universal backgrounds to the chosen signal in all of the other three scenarios. The major sources in this category are (i) tt production with a c-jet from QCD radiation mistagged as the third b-jet (we assume the mistagging probability to be 1/25 [37]), (ii) ttbb production where the semileptonic decays of the t quarks produce the hard, isolated OSD pair and (iii) ttZ production where the Z decays into b-quarks and the leptons come from t-decay. The most effective way to reduce the contribution from ttH in the SM (with m_H = 120 GeV) is found to come from the missing-p_T distribution. In Figure 2 we present the missing-p_T distribution for our proposed signal, arising from the associated production of the lightest Higgs along with a stop squark pair. Since the plots demonstrate that the CPX signal contains more events with missing p_T on the higher side (due to the massive lightest-neutralino pair in the final state), an appropriate missing-p_T cut is clearly useful. Therefore, we have subjected our generated events to the additional requirement missing p_T ≥ 110 GeV.
This is added to the basic cuts listed earlier, yielding an overall efficiency factor denoted here by ε, which contains the effects of all cuts described so far as well as those to be introduced later in the text. The finally relevant numbers for the signal and for any of the faking scenarios are thus given by the quantity σ × ε, σ being the cross section for the aforementioned final state without any cuts.
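Concretely, the expected event count for a given integrated luminosity is just N = σ × ε × L. A trivial sketch, where the σ and ε values are placeholders rather than the paper's numbers:

```python
def expected_events(sigma_fb, efficiency, lumi_fb_inv):
    """Expected event count N = sigma * epsilon * L (sigma in fb, L in fb^-1)."""
    return sigma_fb * efficiency * lumi_fb_inv

# Illustrative only: a 100 fb hard cross section with a combined
# cut efficiency of 1.5%, at an integrated luminosity of 30 fb^-1:
print(expected_events(100.0, 0.015, 30.0))  # 45.0
```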
In case the SM is the only relevant theory for such final states at the LHC, pp → ttH as well as the sources of 'common backgrounds' will contribute to our final state. Here one has to take m_H ≥ 114.4 GeV to be consistent with the experimental observations. The missing-p_T cut of 110 GeV effectively reduces events of both these types, which ensures that this search strategy yields enough signal events above the standard model predictions.
However, the same final state can receive contributions from strong production such as pp → g̃g̃, followed by a cascade like

g̃ → t t̃1* → t t̄ χ01 → b b̄ W+ W− χ01
While these may add to the signal strength, there is always the possibility that fluctuations in the gluino-induced events, owing to the uncertainties of the strong interaction, will tend to submerge the channels of our real interest, namely the associated production of the neutral Higgs bosons. In the same way, contributions from strong processes may also fake the proposed signals in CP-conserving SUSY. The next task, therefore, is to devise acceptance criteria to avoid such fake events. We take the gluino pair-production process as representative of the interfering channels, the contributions from squarks being small in the corresponding parameter region. The first point to note here is that the contributions from strong processes leading to this final state usually have a higher jet multiplicity than in our case. This is evident from Figure 3, where we present the jet-multiplicity distribution at the CPX point. While the contributions from associated Higgs production peak at four jets, the overall peak lies at seven. This immediately suggests jet multiplicity as a useful acceptance criterion, and thus we demand n_j ≤ 5, thereby considerably reducing the contamination from strong processes.
There are other SUSY processes which may tend to obscure the presence of a rather light Higgs boson. For example, similar final states may arise from processes like pp → b̃1 b̃1* h1, where the b̃1's decay into a b-quark and the second-lightest neutralino. The latter, in turn, decays into two leptons and the lightest supersymmetric particle (LSP). The number of such events, however, is negligible due to a highly suppressed b̃1-b̃1-h1 coupling at moderate to low tan β values, i.e. the range of tan β corresponding to the CPX scenario. In case of faking by a CP-conserving SUSY spectrum with high tan β (≃ 40 or so), one has to study independently the bb and τ+τ− channels, for example in vector boson fusion [38,39,40,41], where the values of the parameters can be established as different from those giving rise to the 'hole' in the CPX case.
The strong cascades, however, remain problematic even after imposing the jet multiplicity cut, since the production cross sections are quite large and the multiplicity cut removes only about half of the events. The next suggestion is thus to use those characteristics of the events that reflect the mass (1 TeV) of the gluino in the CPX case. The obvious distributions to look at are those of the transverse momenta of the various jets, for final states arising from associated Higgs production vis-a-vis strong processes. It is natural to expect that jets originating in gluino decays will have harder p_T^jet distributions than those coming from associated Higgs production. This is obvious on comparing the left and right panels of Figure 4, which shows the ordered p_T distributions of jets arising from t̃1 t̃1* h1 and g̃g̃ production in this scenario.
Thus we further impose an upper cut on p_T^jet, viz. p_T^jet ≤ 300 GeV, which 'kills' the more energetic jets from the strong production process. Together with the stipulated upper limit on jet multiplicity, this helps to enhance the share of the associated Higgs production processes in the final state under investigation. Thus the effects of the missing-p_T, multiplicity and maximum-p_T^jet cuts all enter into the quantity ε determining the final rates after all the event selection criteria are applied. We are now in a position to make a comparative estimate of the contributions to dilepton + ≤ 5 jets (including three tagged b-jets) + missing p_T from the various scenarios, and to assess the usefulness of this channel in extracting the signature of a CP-violating SUSY scenario with light neutral scalars. Such an estimate is readily available from Tables 4 and 5. Table 4 contains the contributions to the aforesaid final state from the CPX benchmark point 1 (BP1), CP-conserving SUSY and a standard model Higgs boson, of masses 117 and 120 GeV respectively for the latter two. These are over and above the 'common backgrounds', which are listed in Table 5. In each case, the main contributing processes and the corresponding hard cross-sections are shown. Also displayed are the final event rates once the various cuts are imposed, where the difference made by the upper cut on p_T^jet is clearly brought out.
As far as the choice of parameters in CP-conserving SUSY is concerned, we have used the same values of the gluino and first two generations of squark masses as at the CPX point. It is expected that any departure of the strong-sector masses from those corresponding to the hole in the CPX case will be found out from variables such as the energy profile of jets, if any signal of SUSY is seen at the LHC. Thus other regions of the MSSM parameter space are unlikely to fake the signals of the CP-violating situation. The value of tan β is also kept in the region allowed by the CPX hole, and any departure from this region in a faking MSSM scenario has to show up in the branching ratios for h1 → bb, τ+τ−, using the supplementary data on the vector boson fusion channel. Finally, although some difference from the rates shown in Table 4 for CP-conserving SUSY can in principle occur due to different values of the lighter stop mass, the overall rates are not significantly different, so long as the stop squark decays dominantly into either bχ+1 or tχ01. Thus the choice of the CP-conserving SUSY parameters in Table 4 can be taken as representative. We have checked that even for a smaller choice of the t̃1 mass the corresponding number remains smaller than the CPX contribution.
It is easy to draw one's own conclusion from these two tables about the viability of the suggested search strategy. With the selection criteria proposed in this paper (without the upper cut on jet p_T), the size of the signal (50 events) from the dominant processes in CPV-SUSY for only 30 fb⁻¹ of integrated luminosity easily dwarfs the common SM background (13 events). Moreover, the signal size is much larger than that in the CPC scenario (with comparable squark and gluino masses) or in the SM. Thus, important hints regarding the existence of new physics and its nature will be available at this stage (we assume that the gluino mass and some other important parameters will be determined from complementary experiments). The presence of the lightest Higgs boson and its not-so-heavy mates becomes clear after the upper cut on p_T^jet, since nearly 75% of the new physics events are then induced by them. Clearly, even after imposing the upper cut on p_T^jet, the signals can rise above the SM backgrounds at more than the 5σ level within a moderate integrated luminosity like 30 fb⁻¹. This can be further improved with the accumulation of luminosity. Indeed, it is not too optimistic to expect that important hints will already be available with only 10 fb⁻¹ of integrated luminosity. Before we end this discussion, we show the viability of this signal in other regions of the CPX hole. It has already been noted in the literature that the size and the exact location of the hole in the parameter space depend on the method of calculating the loop corrections [30,42,43]. However, the calculations agree qualitatively and confirm the presence of the hole. To be specific, we have chosen points from the hole as presented in [6].
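The significance quoted above can be checked with the usual Gaussian estimate S/√B; taking the event counts cited in the text (about 50 signal events over 13 common-background events at 30 fb⁻¹), it comfortably exceeds 5σ. The Asimov counting-experiment formula √(2[(S+B) ln(1+S/B) − S]), shown here only as a cross-check and not used in the paper, gives a somewhat lower but still comfortable value.

```python
from math import sqrt, log

def z_gauss(s, b):
    """Simple Gaussian significance estimate S / sqrt(B)."""
    return s / sqrt(b)

def z_asimov(s, b):
    """Median discovery significance for a Poisson counting experiment."""
    return sqrt(2.0 * ((s + b) * log(1.0 + s / b) - s))

# Event counts quoted in the text for 30 fb^-1:
s, b = 50.0, 13.0
print(round(z_gauss(s, b), 1))   # 13.9
print(round(z_asimov(s, b), 1))  # 9.9
```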
In Table 6 we present different sets of values of tan β and m_h1, keeping the other parameters fixed at their CPX values. These correspond to six different regions of the LEP hole and are termed benchmark points 2-7 (BP2-BP7), all within the hole. The analysis for each of these points is an exact parallel of that already presented for the first benchmark point. We have computed the generic sensitivity of the LHC to the 'hole' corresponding to each of these benchmark points, the results being summarised in Table 7. It is clear from this table that we always have enough events (> 15) in our attempt to probe the LEP hole even with an integrated luminosity of 30 fb⁻¹. As luminosity accumulates, a statistically significant signal will be obtainable from any corner of this hole.
Summary and Conclusions
Taking a cue from the frequently discussed possibility of CP violation in the MSSM and its phenomenological consequences at colliders, we explore a popular benchmark scenario (called the CPX scenario) of this broad framework. The study is motivated by recent analyses which reveal that LEP, in its standard Higgs searches, could not probe a region of the parameter space of this scenario with low m_h1 and low to moderate tan β values. We concentrated on this unfilled 'hole' in the parameter space and studied how well the LHC could explore it. We have found that the associated production of the lightest Higgs boson (which may evade the LEP bound and be as light as 50 GeV or smaller) and of its two 'light' mates, along with a pair of top quarks or top squarks, could be extremely useful in reaching out to this region. This is because one can now exploit modes where the couplings and masses involved are very characteristic of the CP-violating SUSY scenario. The particular signal we choose for the study is 3 tagged b-jets + dilepton + untagged jets + missing transverse momentum, the total number of jets being at most 5. It is shown that the entire 'LEP hole' can be probed in detail in this final state with less than 50 fb⁻¹ of LHC data, and that the CP-violating SUSY effects cannot be faked even by a combined effect of the contending scenarios, namely CP-conserving MSSM and/or the standard model.
• p_T,min^jet = 30 GeV, and jets are ordered in p_T;
• leptons (ℓ = e, µ) are selected with p_T ≥ 30 GeV and |η| ≤ 2.5.

Figure 1: Ordered p_T distributions for all four parton-level b-quarks arising from the decays of t̃1, t̃1* and h1 in t̃1 t̃1* h1 production.
Figure 2: Missing-p_T distribution, with arbitrary normalisation, for the CPV-SUSY t̃1 t̃1* h1 signal and the SM ttH background.
Figure 3: Final-state jet multiplicity distributions (with arbitrary normalisation) arising from t̃1 t̃1* h1 (green) and g̃g̃ (red) in the CPV-SUSY scenario.
Figure 4: Ordered p_T^jet distributions in the CPV-SUSY scenario: t̃1 t̃1* h1 (left) and g̃g̃ (right).
Table 3: Branching fractions for the lighter top squark, the neutral Higgs bosons and the gluino in the CPX scenario.
Table 4: Event rates for the CPX point, CP-conserving SUSY and the standard model, with the same mass spectrum as CPX except for m_h,H = 117, 120 GeV in the latter two cases respectively.
Models               Process        Hard cross-section (fb)   σ × ε (fb), without (with) upper p_T^jet cut   Events at L = 30 fb⁻¹
Common background    pp → tt        3.7 × 10⁵                 0.1 (0.1)                                      3 (3)
                     pp → ttZ       370                       0.03 (0.03)                                    1 (1)
                     pp → ttbb      831                       0.3 (0.3)                                      9 (9)

Table 5: Event rates for the 'common background', with and without the upper cut on p_T^jet.
Table 6: Benchmark points within the LEP hole in the m_h1-tan β plane.
Table 7: Final numbers of signal events for 30 fb⁻¹ integrated luminosity at the various benchmark points in the LEP hole.
5. The frequent shift in the central value of m_t coming from Tevatron measurements causes the size of the hole to change, although its location remains the same. However, there is little point in worrying about this uncertainty, since the very quantum corrections which are at the root of all CP-violating effects in the Higgs sector are prone to similar, if not greater, theoretical uncertainties.
6. We thank Manas Maity for estimating this background using the calculation reported in [36].
Acknowledgments: We thank Siba Prasad Das for help in the initial stages of the simulation and Manas Maity for providing some important information on the calculation of the backgrounds. We also thank Subhaditya Bhattacharya, Sudhir K. Gupta, Sujoy Poddar, Alexander Pukhov and Gaurab Sarangi for helpful discussions and suggestions on the code. AD thanks Apostolos Pilaftsis for a useful private communication. PB, AKD and BM thank the Theoretical Physics Group of the Indian Association for the Cultivation of Science, Kolkata, India for hospitality while the project was in progress. AD acknowledges the hospitality of the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute, during the latter part of the project. Computational work for this study was partially carried out in the cluster computing facility at Harish-Chandra Research Institute (HRI) (http://cluster.mri.ernet.in). This work is partially supported by RECAPP, Harish-Chandra Research Institute, and funded by the Department of Atomic Energy, Government of India, under the XIth Five-Year Plan. AD's work was supported by DST, India, project no. SR/S2/HEP-18/2003.
R. Barate et al. [LEP Working Group for Higgs boson searches], Phys. Lett. B 565, 61 (2003) [arXiv:hep-ex/0306033];
S. Schael et al. [ALEPH Collaboration], Eur. Phys. J. C 47, 547 (2006) [arXiv:hep-ex/0602042].
A. Pilaftsis, Phys. Rev. D 58, 096010 (1998) [arXiv:hep-ph/9803297].
A. Pilaftsis, Phys. Lett. B 435, 88 (1998) [arXiv:hep-ph/9805373].
M. S. Carena, J. R. Ellis, A. Pilaftsis and C. E. M. Wagner, Phys. Lett. B 495, 155 (2000) [arXiv:hep-ph/0009212].
P. Bechtle [LEP Collaboration], PoS HEP2005, 325 (2006) [arXiv:hep-ex/0602046].
M. S. Carena, J. R. Ellis, S. Mrenna, A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B 659, 145 (2003) [arXiv:hep-ph/0211467].
See, for example, E. Accomando et al., [arXiv:hep-ph/0608079], p. 109.
Z. Li, C. S. Li and Q. Li, Phys. Rev. D 73, 077701 (2006) [arXiv:hep-ph/0601148].
D. A. Demir, Phys. Rev. D 60, 055006 (1999) [arXiv:hep-ph/9901389].
A. Pilaftsis and C. E. M. Wagner, Nucl. Phys. B 553, 3 (1999) [arXiv:hep-ph/9902371].
A. Dedes and S. Moretti, Phys. Rev. Lett. 84, 22 (2000) [arXiv:hep-ph/9908516].
A. Dedes and S. Moretti, Nucl. Phys. B 576, 29 (2000) [arXiv:hep-ph/9909418].
S. Y. Choi and J. S. Lee, Phys. Rev. D 61, 115002 (2000) [arXiv:hep-ph/9910557].
S. Y. Choi, M. Drees and J. S. Lee, Phys. Lett. B 481, 57 (2000) [arXiv:hep-ph/0002287].
G. L. Kane and L. T. Wang, Phys. Lett. B 488, 383 (2000) [arXiv:hep-ph/0003198].
S. Y. Choi, K. Hagiwara and J. S. Lee, Phys. Rev. D 64, 032004 (2001) [arXiv:hep-ph/0103294].
S. Heinemeyer, Eur. Phys. J. C 22, 521 (2001) [arXiv:hep-ph/0108059].
S. Y. Choi, K. Hagiwara and J. S. Lee, Phys. Lett. B 529, 212 (2002) [arXiv:hep-ph/0110138].
A. Arhrib, D. K. Ghosh and O. C. W. Kong, Phys. Lett. B 537, 217 (2002) [arXiv:hep-ph/0112039].
T. Ibrahim and P. Nath, Phys. Rev. D 66, 015005 (2002) [arXiv:hep-ph/0204092].
S. Y. Choi, M. Drees, J. S. Lee and J. Song, Eur. Phys. J. C 25, 307 (2002) [arXiv:hep-ph/0204200].
S. W. Ham, S. K. Oh, E. J. Yoo, C. M. Kim and D. Son, Phys. Rev. D 68, 055003 (2003) [arXiv:hep-ph/0205244].
A. G. Akeroyd, Phys. Rev. D 68, 077701 (2003) [arXiv:hep-ph/0306045].
D. K. Ghosh, R. M. Godbole and D. P. Roy, Phys. Lett. B 628, 131 (2005) [arXiv:hep-ph/0412193].
D. K. Ghosh and S. Moretti, Eur. Phys. J. C 42, 341 (2005) [arXiv:hep-ph/0412365].
A. Pilaftsis, private communication.
A. Pukhov, "CalcHEP 3.2: MSSM, structure functions, event generation, batchs, and generation of matrix elements for other packages", [arXiv:hep-ph/0412191].
J. R. Ellis, J. S. Lee and A. Pilaftsis, Mod. Phys. Lett. A 21, 1405 (2006) [arXiv:hep-ph/0605288].
J. S. Lee, A. Pilaftsis, M. S. Carena, S. Y. Choi, M. Drees, J. R. Ellis and C. E. M. Wagner, Comput. Phys. Commun. 156, 283 (2004) [arXiv:hep-ph/0307377].
P. Skands et al., JHEP 0407, 036 (2004) [arXiv:hep-ph/0311123];
T. Sjostrand, L. Lonnblad and S. Mrenna, [arXiv:hep-ph/0108264].
H. L. Lai et al. [CTEQ Collaboration], Eur. Phys. J. C 12, 375 (2000) [arXiv:hep-ph/9903282].
J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. Nadolsky and W. K. Tung, JHEP 0207, 012 (2002) [arXiv:hep-ph/0201195].
S. P. Das, A. Datta, M. Guchait, M. Maity and S. Mukherjee, arXiv:0708.2048 [hep-ph].
See, for example, H. Baer, V. Barger, G. Shaughnessy, H. Summy and L. t. Wang, Phys. Rev. D 75, 095010 (2007) [arXiv:hep-ph/0703289].
D. L. Rainwater and D. Zeppenfeld, JHEP 9712, 005 (1997) [arXiv:hep-ph/9712271].
D. L. Rainwater, D. Zeppenfeld and K. Hagiwara, Phys. Rev. D 59, 014037 (1999) [arXiv:hep-ph/9808468].
T. Plehn, D. L. Rainwater and D. Zeppenfeld, Phys. Lett. B 454, 297 (1999) [arXiv:hep-ph/9902434].
V. Hankele, G. Klamke, D. Zeppenfeld and T. Figy, Phys. Rev. D 74, 095001 (2006) [arXiv:hep-ph/0609075].
M. Frank, T. Hahn, S. Heinemeyer, W. Hollik, H. Rzehak and G. Weiglein, JHEP 0702, 047 (2007) [arXiv:hep-ph/0611326].
T. Hahn, S. Heinemeyer, W. Hollik, H. Rzehak, G. Weiglein and K. Williams, [arXiv:hep-ph/0611373].
| [] |
[
"PIXEL: Physics-Informed Cell Representations for Fast and Accurate PDE Solvers",
"PIXEL: Physics-Informed Cell Representations for Fast and Accurate PDE Solvers"
] | [
"Namgyu Kang \nDepartment of Artificial Intelligence\n\n",
"Byeonghyeon Lee \nDepartment of Artificial Intelligence\n\n",
"Youngjoon Hong \nDepartment of Mathematics\n\n",
"Seok-Bae Yun \nDepartment of Mathematics\n\n",
"Eunbyung Park \nDepartment of Artificial Intelligence\n\n\nDepartment of Electrical and Computer Engineering\nSungkyunkwan University\n\n"
] | [
"Department of Artificial Intelligence\n",
"Department of Artificial Intelligence\n",
"Department of Mathematics\n",
"Department of Mathematics\n",
"Department of Artificial Intelligence\n",
"Department of Electrical and Computer Engineering\nSungkyunkwan University\n"
] | [] | With the increases in computational power and advances in machine learning, data-driven learning-based methods have gained significant attention in solving PDEs. Physicsinformed neural networks (PINNs) have recently emerged and succeeded in various forward and inverse PDE problems thanks to their excellent properties, such as flexibility, mesh-free solutions, and unsupervised training. However, their slower convergence speed and relatively inaccurate solutions often limit their broader applicability in many science and engineering domains. This paper proposes a new kind of data-driven PDEs solver, physics-informed cell representations (PIXEL), elegantly combining classical numerical methods and learning-based approaches. We adopt a grid structure from the numerical methods to improve accuracy and convergence speed and overcome the spectral bias presented in PINNs. Moreover, the proposed method enjoys the same benefits in PINNs, e.g., using the same optimization frameworks to solve both forward and inverse PDE problems and readily enforcing PDE constraints with modern automatic differentiation techniques. We provide experimental results on various challenging PDEs that the original PINNs have struggled with and show that PIXEL achieves fast convergence speed and high accuracy. Project page: https://namgyukang.github.io/PIXEL/ | 10.48550/arxiv.2207.12800 | [
"https://export.arxiv.org/pdf/2207.12800v2.pdf"
] | 251,066,808 | 2207.12800 | 162ce23964f2aef2378297da1b20e08a8d77d000 |
Introduction
Partial differential equations (PDEs) have been central to studying various science and engineering domains (Evans 2010). Numerical methods (Smith 1985;Eymard, Gallouët, and Herbin 2000) have been developed to approximate the solutions over the centuries. While successful, it requires significant computational resources and domain expertise. As an alternative approach, data-driven machine learning methods have emerged thanks to recent advances in deep learning (Rudy et al. 2017;Meng et al. 2020).
Physics-informed neural networks (PINNs) have recently received significant attention as new data-driven PDE solvers for both forward and inverse problems (Raissi, Perdikaris, and Karniadakis 2019). PINNs employ neural networks and gradient-based optimization algorithms to represent and obtain the solutions, leveraging automatic differ-entiation (Baydin et al. 2018) to enforce the physical constraints of underlying PDEs. Although promising and successfully utilized in various forward and inverse problems thanks to its numerous benefits, such as flexibility in handling a wide range of forward and inverse problems and mesh-free solutions, PINNs suffer from slow convergence rates, and they often fall short of the desired accuracy (Krishnapriyan et al. 2021;Wang, Sankaran, and Perdikaris 2022;Wang, Teng, and Perdikaris 2021).
Training PINNs generally involves deep neural networks and iterative optimization algorithms, e.g., L-BFGS (Liu and Nocedal 1989) or Adam (Kingma and Ba 2014), which typically requires a large number of iterations to converge. While many techniques have been developed over the past decades to improve the training efficiency of deep neural networks in general (Girshick 2015;Ioffe and Szegedy 2015), high computational complexity is still a primary concern for their broader applicability.
In addition, multi-layer perceptron (MLP) architecture in low dimensional input domains, where PINNs generally operate, is known to have spectral bias, which prioritizes learning low-frequency components of the target function. Recent studies have shown that spectral bias (Rahaman et al. 2019) indeed exists in PINN models (Moseley, Markham, and Nissen-Meyer 2021;Wang, Wang, and Perdikaris 2021b) and this tendency towards smooth function approximation often leads to failure to accurately capture high-frequency components or singular behaviors in solution functions.
In this paper, we propose physics-informed cell representations (coined as PIXEL), a grid representation that is jointly trained with neural networks to improve convergence rates and accuracy. Inspired by classical numerical solvers that use grid points, we divide solution space into many subspaces and allocate trainable parameters for each cell (or grid point). Each cell is a representation that is further processed by following small neural networks to approximate solution functions. The key motivation behind the proposed method is to disentangle the trainable parameters with respect to the input coordinates. In neural network-only approaches, such as PINNs, all network parameters are affected by the entire input domain space. Therefore, parameter updates for specific input coordinates influence the outputs of other input subspaces. On the other hand, each input coordinate has dedicated trainable parameters updated only for certain input coordinates in PIXEL. This parameter separation technique has been explored in visual computing domains (Fridovich-Keil et al. 2022;Martel et al. 2021;Reiser et al. 2021;Sun, Sun, and Chen 2022;Müller et al. 2022;Chen et al. 2022) and has shown remarkable success in terms of convergence speed of the training procedure.
Furthermore, the suggested PIXEL is immune to spectral bias presented in PINNs. A root cause of the bias is the shared parameters of neural networks for the entire input space. In order to satisfy PDE constraints in all input coordinates, neural networks in PINNs primarily find global principle components of solution functions, usually low-frequency modes. In contrast, PIXEL, each cell is only responsible for a small sub-region of the input domain. Therefore, a large difference between neighboring cell values can better approximate high-frequency components or singular behaviors in PDEs.
Even though we introduce discretized grid representations given the fixed resolution similar to classical numerical methods, such as FEM (Zienkiewicz, Taylor, and Zhu 2005), our approach still enjoys the benefits of PINNs. For example, we can use the same optimization frameworks to solve both forward and inverse PDE problems. Furthermore, PIXEL uses an interpolation scheme to implement virtually infinite resolution grids, and the resulting representations are differentiable with respect to input coordinates. It allows us to enforce PDE constraints using recent autograd software libraries (Maclaurin, Duvenaud, and Adams 2015;Paszke et al. 2017). As a result, our proposed method can be easily plugged into the existing PINNs training pipeline.
In sum, we introduce a new type of PDE solver by combining the best of both worlds, classical numerical methods and automatic differentiation-based neural networks. We use grid representations to improve convergence speed and overcome spectral bias in PINNs. A differentiable interpolation scheme allows us to exploit recent advances in automatic differentiation to enforce PDE constraints. We have tested the proposed method on various PDEs. Experimental results have shown that our approach achieved faster convergence rates and better accuracy.
Related Work
Physics-informed neural network. PINN (Raissi, Perdikaris, and Karniadakis 2019) is a representative approach that employs a neural network to solve PDEs and operates with few or even without data (Yuan et al. 2022). The main characteristic of PINN is to learn to minimize the PDE residual loss by enforcing physical constraints. In order to compute PDE loss, output fields are automatically differentiated with respect to input coordinates. PINNs are now applied to various disciplines including material science (Lu et al. 2020), and biophysics (Fathi et al. 2020). Although its wide range of applicability is very promising, it shows slower convergence rates and is vulnerable to the highly nonlinear PDEs (Krishnapriyan et al. 2021;Wang, Sankaran, and Perdikaris 2022).
Grid representations. The combination of neural networks and grid representations has been explored in other domains. This structure can achieve performance competitive with a sole MLP architecture while using a much shallower MLP. Since a shallower MLP grants shorter training times, novel view synthesis (Fridovich-Keil et al. 2022; Sun, Sun, and Chen 2022; Chen et al. 2022), shape representation (Park et al. 2019; Chibane, Alldieck, and Pons-Moll 2020), and image representation (Sitzmann et al. 2020; Chen, Liu, and Wang 2021; Müller et al. 2022) in the computer vision literature all benefit from such a combined structure. To the best of our knowledge, PIXEL is the first attempt to simultaneously learn grid representations and MLPs to solve challenging linear and nonlinear PDEs.
Operator learning. Learning mappings from input functions to solution functions has recently gained significant attention in the PDE domain. With the huge success of convolutional neural networks, finite-dimensional operator learning using convolution layers has been studied (Guo, Li, and Iorio 2016; Zhu and Zabaras 2018). To overcome its limitations, e.g., fixed resolution, (Lu et al. 2021; Li et al. 2020, 2021a) proposed to obtain PDE solutions in a resolution-invariant manner. Physics-informed operator learning has also been suggested to enforce PDE constraints and further improve the accuracy of solutions (Wang, Wang, and Perdikaris 2021a; Li et al. 2021b). While promising, it requires training datasets primarily obtained from expensive numerical solvers and often suffers from poor out-of-distribution generalization.
PIXEL
Physics-informed neural networks
We briefly review physics-informed neural networks (PINNs). Let us begin with initial-boundary value problems for PDEs. A general formulation can be written as follows:

N_{x,t}[u](x, t) = f(x, t),   x ∈ Ω, t ∈ [0, T],   (1)
u(x, 0) = g(x),   x ∈ Ω,   (2)
B_{x,t}[u](x, t) = h(x, t),   x ∈ ∂Ω, t ∈ [0, T],   (3)
where N_{x,t}[·] is a linear or nonlinear differential operator, B_{x,t}[·] is a differential operator for the boundary conditions, and the initial condition is u(x, 0) = g(x). u(x, t) represents the unknown solution function, and PINNs use a neural network u_θ(x, t), parameterized by trainable model parameters θ, to approximate the solution. The network is then trained by minimizing the following loss function:
L(θ) = λ_res L_res(θ) + λ_ic L_ic(θ) + λ_bc L_bc(θ) + λ_data L_data(θ),   (4)
where L_res, L_ic, L_bc, and L_data are the PDE residual, initial condition, boundary condition, and observational data loss functions, respectively, and the λ's are weighting factors for each loss term. Each loss term is usually defined as a mean squared error over sampled points. For example, the PDE residual loss L_res over N_res collocation points can be written as

L_res(θ) = (1/N_res) Σ_{i=1}^{N_res} |N_{x,t}[u_θ](x_i, t_i) − f(x_i, t_i)|².   (5)

For forward problems, observational data is generally not available, hence λ_data = 0. In contrast, observational data is accessible in inverse problems, and initial and boundary conditions may also be available depending on the case (Raissi 2018; Raissi, Perdikaris, and Karniadakis 2019). Gradient-based optimization algorithms are used to minimize the loss function; L-BFGS and Adam are widely used in the PINNs literature. Automatic differentiation is used to compute both the gradients of the differential operators w.r.t. the input coordinates and the gradient of the loss function w.r.t. the trainable neural network parameters.
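As a minimal illustration of how such a residual loss is assembled with automatic differentiation, the PyTorch sketch below uses a toy first-order operator N[u] = u_t + u_x chosen only for illustration; `u_net` stands for any differentiable model, and none of this is the authors' implementation:

```python
import torch

def residual_loss(u_net, x, t, f):
    """Mean-squared PDE residual |N[u_theta](x_i, t_i) - f(x_i, t_i)|^2
    for the toy operator N[u] = u_t + u_x (illustrative only)."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_net(x, t)
    ones = torch.ones_like(u)
    # autograd gives exact derivatives w.r.t. the input coordinates
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    return ((u_t + u_x - f(x, t)) ** 2).mean()

# sanity check: u(x, t) = x*t gives u_t + u_x = x + t, so f = x + t has zero residual
x = torch.rand(8)
t = torch.rand(8)
loss = residual_loss(lambda x, t: x * t, x, t, lambda x, t: x + t)
```

The same pattern extends to higher-order operators by calling `torch.autograd.grad` repeatedly with `create_graph=True`.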
Neural networks and grid representations
The proposed architecture consists of a small neural network and a feature extractor of input coordinates using grid representations. We approximate solution functions by a neural network f parameterized by θ,
u(x, t) ≈ f(φ(x̄, t̄, C); θ),   (6)
where C is a grid representation and φ is a feature extractor that maps input coordinates and the grid representation to a feature vector via an interpolation scheme, explained in the next section. Note that both C and θ are model parameters updated during training. The dimension of C is determined by the dimension of the spatial input domain. For example, if x ∈ Ω ⊂ R then C ∈ R^{c×H×W} is a three-dimensional tensor, where c is the channel size, and H and W are the spatial and temporal grid sizes, respectively. x̄ ∈ [1, H] and t̄ ∈ [1, W] are normalized input coordinates, assuming the input domain Ω ⊂ R and [0, T] are tightly bounded by a rectangular grid. If x ∈ Ω ⊂ R² then C is a four-dimensional tensor, and if x ∈ Ω ⊂ R³ then C is a five-dimensional tensor. The proposed grid representation is similar in spirit to classical numerical methods such as FDM (Smith 1985) or FEM (Zienkiewicz, Taylor, and Zhu 2005), which can increase the accuracy of solutions by extending the grid size or using more fine-grained nodal points. PIXEL inherits this advantage: we can obtain the desired accuracy and better capture high-frequency details of solution functions with larger grid representations. Furthermore, we learn representations at each nodal point instead of directly approximating solution fields. A neural network further processes them to obtain the final solutions, which enables us to express a more complex and richer family of solution functions.
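As a rough end-to-end sketch of Eq. (6), the following PyTorch module pairs a trainable grid with a small MLP head. The built-in bilinear `grid_sample` stands in for the cosine-kernel sampler described in the next section, and all sizes (c, H, W, hidden width) are illustrative assumptions rather than the paper's settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelSketch(nn.Module):
    """Trainable grid C (shape c x H x W) followed by a small MLP head."""
    def __init__(self, c=16, H=32, W=32, hidden=32):
        super().__init__()
        self.C = nn.Parameter(0.1 * torch.randn(1, c, H, W))  # grid representation
        self.mlp = nn.Sequential(nn.Linear(c, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x, t):
        # x, t: 1-D tensors of coordinates normalized to [-1, 1];
        # grid_sample's last dim is in (width, height) order, here (t, x)
        grid = torch.stack([t, x], dim=-1).view(1, 1, -1, 2)
        feat = F.grid_sample(self.C, grid, mode='bilinear', align_corners=True)
        feat = feat.view(self.C.shape[1], -1).t()              # (N, c)
        return self.mlp(feat)                                  # (N, 1)

model = PixelSketch()
u = model(torch.linspace(-1, 1, 5), torch.zeros(5))
```

Gradients flow through the sampler into both the grid parameters and the MLP, so the whole model can be trained with the usual PINN losses.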
Mesh-agnostic representations through interpolation
In the two-dimensional grid case, x ∈ Ω ⊂ R and C ∈ R^{c×H×W}, the feature extractor from the cell representations is

φ(x̄, t̄, C) = Σ_{i=1}^{H} Σ_{j=1}^{W} C_{ij} k(max(0, 1 − |x̄ − i|)) k(max(0, 1 − |t̄ − j|)),   (7)
where C_{ij} ∈ R^c denotes the cell representation at (i, j), and k : [0, 1] → [0, 1] is a monotonically increasing smooth function. Given normalized coordinates (x̄, t̄), the extractor looks up the neighboring points (2^{d+1} points for a d-dimensional spatial domain) and computes the weighted sum of their representations according to a predefined kernel function. It is differentiable w.r.t. the input coordinates, so we can easily compute partial derivatives for the PDE differential operator N[·] using automatic differentiation. In the context of neural networks, this was used in a differentiable image sampling layer (Jaderberg et al. 2015), and the technique has been extensively explored in various domains (Dai et al. 2017; He et al. 2017; Pedersoli et al. 2017). Although not presented, the formulation can be easily extended to higher-dimensional cases.
To support higher-order gradients, we need to use a kernel function multiple times differentiable depending on governing PDEs. We use a cosine interpolation kernel, k(x) := 1 2 (1 − cos(πx)) because it is everywhere continuous and infinitely differentiable. We empirically found that it is superior to other choices, e.g., RBF kernel, in terms of computational complexity and solution accuracy. A bilinear interpolation by k(x) := x is still a valid option if PDE only contains the first order derivatives and the goal is to maximize computational efficiency.
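For concreteness, Eq. (7) with the cosine kernel reduces to a weighted sum over the four corner cells around a query point. A plain single-point NumPy sketch (the actual implementation is a batched CUDA sampler) might look like:

```python
import numpy as np

def k(s):
    """Cosine interpolation kernel k(s) = (1 - cos(pi s)) / 2: k(0) = 0, k(1) = 1,
    and infinitely differentiable, so higher-order PDE terms are supported."""
    return 0.5 * (1.0 - np.cos(np.pi * s))

def phi(C, xb, tb):
    """Eq. (7) for a (c, H, W) grid at normalized coordinates
    xb in [1, H], tb in [1, W]; only the 4 corner cells contribute."""
    c, H, W = C.shape
    i0, j0 = int(np.floor(xb)), int(np.floor(tb))
    i1, j1 = min(i0 + 1, H), min(j0 + 1, W)
    wx, wt = k(xb - i0), k(tb - j0)          # weights toward the (i1, j1) side
    cell = lambda i, j: C[:, i - 1, j - 1]   # grid indices are 1-based in Eq. (7)
    return ((1 - wx) * (1 - wt) * cell(i0, j0) + wx * (1 - wt) * cell(i1, j0)
            + (1 - wx) * wt * cell(i0, j1) + wx * wt * cell(i1, j1))
```

Because the cosine kernel satisfies k(1 − s) = 1 − k(s), the four corner weights of Eq. (7) can be written as the complementary pairs used above.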
PINNs have been widely adopted for various PDEs thanks to their mesh-free solutions, which enable us to deal with arbitrary input domains. Even though we introduce grid representations, our proposed method is likewise not limited to rectangular input domains. We use a differentiable interpolation scheme, so we can extract cell representations at any query point. In other words, we connect the discretized grid representations in a differentiable manner, and the grid resolution becomes virtually infinite. The predetermined cell locations might meaningfully affect the solution accuracy (Mekchay and Nochetto 2005; Prato Torres, Domínguez, and Díaz 2019), and a more flexible mesh construction would be ideal. We believe this is an exciting research direction and leave it as future work.

Figure 2: Multigrid representations
Multigrid representations
Since we introduce a grid representation in which each grid point covers only a small subregion of the input domain, higher grid resolutions promise faster convergence and more accurate solutions. However, a significant challenge in scaling up the method is that the number of training points required will likely increase exponentially. Given randomly sampled collocation points, the more fine-grained the grid, the less chance a grid cell has to see any points during training. This results in highly overfitted solutions, since finding a satisfactory solution in a small local region with only a few data points easily falls into local minima, e.g., the PDE residual loss is very low but the solution is wrong. Furthermore, since we no longer rely on the neural network's smooth prior, the proposed grid representations suffer from another spectral bias, which tends to learn high-frequency components quickly. For these reasons, the solution functions obtained from high-resolution grids often look like pepper-noise images even while quickly minimizing the PDE loss.
One way to inject a smooth prior and avoid overfitting is to look up more neighboring cells for the interpolation, such as cubic interpolation or other variants, instead of the suggested scheme that only looks at the corner cells of squares or hypercubes. More neighboring grid cells would then be used to compute the representation at each collocation point, which would reduce the overfitting issues discussed above. However, this is computationally expensive, and the number of neighboring cells to look up grows rapidly with the grid size and dimension.
Inspired by recent hierarchical grid representations (Takikawa et al. 2021; Müller et al. 2022), we suggest using multigrid representations. We stack up multiple coarse-grained grid representations, and the representation at each collocation point is computed by summing the representations from all grids. With a slight abuse of notation, a multigrid representation in two dimensions is defined as a four-dimensional tensor C ∈ R^{M×c×H×W}. Then we can reformulate the interpolation function as

φ_multi(x̄, t̄, C) = Σ_{i=1}^{M} φ(x̄ + (i−1)/M, t̄ + (i−1)/M, C_i),   (8)
where C_i ∈ R^{c×H×W} denotes the i-th grid representation in Equation (8). We present a pictorial demonstration of multigrid representations in Figure 2. We have M grids, and each grid is shifted so that an input coordinate lands at a different offset within each grid. In this way, we increase the effective grid size by a factor of M, which also increases the model's expressive power. Without shifting, an input coordinate would lie at identical locations in every grid, and each grid would represent similar values, adding no expressive power. The suggested multigrid approach was critical to overall PIXEL performance; without it, we observed that PIXEL suffers from serious overfitting issues.
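Eq. (8) then reduces to summing shifted single-grid lookups. A self-contained NumPy sketch follows; the clamping at the grid border is an implementation assumption for handling the shifted coordinates, not a detail taken from the paper:

```python
import numpy as np

def k(s):
    # cosine interpolation kernel, as in Eq. (7)
    return 0.5 * (1.0 - np.cos(np.pi * s))

def phi(C, xb, tb):
    # single-grid lookup on (c, H, W); coordinates clamped to [1, H] x [1, W]
    c, H, W = C.shape
    xb, tb = min(max(xb, 1.0), float(H)), min(max(tb, 1.0), float(W))
    i0, j0 = int(np.floor(xb)), int(np.floor(tb))
    i1, j1 = min(i0 + 1, H), min(j0 + 1, W)
    wx, wt = k(xb - i0), k(tb - j0)
    cell = lambda i, j: C[:, i - 1, j - 1]
    return ((1 - wx) * (1 - wt) * cell(i0, j0) + wx * (1 - wt) * cell(i1, j0)
            + (1 - wx) * wt * cell(i0, j1) + wx * wt * cell(i1, j1))

def phi_multi(Cs, xb, tb):
    """Eq. (8): Cs has shape (M, c, H, W); grid i is shifted by (i-1)/M cells,
    so an input coordinate lands at a different offset within every grid."""
    M = Cs.shape[0]
    return sum(phi(Cs[i], xb + i / M, tb + i / M) for i in range(M))
```

Since the corner weights of each single-grid lookup sum to one, M constant grids simply contribute M times their value, which makes the summation easy to sanity-check.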
Experiments
This section compares PIXEL with the baseline PINN model on various linear and nonlinear PDEs. First, we provide a motivating example of what a baseline PINN struggles with when the solution functions contain high-frequency components. Then we experiment on various linear PDEs, such as the 1D convection, reaction-diffusion, and Helmholtz equations. We also test PIXEL on the Allen-Cahn and Burgers equations to assess our model's capability to solve nonlinear PDEs. For all experiments, we use the limited-memory BFGS (L-BFGS) second-order optimization algorithm. To measure the accuracy of the approximated solutions, we use the relative L2 error, defined as ‖û − u‖₂ / ‖u‖₂, where û is a predicted solution and u is a reference solution. Experimental details are provided in the supplementary materials.
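The error metric can be computed as a direct NumPy transcription of the definition above:

```python
import numpy as np

def relative_l2(u_pred, u_ref):
    """Relative L2 error ||u_pred - u_ref||_2 / ||u_ref||_2 over flattened fields."""
    u_pred, u_ref = np.ravel(u_pred), np.ravel(u_ref)
    return np.linalg.norm(u_pred - u_ref) / np.linalg.norm(u_ref)
```

Flattening first makes the metric independent of whether the solution is stored as a 1-D trace or a 2-D space-time field.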
Implementation
We implemented 2D and 3D customized CUDA kernels of the triple-backward grid sampler that support the cosine, linear, and smoothstep kernels (Müller et al. 2022), as well as the third-order gradients u_xxc, u_yyc together with the second-order gradients (Wang, Bleja, and Agapito 2022). As a result, the runtime and the memory requirement were significantly reduced. Our customized CUDA kernel code is available at https://github.com/NamGyuKang/CosineSampler.
An illustrative example
We begin with a motivating example on which PINNs have struggled to find an accurate solution, suggested in (Moseley, Markham, and Nissen-Meyer 2021). It is a first-order linear PDE with high-frequency signals in the exact solution:

∂u/∂x₁ + ∂u/∂x₂ = cos(ωx₁) + cos(ωx₂),   (x₁, x₂) ∈ [−2π, 2π]², u ∈ R.

Here ω controls the frequency of the solution, and to test the capability of PIXEL to capture complex high-frequency solutions we set ω = 15. As the results show, our method converges very rapidly. Indeed, we could find an accurate solution in a few iterations, and we already see a clear solution image after two L-BFGS iterations (Figure 3). On the other hand, PINN could not find a satisfactory solution after many iterations, even though we tested numerous hyperparameters. We believe this validates that the proposed method converges very fast and does not suffer from the spectral bias that neural networks commonly encounter.

Table 1: PDEs used in our experiments, with initial conditions, boundary conditions, and the coefficients estimated in the inverse problems.

Convection: u_t + βu_x = 0, x ∈ [0, 2π], t ∈ [0, T]; IC u(x, 0) = sin x; BC u(0, t) = u(2π, t); inverse coefficient β.
Reaction-diffusion: u_t − νu_xx − ρu(1 − u) = 0, x ∈ [0, 2π], t ∈ [0, T]; IC u(x, 0) = h(x); BC u(0, t) = u(2π, t); inverse coefficient ν.
Helmholtz (3D): Δu(x, y, z) + k²u(x, y, z) = q(x, y, z), with q = k² sin(a₁πx) sin(a₂πy) sin(a₃πz) − (a₁π)² sin(a₁πx) sin(a₂πy) sin(a₃πz) − (a₂π)² sin(a₁πx) sin(a₂πy) sin(a₃πz) − (a₃π)² sin(a₁πx) sin(a₂πy) sin(a₃πz), on [−1, 1]³; BC u(x, y, z) = 0 on the boundary.
Navier-Stokes (3D): u_t + λ₁(uu_x + vu_y) = −p_x + λ₂(u_xx + u_yy), v_t + λ₁(uv_x + vv_y) = −p_y + λ₂(v_xx + v_yy), u_x + v_y = 0; inverse coefficients λ₁, λ₂.
Allen-Cahn: u_t − 0.0001u_xx + λu³ − 5u = 0, x ∈ [−1, 1], t ∈ [0, 1], λ = 5; IC u(x, 0) = x² cos(πx); BC u(t, −1) = u(t, 1), u_x(t, −1) = u_x(t, 1); inverse coefficient λ.
Burgers: u_t + uu_x − νu_xx = 0, x ∈ [−1, 1], t ∈ [0, 1]; IC u(0, x) = −sin(πx); BC u(t, −1) = u(t, 1) = 0; inverse coefficient ν.
PDEs description
1D convection equation. The convection equation describes heat transfer attributed to fluid movement. (Krishnapriyan et al. 2021) studied this PDE in the context of PINNs and showed that, without sequential training, the original PINN fails to find accurate solutions. We use β = 30; the initial and boundary conditions of all PDEs used in the experiments are described in Table 1.
Reaction-diffusion equation. The reaction-diffusion equation is another PDE on which the original PINN has performed poorly (Krishnapriyan et al. 2021). We use the same formulation as (Krishnapriyan et al. 2021) and conduct experiments with the same PDE parameters (ρ = 5, ν = 3).
Helmholtz equation. The Helmholtz equation describes problems in the natural and engineering sciences, such as acoustic and elastic wave propagation. We used the same formulation as (Wang, Teng, and Perdikaris 2021), who reported that the original PINN struggles to find an accurate solution and proposed a learning rate annealing algorithm and a novel neural network architecture.
Allen-Cahn equation. The Allen-Cahn equation is a nonlinear second-order PDE that is known to be challenging for conventional PINNs (Wang, Teng, and Perdikaris 2021); several techniques, including adaptive re-sampling (Wight and Zhao 2020) and weighting algorithms (McClenny and Braga-Neto 2020; Liu and Wang 2021; Wang, Yu, and Perdikaris 2022), have been proposed to address it.
1D Burgers equation. Finally, we also conducted experiments on the 1D Burgers equation, a standard benchmark in the PINNs literature that is known to exhibit singular behavior in the solution. We used the same PDE parameter, ν = 0.01/π, as (Raissi, Perdikaris, and Karniadakis 2019).
3D Navier-Stokes equation. The nonlinear, second-order Navier-Stokes equations are well known to be challenging to solve in fluid dynamics. (Raissi, Perdikaris, and Karniadakis 2019) reports results on the inverse problem, i.e., predicting multiple coefficients simultaneously. We predicted the same coefficients, λ₁ = 1.0 and λ₂ = 0.01.

[Figure caption fragment: random seeds (100, 200, 300, 400, 500). The PDE parameters are as follows: β = 30 in convection; ν = 3, ρ = 5 in reaction-diffusion; a₁ = a₂ = a₃ = 7, k = 1 in Helmholtz (3D); a₁ = a₂ = 10, k = 1 in Helmholtz (2D); and ν = 0.01/π in Burgers. Solution images show the best-performing of the 5 runs.]
Results and discussion
As in Figure 4, our method converges faster than the PINN baseline in the number of L-BFGS training iterations. In all cases, our method obtained accurate solutions (indistinguishable from the reference solutions) in a few tens of iterations. We showed the training loss curves over 1000 iterations.
For the forward problem, in the convection equation we observed the same phenomena as (Krishnapriyan et al. 2021): the baseline PINN converges slowly, and the resulting solution image is not correctly updated for the later time domain, t > 0.4. For the reaction-diffusion equation, PINN's averaged loss curve flattens after 285 iterations, whereas PIXEL's shows an exponential-decay shape until 10,000 iterations.
In both 2D and 3D Helmholtz, we used high-frequency parameters (a₁ = a₂ = 10 for 2D and a₁ = a₂ = a₃ = 7 for 3D), which result in very complex solutions. As expected, due to spectral bias, PINN failed to converge to an accurate solution. In contrast, PIXEL obtained a highly accurate solution quickly. With the low-frequency parameters a₁ = 1, a₂ = 4 in 2D, PIXEL achieved its best result (8.63e-04) with 96 multigrids, whereas PINN reached a higher relative L² error (2.30e-03).

[Table 2 body (L² relative errors over the six PDE columns; mean (± std, best of 5 seeds)):
PINN (ours): 1.00 (±7.19e-04, best 1.00); 1.00 (±1.49e-06, best 1.00); 9.08e-01 (±1.68e-02, best 5.23e-01); 5.77e-03 (±1.74e-03, best 3.35e-03); 3.02e-01 (±3.40e-01, best 2.45e-02); 2.46e-01 (±2.25e-01, best 2.36e-02)
PIXEL (16, 4, 16, 16): 5.06e-03 (±2.64e-03, best 6.61e-04); 3.05e-01 (±2.38e-01, best 7.47e-02); 1.77e-02 (±4.67e-03, best 9.64e-03); 9.98e-04 (±3.70e-04, best 4.88e-04); 9.48e-03 (±1.62e-03, best 6.39e-03); 1.63e-02 (±2.11e-03, best 1.33e-02)
PIXEL (64, 4, 16, 16): 1.95e-01 (±1.84e-01, best 4.86e-02); 4.26e-01 (±3.10e-01, best 1.05e-01); 1.90e-02 (±8.35e-03, best 3.85e-03); 6.20e-04 (±2.09e-04, best 3.85e-04); 4.69e-03 (±1.25e-03, best 2.41e-03); 8.11e-03 (±8.74e-05, best 7.81e-03)
PIXEL (96, 4, 16, 16): 1.53e-01 (±6.81e-02, best 1.34e-02); 3.11e-01 (±1.43e-01, best 1.70e-01); 1.63e-02 (±3.95e-03, best 8.86e-03); 7.01e-04 (±3.60e-04, best 3.20e-04); 6.19e-03 (±3.36e-03, best 1.84e-03); 8.26e-03 (±1.18e-03, best 7.12e-03)]

For Allen-Cahn, which is known to be notoriously difficult, previous studies have demonstrated that PINNs perform very poorly without additional training techniques, such as time marching (Mattey and Ghosh 2022) or causal training (Wang, Sankaran, and Perdikaris 2022). Our method obtains accurate solutions without any such additions.
For the inverse problem, PINN's prediction curves fluctuate, with a high standard deviation across random seeds. In contrast, PIXEL predicts robustly regardless of the random seed on both the 2D and the 3D equations, within a 95% confidence interval. Except for the Helmholtz equation, on which PINN failed to train at all, PIXEL converges within a few iterations on all PDEs.
To compare against recent advanced PINN algorithms, we also provide quantitative results in Table 2. We report the numbers from the original papers, along with our own implementation of the baseline PINN, denoted PINN (ours). For higher accuracy, we trained until convergence: 10k, 1k, 500k, 39k, 18k, and 10k iterations for each PDE, in the order shown in Table 2. Our method outperforms the other methods on all PDEs except the Allen-Cahn equation. The recently proposed causal PINN (Wang, Sankaran, and Perdikaris 2022) uses a loss function that down-weights residual points at later times until the earlier times have already reached low loss values. We could also incorporate this technique into our method; however, this paper aims to show the performance of the newly proposed architecture without any bells and whistles, and combining recent training techniques with our framework will be a promising research direction.
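To make the causal weighting concrete, here is a minimal sketch of our reading of (Wang, Sankaran, and Perdikaris 2022): each time slice's residual is scaled by the exponential of the accumulated loss of all earlier slices (the function name and the default ε are ours, not theirs):

```python
import numpy as np

def causal_weights(time_losses, eps=1.0):
    """Causal weighting in the spirit of Wang, Sankaran, and Perdikaris (2022):
    the residual at time t_i is down-weighted by the accumulated loss of all
    earlier times, so later times only 'count' once earlier ones are fitted."""
    losses = np.asarray(time_losses, dtype=float)
    cumulative = np.concatenate(([0.0], np.cumsum(losses)[:-1]))
    return np.exp(-eps * cumulative)
```

For example, with per-time losses [1.0, 1.0, 0.0], the first slice keeps weight 1 while the later slices are suppressed until the earlier losses shrink.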
Conclusion, limitations, and discussion
We propose a new learning-based PDE solver, PIXEL, combining the conventional grid representations and recent neural networks-based solvers. We showed that PIXEL converges rapidly and approximates the solutions accurately on various challenging PDEs. We believe this approach bridges the gap between classical numerical methods and deep learning, especially recently popularized neural fields (Xie et al. 2022).
While promising, it would be more convincing if we provided experiments on PDEs with higher-order derivatives, such as the Kuramoto-Sivashinsky and Sawada-Kotera equations. Early results showed that both PIXEL and PINNs converge very slowly on them, and we hypothesize that the use of automatic differentiation suffers from a vanishing-gradients problem. We plan to investigate this phenomenon further.
Another natural question concerns spatial complexity. The proposed grid-based representations would require intolerable memory footprints for higher-dimensional PDEs, such as the BGK-Boltzmann equations, and achieving high accuracy may require arbitrarily large grids. We believe these are open and exciting questions, and combining numerical methods and machine learning techniques may come to the rescue: tensor decomposition techniques (Kolda and Bader 2009; Chen et al. 2022), data compression algorithms (Le Gall 1991; Wallace 1992), and adaptive methods (Nochetto et al. 2009) are a few good candidates. We believe we have provided a good example of combining neural networks and grid representations for solving PDEs, and marrying existing techniques with the proposed algorithm will be an exciting research area.
Experimental setup and details
For all experiments, we used the limited-memory BFGS (L-BFGS) second-order optimization algorithm; when training PINNs it often outperforms first-order optimizers such as Adam or SGD. We set the learning rate to 1.0 and used the strong Wolfe line-search algorithm. In every L-BFGS iteration we randomly resample the collocation points, which makes PIXEL robust over the entire input and time domains. We found that PINNs often struggle to converge in this setting, so for PINNs we sampled the collocation points once and kept them fixed, which is common practice in the PINNs literature. To measure the accuracy of the approximated solutions, we use the relative L² error, defined as ||u − û||₂/||u||₂, where û is the predicted solution and u the reference solution. We used NVIDIA RTX 3090 GPUs and A100 GPUs with 40 GB of memory. For all experiments, the shallow MLP had 1 hidden layer with 16 hidden units and a hyperbolic tangent (tanh) activation function. For the loss coefficients, we used λ_ic = λ_bc = 1 and λ_data = 0.
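In PyTorch terms, the optimizer configuration above corresponds to `torch.optim.LBFGS(params, lr=1.0, line_search_fn='strong_wolfe')`. The relative L² metric can be sketched as follows (NumPy; variable names are ours):

```python
import numpy as np

def relative_l2(u_pred, u_ref):
    # ||u_hat - u||_2 / ||u||_2, computed over flattened solution arrays
    u_pred = np.asarray(u_pred, dtype=float).ravel()
    u_ref = np.asarray(u_ref, dtype=float).ravel()
    return np.linalg.norm(u_pred - u_ref) / np.linalg.norm(u_ref)

# example: a prediction off by 10% everywhere has relative L2 error 0.1
print(relative_l2([1.1, 2.2, 3.3], [1.0, 2.0, 3.0]))
```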
1D convection equation. A shallow MLP of 2 layers with 16 hidden units was used. The baseline PINN model was trained with the same number of data points as PIXEL. For the PINN model, we used 3 hidden layers with 50 hidden units, following the architecture in (Krishnapriyan et al. 2021).
Reaction-diffusion equation. For training PINNs, we used 3 hidden layers and 50 hidden dimensions following the architecture in (Krishnapriyan et al. 2021).
Helmholtz equation. The source term is given as q(x, y) = −(a₁π)²u(x, y) − (a₂π)²u(x, y) + k²u(x, y), for which the analytic solution is known to be u(x, y) = sin(a₁πx) sin(a₂πy). We tested the PDE parameters k = 1, a₁ = 1, and a₂ = 4, and, for a more complex setting, k = 1, a₁ = 10, and a₂ = 10. For the baseline PINN model, we used 7 hidden layers with 100 hidden units, following the architecture in (Wang, Teng, and Perdikaris 2021).
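This source term is just the manufactured-solution identity: each sine factor is an eigenfunction of the second derivative, so plugging u(x, y) = sin(a₁πx) sin(a₂πy) into the Helmholtz operator gives

```latex
u_{xx} = -(a_1\pi)^2\,u, \qquad u_{yy} = -(a_2\pi)^2\,u
\quad\Longrightarrow\quad
\Delta u + k^2 u = -(a_1\pi)^2 u - (a_2\pi)^2 u + k^2 u = q(x, y),
```

i.e., q is exactly the residual-free source for this u.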
Allen-Cahn equation. For the baseline PINN model, we used 6 hidden layers with 128 hidden units, following the architecture in (Mattey and Ghosh 2022). For Allen-Cahn, training the PINN produced NaN values with seed 400, so, unlike PIXEL, PINN excludes seed 400 in the Allen-Cahn experiments only.
1D Burgers equation. For the baseline PINN model, we adopted the same architecture as (Raissi, Perdikaris, and Karniadakis 2019), using 8 hidden layers with 40 hidden units. Table 3 and Table 4 show the experimental details for the forward and inverse problems, respectively. All other hyperparameters of the forward and inverse problems are the same as explained in the main text, including the PINN architecture, confirming that the proposed method is not sensitive to hyperparameters. For the inverse problems, the number of ground-truth data points is 25,600 for the convection, Burgers, and reaction-diffusion equations; the Helmholtz equation uses 490,000 and the Allen-Cahn equation uses 102,912.
Hyperparameters of experiments
Grid size and the number of data points
This section studies the relationship between the amount of training data and the grid size. The multigrid representation injects a smoothness prior into the whole framework, which should reduce the number of data points required per training iteration. We demonstrate this on the convection equation by varying the number of training data points (collocation and initial condition) and the number of multigrid representations. We fixed the grid size to 16 and the channel size to 4, and varied the number of multigrids from 4 to 64. We report the L² relative errors after 500 L-BFGS iterations. As shown in Table 5, our method is robust to the amount of training data: although more training points yield higher accuracy, it performs comparably even in the few-data regime.

[Table 4 fragment: λ_res per PDE: 0.005, 0.005, 0.00001, 0.1, 0.0005.]

Table 5: Varying the amounts of training points and the number of multigrids (L² relative errors):
(8, 4, 16, 16):  5.59e-02  4.11e-02  3.10e-02  2.95e-02  3.73e-02
(16, 4, 16, 16): 4.47e-02  2.99e-02  2.51e-02  8.01e-03  1.91e-02
(32, 4, 16, 16): 4.40e-02  2.69e-02  1.13e-02  1.13e-02  8.22e-03

Figure 6: Visualization of multigrid representations for Burgers and Helmholtz equations (best viewed zoomed in). The first row shows image plots of each grid representation, and the second row shows the representations after interpolation. The final representations are obtained as the sum of the interpolated cells, followed by an MLP that generates the solution. We used (4, 4, 64, 64) and (4, 4, 16, 16) multigrid representations for Burgers and Helmholtz, respectively.
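The caption above mentions cosine interpolation of the grids followed by summation and an MLP. A minimal 2D sketch of that sampling step, based on our own reading (function names are ours, and the actual PIXEL implementation may differ):

```python
import numpy as np

def cosine_interp_2d(grid, x, y):
    """Sample a (H, W) grid at continuous coordinates (x, y) in
    [0, H-1] x [0, W-1], blending the four surrounding cell corners
    with cosine-smoothed bilinear weights."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, grid.shape[0] - 1)
    y1 = min(y0 + 1, grid.shape[1] - 1)
    fx = (1 - np.cos(np.pi * (x - x0))) / 2  # cosine-smoothed fraction in x
    fy = (1 - np.cos(np.pi * (y - y0))) / 2  # cosine-smoothed fraction in y
    top = grid[x0, y0] * (1 - fy) + grid[x0, y1] * fy
    bottom = grid[x1, y0] * (1 - fy) + grid[x1, y1] * fy
    return top * (1 - fx) + bottom * fx

def multigrid_feature(grids, x, y):
    """Sum the interpolated values of all grids at (x, y); in PIXEL the summed
    feature is then passed through a small MLP to produce the solution."""
    return sum(cosine_interp_2d(g, x, y) for g in grids)
```

At grid nodes the smoothed weights reduce to 0/1, so the sampler returns the stored value exactly, while between nodes it produces a smooth, differentiable blend.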
Figure 1: Overall PIXEL architecture for a PDE solver
Figure 3: An illustrative sinusoid example
Figure 4: For the forward problem, training loss curves and solutions of various PDEs. We run both PINN and PIXEL 5 times for each PDE experiment; the shaded areas show the 80% confidence interval over 5 runs with different random seeds.
Figure 5: Experimental results of the inverse problems. PIXEL shows faster convergence than PINN and more accurate PDE parameter predictions. The shaded areas of the training curves show the 95% confidence interval over 5 runs with different random seeds (100, 200, 300, 400, 500).
Table 2: Comparisons to other methods (L² relative errors). Five experiments were performed and averaged; the table shows the mean with the standard deviation, using seeds 100, 200, 300, 400, and 500. We compare against PINN (Raissi, Perdikaris, and Karniadakis 2019), sequential training (Krishnapriyan et al. 2021), self-attention (McClenny and Braga-Neto 2020), time marching (Mattey and Ghosh 2022), and causal training (Wang, Sankaran, and Perdikaris 2022).
Figure 7: Inverse problem of the Allen-Cahn equation
Figure 8: Inverse problem of the convection equation
Figure 9: Inverse problem of the reaction-diffusion equation
Figure 10: Inverse problem of the Burgers equation
Figure 11: Inverse problem of the Helmholtz equation
Figure 12: Forward problem of the Burgers equation
Figure 13: Convection equation results of PIXEL; at the 1500th iteration, the final relative L² error is 2.69e-03
Figure 14: Forward problem of the convection equation
Figure 15: Forward problem of the reaction-diffusion equation
Figure 16: Forward problem of the Allen-Cahn equation
Figure 17: Forward problem of the Helmholtz equation
Table 1: The formulations of the various PDEs in our forward- and inverse-problem experiments.
Table 4: Experimental details of the inverse problems (PIXEL)

Table 5 fragment (# training points → L² relative error):
# pts:          5,000     10,000    20,000    50,000    100,000
(4, 4, 16, 16): 2.35e-01  2.32e-01  2.25e-01  2.23e-01  2.21e-01
Without temporal coordinates (e.g., for the Helmholtz equation), C is a three- or four-dimensional tensor, respectively.
Acknowledgements

We are thankful to Junwoo Cho for helpful discussion and contributions. This research was supported by the Ministry of Science and ICT (MSIT) of Korea, under the National Research Foundation (NRF) grant (2022R1F1A1064184, 2022R1A4A3033571) and the Institute of Information and Communication Technology Planning Evaluation (IITP) grants for the AI Graduate School program (IITP-2019-0-00421). The research of Seok-Bae Yun was supported by the Samsung Science and Technology Foundation under Project Number SSTF-BA1801-02.

The visualization of multigrid representations

We demonstrate the intermediate results in Figure 6. In the Burgers example (the first and second rows), we used a (4, 1, 64, 64) configuration and a two-layer MLP with 16 hidden units. As expected, each grid shows a foggy image, since the final solution is the sum of all multigrid representations. Also, we shifted each grid, which resulted in slightly different images from each other. The final solution is completed in the last stage by filling in the remaining content using an MLP. Importantly, the singular behavior (the shock, a thin line in the middle of the solution image) is already well captured by the grid representations; the role of the MLP is to represent the smooth global component of the solution function. The proposed grid representations and the MLP thus combine each other's strengths to obtain better final solutions.

In the Helmholtz example, we used small grids (4, 4, 16, 16); hence we can observe notable differences after cosine interpolation (a grid-like pattern before the interpolation). We also note that the grid representations already represent complex patterns, and the last MLP stage further refines the solution by darkening the colors in the solution image.

Data

We used publicly available data from (Raissi, Perdikaris, and Karniadakis 2019) as the ground truth for the Burgers and Allen-Cahn equations.

Table 3: Experimental details of the forward problems for training PIXELs. (96, 4, 16, 16) means 96 multigrids, channel size 4, spatial grid size 16, and temporal grid size 16. # collocation pts, # ic pts, and # bc pts denote the numbers of collocation, initial condition, and boundary condition points, respectively.
              Convection       Reaction-diffusion  Helmholtz        Allen-Cahn       Burgers
Grid sizes    (96, 4, 16, 16)  (96, 4, 16, 16)     (96, 4, 16, 16)  (96, 4, 16, 16)

Table 4 (inverse problems):
                  Convection        Reaction-diffusion  Helmholtz (a₁ = 1, a₂ = 4)  Allen-Cahn        Burgers
Grid sizes        (192, 4, 16, 16)  (192, 4, 16, 16)    (16, 4, 16, 16)             (192, 4, 16, 16)  (192, 4, 16, 16)
# collocation pts 100,000           100,000             100,000                     100,000           100,000
# ic pts          100,000           100,000             N/A                         100,000           100,000
# bc pts          100,000           100,000             100,000                     N/A               100,000
References

Baydin, A. G.; Pearlmutter, B. A.; Radul, A. A.; and Siskind, J. M. 2018. Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18: 1-43.
Chen, A.; Xu, Z.; Geiger, A.; Yu, J.; and Su, H. 2022. TensoRF: Tensorial Radiance Fields. In European Conference on Computer Vision (ECCV).
Chen, Y.; Liu, S.; and Wang, X. 2021. Learning continuous image representation with local implicit image function. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8628-8638.
Chibane, J.; Alldieck, T.; and Pons-Moll, G. 2020. Implicit functions in feature space for 3D shape reconstruction and completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 6970-6981.
Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, 764-773.
Evans, L. C. 2010. Partial differential equations, volume 19. American Mathematical Society.
Eymard, R.; Gallouët, T.; and Herbin, R. 2000. Finite volume methods. Handbook of Numerical Analysis, 7: 713-1018.
Fathi, M. F.; Perez-Raya, I.; Baghaie, A.; Berg, P.; Janiga, G.; Arzani, A.; and D'Souza, R. M. 2020. Super-resolution and denoising of 4D-Flow MRI using physics-informed deep neural nets. Computer Methods and Programs in Biomedicine, 197: 105729.
Fridovich-Keil, S.; Yu, A.; Tancik, M.; Chen, Q.; Recht, B.; and Kanazawa, A. 2022. Plenoxels: Radiance Fields Without Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5501-5510.
Girshick, R. 2015. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, 1440-1448.
Guo, X.; Li, W.; and Iorio, F. 2016. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 481-490.
He, K.; Gkioxari, G.; Dollár, P.; and Girshick, R. 2017. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, 2961-2969.
Ioffe, S.; and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 448-456. PMLR.
Jaderberg, M.; Simonyan, K.; Zisserman, A.; and Kavukcuoglu, K. 2015. Spatial Transformer Networks. In Advances in Neural Information Processing Systems.
Kingma, D. P.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. arXiv:1412.6980.
Kolda, T. G.; and Bader, B. W. 2009. Tensor decompositions and applications. SIAM Review, 51(3): 455-500.
Krishnapriyan, A.; Gholami, A.; Zhe, S.; Kirby, R.; and Mahoney, M. W. 2021. Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems, 34.
Le Gall, D. 1991. MPEG: A video compression standard for multimedia applications. Communications of the ACM, 34(4): 46-58.
Li, Z.; Kovachki, N.; Azizzadenesheli, K.; Liu, B.; Bhattacharya, K.; Stuart, A.; and Anandkumar, A. 2020. Neural Operator: Graph Kernel Network for Partial Differential Equations. arXiv:2003.03485.
Li, Z.; Kovachki, N. B.; Azizzadenesheli, K.; Liu, B.; Bhattacharya, K.; Stuart, A. M.; and Anandkumar, A. 2021a. Fourier Neural Operator for Parametric Partial Differential Equations. In 9th International Conference on Learning Representations (ICLR 2021), Virtual Event, Austria, May 3-7, 2021.
Li, Z.; Zheng, H.; Kovachki, N.; Jin, D.; Chen, H.; Liu, B.; Azizzadenesheli, K.; and Anandkumar, A. 2021b. Physics-Informed Neural Operator for Learning Partial Differential Equations. arXiv:2111.03794.
Liu, D.; and Wang, Y. 2021. A Dual-Dimer method for training physics-constrained neural networks with minimax architecture. Neural Networks, 136: 112-125.
Liu, D. C.; and Nocedal, J. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1): 503-528.
Lu, L.; Dao, M.; Kumar, P.; Ramamurty, U.; Karniadakis, G. E.; and Suresh, S. 2020. Extraction of mechanical properties of materials through deep learning from instrumented indentation. Proceedings of the National Academy of Sciences, 117(13): 7052-7062.
Lu, L.; Jin, P.; Pang, G.; Zhang, Z.; and Karniadakis, G. E. 2021. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3): 218-229.
Maclaurin, D.; Duvenaud, D.; and Adams, R. P. 2015. Autograd: Effortless gradients in NumPy. In ICML 2015 AutoML Workshop, volume 238.
Martel, J. N.; Lindell, D. B.; Lin, C. Z.; Chan, E. R.; Monteiro, M.; and Wetzstein, G. 2021. ACORN: Adaptive coordinate networks for neural scene representation. ACM Transactions on Graphics (TOG), 40(4): 1-13.
Mattey, R.; and Ghosh, S. 2022. A novel sequential method to train physics informed neural networks for Allen Cahn and Cahn Hilliard equations. Computer Methods in Applied Mechanics and Engineering, 390: 114474.
McClenny, L.; and Braga-Neto, U. 2020. Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism. arXiv:2009.04544.
Mekchay, K.; and Nochetto, R. H. 2005. Convergence of adaptive finite element methods for general second order linear elliptic PDEs. SIAM Journal on Numerical Analysis, 43(5): 1803-1827.
Meng, X.; Li, Z.; Zhang, D.; and Karniadakis, G. E. 2020. PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering, 370: 113250.
Moseley, B.; Markham, A.; and Nissen-Meyer, T. 2021. Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations. arXiv:2107.07871.
Müller, T.; Evans, A.; Schied, C.; and Keller, A. 2022. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics, 41(4): 1-15.
Nochetto, R. H.; Siebert, K. G.; and Veeser, A. 2009. Theory of adaptive finite element methods: An introduction. Multiscale, Nonlinear and Adaptive Approximation, 409-542.
Park, J. J.; Florence, P.; Straub, J.; Newcombe, R.; and Lovegrove, S. 2019. DeepSDF: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 165-174.
Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in PyTorch. In Autodiff Workshop, Advances in Neural Information Processing Systems.
Pedersoli, M.; Lucas, T.; Schmid, C.; and Verbeek, J. 2017. Areas of attention for image captioning. In Proceedings of the IEEE International Conference on Computer Vision, 1242-1250.
Prato Torres, R.; Domínguez, C.; and Díaz, S. 2019. An adaptive finite element method for a time-dependent Stokes problem. Numerical Methods for Partial Differential Equations, 35(1): 325-348.
Rahaman, N.; Baratin, A.; Arpit, D.; Draxler, F.; Lin, M.; Hamprecht, F.; Bengio, Y.; and Courville, A. 2019. On the spectral bias of neural networks. In International Conference on Machine Learning, 5301-5310. PMLR.
Raissi, M. 2018. Deep hidden physics models: Deep learning of nonlinear partial differential equations. The Journal of Machine Learning Research, 19(1): 932-955.
Raissi, M.; Perdikaris, P.; and Karniadakis, G. E. 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378: 686-707.
Reiser, C.; Peng, S.; Liao, Y.; and Geiger, A. 2021. KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 14335-14345.
Rudy, S. H.; Brunton, S. L.; Proctor, J. L.; and Kutz, J. N. 2017. Data-driven discovery of partial differential equations. Science Advances, 3(4): e1602614.
Sitzmann, V.; Martel, J.; Bergman, A.; Lindell, D.; and Wetzstein, G. 2020. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33: 7462-7473.
Smith, G. D. 1985. Numerical solution of partial differential equations: finite difference methods. Oxford University Press.
Sun, C.; Sun, M.; and Chen, H.-T. 2022. Direct Voxel Grid Optimization: Super-Fast Convergence for Radiance Fields Reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5459-5469.
Takikawa, T.; Litalien, J.; Yin, K.; Kreis, K.; Loop, C.; Nowrouzezahrai, D.; Jacobson, A.; McGuire, M.; and Fidler, S. 2021. Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 11358-11367.
Wallace, G. 1992. The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics, 38(1): xviii-xxxiv.
Go-surf: Neural feature grid optimization for fast, high-fidelity rgb-d surface reconstruction. J Wang, T Bleja, L Agapito, arXiv:2206.14735arXiv preprintWang, J.; Bleja, T.; and Agapito, L. 2022. Go-surf: Neural feature grid optimization for fast, high-fidelity rgb-d surface reconstruction. arXiv preprint arXiv:2206.14735.
S Wang, S Sankaran, P Perdikaris, arXiv:2203.07404Respecting causality is all you need for training physics-informed neural networks. Wang, S.; Sankaran, S.; and Perdikaris, P. 2022. Respecting causality is all you need for training physics-informed neural networks. arXiv:2203.07404.
Understanding and mitigating gradient flow pathologies in physics-informed neural networks. S Wang, Y Teng, P Perdikaris, SIAM Journal on Scientific Computing. 435Wang, S.; Teng, Y.; and Perdikaris, P. 2021. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5): A3055-A3081.
Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. S Wang, H Wang, P Perdikaris, Science advances. 7408605Wang, S.; Wang, H.; and Perdikaris, P. 2021a. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Science advances, 7(40): eabi8605.
On the eigenvector bias of fourier feature networks: From regression to solving multi-scale pdes with physics-informed neural networks. S Wang, H Wang, P Perdikaris, Computer Methods in Applied Mechanics and Engineering. 384113938Wang, S.; Wang, H.; and Perdikaris, P. 2021b. On the eigen- vector bias of fourier feature networks: From regression to solving multi-scale pdes with physics-informed neural net- works. Computer Methods in Applied Mechanics and Engi- neering, 384: 113938.
When and why PINNs fail to train: A neural tangent kernel perspective. S Wang, X Yu, P Perdikaris, Journal of Computational Physics. 449110768Wang, S.; Yu, X.; and Perdikaris, P. 2022. When and why PINNs fail to train: A neural tangent kernel perspective. Jour- nal of Computational Physics, 449: 110768.
Solving Allen-Cahn and Cahn-Hilliard Equations using the Adaptive Physics In. C L Wight, J Zhao, arXiv:2007.04542formed Neural Networks. Wight, C. L.; and Zhao, J. 2020. Solving Allen-Cahn and Cahn-Hilliard Equations using the Adaptive Physics In- formed Neural Networks. arXiv:2007.04542.
Y Xie, T Takikawa, S Saito, O Litany, S Y N Khan, F Tombari, J Tompkin, V Sitzmann, S Sridhar, Neural Fields in Visual Computing and Beyond. 41Xie, Y.; Takikawa, T.; Saito, S.; Litany, O.; Khan, S. Y. N.; Tombari, F.; Tompkin, J.; Sitzmann, V.; and Sridhar, S. 2022. Neural Fields in Visual Computing and Beyond. STAR, 41(2).
A-PINN: Auxiliary physics informed neural networks for forward and inverse problems of nonlinear integro-differential equations. L Yuan, Y.-Q Ni, X.-Y Deng, S Hao, Journal of Computational Physics. 111260Yuan, L.; Ni, Y.-Q.; Deng, X.-Y.; and Hao, S. 2022. A-PINN: Auxiliary physics informed neural networks for forward and inverse problems of nonlinear integro-differential equations. Journal of Computational Physics, 111260.
Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification. Y Zhu, N Zabaras, Journal of Computational Physics. 366Zhu, Y.; and Zabaras, N. 2018. Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncer- tainty quantification. Journal of Computational Physics, 366: 415-447.
The finite element method: its basis and fundamentals. O C Zienkiewicz, R L Taylor, J Z Zhu, ElsevierZienkiewicz, O. C.; Taylor, R. L.; and Zhu, J. Z. 2005. The finite element method: its basis and fundamentals. Elsevier.
| [
"https://github.com/NamGyuKang/CosineSampler."
] |
Background Stratospheric Aerosol Investigations Using Multi-Color Wide-Field Measurements of the Twilight Sky

Oleg S. Ugolnikov and Igor A. Maslov
Space Research Institute, Russian Academy of Sciences, Profsoyuznaya st. 84/32, 117997 Moscow, Russia

arXiv:1607.02597

Abstract. First results of multi-wavelength measurements of the twilight sky background using an all-sky camera with an RGB-color CCD, conducted in spring and summer of 2016 in central Russia (55.2°N, 37.5°E), are discussed. They show the effect of aerosol scattering at altitudes up to 35 km, which significantly increases toward the long-wave range (624 nm, R channel). Analysis of the sky color behavior during the light period of twilight, with account of ozone Chappuis absorption, allows retrieving the angle dependencies of scattering on the stratospheric aerosol particles. This is used to find the parameters of the lognormal size distribution: median radius about 0.08 microns and width 1.5-1.6 for the stratospheric altitude range.
Introduction
It is well known that most aerosol particles in the atmosphere of Earth are concentrated in its lower layer, the troposphere. However, the upper atmospheric layers are not absolutely free of solid or liquid particles. As early as the late XIX century, after the Krakatoa eruption in 1883, a change in the color of the twilight sky was noticed (Clark, 1883); the phenomenon was called "volcanic purple light" (Lee and Hernández-Andrés, 2003). Gruner and Kleinert (1927) explained it by aerosol light scattering above the troposphere.
The existence of an aerosol layer in the lower stratosphere was confirmed in balloon experiments (Junge et al., 1961), and it was called the Junge layer. Aerosol was detected at different latitudes and thus could not consist of pure water or ice, since the temperature there is above the water condensation threshold (except in polar regions in winter). As shown by Rosen (1971), it is a solution of sulfuric acid, produced by chemical reactions of sulfur dioxide transferred to the stratosphere from the ground.
Long series of balloon observations (Deshler et al., 2003) showed that the particle size distribution was monomodal with a median radius of about 0.1 μm. Major eruptions in the late XX century (El Chichon in 1982, Mt. Pinatubo in 1991) rapidly changed the physical and optical characteristics of stratospheric aerosol (Jager and Deshler, 2002; Deshler et al., 2003; Bauman et al., 2003). The particle size distribution became bimodal, with the larger fraction size exceeding 0.3 μm and the maximal size exceeding 1 μm. The large amount of aerosol scattered the solar emission, decreasing the Earth's surface temperature (Hansen et al., 1992). It also destroyed stratospheric ozone by means of heterogeneous chemical reactions (Hofmann and Solomon, 1989).
One possible source of background aerosol particles is anthropogenic sulfur dioxide (Brock et al., 1995), which relates the question of stratospheric aerosol to global climate change. Hofmann and Rosen (1980) noticed a possible increase of background aerosol compared with early observations (Junge et al., 1961). A gradual increase was also observed recently, during the beginning of the XXI century (Solomon et al., 2011).
Background stratospheric aerosol particles also play an important role in the physics and chemistry of the middle atmosphere. Reflection of solar radiation is mainly related to small particles, which scatter a significant part of the light into the back hemisphere (Hinds, 1999). At high latitudes, tiny particles also act as condensation nuclei for polar stratospheric (or nacreous) clouds, strongly influencing the ozone chemistry.
The backscatter ratio of total and Rayleigh scattering measured in lidar experiments (Zuev et al., 2007; Burlakov et al., 2011) is not more than 1.2-1.3 after moderate eruptions like Tavurvur in 2006 and other recent events. During volcanically quiet epochs, stratospheric aerosol backscattering is a quite small admixture to the Rayleigh level. However, it increases at lower scattering angles owing to the properties of Mie scattering. The wavelength dependence of the intensity differs from that of Rayleigh scattering and can also be used for size estimation, which was performed in limb measurements (Thomason et al., 2007; Bourassa et al., 2008) and lidar sounding (Von Zahn et al., 2000; Jumelet, 2008).
Background aerosol scattering also changes the characteristics of the twilight sky: brightness distribution, color, and polarization. This can be used to separate this component of the twilight background and to find its observational characteristics (Ugolnikov and Maslov, 2009). In that paper the polarization of stratospheric aerosol scattering was estimated after the Tavurvur (Rabaul) volcano eruption in 2006 (0.28±0.03 for scattering angle 92° and wavelength 525 nm). Given the type of size distribution of aerosol particles (lognormal with σ=1.6), the mean radius value (0.107±0.005 microns) could be estimated with high accuracy.
The twilight geometry of radiation transfer allows separating a definite atmospheric level that is still illuminated by the Sun while the lower dense layers are immersed in the Earth's shadow. Use of all-sky cameras helps to cover a wide range of scattering angles, which is important for particle size estimation. The estimation becomes more exact if multi-color data are also used. Multi-wavelength observations are of special interest not only because of the wavelength dependence of aerosol scattering. The strong excess of Rayleigh scattering at shorter wavelengths leads to a significant difference of the effective scattering altitude in the blue and red bands: the same atmospheric volume can still be illuminated by direct solar radiation in the red band while being strongly obscured in the blue band. The blue spectral region is also characterized by a higher contribution of multiple scattering (Ugolnikov, 1999; Ugolnikov and Maslov, 2002).
The effects listed above are the reasons for the intense red color of thin clouds at sunset. Stratospheric aerosol particles can influence the sky color during the deeper stage of twilight, at solar zenith angles of about 92-93°. This can be detected as a color change in the dawn segment. However, the color of the sky is also influenced by the Chappuis absorption bands of atmospheric ozone and by effects of multiple scattering, which must also be taken into account. The basic aim of this paper is to determine the size distribution of stratospheric aerosol particles based on simple color measurements of the twilight sky.
Observational effect of stratospheric aerosol
Measurements of the twilight sky background are conducted in Chepelevo, central Russia (55.2°N, 37.5°E), using the all-sky camera described by Ugolnikov and Maslov (2013ab). This camera is designed for measurements over a wide part of the sky with a diameter of about 140°. During the spring and early summer of 2016, an RGB-color Sony QHYCCD-8L matrix was installed. The effective wavelengths of the B, G and R channels were 461, 540 and 624 nm, respectively (the R channel is also corrected by an IR-blocking filter). The diameter of the sky image is about 650 pixels. The exposure time varied from 3 ms at sunset/sunrise to 30 s during the night. The camera position, flat field and atmospheric transparency are controlled by photometry of star images in the night frames. This paper is based on the results of measurements in the solar vertical. In this case the sky point position is characterized by the coordinate ζ (Ugolnikov and Maslov, 2013b), equal to zero at the zenith, positive in the dusk/dawn area and negative in the opposite sky part. The module of this value is equal to the zenith distance of the observed point in the solar vertical. Figure 1 shows the dependence of the sky brightness ratio in the symmetric points of the solar vertical, I(ζ = +45°) / I(ζ = −45°), for all three channels during the evening twilight of March 27, 2016. This dependence was described in (Ugolnikov, 1999) and reflects the behavior of the ratio of single and multiple scattering intensities. During the light stage of twilight, the ratio I(+ζ)/I(−ζ) increases due to the appearance of a difference of single scattering altitudes between the dusk area and the opposite sky part. During the darker stage, at solar zenith angles above 96°, the brightness excess in the dusk area decreases as the single scattering fades against the background of multiple scattering.
However, this dependence for the R channel has a remarkable feature at a solar zenith angle of about 93°: an additional brightness excess in the dusk area (arrows in Figure 1) that is barely visible in the G channel and fades in the B channel. During this twilight stage, effective scattering takes place at altitudes of about 20 km, close to the Junge aerosol layer. The twilight sky color is also strongly influenced by the Chappuis bands of stratospheric ozone. However, that effect has the same order of value in the G band, where the dusk excess of brightness is significantly smaller. We can see that this excess is shifted to lower values of solar zenith angle in the G channel. This is related to the higher position of the effective scattering layer compared with the R channel; thus, the same aerosol layer corresponds to an earlier twilight stage in the G channel. Figure 2 presents a graphical scheme of single scattering during the light stage of twilight. Before being scattered, the solar emission passes through the stratosphere almost horizontally. The perigee height of the effective path decreases with wavelength owing to the sharp spectral dependence of Rayleigh scattering. The length of the path fraction through the ozone layer is long (line 1 in Figure 2); Chappuis absorption in the green and red spectral range and multiple scattering effects (Ugolnikov et al., 2004) lead to a gradual "bluing" of the sky spectrum from the dusk/dawn area to the opposite part of the sky. When the solar zenith angle decreases (line 2), the path through the ozone layer gets shorter and the sky color should turn redder, especially far from the dusk segment. If scattering on aerosol particles appears, it causes an additional red excess of brightness in the dusk/dawn area for the reason described above. The change of sky color at different points of the solar vertical during twilight is used to detect the stratospheric aerosol scattering and to explore its properties.
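The twilight geometry sketched in Figure 2 can be illustrated numerically. The snippet below (a minimal sketch, not from the paper) computes the purely geometric lower boundary of the sunlit atmosphere above the observer's zenith for a given solar zenith angle, assuming a spherical Earth of radius 6371 km and neglecting refraction:

```python
import math

R_E = 6371.0  # Earth radius, km

def geometric_shadow_height(z_deg):
    """Lowest sunlit altitude above the observer's zenith for solar
    zenith angle z (deg), ignoring refraction and extinction screening:
    h = R_E * (1/cos(delta) - 1), where delta = z - 90 deg."""
    delta = math.radians(z_deg - 90.0)
    return R_E * (1.0 / math.cos(delta) - 1.0)

for z in (91.0, 93.0, 96.0):
    print(z, round(geometric_shadow_height(z), 1))  # ~1.0, ~8.7, ~35.1 km
```

Note that at z ≈ 93° this geometric bound is only about 9 km; the effective scattering altitudes quoted in the paper (about 20 km at z ≈ 93°) are higher because they use the τ = 1 extinction criterion along the solar ray rather than the pure geometric shadow.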
Aerosol scattering analysis
As shown in lidar (Burlakov et al., 2011) and space limb (Bourassa et al., 2008) measurements, the aerosol density above the Junge layer decreases with altitude and becomes small in the upper stratosphere. The brightness excess in the dusk area shown in Figure 1 fades at solar zenith angles of about 95-96°, which corresponds to an effective scattering altitude of about 35-40 km for ζ = +45°.
We introduce the observed color index of the sky as the value

$$C_{RB}(\zeta, z) = \ln \frac{I_R(\zeta, z)}{I_B(\zeta, z)}. \qquad (1)$$
Here I is the background intensity, ζ is the sky point position in the solar vertical, z is the solar zenith angle, and R and B are the color channels. Figure 3 shows the dependencies of the color index C_RB on solar zenith angle for different ζ values from −45° to +45° for the evening twilight of March 27, 2016. As expected, this value increases (the color turns redder) during the light stage of twilight compared with the dark stage. The trend gets faster from the zenith (bold line) to the dusk-opposite region (dashed lines), and much faster from the zenith to the dusk area (solid lines), which can be related to Chappuis absorption with multiple scattering and to stratospheric aerosol, respectively.
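The color index of equation (1) is a simple element-wise operation on the calibrated channel frames. A minimal numpy sketch with synthetic intensities (illustrative only, not the observational data):

```python
import numpy as np

def color_index(I_R, I_B):
    # Eq. (1): C_RB = ln(I_R / I_B), element-wise over sky points
    return np.log(I_R / I_B)

# illustrative brightness along the solar vertical (arbitrary units):
zeta = np.linspace(-45.0, 45.0, 19)
I_B = np.ones_like(zeta)
I_R = 1.0 + 0.2 * np.exp((zeta - 45.0) / 20.0)  # red excess toward the dusk area

C_RB = color_index(I_R, I_B)
print(C_RB[-1] > C_RB[0])  # True: the sky is redder in the dusk area
```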
We see that the color variations along the solar vertical are minimal at the solar zenith angle z_0 equal to 96°. At this time the trends are practically the same for all ζ values. This is logical, since the single effective scattering takes place above the ozone and stratospheric aerosol layers. We take this moment as the reference and check the color index evolution toward the lighter stage of twilight, introducing the value

$$D_{RB}(\zeta, z) = C_{RB}(\zeta, z) - C_{RB}(\zeta, z_0). \qquad (2)$$

Building the numerical scheme of aerosol scattering separation, we assume the aerosol scattering to be negligibly small in the B channel. We can do this based on the effects in Figure 1; this assumption can lead only to an underestimation of the aerosol scattering contribution in the R channel. Physically it is justified not only by the significant decrease of the aerosol-to-Rayleigh scattering ratio in the B channel but also by the difference of effective scattering altitudes in these two channels. The brightness of the sky background measured in the R channel is

$$I_R(\zeta, z) = I_0(\zeta, z) + I_{AR}(\zeta, z). \qquad (3)$$
Here I_AR is the stratospheric aerosol scattering intensity and I_0 is the background in the case of a clear stratosphere. If I_AR is a small admixture to I_0, then we can write the equation for the sky color index:

$$C_{RB}(\zeta, z) = \ln \frac{I_0(\zeta, z) + I_{AR}(\zeta, z)}{I_B(\zeta, z)} \approx \ln \frac{I_0(\zeta, z)}{I_B(\zeta, z)} + \frac{I_{AR}(\zeta, z)}{I_R(\zeta, z) - I_{AR}(\zeta, z)/2}. \qquad (4)$$

The last term is a good approximation of the logarithm of the ratio of total and aerosol-free brightness values in the R channel. The denominator of the term, the mean of these two values, can also be written as (I_R(ζ, z) − I_AR(ζ, z)/2); we denote it as I_MR(ζ, z). Assuming that aerosol effects are negligibly small at the deeper twilight stage (z_0 = 96°), we write the equation for the color index evolution:

$$D_{RB}(\zeta, z) = C_{RB}(\zeta, z) - C_{RB}(\zeta, z_0) = D_0(\zeta, z) + \frac{I_{AR}(\zeta, z)}{I_{MR}(\zeta, z)}. \qquad (5)$$
The term D_0 is related to the color change due to Chappuis absorption of atmospheric ozone and to the effects of multiple scattering. We assume it to be linear in the length d of the emission path through the stratosphere before scattering (see Figure 2):

$$D_0(\zeta, z) = A_0 + \mathrm{const} \cdot d = A_0 + A_1 \tan(z - \zeta - 90^{\circ}). \qquad (6)$$
The aerosol scattering brightness far from the horizon is sought in the form

$$I_{AR}(\zeta, z) = F(z - \zeta, r, \sigma)\, P(h(\zeta, z))\, e^{-E_R / \cos\zeta}. \qquad (7)$$

Here F is the first component of the Mie scattering matrix (the refraction index for sulfate particles is taken equal to 1.43), and r and σ are the parameters of the log-normal particle size distribution (r is the mean radius, and σ is the exponent of the standard deviation of the radius logarithm). E_R is the vertical extinction value for the R band. The scattering angle is equal to (z − ζ), disregarding refraction (about 0.2° for the effective path).
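The log-normal size distribution entering eq. (7) can be written explicitly. Below is a short sketch in its standard textbook form (the median radius 0.08 μm and σ = 1.6 are the typical values quoted in this paper), with a numerical check of its normalization:

```python
import numpy as np

def lognormal(r, r_m, sigma):
    """dN/dr for a log-normal size distribution: median radius r_m,
    width parameter sigma = exp(standard deviation of ln r)."""
    s = np.log(sigma)
    return np.exp(-0.5 * (np.log(r / r_m) / s) ** 2) / (r * s * np.sqrt(2.0 * np.pi))

r = np.logspace(-3, 1, 4000)          # particle radius, microns
n = lognormal(r, r_m=0.08, sigma=1.6)

print(np.trapz(n, r))                 # ~1: the distribution is normalized
print(np.trapz(n[r <= 0.08], r[r <= 0.08]))  # ~0.5: half of particles below the median
```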
The function P is related to the aerosol vertical profile and characterizes the dependence of the aerosol scattering brightness on the effective altitude of scattering h. We take the value of h as corresponding to a solar emission path to the scattering point with optical depth τ = 1. It is the altitude of most effective scattering (the atmospheric density decreases upwards, and the extinction of solar emission gets stronger downwards). The value of h is calculated using an atmospheric model with real temperature and ozone vertical profiles for each observation date from EOS Aura/MLS data (EOS Team, 2011ab). These values for the zenith (ζ = 0) are denoted on the x-axis in Figures 1 and 3. If we move to another sky point, the altitude h changes. For every fixed value of the solar zenith angle z and the corresponding short interval of h, we assume the dependence P(h) to be exponential:

$$P(h(\zeta, z)) = P(h(0, z))\, e^{-K\,(h(\zeta, z) - h(0, z))}. \qquad (8)$$
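The τ = 1 criterion for the effective altitude can be sketched numerically: integrate extinction along a straight, nearly horizontal solar ray through spherical shells of an exponential atmosphere and find the perigee altitude where the optical depth reaches unity. This is only an illustration with an isothermal 8 km scale height and an assumed sea-level Rayleigh extinction at 624 nm; the paper uses real Aura/MLS temperature and ozone profiles instead.

```python
import numpy as np

R_E = 6371.0   # Earth radius, km
H = 8.0        # density scale height, km (isothermal approximation)
k0 = 0.0074    # assumed sea-level Rayleigh extinction at 624 nm, 1/km

def tau_along_ray(h_p):
    """Optical depth along a straight ray with perigee altitude h_p (km)
    through an exponential atmosphere, integrated over the path coordinate."""
    s = np.linspace(-3000.0, 3000.0, 20001)        # km along the ray
    h = np.sqrt((R_E + h_p) ** 2 + s ** 2) - R_E   # altitude at each path point
    return np.trapz(k0 * np.exp(-h / H), s)

# bisection: perigee altitude where tau = 1
lo, hi = 0.0, 60.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tau_along_ray(mid) > 1.0 else (lo, mid)

print(round(mid, 1))  # ~11-12 km: screening height of the dense lower atmosphere
```

Adding this screening height to the geometric shadow height at z ≈ 93° (about 9 km) gives an effective scattering altitude of about 20 km, consistent with the values quoted in the paper.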
Under light twilight conditions, the difference of effective scattering altitudes at the solar vertical point (ζ) and the zenith, (h(ζ, z) − h(0, z)), does not exceed 2-3 km, being negative in the dusk/dawn area and positive in the opposite sky part. Substituting (6-8) into (5), we have

$$D_{RB}(\zeta, z) = A_0 + A_1 \tan(z - \zeta - 90^{\circ}) + \frac{F(z - \zeta, r, \sigma)\, P(h(0, z))\, e^{-K\,(h(\zeta, z) - h(0, z))}\, e^{-E_R/\cos\zeta}}{I_{MR}(\zeta, z)}. \qquad (9)$$
This equation has the unknown parameters A_0, A_1, P_0 = P(h(0, z)), r, and σ. We have a number of measurements for different ζ values (the step is 5°). However, this system is hard to solve directly, since different (r − σ) pairs can correspond to very close scattering functions F (the wider the distribution, i.e. the larger σ, the smaller the mean radius r). This effect is well known, and different methods of determination of the particle size distribution lead to results in the form of lines in the (r − σ) diagram (Bourassa et al., 2008) or a mean particle radius r for a fixed value of σ. The same is true for noctilucent clouds (Ugolnikov et al., 2016). We also do not initially know the aerosol altitude gradient K or the value of I_M in the R channel.
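The fitting structure can be sketched as follows: for a fixed σ and each trial radius r on a grid, the model of eq. (9) is linear in (A_0, A_1, P_0) and is solved by ordinary least squares; the r minimizing the residual is kept. In the sketch below the aerosol basis function is a placeholder Gaussian, not the real Mie function F, and all numbers are synthetic; only the grid-scan/least-squares structure is demonstrated.

```python
import numpy as np

zeta = np.radians(np.arange(-45.0, 50.0, 5.0))  # solar vertical points
z = np.radians(93.0)                            # solar zenith angle

def aerosol_basis(zeta, r):
    # placeholder for F(z - zeta, r, sigma) * exp(-K dh) * exp(-E/cos zeta) / I_MR;
    # a real implementation would evaluate the Mie scattering function here
    return np.exp(-((z - zeta) - 0.5) ** 2 / (2.0 * r ** 2))

def fit_for_r(D_obs, r):
    # model linear in (A0, A1, P0) for a fixed trial r
    X = np.column_stack([np.ones_like(zeta),
                         np.tan(z - zeta - np.pi / 2.0),
                         aerosol_basis(zeta, r)])
    coef, *_ = np.linalg.lstsq(X, D_obs, rcond=None)
    return coef, np.sum((X @ coef - D_obs) ** 2)

# synthetic "observations" generated with r_true = 0.3 (arbitrary units)
A0, A1, P0, r_true = 0.1, 0.05, 0.4, 0.3
D_obs = (A0 + A1 * np.tan(z - zeta - np.pi / 2.0)
         + P0 * aerosol_basis(zeta, r_true))

r_grid = np.arange(0.1, 0.61, 0.05)
best_r = min(r_grid, key=lambda r: fit_for_r(D_obs, r)[1])
print(round(float(best_r), 2))  # recovers r_true = 0.3
```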
To solve this problem, we use an iteration method. For the first approximation, we assume the value of K to be the same as for Rayleigh scattering and I_MR = I_0 = I_R (the contribution of aerosol scattering is small). Then we find the particle radius r for a lognormal distribution with σ = 1.6 (according to Deshler et al. (2003)). The system becomes non-linear in one unknown parameter, r, and linear in the three others, A_0, A_1, and P_0, and is quite easy to solve by the least squares method. Completing this procedure for different solar zenith angles z, we find the intensity of stratospheric aerosol scattering I_AR(ζ, z). This allows finding the value of K for a definite altitude h_0:

$$K(h_0) = -\frac{1}{P(h(0, z))} \frac{dP(h(0, z))}{dh}. \qquad (10)$$

We can also find the quantity

$$I_{MR}(\zeta, z) = I_R(\zeta, z) - I_{AR}(\zeta, z)/2 \qquad (11)$$
and use it together with K(h) at the next iteration step in equation (9). Owing to the low contribution of stratospheric aerosol scattering to the total twilight background and the small variations of the altitude h along the solar vertical (equation (8)), the process converges fast, and we find the aerosol scattering field I_AR(ζ, z). At the last iteration step, the solution is found for different σ values with a step of 0.1. The best-fit models are shown by solid lines in Figure 4 for the same z as the observational dots; the corresponding effective scattering altitudes h(0, z) for the zenith are also noted. Typical best-fit parameters for March 27 are (r = 0.07 microns, σ = 1.6), (r = 0.09 microns, σ = 1.5), or (r = 0.12 microns, σ = 1.4).
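The iterative update of I_MR (eq. (11)) converges quickly because the aerosol term is small. A minimal sketch with illustrative numbers (the array `ratio` stands for the fitted aerosol term I_AR/I_MR of eq. (5); all values are hypothetical):

```python
import numpy as np

I_R = np.array([1.00, 1.05, 1.15, 1.30])    # observed R-channel brightness (a.u.)
ratio = np.array([0.02, 0.05, 0.12, 0.25])  # fitted I_AR / I_MR (illustrative)

I_MR = I_R.copy()                 # first approximation: I_MR = I_R (no aerosol)
for step in range(50):
    I_AR = ratio * I_MR           # aerosol intensity implied by the fitted ratio
    I_MR_new = I_R - I_AR / 2.0   # Eq. (11)
    if np.max(np.abs(I_MR_new - I_MR)) < 1e-12:
        break
    I_MR = I_MR_new

# the fixed point is I_MR = I_R / (1 + ratio/2); aerosol share of the total sky:
print(np.round(I_AR / I_R, 3))
```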
Particle size distributions
Results of the size distribution retrieval on the (r − σ) diagram for z = 93.4°, h(0, z) = 21.4 km, are shown in Figure 5. The best-fit solution for this moment corresponds to the model with r = 0.091 microns and σ = 1.5 (dot in the figure); the possible solution area extends toward (smaller mean radii − wider distributions) and (larger mean radii − narrower distributions). The areas for single, double and triple error are shown. The dashed line shows an example of the best-fit result of OSIRIS limb scattering spectroscopy (Bourassa et al., 2008), which is in good agreement with the twilight data.

Figure 6 shows the vertical dependency of the mean particle radius for a lognormal distribution with σ = 1.6 compared with the profile obtained in (Bourassa et al., 2008) for the same σ. Both profiles show a maximum of the particle size near 22 km. However, this maximum in the twilight data is less pronounced. This blurring effect can be explained by the thickness of the "twilight layer", a wide range of altitudes contributing to the aerosol scattering during a definite twilight stage. The dependencies of the aerosol scattering contribution to the total sky brightness on the effective scattering altitude are shown in Figure 7 for two sky positions (zenith, ζ = 0°, and dusk area, ζ = +45°). It should be noted that at any fixed moment the effective scattering altitude is not the same at these points. The value shown in the figure is not equal to the ratio of aerosol and Rayleigh scattering intensity, since the sky background also includes multiple scattering with intensity about 30% of the total value (Ugolnikov and Maslov, 2002). This effect, together with ozone Chappuis absorption and the difference of effective thickness of the Rayleigh and aerosol scattering layers, makes it difficult to estimate the total extinction of stratospheric aerosol.
Background aerosol scattering is significant only in the dusk/dawn area. For all other sky areas its contribution is only a few percent, which can be hard to detect.
The results shown in Figures 3-7 refer to the evening twilight of March 27, 2016, the first twilight observed with the all-sky camera and color CCD. Figure 8 shows the temporal evolution of the background aerosol characteristics during the observational period of spring and summer 2016 for an effective scattering altitude of 20 km: the contribution of aerosol scattering to the total background at the zenith, I_AR/I_R, the particle radius r (lognormal distribution with σ = 1.6), and the altitude gradient K of the vertical aerosol distribution. We do not see any significant seasonal trend of the stratospheric aerosol characteristics during the spring and summer months of 2016.
Discussion and conclusion
In this paper we consider the effect of light scattering on stratospheric aerosol particles that can be observed by multi-wavelength observations during twilight. This effect is quite significant even in the case of background stratospheric aerosol, and it can increase substantially after volcanic eruptions. Fortunately, this observational effect has a strong wavelength dependence, appearing in the red spectral range. Use of RGB CCD cameras allows investigating it numerically by comparing the sky background properties in the R and B channels. The method described here does not require polarization measurements of the sky background. However, polarization data together with extinction values measured by satellites could significantly increase the accuracy of the retrieved particle size distribution.
The contribution of aerosol scattering to the total sky background reaches about 25% in the dusk/dawn area. In the opposite sky part this value is not more than a few percent. The backscattering ratio is expected to be even smaller, which makes this type of aerosol more difficult to investigate by the lidar technique.
The results for the size distribution and its vertical profile are in good agreement with other methods of stratospheric aerosol sounding. The typical mean radius of particles is about 0.07-0.08 microns for σ = 1.6; the size and contribution of this fraction to the sky background reach a maximum at an effective scattering altitude of about 22 km.
The method described here can be the basis of systematic monitoring of stratospheric aerosol by the large number of color all-sky cameras installed at northern latitudes for aurora and noctilucent cloud observations. This will help to detect possible trends of aerosol characteristics and effects of volcanic eruptions during upcoming years.
Figure 1. Twilight sky brightness ratio in symmetric points of the solar vertical (evening twilight of March 27, 2016). Arrows show the effect of aerosol scattering in the stratosphere.

Figure 2. On the explanation of color effects of the sky background during the twilight.

Figure 3. Color index of the sky background in solar vertical points during the evening twilight of March 27, 2016; the values of ζ are denoted near the curves.

Figure 4. Difference of sky color indexes at zenith angles z (denoted near the curves) and 96°, the same twilight as in Fig. 3. Solid lines correspond to the best-fit model of stratospheric aerosol, dashed lines refer to the aerosol-free case. The values of effective scattering altitudes at the zenith are denoted at the left.

Examples of the dependencies D_RB(ζ) for different solar zenith angles (the same twilight) are shown in Figure 4. They show a fast increase at large positive ζ against a background of gradual decrease (dashed lines), as expected.

Figure 5. Retrieved characteristics of the particle log-normal size distribution: solid line and gray areas (single, double and triple error) − this work, 21.4 km; dashed line − Bourassa et al. (2008).

Figure 6. Vertical profiles of mean particle radius: 1 − this work, the same twilight as in Figs. 3-5; 2 − Bourassa et al. (2008).

Figure 7. Altitude profile of the aerosol scattering contribution in the sky background for different solar vertical points, the same twilight as in Figs. 3-5.

Figure 8. Evolution of the contribution to the sky brightness at the zenith, the mean particle radius and the altitude gradient of aerosol at 20 km in spring and summer 2016.
Acknowledgments

The authors are grateful to Andrey M. Tatarnikov (Sternberg Astronomical Institute, Moscow State University) for his help in observation preparations. The work is supported by the Russian Foundation for Basic Research, grant No. 16-05-00170-a.
Bauman, J.J., Russell, P.B., Geller, M.A., Hamill, P., 2003. A stratospheric aerosol climatology from SAGE II and CLAES measurements: 2. Results and comparisons, 1984-1999. J. Geophys. Res., 108, D13, 4383-4412.
Bourassa, A.E., Degenstein, D.A., Llewellyn, E.J., 2008. Retrieval of stratospheric aerosol size information from OSIRIS limb scattered sunlight spectra. Atmos. Chem. Phys. Discuss., 8, 4001-4016.
Brock, C.A., Hamill, P., Wilson, J.C., Jonsson, H.H., Chan, K.R., 1995. Particle formation in the upper tropical troposphere - A source of nuclei for the stratospheric aerosol. Science, 270, 1650-1653.
Burlakov, V.D., Dolgii, S.I., Nevzorov, A.V., 2011. Lidar Observations of Aerosol Disturbances of Stratosphere above Tomsk (56.5°N, 85.0°E) During the Volcanic Activity Period in 2006-2010. Atmospheric and Oceanic Optics, 24, 1031-1040.
Clark, J.E., 1883. The remarkable sunsets. Nature, 29, 130-131.
Deshler, T., Hervig, M.E., Hofmann, D.J., Rosen, J.M., Liley, J.B., 2003. Thirty years of in situ stratospheric aerosol size distribution measurements from Laramie, Wyoming (41°N), using balloon-borne instruments. J. Geophys. Res., 108, D5, 4167-4179.
EOS MLS Science Team, 2011a. MLS/Aura Level 2 Temperature, version 003. Greenbelt, MD, USA: NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). http://disc.sci.gsfc.nasa.gov/datacollection/ML2T_V003.html
EOS MLS Science Team, 2011b. MLS/Aura Level 2 Ozone (O3) Mixing Ratio V003. Greenbelt, MD, USA: NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). http://disc.gsfc.nasa.gov/datacollection/ML2O3_003.html
Gruner, P., Kleinert, H., 1927. Die Dämmerungserscheinungen (Grand, Hamburg, Germany), 103-107.
Hansen, J., Lacis, A., Ruedy, R., Sato, M., 1992. Potential climate impact of the Mount Pinatubo eruption. Geophys. Res. Lett., 19, 215-218.
Hinds, W.C., 1999. Aerosol technology: properties, behavior, and measurement of airborne particles. John Wiley & Sons, New York.
Hofmann, D.J., Rosen, J.M., 1980. Stratospheric sulfuric acid layer: Evidence for an anthropogenic component. Science, 208, 1368-1370.
Hofmann, D.J., Solomon, S., 1989. Ozone destruction through heterogeneous chemistry following the eruption of El Chichon. J. Geophys. Res., 94, 5029-5041.
Jager, H., Deshler, T., 2002. Lidar backscatter to extinction, mass and area conversions for stratospheric aerosols based on midlatitude balloonborne size distribution measurements. Geophys. Res. Lett., 29, 1929-1932.
Jumelet, J., Bekki, S., David, C., Keckhut, P., 2008. Statistical estimation of stratospheric particle size distribution by combining optical modelling and lidar scattering measurements. Atmos. Chem. Phys., 8, 5435-5448.
Junge, C.E., Changnon, C.W., Manson, J.E., 1961. Stratospheric aerosols. J. Meteorol., 18, 81-108.
Lee, R. Jr., Hernádez-Andrés, J., 2003. Measuring and modeling twilight's purple light. Applied Optics, 42, 445-457.
Rosen, J.M., 1971. The boiling point of stratospheric aerosols. J. Appl. Meteorol., 10, 1044-1046.
Solomon, S., Daniel, J.S., Neely, R.R. III, Vernier, J.-P., Dutton, E.G., Thomason, L.W., 2011. The persistently variable "background" stratospheric aerosol layer and global climate change. Science, 333, 866-870.
Thomason, L.W., Burton, S.P., Luo, B.-P., Peter, T., 2007. SAGE II measurements of stratospheric aerosol properties at non-volcanic levels. Atmos. Chem. Phys. Discuss., 7, 6959-6997.
Ugolnikov, O.S., 1999. Twilight Sky Photometry and Polarimetry: The Problem of Multiple Scattering at the Twilight Time. Cosmic Research, 37, 159-166.
Ugolnikov, O.S., Maslov, I.A., 2002. Multicolor Polarimetry of the Twilight Sky. The Role of Multiple Light Scattering as a Function of Wavelength. Cosmic Research, 40, 224-232.
Ugolnikov, O.S., Postylyakov, O.V., Maslov, I.A., 2004. Effects of multiple scattering and atmospheric aerosol on the polarization of the twilight sky. J. Quant. Spectrosc. Radiat. Transfer, 88, 233-241.
Ugolnikov, O.S., Maslov, I.A., 2009. Studies of the Stratosphere Aerosol Layer Based on Polarization Measurements of the Twilight Sky. Cosmic Research, 47, 198-207.
Ugolnikov, O.S., Maslov, I.A., 2013a. Undisturbed Mesosphere Optical Properties from Wide-Angle Frequent Twilight Sky Polarimetry. Cosmic Research, 51, 235-240.
Ugolnikov, O.S., Maslov, I.A., 2013b. Summer mesosphere temperature distribution from wide-angle polarization measurements of the twilight sky. J. Atmos. Solar Terr. Phys., 105-106, 8-14.
Ugolnikov, O.S., Maslov, I.A., Kozelov, B.V., Dlugach, J.M., 2016. Noctilucent cloud polarimetry: Twilight measurements in a wide range of scattering angles. Plan. Space Sci., 125, 105-113.
von Zahn, U., von Cossart, G., Fiedler, J., Fricke, K.H., Nelke, G., Baumgarten, G., Rees, D., Hauchecorne, A., Adolfsen, K., 2000. The ALOMAR Rayleigh/Mie/Raman lidar: objectives, configuration, and performance. Ann. Geophysicae, 18, 815-833.
Zuev, V.V., Burlakov, V.D., Dolgii, S.I., Nevzorov, A.V., 2007. Anomaly Aerosol Scattering in the Atmosphere above Tomsk During Autumn-Winter Period of 2006/07. Atmospheric and Oceanic Optics, 20, 524-530.
| [] |
[
"Condensation Transitions in Nonequilibrium systems",
"Condensation Transitions in Nonequilibrium systems"
] | [
"M R Evans \nSchool of Physics\nThe University of Edinburgh\nMayfield RoadEH9 3JZEdinburghU.K\n"
] | [
"School of Physics\nThe University of Edinburgh\nMayfield RoadEH9 3JZEdinburghU.K"
] | [] | Systems driven out of equilibrium can often exhibit behaviour not seen in systems in thermal equilibrium-for example phase transitions in one-dimensional systems. In this talk I will review several 'condensation' transitions that occur when a conserved quantity is driven through the system. Although the condensation is spatial, i.e. a finite fraction of the conserved quantity condenses into a small spatial region, useful comparison can be made with usual Bose-Einstein condensation. Amongst some onedimensional examples I will discuss the 'Bus Route Model' where the condensation corresponds to the clustering together of buses moving along a bus-route. | 10.1007/s00023-003-0932-z | [
"https://arxiv.org/pdf/cond-mat/0401341v1.pdf"
] | 16,517,555 | cond-mat/0401341 | 2ddd45ecef04e6108bf83f191ec0317ae42a65be |
Condensation Transitions in Nonequilibrium systems
19 Jan 2004 June 28, 2018
M R Evans
School of Physics
The University of Edinburgh
Mayfield RoadEH9 3JZEdinburghU.K
Condensation Transitions in Nonequilibrium systems
19 Jan 2004 June 28, 2018
Systems driven out of equilibrium can often exhibit behaviour not seen in systems in thermal equilibrium-for example phase transitions in one-dimensional systems. In this talk I will review several 'condensation' transitions that occur when a conserved quantity is driven through the system. Although the condensation is spatial, i.e. a finite fraction of the conserved quantity condenses into a small spatial region, useful comparison can be made with usual Bose-Einstein condensation. Amongst some onedimensional examples I will discuss the 'Bus Route Model' where the condensation corresponds to the clustering together of buses moving along a bus-route.
Introduction
Broadly speaking, one can consider two types of nonequilibrium systems: those relaxing towards thermal equilibrium and those held far from thermal equilibrium e.g. by the system being driven by some external field. In the latter case the steady state of the system will not be described by usual Gibbs-Boltzmann statistical weights rather it will be a nonequilibrium steady state. A natural way to construct a nonequilibrium steady state is to drive the system by forcing a current of some conserved quantity, for example energy or mass, through the system. Such systems are known as driven diffusive systems (DDS) [2].
In recent years the possibility of phase transitions and phase separation in one-dimensional nonequilibrium systems has been explored and some examples are by now well studied. To appreciate the significance one should recall the general dictum that in one-dimensional equilibrium systems phase ordering and phase transitions do not occur (except in the limit of zero temperature, or with long-range interactions) [3].
Let us briefly review work on one-dimensional phase transitions in driven systems. A very simple one-dimensional driven diffusive system is the asymmetric simple exclusion process (ASEP). Here particles hop in a preferred direction on a one-dimensional lattice with hard-core exclusion (at most one particle can be at any given site). Indicating the presence of a particle by a 1 and an empty site (hole) by 0, the dynamics comprises the following exchanges at nearest neighbour sites:

1 0 → 0 1 with rate 1
0 1 → 1 0 with rate q    (1)
The open system was studied by Krug [4] and boundary induced phase transitions shown to be possible. Specifically one considers a lattice of N sites where at the left boundary site (site 1) a particle is introduced with rate α if that site is empty, and at the right boundary site (site N) any particle present is removed with rate β. Thus the dynamical processes at the boundaries are

at site 1:  0 → 1 with rate α
at site N:  1 → 0 with rate β    (2)
These boundary conditions force a steady state current of particles J through the system. Phase transitions occur when lim N→∞ J exhibits non-analyticities. The steady state of this system was solved exactly for the totally asymmetric case [5,6] and more recently for the general q case [7,8]. When q < 1 the phase diagram comprises three phases: a high-density phase where the current is controlled by a low exit rate β (one can think of this as a queue of cars at a traffic light that doesn't let many cars through); a low-density phase where the current is controlled by a low injection rate α (think of this as a traffic light that does not let many cars onto an open road); and a maximal-current phase where both α, β are high (α, β > (1 − q)/2) and the current is J = (1 − q)/4. Note that since increasing α and β doesn't increase the current, the current is saturated. In the maximal current phase generic long-range correlations exist, an example being the decay of the particle density from the left boundary to the bulk value 1/2, which is a power law ∼ 1/x^{1/2} where x is the distance from the left boundary.
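As a concrete illustration (not part of the original paper), the open-boundary ASEP described above can be simulated directly with a random-sequential update. This is a minimal Monte Carlo sketch; the function name and parameter choices are ours, not the exact solutions of [5-8].

```python
import random

def asep_current(N, alpha, beta, q, sweeps=2000, seed=1):
    """Monte Carlo sketch of the open-boundary ASEP.

    Bulk moves: 1 0 -> 0 1 with rate 1, 0 1 -> 1 0 with rate q.
    Boundaries: injection at site 1 with rate alpha, removal at site N
    with rate beta.  Returns the injection current measured over the
    second half of the run (the first half is discarded as burn-in).
    """
    rng = random.Random(seed)
    tau = [0] * N                      # site occupations
    steps = sweeps * (N + 1)           # one sweep ~ N+1 update attempts
    injected = 0
    for t in range(steps):
        i = rng.randrange(N + 1)       # pick a bond or boundary at random
        if i == 0:                     # injection at the left boundary
            if tau[0] == 0 and rng.random() < alpha:
                tau[0] = 1
                if t >= steps // 2:
                    injected += 1
        elif i == N:                   # removal at the right boundary
            if tau[N - 1] == 1 and rng.random() < beta:
                tau[N - 1] = 0
        else:                          # bulk exchange on bond (i-1, i)
            if tau[i - 1] == 1 and tau[i] == 0:
                tau[i - 1], tau[i] = 0, 1
            elif tau[i - 1] == 0 and tau[i] == 1 and rng.random() < q:
                tau[i - 1], tau[i] = 1, 0
    time_measured = (steps / 2) / (N + 1)   # random-sequential time unit
    return injected / time_measured
```

In the totally asymmetric case (q = 0) with α = β = 0.75 the measured current should lie roughly at the maximal-current value (1 − q)/4 = 1/4, while a small α gives the much lower low-density-phase current.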
On the line α = β < (1 − q)/2 which separates the high and low density phases one finds coexistence between a region of low density in the left part of lattice and a region of high density on the right separated by a 'shock' where the density changes sharply over a microscopic distance.
Periodic systems (i.e. a ring of sites) can also exhibit phase separation when inhomogeneities or defects are introduced. A very simple example is to introduce into the asymmetric exclusion process a 'slow bond' through which particles hop with a reduced rate. Then in the steady state one can obtain phase separation between a region of high density behind the slow bond and a region of low density in front of the slow bond. Moving defects (i.e. particles with dynamics different from that of the others) have also been considered and exact solutions obtained [11,12,13]. One can think of a slow agricultural vehicle on a country road with a large queue of cars behind it and open road in front of it.
A further question is whether systems related to the hopping particle models described so far, but without inhomogeneities, can exhibit phase ordering. A very simple model was introduced in [14] comprising three species of conserved particles, amongst which all possible exchanges are allowed. However a key feature is that the dynamics has a cyclic symmetry i.e. A particles move preferentially to the left of B particles which move preferentially to the left of C particles which in turn move preferentially to the left of A particles. The model exhibits strong phase separation into pure domains of A B C. Similar strong phase separation occurs in other related models [15].
A final class of transitions in one-dimensional hopping particle models is that involving spatial condensation, whereby a finite fraction of the particles condenses onto the same site. Examples include the appearance of a large aggregate in models of aggregation and fragmentation [16] and the emergence of a single flock in dynamical models of flocking [17]. We will analyse a simple example of a condensation transition which occurs in the zero-range process which we now define.
The zero-range process
The zero-range process was introduced by Spitzer [9] and recent applications and developments have been reviewed in [10]. We consider a one-dimensional lattice of M sites with sites labelled µ = 1 . . . M and periodic boundary conditions (more generally one can consider the zero-range process on a lattice of arbitrary dimension). Each site can hold an integer number of indistinguishable particles. The configuration of the system is specified by the occupation numbers n µ of each site µ. The total number of particles is denoted by L and is conserved under the dynamics. The dynamics of the system is given by the rates at which a particle leaves a site µ (one can think of it as the topmost particle-see Figure 1a) and moves to the left nearest neighbour site µ−1. The hopping rates u(n) are a function of n the number of particles at the site of departure. Some particular cases are: if u(n) = n then the dynamics of each particle is independent of the others; if u(n) = const for n > 0 then the rate at which a particle leaves a site is unaffected by the number of particles at the site (as long as it is greater than zero).
The important attribute of the zero-range process is that it has a steady state described by a product measure. By this it is meant that the steady state probability P ({n µ }) of finding the system in configuration {n 1 , n 2 . . . n M } is given by a product of factors f (n µ )
P(\{n_\mu\}) = \frac{1}{Z(M,L)} \prod_{\mu=1}^{M} f(n_\mu) .    (3)
Here the normalisation Z(M, L) is introduced so that the sum of the probabilities for all configurations, with the correct number of particles L, is one. In the basic model described above, f (n) is given by
f(n) = \prod_{m=1}^{n} \frac{1}{u(m)} \quad \text{for } n \ge 1 , \qquad f(0) = 1    (4)
To prove (3,4) one simply considers the stationarity condition on the probability of a configuration (probability current out of the configuration due to hops is equal to probability current into the configuration due to hops):
\sum_{\mu} \theta(n_\mu)\, u(n_\mu)\, P(n_1 \dots n_\mu \dots n_M) = \sum_{\mu} \theta(n_\mu)\, u(n_{\mu+1}+1)\, P(n_1 \dots n_\mu - 1,\; n_{\mu+1}+1 \dots n_M) .    (5)
The Heaviside function θ(n_µ) highlights that it is the sites with n_µ ≥ 1 that allow exit from the configuration (lhs of (5)) but also allow entry to the configuration (rhs of (5)). Equating the terms µ on both sides of (5) and cancelling common factors, assuming (3), results in
u(n_\mu)\, f(n_\mu)\, f(n_{\mu+1}) = u(n_{\mu+1}+1)\, f(n_\mu - 1)\, f(n_{\mu+1}+1)    (6)
This equality can be recast as
u(n_\mu)\, \frac{f(n_\mu)}{f(n_\mu - 1)} = u(n_{\mu+1}+1)\, \frac{f(n_{\mu+1}+1)}{f(n_{\mu+1})} = \text{constant}    (7)
Setting the constant equal to unity implies
f(n_\mu) = \frac{f(n_\mu - 1)}{u(n_\mu)}    (8)
and iterating (8) leads to (4).
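This proof can be checked numerically by brute force: for a small ring, enumerate every configuration, build the product measure (3,4), and verify that the total probability flux out of each configuration equals the flux in. In the sketch below all names are ours, and the hop rate u(n) = 1 + 2/n is just an arbitrary test case.

```python
from itertools import product
from math import isclose

def u(n):
    return 1.0 + 2.0 / n               # arbitrary example hop rates

def f(n):
    w = 1.0                            # single-site weight, eq. (4)
    for m in range(1, n + 1):
        w /= u(m)
    return w

def check_stationarity(M, L):
    """Verify the master-equation balance (5) for the product measure (3)."""
    configs = [c for c in product(range(L + 1), repeat=M) if sum(c) == L]
    weight = {c: 1.0 for c in configs}
    for c in configs:
        for n in c:
            weight[c] *= f(n)
    Z = sum(weight.values())
    for c in configs:
        # rate out: every occupied site fires at rate u(n)
        out_flux = sum(u(n) for n in c if n > 0) * weight[c] / Z
        # rate in: a particle hops (leftwards) from site mu+1 onto site mu
        in_flux = 0.0
        for mu in range(M):
            if c[mu] == 0:
                continue
            src = list(c)
            src[mu] -= 1
            src[(mu + 1) % M] += 1
            in_flux += u(c[(mu + 1) % M] + 1) * weight[tuple(src)] / Z
        assert isclose(out_flux, in_flux, rel_tol=1e-12)
    return True
```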
We can easily generalise to consider an inhomogeneous system by which we mean the hopping rates are site dependent: the hopping rate out of site µ when it contains n µ particles is u µ (n µ ). It is easy to check that the steady state is simply modified to
P(\{n_\mu\}) = \frac{1}{Z(M,L)} \prod_{\mu=1}^{M} f_\mu(n_\mu)    (9)
where f µ are given by
f_\mu(n) = \prod_{m=1}^{n} \frac{1}{u_\mu(m)} \quad \text{for } n \ge 1 , \qquad f_\mu(0) = 1    (10)
The proof is identical to that for the homogeneous case, with the replacement of u(n_µ) by u_µ(n_µ).

There exists an exact mapping from a zero-range process to an asymmetric exclusion process. This is illustrated in Figure 1. The idea is to consider the particles of the zero-range process as the holes (empty sites) of the exclusion process. Then the sites of the zero-range process become the moving particles of the exclusion process. Note that in the exclusion process we have M particles hopping on a lattice of M + L sites. A hopping rate in the zero range process u(m) which is dependent on m corresponds to a hopping rate in the exclusion process which depends on the gap to the particle in front. So the particles can feel each other's presence and one can have a long-range interaction.
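The mapping can be written down explicitly. In the sketch below (our notation, not the paper's), a ZRP configuration is the list of occupation numbers n_µ; each ZRP site becomes an exclusion-process particle, and its n_µ particles become the n_µ holes in front of that particle on the ring.

```python
def zrp_to_ep(occupations):
    """ZRP occupations (n_1, ..., n_M) -> exclusion-process ring of M+L sites."""
    config = []
    for n in occupations:
        config.append(1)           # the ZRP site itself -> a particle
        config.extend([0] * n)     # its n particles -> n holes ahead of it
    return config

def ep_to_zrp(config):
    """Inverse map: gap in front of each particle on the periodic ring."""
    positions = [i for i, s in enumerate(config) if s == 1]
    Ntot = len(config)
    return [(positions[(k + 1) % len(positions)] - i - 1) % Ntot
            for k, i in enumerate(positions)]
```

The round trip is exact: M occupied ZRP sites with L particles map to M particles on a ring of M + L sites and back.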
Condensation Transitions
We now proceed to analyse the steady states of form (9) and the condensation transition that may occur. The important quantity to consider is the normalisation Z(M, L) as it plays the role of the partition sum. The normalisation is defined through the condition
Z(M,L) = \sum_{n_1, n_2, \dots, n_M} \delta\Big( \sum_\mu n_\mu - L \Big) \prod_{\mu=1}^{M} f_\mu(n_\mu)    (11)
where the δ function enforces the constraint of L particles. The normalisation may be considered as the analogue of a canonical partition function of a thermodynamic system.
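The constrained sum (11) can be evaluated for modest M, L by peeling off one site at a time, Z(M, L) = Σ_n f(n) Z(M−1, L−n). The sketch below (our code, homogeneous f) does exactly this; with f ≡ 1 it simply counts the C(M+L−1, L) ways of placing L particles on M sites.

```python
from functools import lru_cache

def partition_function(M, L, f):
    """Z(M, L) = sum over {n_mu >= 0, sum n_mu = L} of prod_mu f(n_mu)."""
    @lru_cache(maxsize=None)
    def Z(m, l):
        if m == 0:
            return 1.0 if l == 0 else 0.0
        # peel off the last site, which holds n of the l remaining particles
        return sum(f(n) * Z(m - 1, l - n) for n in range(l + 1))
    return Z(M, L)
```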
We define the 'speed' v as the average hopping rate out of a site,

v = \langle \theta(n_\mu)\, u_\mu(n_\mu) \rangle = \frac{Z(M, L-1)}{Z(M, L)}    (12)

where we have used (9,10). Note that (12) tells us that the speed is independent of the site and thus may be considered a conserved quantity in the steady state of the system. In the totally asymmetric system considered in Section 2 the speed is equal to the current of particles flowing between neighbouring sites. The speed is a ratio of partition functions of different system sizes (12) and corresponds to a fugacity.
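The statement that the average hop rate out of a site equals Z(M, L−1)/Z(M, L), as in (12), can be confirmed by direct enumeration of a small system. The example rates u(n) = 1 + 3/n and all names below are ours.

```python
from itertools import product

def u(n):
    return 1.0 + 3.0 / n                # arbitrary example hop rates

def f(n):
    w = 1.0                             # single-site weight, eq. (4)
    for m in range(1, n + 1):
        w /= u(m)
    return w

def weight(c):
    w = 1.0
    for n in c:
        w *= f(n)
    return w

def Z(M, L):
    return sum(weight(c)
               for c in product(range(L + 1), repeat=M) if sum(c) == L)

def speed(M, L):
    """Canonical average of theta(n_1) u(n_1): the steady-state speed."""
    configs = [c for c in product(range(L + 1), repeat=M) if sum(c) == L]
    ZL = sum(weight(c) for c in configs)
    return sum(u(c[0]) * weight(c) for c in configs if c[0] > 0) / ZL
```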
We now use the integral representation of the delta function to write the partition function as
Z(M,L) = \oint \frac{\mathrm{d}z}{2\pi i}\, z^{-(L+1)} \prod_{\mu=1}^{M} F_\mu(z) ,    (13)
where
F_\mu(z) = \sum_{m=0}^{\infty} z^m f_\mu(m) .    (14)
For large M, L (13) is dominated by the saddle point of the integral and the value of z at the saddle point is the fugacity. The equation for the saddle point reduces to
\frac{L}{M} = \frac{z}{M} \sum_{\mu=1}^{M} \frac{\partial}{\partial z} \ln F_\mu(z)    (15)
which, defining φ = L/M, can be written as
\phi = \frac{z}{M} \sum_{\mu=1}^{M} \frac{F'_\mu(z)}{F_\mu(z)} .    (16)
In the thermodynamic limit,
M \to \infty \quad \text{with} \quad L = \phi M ,    (17)
where the density φ is held fixed, the question is whether a valid saddle point value of z can be found from (16). We expect that for low φ the saddle point is valid but, as we shall discuss, there exists a maximum value of z and if at this maximum value the rhs of (16) is finite, then for large φ (16) cannot be satisfied. We now consider how condensation may occur in the inhomogeneous and the homogeneous case.
Inhomogeneous case
To give an idea of how a condensation transition may occur we consider the case u_µ(m) = u_µ for m > 0, i.e. the hopping rate does not depend on the number of particles at a site. f_µ is given by

f_\mu(n) = \left( \frac{1}{u_\mu} \right)^{n}    (18)

and the probability of occupancies {n_1, n_2, ..., n_M} is

P(\{n_1, n_2, \dots, n_M\}) = \frac{1}{Z(M,L)} \prod_{\mu=1}^{M} \left( \frac{1}{u_\mu} \right)^{n_\mu} .    (19)
The mapping to an ideal Bose gas is evident: the L particles of the zero-range process are viewed as Bosons which may reside in M states with energies E µ determined by the site hopping rates: exp(−βE µ ) = 1/u µ . Thus the ground state corresponds to the site with the lowest hopping rate. The normalisation Z(M, L) is equivalent to the canonical partition function of the Bose gas. We can sum the geometric series (14) to obtain F µ and F ′ µ then taking the large M limit allows the sum over µ to be written as an integral
\phi = \int_{u_{\min}}^{\infty} \mathrm{d}u\, P(u)\, \frac{z}{u - z}    (20)
where P(u) is the probability distribution of site hopping rates with u min the lowest possible site hopping rate. Interpreting P(u) as a density of states, equation (20) corresponds to the condition that in the grand canonical ensemble of an ideal Bose gas the number of Bosons per state is φ. The theory of Bose condensation tells us that when certain conditions on the density of low energy states pertain we can have a condensation transition. Then (16) can no longer be satisfied and we have a condensation of particles into the ground state, which is here the site with the slowest hopping rate.
A very simple example is to have just one 'slow site', i.e. u_1 = p while the other M − 1 sites have hopping rates u_µ = 1 for µ > 1. Using the mapping to an exclusion process, this corresponds to a single slow particle, i.e. the agricultural vehicle example described earlier. One can show [11] that for a high density of particles in the zero range process (low density of particles in the corresponding asymmetric exclusion process) we have a condensate, since site 1 contains a finite fraction of the particles. In the low density phase the particles are evenly spread between all sites.
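For this single-slow-site example the condensation can be seen exactly on finite systems. Using (18), f_1(n) = p^{−n} while the homogeneous remainder of the partition function is just the binomial count C((M−1)+(L−n)−1, L−n) of placing the other L−n particles on M−1 sites, so the mean occupation of the slow site is a ratio of short sums. The sketch below is our code (not from the paper); it shows the slow site holding an extensive pile of particles at high density but only an O(1) number at low density.

```python
from math import comb

def slow_site_occupation(M, L, p):
    """Mean particle number on the single slow site (u_1 = p, others 1).

    Weight of n particles on the slow site: p**-n, from f_1(n) = p**-n,
    times C((M-1) + (L-n) - 1, L-n) ways of placing the rest on M-1 sites.
    """
    weights = [p ** -n * comb(M + L - n - 2, L - n) for n in range(L + 1)]
    Z = sum(weights)
    return sum(n * w for n, w in enumerate(weights)) / Z
```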
Homogeneous case
We now consider the homogeneous zero-range process where the hopping rates u(n) are site independent. Then (14) is independent of µ and reads
F(z) = \sum_{n=0}^{\infty} \prod_{m=1}^{n} \frac{z}{u(m)}    (21)
The fugacity z must be chosen so that F converges, or else the sum (14) could not have been performed. Therefore z is restricted to z ≤ β, where we define β to be the radius of convergence of F(z). From (21) we see that β is the limiting value of the u(m), i.e. the limiting value of the hopping rate out of a site containing a large number of particles. We interpret (16) as giving a relation between the density of holes (number of holes per site) and the fugacity z.
The saddle point condition (16) becomes
\phi = \frac{z F'(z)}{F(z)}    (22)
Given that the rhs of (22) is a monotonically increasing function of z, we deduce that the density of particles increases with the fugacity. However if at z = β, the maximum allowed value of z, the rhs of (22) is still finite, then one can no longer solve (22) for the density and one must have a condensation transition. Physically, the condensation corresponds to a spontaneous symmetry breaking in which one of the sites is spontaneously selected to hold a finite fraction of the particles.
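For a concrete homogeneous example (ours, not from the paper), take u(n) = 1 for all n ≥ 1: then f(n) = 1, F(z) = 1/(1−z), and (22) reads φ = z/(1−z), i.e. z = φ/(1+φ). The bisection sketch below, which works with a truncated series of nmax terms and exploits the monotonicity of zF′(z)/F(z), recovers this closed form.

```python
def F(z, f, nmax=2000):
    """Truncated generating function, eq. (14)/(21)."""
    return sum(f(n) * z ** n for n in range(nmax))

def dF(z, f, nmax=2000):
    """Truncated derivative F'(z)."""
    return sum(n * f(n) * z ** (n - 1) for n in range(1, nmax))

def solve_fugacity(phi, f, tol=1e-10):
    """Bisection for phi = z F'(z)/F(z); the rhs increases monotonically in z."""
    lo, hi = 0.0, 0.999999
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * dF(mid, f) / F(mid, f) < phi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```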
Thus, for condensation to occur (i.e. when φ is large enough for (22) not to have a solution for the allowed values of z) we require
\lim_{z \to \beta} \frac{F'(z)}{F(z)} < \infty .    (23)
We now assume that u(n) decreases uniformly to β in the large n limit as
u(n) = \beta\,(1 + \zeta(n))    (24)
where ζ(n) is a monotonically decreasing function. Analysis of the series
F(\beta) = \sum_{n=0}^{\infty} \exp\left( - \sum_{m=1}^{n} \ln\left[1 + \zeta(m)\right] \right) , \qquad F'(\beta) = \sum_{n=0}^{\infty} n \exp\left( - \sum_{m=1}^{n} \ln\left[1 + \zeta(m)\right] \right)    (25)
reveals that the condition for condensation is simply that F ′ (β) is finite and this occurs if u(n) decays to β more slowly than β(1 + 2/n). (This is easiest to see by expanding ln [1 + ζ] and approximating the sum over m by an integral in (25).)
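The criterion can be probed numerically. For u(n) = β(1 + b/n) the weights β^n f(n) = Π_{m≤n} (1 + b/m)^{-1} decay like n^{−b}, so the partial sums of Σ_n n β^n f(n) (proportional to F′(β)) saturate for b > 2 and keep growing for b < 2. The small sketch below is ours.

```python
def Fprime_partial(N, b):
    """Partial sum of sum_n n * beta^n f(n) for u(m) = beta (1 + b/m).

    The running weight w = beta^n f(n) = prod_{m<=n} 1/(1 + b/m) behaves
    like n**-b for large n, so the series converges iff b > 2
    (the condensation criterion).
    """
    total, w = 0.0, 1.0
    for n in range(1, N + 1):
        w /= 1.0 + b / n            # update w to beta^n f(n)
        total += n * w
    return total
```

Comparing partial sums at N and 2N makes the dichotomy visible: for b = 3 the tail is tiny, for b = 1 it grows without bound.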
It is interesting to translate this result into the language of the exclusion process. In this context we can have condensation if the hop rate of a particle into a gap of size n decays as β(1 + 2/n) therefore there is an effective long range interaction.
Bus route model
As an example of this let us consider the 'bus route model' [18]. The model is defined on a 1d lattice. Each site (bus-stop) is either empty, contains a bus (a conserved particle) or contains a passenger (non-conserved quantity). The dynamical processes are that passengers arrives at an empty site with rate λ; a bus moves forward to the next stop with rate 1 if that stop is empty; if the next stop contains passengers the bus moves forward with rate β and removes the passengers.
The model thus defined has not been solved but simulations reveal two regimes. At high bus density the gaps between buses are evenly distributed. However at low bus density there is a condensed regime where the lead bus has a large gap to the next bus in front of it with bus-stops full of passengers in between. The other buses have small gaps between them. Thus the buses form a jam of buses and after a long delay all arrive at a bus-stop at once. The bus route model can be related to the zero-range process by a mean-field approximation in which we integrate out the non-conserved quantity (passengers). The idea is that a bus-stop, next to bus 1 say, will last have been visited by a bus (bus 2) a mean time ago of n/v where n is the distance between bus 2 and bus 1 and v is the steady state speed. Therefore the mean-field probability that the site next to bus 1 is not occupied by a passenger is exp(−λn/v). From this probability an effective hopping rate for a bus into a gap of size n is obtained by averaging the two possible hop rates 1, β:
u(n) = \beta + (1 - \beta)\, e^{-\lambda n / v} .    (26)
We can now see that this mean-field approximation to the bus-route model is equivalent to a homogeneous zero-range process discussed earlier.
Since u(n) decays exponentially the condition for a strict phase transition in the thermodynamic limit is not met. However on any finite system for λ sufficiently small, an apparent condensation will be seen. In the bus route problem this corresponds to the universally irritating situation of all the buses on the route arriving at once.
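The finite-size nature of this apparent condensation can be quantified: from (26), u(n) − β ≈ (1 − β) e^{−λn/v} stays above a threshold ε out to gaps of order (v/λ) ln[(1−β)/ε], a scale which diverges as λ → 0. A small sketch (our code and names):

```python
from math import exp, log

def bus_rate(n, beta, lam, v):
    """Effective hop rate (26) into a gap of n stops."""
    return beta + (1.0 - beta) * exp(-lam * n / v)

def crossover_gap(beta, lam, v, eps=1e-3):
    """Gap size beyond which u(n) - beta has decayed below eps."""
    return (v / lam) * log((1.0 - beta) / eps)
```

Halving the passenger arrival rate λ doubles the gap scale over which buses still feel each other, so for small λ a finite route shows bunching indistinguishable in practice from true condensation.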
Conclusion
We have shown how the zero range process exhibits two kinds of condensation transition. One is due to having an inhomogeneous system i.e. we get condensation of particles onto the site with the slowest hopping rate. Although the condensation is spatial the mechanism is equivalent to Bose condensation in an Ideal Bose Gas.
The other type of condensation occurs on a homogeneous systems and involves the spontaneous selection of a site onto which a finite fraction of the particles condense. Recently this condensation mechanism has been used to understand the existence or non-existence of phase separation in a general class of one dimensional driven systems [19,20].
Figure 1: Equivalence of the zero-range process and the asymmetric exclusion process.
[1] S. Katz, J. L. Lebowitz and H. Spohn (1983) Phys. Rev. B 28 1655; (1984) J. Stat. Phys. 34 497
[2] B. Schmittmann and R. K. P. Zia (1995) Statistical Mechanics of Driven Diffusive Systems, vol. 17 of the Domb and Lebowitz series, Academic Press, U.K.
[3] L. D. Landau and E. M. Lifshitz (1980) Statistical Physics I, Pergamon Press, New York
[4] J. Krug (1991) Phys. Rev. Lett. 67 1882
[5] B. Derrida, M. R. Evans, V. Hakim and V. Pasquier (1993) J. Phys. A 26 1493
[6] G. Schütz and E. Domany (1993) J. Stat. Phys. 72 277
[7] T. Sasamoto (1999) J. Phys. A 32 7109
[8] R. A. Blythe, M. R. Evans, F. Colaiori and F. H. L. Essler (2000) J. Phys. A 33 2313
[9] F. Spitzer (1970) Advances in Math. 5 246
[10] M. R. Evans (2000) Brazilian Journal of Physics 30 42
[11] M. R. Evans (1996) Europhys. Lett. 36 13
[12] J. Krug and P. A. Ferrari (1996) J. Phys. A 29 L465
[13] K. Mallick (1996) J. Phys. A 29 5375
[14] M. R. Evans, Y. Kafri, H. M. Koduvely and D. Mukamel (1998) Phys. Rev. Lett. 80 425; (1998) Phys. Rev. E 58 2764
[15] R. Lahiri, M. Barma and S. Ramaswamy (1997) Phys. Rev. Lett. 79 1150; (2000) Phys. Rev. E 61 1648
[16] S. N. Majumdar, S. Krishnamurthy and M. Barma (1998) Phys. Rev. Lett. 81 3691; (2000) Phys. Rev. E 61 6337
[17] O. J. O'Loan and M. R. Evans (1999) J. Phys. A 32 L99
[18] O. J. O'Loan, M. R. Evans and M. E. Cates (1998) Phys. Rev. E 58 1404
[19] Y. Kafri, E. Levine, D. Mukamel, G. M. Schütz and J. Torok (2002) Phys. Rev. Lett. 89 035702
[20] Y. Kafri, E. Levine, D. Mukamel and J. Torok (2002) J. Phys. A 35 L459
| [] |
[
"Challenges in Designing Racially Inclusive Language Technologies",
"Challenges in Designing Racially Inclusive Language Technologies"
] | [
"Kimi V Wenzel [email protected] \nCarnegie Mellon University\n5000 Forbes Ave15213PittsburghPA\n",
"Geoff Kaufman \nCarnegie Mellon University\n5000 Forbes Ave15213PittsburghPA\n"
] | [
"Carnegie Mellon University\n5000 Forbes Ave15213PittsburghPA",
"Carnegie Mellon University\n5000 Forbes Ave15213PittsburghPA"
] | [] | We take a critical lens toward the pursuit of racially inclusive language technologies and identify several areas necessitating future work. We discuss the potential harms of conversational technologies, outline three challenges that arise in inclusive design, and lastly, argue that conversational user interface designers and researchers should go beyond racially inclusive design toward anti-racist and race positive designs. | 10.48550/arxiv.2303.13546 | [
"https://export.arxiv.org/pdf/2303.13546v1.pdf"
] | 257,757,125 | 2303.13546 | 026995071061c3ea1b6a85431578e6ab854ca9b7 |
Challenges in Designing Racially Inclusive Language Technologies
April 23-28, 2023
Kimi V Wenzel [email protected]
Carnegie Mellon University
5000 Forbes Ave15213PittsburghPA
Geoff Kaufman
Carnegie Mellon University
5000 Forbes Ave15213PittsburghPA
Challenges in Designing Racially Inclusive Language Technologies
Hamburg, Germany, April 23-28, 2023. DOI: 10.1145/3334480.XXXXXXX
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
Author Keywords: Inclusion, Decolonization, Anti-racism, Race positive design
CCS Concepts: • Social and professional topics → Race and ethnicity; • Human-centered computing → HCI theory, concepts and models
We take a critical lens toward the pursuit of racially inclusive language technologies and identify several areas necessitating future work. We discuss the potential harms of conversational technologies, outline three challenges that arise in inclusive design, and lastly, argue that conversational user interface designers and researchers should go beyond racially inclusive design toward anti-racist and race positive designs.
Introduction
Attention to "ethical" and "inclusive" developments in language technology has increased dramatically in recent years. A great deal of work has been devoted to understanding how racial bias can emerge in language model production, performance, and accuracy [4]; however, work aimed at understanding how conversational user interfaces (CUIs) may themselves perpetuate racism is limited. In this paper, we introduce potential harms of racism in CUIs, discuss three broad challenges of inclusive design, and conclude by urging CUI researchers to invest in anti-racist and race positive design research.
Harms of Non-Inclusive CUIs
Our recent work has revealed that miscommunication errors with conversational technologies can mirror the effects of racist microaggressions in interpersonal interactions: a high speech recognition error rate can significantly increase Black users' levels of self-consciousness and significantly decrease their self-esteem and emotional affect. This effect was not found among white users [15]. Thus, CUIs that are not inclusive of marginalized dialects and vernaculars may pose not only a usability threat to these users but a psychological threat as well. These findings only increase the urgency for language technologists to create racially just CUIs.
Three Challenges of Inclusive Design
We outline three primary ethical challenges in designing racially inclusive conversational user interfaces: (1) Collecting representative voice and language data in a nonextractive, decolonial manner; (2) Understanding the CUI utility-surveillance trade off for vulnerable populations; and (3) Contributing to unmediated anti-racism.
Collecting representative voice and language data
The high error rate in conversational technologies for marginalized users has largely been attributed to a lack of representative voice and language data [9]. Thus, the direct solution to minimizing this error rate is to gather voice data from those underrepresented groups. The methods for achieving this solution, however, are conventionally colonial and harmful [10,2]. Many researchers have contributed frameworks and guidelines for non-extractive data collection [7]; however, further investigation is needed to understand how these practices hold up specifically in data collection for language models and conversational user interfaces. How might we create a representative language data set for conversational technologies while being respectful, non-extractive, and non-exploitative of the marginalized groups from whom we seek data? Perhaps more importantly, in line with community-based participatory research practices [6], we should not ask how we may seek data from these groups, but rather, ask how we may seek data with these groups. This further necessitates an understanding of whether or not these technologies are of significant value to marginalized groups at all. Even if such technologies appear to be of significance on a superficial level, utility-surveillance tradeoffs must also be considered and analyzed.
Understanding surveillance risks
Current research follows a rhetoric of techno-solutionism [11], assuming that a positive outcome is inherent in racially inclusive technology. But how much would conversational technologies, even in the absence of their racial disparities, cause harm? There are various concerns regarding how emerging technologies can serve as tools to target and surveil marginalized groups [5]. While there are many privacy researchers focusing specifically on conversational devices [13] and CUI researchers working on inclusive design [14], these two lines of work are often divorced. When designing CUIs for marginalized groups, how might we consider their unique privacy needs? In response, we urge the CUI community to consider increased collaborations with privacy and security researchers.
Contributing to unmediated anti-racism
Framing the issue of inclusion as merely a dataset problem also ignores the structural and historical roots of discrimination, and fails to acknowledge the larger web of discrimination in which technology-mediated racism exists. Acknowledging this larger web of discrimination necessitates our understanding that CUIs are just one medium for present-day discrimination, and that mediums of discrimination have, and will continue to, change over time. While these mediums of discrimination may evolve, however, the targets of discrimination seldom do. Thus, we should not limit our focus solely to technology-mediated interactions. Prioritizing research on technology-mediated forms of discrimination often de-prioritizes unmediated forms of discrimination [12], ultimately implicitly suggesting that the form of discrimination matters more than the marginalized groups and individuals themselves. To this end, we suggest CUI scholars and practitioners take care to reflect on how their work may contribute to anti-discrimination in unmediated contexts. This suggestion is in line with broader efforts to go beyond inclusivity, toward anti-racism.
Beyond Inclusion: Toward Anti-Racism and Race Positive Design
"Inclusion" first and foremost centers the status-quo of privileged groups. "Inclusion" suggests that a privileged statusquo exists, and that previously excluded groups should be accepted to the said status-quo. Instead of simply aiming to offer marginalized groups the same opportunities as privileged groups through "inclusion," we should (1) actively aim to dismantle the structures that caused this power imbalance through anti-racist practices and (2) center and celebrate silenced and oppressed identities through race positive design.
Anti-racist design
As Ibram X. Kendi writes, "the only way to undo what is racist is to consistently identify and describe it -and then dismantle it" [8]. Dismantling this structure requires comprehensive anti-racist education, and a reorienting of research and design work. How might we shift from critiquing, writing about, and discussing racism toward actively producing change in both our scholarly and local communities? (See [1] and [8] for further articulation and case studies.)
Race positive design
Race positive design "enables thinking about race as a positive presence-as cultural capital, histories of resistance, bindings between lands and peoples-and the means by which it can be a generative force in technologies for just and sustainable futures" [3]. Recent work suggests that race positive identity affirmations may be a potential method for addressing the harms that discriminatory conversational technologies inflict on people of color; however, more work needs to be done to validate these design interventions [15]. Future work may consider: How might CUIs serve as a medium for race and culture to be celebrated? How might race and culture serve as a generative feature of language technologies, uplifting marginalized groups?
Conclusion
We outline three challenges in designing for inclusive conversational user interfaces, and provide suggestions for how researchers may begin to address these challenges: (1) Researchers should practice non-extractive decolonial data collection, and consider what legitimate value their data collection efforts may bring to the communities they are working with. (2) CUI designers and researchers should engage further in privacy and security work, especially as surveillance through CUIs may be especially harmful for marginalized populations. (3) Researchers should reflect on how their research on inclusivity in technology-mediated tools may contribute to broader unmediated anti-racism efforts. In addition to appreciating these challenges, we encourage researchers to consider race-positive CUI designs in their future work.
[1] Veronica Abebe, Gagik Amaryan, Marina Beshai, Ali Ekin Gurgen, Wendy Ho, Naaji R Hylton, Daniel Kim, Christy Lee, Carina Lewandowski, Katherine T Miller, and others. 2022. Anti-Racist HCI: notes on an emerging critical technical practice. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1-12.
[2] Bagele Chilisa. 2019. Indigenous research methodologies. Sage publications.
[3] Ron Eglash, Audrey Bennett, Michael Lachney, and William Babbitt. 2020. Race-positive design: A generative approach to decolonizing computing. In Human factors in computing systems.
[4] Anjalie Field, Su Lin Blodgett, Zeerak Waseem, and Yulia Tsvetkov. 2021. A survey of race, racism, and anti-racism in NLP. arXiv preprint arXiv:2106.11410 (2021).
[5] Seeta Peña Gangadharan. 2017. The downside of digital inclusion: Expectations and experiences of privacy and surveillance among marginal Internet users. New Media & Society 19, 4 (2017), 597-615.
[6] Karen Hacker. 2013. Community-based participatory research. Sage publications.
[7] Paul Agu Igwe, Nnamdi O Madichie, and David Gamariel Rugara. 2022. Decolonising research approaches towards non-extractive research. Qualitative Market Research: An International Journal ahead-of-print (2022).
[8] Ibram X Kendi. 2023. How to be an antiracist. One world.
[9] Allison Koenecke, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R Rickford, Dan Jurafsky, and Sharad Goel. 2020. Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences 117, 14 (2020), 7684-7689.
[10] Sandra Kouritzin and Satoru Nakagawa. 2018. Toward a non-extractive research ethics for transcultural, translingual research: perspectives from the coloniser and the colonised. Journal of Multilingual and Multicultural Development 39, 8 (2018), 675-687.
[11] Evgeny Morozov. 2013. To save everything, click here: The folly of technological solutionism. Public Affairs.
[12] Seeta Peña Gangadharan and Jędrzej Niklas. 2019. Decentering technology in discourse on discrimination. Information, Communication & Society 22, 7 (2019), 882-899.
[13] Anne Pfeifle. 2018. Alexa, what should we do about privacy: Protecting privacy for users of voice-activated devices. Wash. L. Rev. 93 (2018), 421.
[14] Jaisie Sin, Cosmin Munteanu, Jenny Waycott, Robin N. Brewer, Sergio Sayago, Amanda Lazar, and Astrid Weber. 2022. "Alexa, Tell Me a Joke!": Voice Interfaces are Truly Inclusive. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1-3.
[15] Kimi Wenzel, Nitya Devireddy, Cam Davidson, and Geoff Kaufman. 2023. Can Voice Assistants Be Microaggressors? Cross-Race Psychological Responses to Failures of Automatic Speech Recognition. arXiv preprint arXiv:2302.12326 (2023).
| [] |
[
"RANK FUNCTIONS AND INVARIANTS OF DELTA-MATROIDS",
"RANK FUNCTIONS AND INVARIANTS OF DELTA-MATROIDS"
] | [
"Matt Larson "
] | [] | [] | In this note, we give a rank function axiomatization for delta-matroids and study the corresponding rank generating function. We relate an evaluation of the rank generating function to the number of independent sets of the delta-matroid, and we prove a log-concavity result for that evaluation using the theory of Lorentzian polynomials. | null | [
"https://export.arxiv.org/pdf/2305.01008v1.pdf"
] | 258,437,130 | 2305.01008 | 434a4484a99771fa5b57740e9194819f7569dc35 |
RANK FUNCTIONS AND INVARIANTS OF DELTA-MATROIDS
Matt Larson
RANK FUNCTIONS AND INVARIANTS OF DELTA-MATROIDS
arXiv:2305.01008v1 [math.CO] 1 May 2023
In this note, we give a rank function axiomatization for delta-matroids and study the corresponding rank generating function. We relate an evaluation of the rank generating function to the number of independent sets of the delta-matroid, and we prove a log-concavity result for that evaluation using the theory of Lorentzian polynomials.
Introduction
Let [n, n̄] denote the set {1, . . . , n, 1̄, . . . , n̄}, equipped with the obvious involution (·)‾. Let AdS_n be the set of admissible subsets of [n, n̄], i.e., subsets S that contain at most one of i and ī for each i ∈ [n]. Set e_ī := −e_i ∈ R^n, and for each S ∈ AdS_n, set e_S = Σ_{a ∈ S} e_a.

Definition 1.1. A delta-matroid D is a collection F ⊂ AdS_n of admissible sets of size n, called the feasible sets of D, such that the polytope P(D) := Conv{e_B : B ∈ F} has all edges parallel to e_i or e_i ± e_j, for some i, j. We say that D is even if all edges of P(D) are parallel to e_i ± e_j.
Delta-matroids were introduced in [7] by replacing the usual basis exchange axiom for matroids with one involving symmetric difference. They were defined independently in [14,17]. For the equivalence of the definition of delta-matroids in those works with the one given above, and for general properties of delta-matroids, see [6,Chapter 4].
A delta-matroid is even if and only if all sets in {B ∩ [n] : B ∈ F} have the same parity. Even delta-matroids enjoy nicer properties than arbitrary delta-matroids. For instance, they satisfy a version of the symmetric exchange axiom [32].
There are many constructions of delta-matroids in the literature. Two of the most fundamental come from matroids: given a matroid M on [n], we can construct a delta-matroid on [n, n] whose feasible sets are the sets of the form B ∪ B c , for B a basis of M . We can also construct a delta-matroid whose feasible sets are the sets of the form I ∪ I c , for I independent in M . Additionally, there are delta-matroids corresponding to graphs [18], graphs embedded in surfaces [15,16], and points of a maximal orthogonal or symplectic Grassmannian. Delta-matroids arising from points of a maximal orthogonal or symplectic Grassmannian are called realizable. See [21, Section 6.2] for a discussion of delta-matroids associated to points of a maximal orthogonal Grassmannian.
Given S, T ∈ AdS_n, we define S ⊔ T = {a ∈ S ∪ T : ā ∉ S ∪ T}. A function f : AdS_n → R is called bisubmodular if, for all S, T ∈ AdS_n,

f(S) + f(T) ≥ f(S ∩ T) + f(S ⊔ T).
There is a large literature on bisubmodular functions, beginning with [19]. They have been studied both from an optimization perspective [23,24] and from a polytopal perspective [22,25]. Additionally, bisubmodular functions are closely related to jump systems [11].
For a delta-matroid D, define a function g_D : AdS_n → Z by

g_D(S) = max_{B ∈ F} (|S ∩ B| − |S̄ ∩ B|),

where S̄ := {ā : a ∈ S}. We call g_D the rank function of D. Note that g_D may take negative values. The collection of feasible subsets of D is exactly {S : g_D(S) = n}, so D can be recovered from g_D.

Theorem 1.2. A function g : AdS_n → Z is the rank function of a delta-matroid if and only if
(1) g(∅) = 0 (normalization),
(2) |g(S)| ≤ 1 if |S| = 1 (boundedness),
(3) g(S) + g(T) ≥ g(S ∩ T) + g(S ⊔ T) (bisubmodularity), and
(4) g(S) ≡ |S| (mod 2) (parity).
Furthermore, D is even if and only if

g_D(S) = (g_D(S ∪ i) + g_D(S ∪ ī)) / 2

whenever |S| = n − 1 and {i, ī} ∩ S = ∅.
The function g D , as well as the observation that it is bisubmodular, has appeared before in the literature [8,14]. For example, in [8,Theorem 4.1] it is shown that, if D is represented by a point of the maximal symplectic Grassmannian, then g D can be computed in terms of the rank of a certain matrix. It was known that delta-matroids admit a description in terms of certain bisubmodular functions. However, the precise characterization in Theorem 1.2 does not appear to have been known before. Indeed, Theorem 1.2 answers a special case of [2,Question 9.4].
In [9,10], Bouchet gave a rank-function axiomatization of delta-matroids in the more general setting of multimatroids. His rank function differs from ours; in Section 2.2, we discuss the relationship between his results and Theorem 1.2.
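The axioms of Theorem 1.2 can be verified by brute force on small examples. The sketch below is our own illustration, not from the paper: it encodes ī as the negative integer −i, computes g_D directly from a list of feasible sets, and checks conditions (1)-(4) for a small even delta-matroid.

```python
from itertools import product

def admissible_sets(n):
    """All admissible subsets of [n, n-bar], with i-bar encoded as -i."""
    for choice in product((0, 1, -1), repeat=n):
        yield frozenset((i + 1) * c for i, c in enumerate(choice) if c)

def rank(feasibles, S):
    """g_D(S) = max over feasible B of |S ∩ B| - |S-bar ∩ B|.

    Each feasible B contains exactly one of a, -a for every coordinate,
    so an element a of S contributes +1 if a is in B and -1 otherwise."""
    return max(sum(1 if a in B else -1 for a in S) for B in feasibles)

def check_axioms(n, feasibles):
    g = {S: rank(feasibles, S) for S in admissible_sets(n)}
    assert g[frozenset()] == 0                            # (1) normalization
    assert all(abs(g[S]) <= 1 for S in g if len(S) == 1)  # (2) boundedness
    assert all(g[S] % 2 == len(S) % 2 for S in g)         # (4) parity
    for S in g:                                           # (3) bisubmodularity
        for T in g:
            join = frozenset(a for a in S | T if -a not in S | T)  # S ⊔ T
            assert g[S] + g[T] >= g[S & T] + g[join]
    return True

# the even delta-matroid with feasible sets {1, 2} and {1-bar, 2-bar}
assert check_axioms(2, [frozenset({1, 2}), frozenset({-1, -2})])
```

Removing a feasible set or adding an inadmissible one generally breaks axiom (3) or (4), which makes this a convenient smoke test for hand-built examples.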
Basic operations on delta-matroids, like products, deletion, contraction, and projection, can be simply expressed in terms of rank functions. See Section 2.1.
One of the most important invariants of a matroid M of rank r on [n] is its Whitney rank generating function. If rk_M is the rank function of M, then the rank generating function is defined as

R_M(u, v) := Σ_{A ⊂ [n]} u^{r − rk_M(A)} v^{|A| − rk_M(A)}.
The more commonly used normalization is the Tutte polynomial, which is R_M(u − 1, v − 1). The characterization of delta-matroids in terms of rank functions allows us to consider an analogously-defined invariant.

Definition 1.3. Let D be a delta-matroid on [n, n̄]. Then we define

U_D(u, v) = Σ_{S ∈ AdS_n} u^{n − |S|} v^{(|S| − g_D(S))/2}.

Note that the bisubmodularity of g_D implies that the restriction of g_D to the subsets of any fixed S ∈ AdS_n is submodular. The boundedness of g_D then implies that |g_D(S)| ≤ |S|. Because of the parity requirement, |S| − g_D(S) is divisible by 2. Therefore U_D(u, v) is indeed a polynomial. The normalization U_D(u − 1, v − 1) is more analogous to the Tutte polynomial, but it can have negative coefficients. However, the polynomial U_D(u, v − 1) has non-negative coefficients (as follows, e.g., from Proposition 3.1).
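Since the sum in Definition 1.3 runs over all 3^n admissible sets, U_D can be evaluated by brute force for small n. The sketch below is our own illustration (ī encoded as −i, polynomials stored as dictionaries mapping an exponent pair (i, j) of u^i v^j to its coefficient); the two-feasible-set example is ours, not from the paper.

```python
from itertools import product

def U_poly(n, feasibles):
    """U_D(u, v) as {(i, j): coeff}, the coefficient of u^i * v^j."""
    coeffs = {}
    for choice in product((0, 1, -1), repeat=n):
        S = frozenset((i + 1) * c for i, c in enumerate(choice) if c)
        # g_D(S): each a in S contributes +1 if a in B, else -1
        g = max(sum(1 if a in B else -1 for a in S) for B in feasibles)
        key = (n - len(S), (len(S) - g) // 2)  # exponents of u and v
        coeffs[key] = coeffs.get(key, 0) + 1
    return coeffs

# even delta-matroid with feasible sets {1, 2} and {1-bar, 2-bar}:
# U_D(u, v) = u^2 + 4u + 2 + 2v
assert U_poly(2, [frozenset({1, 2}), frozenset({-1, -2})]) == \
    {(2, 0): 1, (1, 0): 4, (0, 0): 2, (0, 1): 2}
```

In this example the four singletons each contribute u, the two feasible sets and their two "half-matched" companions contribute the constant 2 and the 2v term, illustrating how the v-exponent measures the failure of g_D(S) to equal |S|.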
The U -polynomial of a delta-matroid was introduced by Eur, Fink, Spink, and the author in [21,Definition 1.4] in terms of a Tutte polynomial-like recursion; see Proposition 3.1 for a proof that Definition 1.3 agrees with the recursive definition considered there. The specialization U D (0, v) is the interlace polynomial of D, which was introduced in [3] for graphs and in [13] for general delta-matroids. See [29] for a survey on the properties of the interlace polynomial.
Various Tutte polynomial-like invariants of delta-matroids have been considered in the literature, such as the Bollobás-Riordan polynomial and its specializations [5]. In [27], a detailed analysis of delta-matroid polynomials which satisfy a deletion-contraction formula is carried out. Set σ_D(A) = |A|/2 + (g_D(A) + g_D(Ā))/4 for A ⊂ [n]. Then in [27], the polynomial

Σ_{A ⊂ [n]} (x − 1)^{σ_D([n]) − σ_D(A)} (y − 1)^{|A| − σ_D(A)}
is shown to be, in an appropriate sense, the universal invariant of delta-matroids which satisfies a deletion-contraction formula. This polynomial is a specialization of the Bollobás-Riordan polynomial. In [20], it is shown that this polynomial has several nice combinatorial properties.

Theorem 1.4. Let M be a matroid of rank r on [n] with rank function rk_M. For S ∈ AdS_n, write S⁺ := S ∩ [n] and V := {i ∈ [n] : {i, ī} ∩ S = ∅}.
(1) Let D be the delta-matroid arising from the independent sets of M. Then g_D(S) = |S| + 2 rk_M(S⁺) − 2|S⁺|, and

U_D(u, v) = (u + 1)^{n−r} R_M(u + 3, (2u + v + 2)/(u + 1)).

(2) Let D be the delta-matroid arising from the bases of M. Then g_D(S) = |S| − 2r + 2 rk_M(S⁺ ∪ V) − 2|S⁺| + 2 rk_M(S⁺), and

U_D(u, v) = Σ_{T ⊂ S ⊂ [n]} u^{|S \ T|} v^{r − rk_M(S) + |T| − rk_M(T)}.
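The rank formula in case (1) is easy to test exhaustively on a small matroid. The sketch below is our own check (the example U_{1,2} and the encoding ī ↦ −i are ours): it builds the feasible sets I ∪ Ī^c for I independent in the uniform matroid U_{1,2} and compares g_D against |S| + 2 rk_M(S⁺) − 2|S⁺| for every admissible S.

```python
from itertools import product

def mrank(A):
    """Rank function of the uniform matroid U_{1,2} on {1, 2}."""
    return min(len(A), 1)

# feasible sets I ∪ (bars of the complement of I), I independent in U_{1,2}
independents = [set(), {1}, {2}]
feasibles = [frozenset(I | {-j for j in {1, 2} - I}) for I in independents]

for choice in product((0, 1, -1), repeat=2):
    S = frozenset((i + 1) * c for i, c in enumerate(choice) if c)
    g = max(sum(1 if a in B else -1 for a in S) for B in feasibles)
    S_plus = {a for a in S if a > 0}        # S⁺ = S ∩ [n]
    assert g == len(S) + 2 * mrank(S_plus) - 2 * len(S_plus)
```

Swapping in any other small matroid's rank function and independent sets gives the same agreement, which is a useful way to catch sign or bar errors when working with these conventions.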
We study the U -polynomial as a delta-matroid analogue of the rank generating function of a matroid. For a matroid M , the evaluation R M (u, 0) is essentially the f -vector of the independence complex of the matroid, i.e., it counts the number of independent sets of M of a given size.
A set S ∈ AdS_n is independent if it is contained in a feasible set of a delta-matroid D. In [9], Bouchet gave an axiomatization of delta-matroids in terms of their independent sets. The independent sets form a simplicial complex, called the independence complex of D. We relate U_D(u, 0) to the f-vector of the independence complex of D (Proposition 3.4), which gives linear inequalities between the coefficients of U_D(u, 0). Following a tradition in matroid theory (see, e.g., [28]), and inspired by the ultra log-concavity of R_M(u, 0) [1,12], we make three log-concavity conjectures for U_D(u, 0). These conjectures state that the sequence of the numbers of independent sets of a delta-matroid of a given size satisfies log-concavity properties.

Conjecture 1.5. Let D be a delta-matroid on [n, n̄], and let U_D(u, 0) = a_n + a_{n−1}u + · · · + a_0 u^n. Then, for any k ∈ {1, . . . , n − 1},
(1) a_k² ≥ ((n − k + 1)/(n − k)) · a_{k+1} a_{k−1},
(2) a_k² ≥ ((2n − k + 1)/(2n − k)) · ((k + 1)/k) · a_{k+1} a_{k−1}, and
(3) a_k² ≥ ((n − k + 1)/(n − k)) · ((k + 1)/k) · a_{k+1} a_{k−1}.

Conjecture 1.5(1) follows from [21, Conjecture 1.5], and it is proven in [21, Theorem B] when D has an enveloping matroid (see Definition 3.8). This is a technical condition which is satisfied by many commonly occurring delta-matroids, including all realizable delta-matroids and delta-matroids arising from matroids (although not all delta-matroids; see [9, Section 4] and [21, Example 6.11]). The proof uses algebro-geometric methods. Here we prove a special case of Conjecture 1.5(2).

Theorem 1.6. Let D be a delta-matroid on [n, n̄] which has an enveloping matroid. Let U_D(u, 0) = a_n + a_{n−1}u + · · · + a_0 u^n. Then, for any k ∈ {1, . . . , n − 1}, a_k² ≥ ((2n − k + 1)/(2n − k)) · ((k + 1)/k) · a_{k+1} a_{k−1}, i.e., Conjecture 1.5(2) holds.
Our argument uses the theory of Lorentzian polynomials [12]. We strengthen Theorem 1.6 by proving that a generating function for the independent sets of D is Lorentzian (Theorem 3.11), which implies the desired log-concavity statement. We deduce that this generating function is Lorentzian from the fact that the Potts model partition function of an enveloping matroid is Lorentzian [12,Theorem 4.10].
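The inequalities of Conjecture 1.5 are easy to test numerically once the coefficients a_k are read off as counts of independent sets of each size (Proposition 3.4). The sketch below is our own check with exact rational arithmetic; the example (the bases delta-matroid of U_{1,2}, with ī encoded as −i) is ours.

```python
from fractions import Fraction
from itertools import combinations

n = 2
feasibles = [frozenset({1, -2}), frozenset({-1, 2})]   # bases of U_{1,2}
# independent sets = subsets of feasible sets
indep = {frozenset(I) for B in feasibles
         for r in range(n + 1) for I in combinations(sorted(B), r)}
a = [sum(1 for I in indep if len(I) == k) for k in range(n + 1)]  # a_0..a_n

for k in range(1, n):
    lhs = Fraction(a[k] ** 2)
    rhs = a[k + 1] * a[k - 1]
    assert lhs >= Fraction(n - k + 1, n - k) * rhs                               # (1)
    assert lhs >= Fraction(2*n - k + 1, 2*n - k) * Fraction(k + 1, k) * rhs      # (2)
    assert lhs >= Fraction(n - k + 1, n - k) * Fraction(k + 1, k) * rhs          # (3)
```

Here (a_0, a_1, a_2) = (1, 4, 2), and the strongest inequality (3) reads 16 ≥ 2 · 2 · 2 = 8. Of course, a finite check like this only illustrates the conjecture; it proves nothing in general.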
When D is the delta-matroid arising from the independent sets of a matroid, Conjecture 1.5(3) follows from the ultra log-concavity of the number of independent sets of that matroid [1,12]. When D is the delta-matroid arising from the bases of a matroid M on [n], which has an enveloping matroid by [21, Proposition 6.10], Theorem 1.6 gives a new log-concavity result. If we set

a_k = |{T ⊂ S ⊂ [n] : T independent in M and S spanning in M, |S \ T| = n − k}|,

then Theorem 1.6 gives that a_k² ≥ ((2n − k + 1)/(2n − k)) · ((k + 1)/k) · a_{k+1} a_{k−1} for k ∈ {1, . . . , n − 1}.

Acknowledgements: We thank Nima Anari, Christopher Eur, Satoru Fujishige, and Steven Noble for enlightening conversations, and we thank Christopher Eur, Steven Noble, and Shiyue Li for helpful comments on a previous version of this paper. The author is supported by an NDSEG fellowship.
Rank functions of delta-matroids
The proof of Theorem 1.2 goes by way of a polytopal description of normalized bisubmodular functions, which we now recall. To a function f : AdS_n → R with f(∅) = 0, we associate the polytope

P(f) = {x ∈ R^n : ⟨e_S, x⟩ ≤ f(S) for all non-empty S ∈ AdS_n}.

Under this dictionary, the bisubmodular function corresponding to the dilate kP(f) is kf, and the bisubmodular function corresponding to the Minkowski sum P(f) + P(g) is f + g.
Proof of Theorem 1.2. By the polyhedral description of normalized bisubmodular functions, for each delta-matroid D there is a unique normalized bisubmodular function g such that P(D) = P(g). We show that the conditions on a normalized bisubmodular function g for P(g) to have all vertices in {−1, 1}^n are exactly those given in Theorem 1.2, namely that |g(S)| ≤ 1 when |S| = 1 and g(S) ≡ |S| (mod 2).

The delta-matroid D is even exactly when P(D) has no edges parallel to some e_i, and this translates into the condition

g_D(S) = (g_D(S ∪ i) + g_D(S ∪ ī)) / 2

whenever |S| = n − 1 and {i, ī} ∩ S = ∅. This gives the characterization of even delta-matroids.
2.1. Compatibility with delta-matroid operations. In this section, we consider several operations on delta-matroids, and we show that the rank function behaves in a simple way under these operations. First we consider minor operations on delta-matroids: contraction, deletion, and projection. First we describe the rank function of projections: if D(i) denotes the projection of D away from i, whose feasible sets are the sets B \ {i, ī} for B a feasible set of D, then g_{D(i)}(S) = g_D(S) for all S ∈ AdS_n with S ∩ {i, ī} = ∅. The formula is analogous to the formula for the rank function of a matroid deletion. The rank functions of the contractions and deletions are described by the following result. The formula is analogous to the formula for the rank function of a matroid contraction. Before proving this, we will need the following property of delta-matroids. It follows, for instance, from the greedy algorithm description of delta-matroids in [11].

Proposition 2.4. Let D be a delta-matroid, and let S ⊂ T ∈ AdS_n. Then there is a feasible set B which simultaneously maximizes |S ∩ B| and |T ∩ B|.

First we consider the case when we delete or contract a single element.
If i is not a loop, then g_{D/i}(S) = g_D(S ∪ i) − 1, and if i is not a coloop, then g_{D\i}(S) = g_D(S ∪ ī) − 1.

Proof. We do the case of contraction; the case of deletion is identical. Assume that i is not a loop, and let F_i denote the set of feasible sets in D which contain i. Note that F_i is non-empty, so it is the collection of feasible sets B of D which maximize |{i} ∩ B|. For any S ∈ AdS_n with S ∩ {i, ī} = ∅, by Proposition 2.4 we have that

max_{B ∈ F} |(S ∪ i) ∩ B| = max_{B ∈ F_i} |(S ∪ i) ∩ B|.

For any B, |(S ∪ i) ∩ B| − |(S ∪ i)‾ ∩ B| = 2|(S ∪ i) ∩ B| − |S ∪ i|, so we see that

max_{B ∈ F} (|(S ∪ i) ∩ B| − |(S ∪ i)‾ ∩ B|) = max_{B ∈ F_i} (|(S ∪ i) ∩ B| − |(S ∪ i)‾ ∩ B|).

The left-hand side is equal to g_D(S ∪ i), and the right-hand side is equal to g_{D/i}(S) + 1.

In general, g_{D/A\B}(S) = g_D(S ∪ A ∪ B̄) − g_D(A ∪ B̄) for disjoint A, B ⊂ [n] with A ∪ B̄ contained in a feasible set. We induct on the size of A ∪ B. We consider the case of adding an element i ∈ [n] to A; the case of adding it to B is identical. We compute:

g_{D/(A∪i)\B}(S) = g_{D/A\B}(S ∪ i) − g_{D/A\B}(i)
= (g_D(S ∪ A ∪ B̄ ∪ i) − g_D(A ∪ B̄)) − (g_D(A ∪ B̄ ∪ i) − g_D(A ∪ B̄))
= g_D(S ∪ (A ∪ i) ∪ B̄) − g_D((A ∪ i) ∪ B̄).
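The single-element formulas can be confirmed directly on a small example. The sketch below is our own check (ī encoded as −i; the example, the bases delta-matroid of U_{1,2}, is ours): it verifies g_{D/1}(S) = g_D(S ∪ 1) − 1 and g_{D\1}(S) = g_D(S ∪ 1̄) − 1 for every admissible S of the minor.

```python
def rank(feasibles, S):
    """g_D(S) = max over feasible B of |S ∩ B| - |S-bar ∩ B|."""
    return max(sum(1 if a in B else -1 for a in S) for B in feasibles)

D = [frozenset({1, -2}), frozenset({-1, 2})]   # bases delta-matroid of U_{1,2}
# element 1 is neither a loop nor a coloop: some feasible set contains 1,
# and some feasible set contains 1-bar
D_con = [B - {1} for B in D if 1 in B]         # D/1: contract 1
D_del = [B - {-1} for B in D if -1 in B]       # D\1: delete 1

for S in (frozenset(), frozenset({2}), frozenset({-2})):
    assert rank(D_con, S) == rank(D, S | {1}) - 1
    assert rank(D_del, S) == rank(D, S | {-1}) - 1
```

The same three-line check fails, as it should, if one drops the "not a loop"/"not a coloop" hypotheses and the corresponding minor becomes empty.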
For two non-negative integers n_1, n_2, identify the disjoint union of [n_1] and [n_2] with [n_1 + n_2]. Given two delta-matroids D_1, D_2 on [n_1] and [n_2], let D_1 × D_2 be the delta-matroid on [n_1 + n_2] whose feasible sets are B_1 ∪ B_2, for B_i a feasible set of D_i. Then we have the following description of the rank function of D_1 × D_2.

Proposition 2.6. For S_1 ∈ AdS_{n_1} and S_2 ∈ AdS_{n_2}, we have g_{D_1×D_2}(S_1 ∪ S_2) = g_{D_1}(S_1) + g_{D_2}(S_2).

Proof. We compute:

g_{D_1×D_2}(S_1 ∪ S_2) = max_{B_1, B_2} (|S_1 ∩ B_1| − |S̄_1 ∩ B_1| + |S_2 ∩ B_2| − |S̄_2 ∩ B_2|) = g_{D_1}(S_1) + g_{D_2}(S_2),

since the two summands can be maximized independently.
We now study how the rank functions behave under the operation of twisting. Let W be the signed permutation group, the subgroup of the symmetric group on [n, n] which preserves AdS n . In other words, W consists of permutations w such that w(i) = w(i). As delta-matroids are collections of admissible sets, W acts on the set of delta-matroids on [n,n]. This action is usually called twisting in the delta-matroid literature.
Proposition 2.7. Let D be a delta-matroid on [n, n̄], and let w ∈ W. Then g_{w·D}(S) = g_D(w⁻¹ · S).

Proof. Note that, for B a feasible set of D,

|S ∩ (w · B)| − |S̄ ∩ (w · B)| = |(w⁻¹ · S) ∩ B| − |(w⁻¹ · S)‾ ∩ B|,

and w · B runs over the feasible sets of w · D as B runs over the feasible sets of D; taking the maximum over B gives the claim.

Proposition 2.8. Let D be a delta-matroid, let S ∈ AdS_n, and set r := max_{B ∈ F} |S ∩ B|. Then the sets S ∩ B, for B feasible with |S ∩ B| = r, are the bases of a matroid M on S, and rk_M(T) = (g_D(T) + |T|) / 2 for every T ⊂ S.

Proof. Let F_S be the collection of feasible sets B with |S ∩ B| = r. Then we have that

rk_M(T) = max_{B ∈ F_S} |T ∩ B| ≤ max_{B ∈ F} |T ∩ B| = (g_D(T) + |T|) / 2.

On the other hand, by Proposition 2.4 there is a feasible set B which maximizes |T ∩ B| and has |S ∩ B| = r, so we have equality.
2.2. An alternative normalization. The results of the previous section, particularly Proposition 2.8, suggest that an alternative normalization of the rank function of a delta-matroid has nice properties. Set

h_D(S) := (g_D(S) + |S|) / 2.

The function h_D(S) is integer-valued and bisubmodular, and the polytope it defines is P(h_D) = ½(P(D) + □), where □ = [−1, 1]^n is the cube. This is because the bisubmodular function corresponding to □ is S ↦ |S|. Note that the function h_D is non-negative and increasing, in the sense that if S ⊂ T ∈ AdS_n, then h_D(S) ≤ h_D(T). Theorem 1.2 implies the following characterization of the functions arising as h_D for some delta-matroid D.
Corollary 2.9. A function h : AdS_n → Z is equal to h_D for some delta-matroid D if and only if
(1) h(∅) = 0 (normalization),
(2) h(S) ∈ {0, 1} if |S| = 1 (boundedness), and
(3) h(S) + h(T) ≥ h(S ∩ T) + h(S ⊔ T) + |S ∩ T̄| (bisubmodularity), where T̄ := {ā : a ∈ T}.

Indeed, these are exactly the conditions we need for g(S) := 2h(S) − |S| to satisfy the conditions in Theorem 1.2; the parity condition for g holds automatically.
The function h D was studied by Bouchet in [9,10]
(2) h(S) ≤ h(S ∪ a) ≤ h(S) + 1 if S ∪ a is admissible, (3) h(S) + h(T ) ≥ h(S ∩ T ) + h(S ∪ T ) if S ∪ T is admissible, and (4) h(S ∪ i) + h(S ∪ī) ≥ 2h(S) + 1 if S ∩ {i,ī} = ∅.
In [10, Theorem 2.16], a third characterization of the functions h_D is stated, with a reference to an unpublished paper of Allys.
3. The U-polynomial
We now study the U -polynomial of delta-matroids. We prove the following recursion for U D (u, v), which was the original definition of the U -polynomial in [21,Definition 1.4].
U_D(u, v) = U_{D/i}(u, v) + U_{D\i}(u, v) + u · U_{D(i)}(u, v),   if i is neither a loop nor a coloop,
U_D(u, v) = (u + v + 1) · U_{D\i}(u, v),   if i is a loop or a coloop.
First we study the behavior of the U-polynomial under products. Proof of Lemma 3.2. We compute:
U_{D_1}(u, v) U_{D_2}(u, v) = ( Σ_{S_1 ∈ AdS_{n_1}} u^{n_1−|S_1|} v^{(|S_1|−g_{D_1}(S_1))/2} ) ( Σ_{S_2 ∈ AdS_{n_2}} u^{n_2−|S_2|} v^{(|S_2|−g_{D_2}(S_2))/2} )
= Σ_{(S_1,S_2)} u^{n_1+n_2−|S_1|−|S_2|} v^{(|S_1|+|S_2|−g_{D_1}(S_1)−g_{D_2}(S_2))/2}
= Σ_{(S_1,S_2)} u^{n_1+n_2−|S_1|−|S_2|} v^{(|S_1|+|S_2|−g_{D_1×D_2}(S_1∪S_2))/2}
= U_{D_1×D_2}(u, v),
where the third equality is Proposition 2.6.
If S contains i, then u^{n−|S|} v^{(|S|−g_D(S))/2} = u^{n−1−|S\i|} v^{(|S\i|−g_{D/i}(S\i))/2}. If S contains ī, then u^{n−|S|} v^{(|S|−g_D(S))/2} = u^{n−1−|S\ī|} v^{(|S\ī|−g_{D\i}(S\ī))/2}. If S contains neither i nor ī, then u^{n−|S|} v^{(|S|−g_D(S))/2} = u · u^{n−1−|S|} v^{(|S|−g_{D(i)}(S))/2}. Adding these up implies the recursion in this case.
If i is a loop or a coloop, then D is the product of D \ i with a delta-matroid on 1 element with 1 feasible set. We observe that the U-polynomial of a delta-matroid on 1 element with 1 feasible set is u + v + 1, and so Lemma 3.2 implies the recursion in this case.
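The recursion can be checked against the definition U_D(u, v) = Σ_S u^{n−|S|} v^{(|S|−g_D(S))/2} by brute force. The encoding and the example below are our own (hypothetical), not from the text.

```python
from itertools import product

def admissible_sets(E):
    # admissible subsets of the ground set E ∪ E-bar (signed-integer encoding)
    for signs in product((0, 1, -1), repeat=len(E)):
        yield frozenset(s * e for e, s in zip(E, signs) if s)

def g(feasibles, S):
    Sbar = frozenset(-x for x in S)
    return max(len(S & B) - len(Sbar & B) for B in feasibles)

def U(feasibles, E, u, v):
    # U_D(u, v) = sum over admissible S of u^(n-|S|) v^((|S| - g_D(S)) / 2)
    return sum(u ** (len(E) - len(S)) * v ** ((len(S) - g(feasibles, S)) // 2)
               for S in admissible_sets(E))

def contract(feasibles, i):   # D/i: B \ i for feasible B containing i
    return {B - {i} for B in feasibles if i in B}

def delete(feasibles, i):     # D \ i: B \ i-bar for feasible B containing i-bar
    return {B - {-i} for B in feasibles if -i in B}

def project(feasibles, i):    # D(i): B \ {i, i-bar} for every feasible B
    return {B - {i, -i} for B in feasibles}

F = {frozenset({1, 2}), frozenset({-1, -2})}   # element 1 is neither a loop nor a coloop
lhs = U(F, [1, 2], 2, 3)
rhs = U(contract(F, 1), [2], 2, 3) + U(delete(F, 1), [2], 2, 3) \
    + 2 * U(project(F, 1), [2], 2, 3)
assert lhs == rhs == 20
```

The base case is also visible here: a one-element delta-matroid with a single feasible set has U-polynomial u + v + 1.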
3.1. The independence complex of a delta-matroid. In this section, we introduce the independence complex of a delta-matroid and use it to study the U -polynomial. Note that the f -vector of a pure simplicial complex, like the independence complex of a delta-matroid, is a pure O-sequence. Then [26] gives the following inequalities.
Corollary 3.5. Let U_D(u, 0) = a_n + a_{n−1} u + · · · + a_0 u^n. Then (a_0, . . . , a_n) is the f-vector of a pure simplicial complex. In particular, a_i ≤ a_{n−i} for i ≤ n/2 and a_0 ≤ a_1 ≤ · · · ≤ a_{⌊(n+1)/2⌋}.

Proposition 3.4 is a delta-matroid analogue of the fact that, for a matroid M, the coefficients of R_M(u, 0), when written backwards, are the face numbers of the independence complex of M. The independence complex of a matroid is shellable [4], which is reflected in the fact that R_M(u − 1, 0) has non-negative coefficients. The independence complex of a delta-matroid is not in general shellable or Cohen-Macaulay, and U_D(u − 1, 0) can have negative coefficients.
Recall that □ = [−1, 1]^n is the cube. The map S → e_S induces a bijection between AdS_n and the lattice points of □. We use this to give a polytopal description of the independent sets of D, which will be useful in the sequel.
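This polytopal description can be tested by brute force: e_S lies in P(h_D) = ½(P(D) + □), i.e., satisfies ⟨e_T, e_S⟩ ≤ h_D(T) for all admissible T, exactly when S is independent. The encoding and example below are ours, not the paper's.

```python
from itertools import product

def admissible(n):
    return [frozenset(s * i for i, s in enumerate(signs, 1) if s)
            for signs in product((0, 1, -1), repeat=n)]

def e(S, n):
    # the lattice point of the cube [-1, 1]^n attached to the admissible set S
    v = [0] * n
    for x in S:
        v[abs(x) - 1] = 1 if x > 0 else -1
    return tuple(v)

n = 2
feasibles = {frozenset({1, 2}), frozenset({-1, -2})}   # hypothetical example

def h(T):  # h_D(T) = (g_D(T) + |T|)/2, the support function of (P(D) + cube)/2
    g = max(len(T & B) - len(frozenset(-x for x in T) & B) for B in feasibles)
    return (g + len(T)) // 2

independent = {S for S in admissible(n) if any(S <= B for B in feasibles)}
for S in admissible(n):
    inside = all(sum(a * b for a, b in zip(e(T, n), e(S, n))) <= h(T)
                 for T in admissible(n))
    assert inside == (S in independent)
```

For this example only {1, 2̄} and {1̄, 2} fail the membership test, and these are exactly the non-independent admissible sets.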
Proposition 3.6. The map S → e_S induces a bijection between independent sets of D and lattice points in ½(P(D) + □).

Proof. If S is independent in D, then there is T ∈ AdS_n such that S ∪ T ∈ F. Then e_S = ½(e_{S∪T} + e_{S∪T̄}), so e_S lies in ½(P(D) + □). The correspondence between normalized bisubmodular functions and polytopes gives that

½(P(D) + □) = { x : ⟨e_S, x⟩ ≤ (g_D(S) + |S|)/2 } .

If S is not independent, then e_S violates the inequality ⟨e_S, e_S⟩ ≤ (g_D(S) + |S|)/2, so e_S does not lie in ½(P(D) + □).

3.2. Enveloping matroids. We now recall the definition of an enveloping matroid of a delta-matroid, which was introduced for algebro-geometric reasons in [21, Section 6]. A closely related notion was considered in [9]. Note that enveloping matroids necessarily have rank n. In [21, Section 6.3], it is shown that many different types of delta-matroids have enveloping matroids, such as realizable delta-matroids, delta-matroids arising from the independent sets or bases of a matroid, and delta-matroids associated to graphs or embedded graphs. We will need the following property of enveloping matroids.

Proof of Proposition 3.9. If S ∈ AdS_n, then env(u_S) = e_S, and S is the only admissible set with this property. Furthermore, if S ∈ AdS_n has size n, then u_S is the only indicator vector of a subset of [n, n] of size n which is a preimage of e_S under env. Because env(P(M)) = P(D), we see that if B is a feasible set of D, then B is a basis for M. This implies that the independent sets in D are independent in M.
By [21, Lemma 7.6], env(IP(M)) = ½(P(D) + □). If S is admissible and independent in M, then env(u_S) = e_S ∈ ½(P(D) + □), so by Proposition 3.6, S is independent in D.

3.3. Lorentzian polynomials. For a multi-index m = (m_0, m_1, . . . ), let w^m = w_0^{m_0} w_1^{m_1} · · · . A homogeneous polynomial f(w_0, w_1, . . . ) of degree d with real coefficients is said to be strictly Lorentzian if all its coefficients are positive, and the quadratic form obtained by taking d − 2 partial derivatives is nondegenerate with exactly one positive eigenvalue. We say that f is Lorentzian if it is a coefficient-wise limit of strictly Lorentzian polynomials. Lorentzian polynomials enjoy strong log-concavity properties, and the class of Lorentzian polynomials is preserved under many natural operations.
The following lemma is a special case of [31, Proposition 3.3]. Alternatively, it can be deduced from the proof of [12, Corollary 3.5]. We thank Nima Anari for discussing this lemma with us. If f is Lorentzian, then f̃ is Lorentzian.
For S ∈ AdS_n, let S̲ ⊂ [n] denote the unsigned version of S, i.e., the image of S under the quotient of [n, n] by the involution. For a set T, let w_T = Π_{a∈T} w_a. We now state a strengthening of Theorem 1.6.

is Lorentzian. By Proposition 3.9, this polynomial is equal to the polynomial in Theorem 3.11.

Remark 3.13. Let (U, Ω, r) be a multimatroid [9], i.e., U is a finite set, Ω is a partition of U, and r is a function on partial transversals of Ω satisfying certain conditions. An independent set is a partial transversal S of Ω with r(S) = |S|. A multimatroid is called shelterable if r can be extended to the rank function of a matroid on U. Then the argument used to prove Theorem 1.6 shows that, if a_k is the number of independent sets of size k of a shelterable multimatroid, then

a_k^2 ≥ ((|U| − k + 1)/(|U| − k)) · ((k + 1)/k) · a_{k+1} a_{k−1} .
Example 1.4. [21, Example 5.5 and 5.6] Let M be a matroid of rank r on [n], and let S = S + ∪ S − ∈ AdS n be an admissible set with S + , S − ⊂ [n]. Set V = {i ∈ [n] : S ∩ {i,ī} = ∅}. Above, we gave two examples of delta-matroids constructed from M .
By [11, Theorem 4.5] (or [2, Theorem 5.2]), P(f) has all edges parallel to e_i or e_i ± e_j if and only if f is bisubmodular. In this case, P(f) is a lattice polytope if and only if f is integer-valued. For a normalized (i.e., f(∅) = 0) bisubmodular function f, we can recover f from P(f) via the formula f(S) = max_{x ∈ P(f)} ⟨e_S, x⟩.
The polytope P(g) has all vertices in {±1}^n if and only if ½(P(g) + (1, . . . , 1)) is a lattice polytope which is contained in [0, 1]^n. The normalized bisubmodular function h corresponding to the point (1, . . . , 1) takes value h(S) = |S⁺| − |S⁻| on an admissible set of the form S = S⁺ ∪ S⁻, with S⁺, S⁻ ⊂ [n]. The polytope ½(P(g) + (1, . . . , 1)) is P(f), where f is the normalized bisubmodular function defined by f := ½(g + h). We note that P(f) is a lattice polytope which is contained in [0, 1]^n if and only if (1) f(i) ∈ {0, 1} and f(ī) ∈ {−1, 0}, and (2) f is integer-valued. A normalized bisubmodular function f satisfies these conditions if and only if g satisfies the conditions of Theorem 1.2, giving the characterization of rank functions of delta-matroids. By [2, Example 5.2.3], the polytope P(g_D) = P(D) has all edges parallel to e_i ± e_j if and only if g_D satisfies the condition
Definition 2.1. Let D be a delta-matroid on [n, n] with feasible sets F, and let i ∈ [n]. We say that i is a loop of D if no feasible set contains i, and we say that i is a coloop if every feasible set contains i.
(1) If i is not a loop of D, then the contraction D/i is the delta-matroid with feasible sets B \ i, for B ∈ F containing i.
(2) If i is not a coloop of D, then the deletion D \ i is the delta-matroid with feasible sets B \ ī, for B ∈ F containing ī.
(3) The projection D(i) is the delta-matroid with feasible sets B \ {i, ī} for B ∈ F.
(4) If i is a loop or coloop, then set D/i = D \ i = D(i).
For A ⊂ [n], we define D/A, D \ A, and D(A) to be the delta-matroids on [n, n] \ (A ∪ Ā) obtained by successively contracting, deleting, or projecting away from all elements of A. Contractions, deletions, and projections at disjoint sets commute with each other, so this is well defined. If A and B are disjoint subsets of [n], then D/A \ B is the delta-matroid obtained by contracting A and then deleting B, which is the same as first deleting B and then contracting A.
Proposition 2.2. Let D be a delta-matroid on [n, n], and let A ⊂ [n]. For each S ∈ AdS_n disjoint from A ∪ Ā, g_{D(A)}(S) = g_D(S).

Proof. As S is disjoint from A ∪ Ā, |B ∩ S| − |B ∩ S̄| depends only on B \ (A ∪ Ā). The feasible sets of D(A) are given by B \ (A ∪ Ā) for B a feasible set of D.
Proposition 2.3. Let D be a delta-matroid on [n, n]. Let A, B ⊂ [n] be disjoint subsets, and let S ∈ AdS_n be disjoint from A ∪ B ∪ Ā ∪ B̄. Then g_{D/A\B}(S) = g_D(S ∪ A ∪ B̄) − g_D(A ∪ B̄).
Proposition 2.4. Let D be a delta-matroid on [n, n], and let S ⊂ T ∈ AdS_n. Let F_S be the collection of feasible sets B of D that maximize |S ∩ B|, i.e., that have |S ∩ B| = max_{B′ ∈ F} |S ∩ B′|. Then

max_{B ∈ F_S} |T ∩ B| = max_{B ∈ F} |T ∩ B| .
Lemma 2.5. Let D be a delta-matroid on [n, n], and let i ∈ [n]. Then
(1) if i is not a loop, then g_{D/i}(S) = g_D(S ∪ i) − 1, and
(2) if i is not a coloop, then g_{D\i}(S) = g_D(S ∪ ī) − 1.
Proof of Proposition 2.3. First note that g_D(i) = 1 if i is not a loop and is −1 if i is a loop, and similarly g_D(ī) = 1 if i is not a coloop and is −1 if i is a coloop. So Lemma 2.5 implies that the result holds when |S| = 1.
Proposition 2.6. Let D_1, D_2 be delta-matroids on [n_1, n_1] and [n_2, n_2], and let S = S_1 ∪ S_2 be an admissible subset of [n_1 + n_2, n_1 + n_2], with S_1 ⊂ [n_1, n_1] and S_2 ⊂ [n_2, n_2]. Then g_{D_1×D_2}(S) = g_{D_1}(S_1) + g_{D_2}(S_2).

Proof. Let B_1 be a feasible set of D_1 with g_{D_1}(S_1) = |S_1 ∩ B_1| − |S̄_1 ∩ B_1|, and let B_2 be a feasible set of D_2 with g_{D_2}(S_2) = |S_2 ∩ B_2| − |S̄_2 ∩ B_2|. Then B_1 ∪ B_2 maximizes B ↦ |S ∩ B| − |S̄ ∩ B|, and so g_{D_1×D_2}(S) = |S_1 ∩ B_1| − |S̄_1 ∩ B_1| + |S_2 ∩ B_2| − |S̄_2 ∩ B_2| = g_{D_1}(S_1) + g_{D_2}(S_2), which implies the result.

Let S ∈ AdS_n be an admissible set of size n. For any delta-matroid D on [n, n], let r be the maximal value of |S ∩ B|. Then {S ∩ B : B ∈ F, |S ∩ B| = r} is the set of bases of a matroid on S. When S = [n], this is sometimes called the upper matroid of D. We describe the rank function of this matroid in terms of the rank function of D.
Proposition 2.8. Let S ∈ AdS_n be an admissible set of size n, and let D be a delta-matroid on [n, n] with r = max_{B ∈ F} |S ∩ B|. The matroid M on S whose bases are {S ∩ B : B ∈ F, |S ∩ B| = r} has rank function rk_M(T) = (g_D(T) + |T|)/2.
in the more general setting of multimatroids. The following characterization of the functions h D follows from [9, Proposition 4.2]: Proposition 2.10. A function h : AdS n → Z is equal to h D for some delta-matroid D if and only if (1) h(∅) = 0,
Proposition 3.1. If n = 0, then U_D(u, v) = 1. For any i ∈ [n], the U-polynomial satisfies
Lemma 3.2. Let D_1, D_2 be delta-matroids on [n_1, n_1] and [n_2, n_2]. Then U_{D_1×D_2}(u, v) = U_{D_1}(u, v) U_{D_2}(u, v).
Proof of Proposition 3.1. If n = 0, then the only admissible subset of [n, n] is the empty set, and g_D(∅) = 0, so U_D(u, v) = 1. Now choose some i ∈ [n]. First suppose that i is neither a loop nor a coloop. The admissible subsets of [n, n] are partitioned into sets containing i, sets containing ī, and sets containing neither i nor ī. If S contains i, then u^{n−|S|} v^{(|S|−g_D(S))/2} = u^{n−1−|S\i|} v^{(|S\i|−g_{D/i}(S\i))/2}.
Definition 3.3. We say that S ∈ AdS_n is independent in D if g_D(S) = |S| or, equivalently, if S is contained in a feasible subset of D. The independence complex of D is the simplicial complex on [n, n] whose facets are given by the feasible sets of D.

Let S ∈ AdS_n, and let T = {i ∈ [n] : S ∩ {i, ī} = ∅}. Note that S is independent if and only if S is a feasible set of D(T). The following result is immediate from the definition of U_D(u, 0).

Proposition 3.4. Let f_i(D) be the number of i-dimensional faces of the independence complex of D. Then U_D(u, 0) = f_{n−1}(D) + f_{n−2}(D) u + · · · + f_{−1}(D) u^n.
Remark 3.7. Let U_D(u, −1) = b_n + b_{n−1} u + · · · + b_0 u^n. In small examples, (b_0, . . . , b_n) is the f-vector of a pure simplicial complex of dimension n − 1. When M is a matroid, the coefficients of R_M(u, −1), when written backwards, are the f-vector of the broken circuit complex of M. This suggests that (b_0, . . . , b_n) may be the f-vector of a delta-matroid analogue of the broken circuit complex, and, more generally, that there is an "activity" interpretation of the coefficients of U_D(u, v − 1). See [30, Corollary 5.3] for an enumerative interpretation of b_n.
For S ⊆ [n, n], let u_S denote the corresponding indicator vector in R^{[n,n]}. For a matroid M on [n, n], let P(M) = Conv{u_B : B a basis of M}, and let IP(M) = Conv{u_S : S independent in M}.
Definition 3.8. Let env : R^{[n,n]} → R^n be the map given by (x_1, . . . , x_n, x_{1̄}, . . . , x_{n̄}) ↦ (x_1 − x_{1̄}, . . . , x_n − x_{n̄}). Let D be a delta-matroid on [n, n], and let M be a matroid on [n, n]. We say that M is an enveloping matroid for D if env(P(M)) = P(D).
Proposition 3.9. Let M be an enveloping matroid for a delta-matroid D on [n, n]. Let S ∈ AdS_n be an admissible set. Then S is independent in M if and only if it is independent in D.
Lemma 3.10. For a polynomial f(w_0, w_1, . . . ) = Σ_m c_m w^m, let

f̃(w_0, w_1, . . . ) = Σ_{m : m_i ≤ 1 for i ≠ 0} c_m w^m .
Theorem 3.11. Let D be a delta-matroid on [n, n] which has an enveloping matroid. Then the polynomial

Σ_{S ∈ AdS_n independent in D} w_0^{2n−|S|} w_{S̲} ∈ R[w_0, w_1, . . . , w_n]

is Lorentzian.
Remark 3.12. In [21, Theorem 8.1], it is proven that if D has an enveloping matroid, then the polynomial

Σ_{S independent in D} (w_0^{|S|} / |S|!) w_{[n]\S̲} ∈ R[w_0, w_1, . . . , w_n]

is Lorentzian. By [12, Example 2.26], the coefficients of a Lorentzian polynomial in two variables of degree 2n are log-concave after dividing the coefficient of w_0^{2n−i} y^i by the binomial coefficient (2n choose i), which implies the result.

Proof of Theorem 3.11. Let M be an enveloping matroid of D. By [12, Proof of Theorem 4.14], the polynomial Σ_{S independent in M} w_0^{2n−|S|} w_S ∈ R[w_0, w_1, . . . , w_n, w_{1̄}, . . . , w_{n̄}] is Lorentzian. Setting w_{ī} = w_i for each i, by [12, Theorem 2.10] the resulting polynomial in R[w_0, w_1, . . . , w_n] is Lorentzian. A term w_0^{2n−|S|} w_{S∩[n]} w_{(S∩[n̄])̲} has degree at most 1 in each of the variables w_1, . . . , w_n if and only if S is admissible, in which case it is equal to w_0^{2n−|S|} w_{S̲}. Therefore, by Lemma 3.10, the polynomial Σ_{S ∈ AdS_n independent in M} w_0^{2n−|S|} w_{S̲} ∈ R[w_0, w_1, . . . , w_n]
[1] Nima Anari, Kuikui Liu, Shayan Oveis Gharan, and Cynthia Vinzant, Log-concave polynomials III: Mason's ultra-log-concavity conjecture for independent sets of matroids. arXiv:1807.00929v2.
[2] Federico Ardila, Federico Castillo, Christopher Eur, and Alexander Postnikov, Coxeter submodular functions and deformations of Coxeter permutahedra, Adv. Math. 365 (2020), 107039. MR4064768
[3] Richard Arratia, Béla Bollobás, and Gregory B. Sorkin, The interlace polynomial of a graph, J. Combin. Theory Ser. B 92 (2004), no. 2, 199-233. MR2099142
[4] Anders Björner, The homology and shellability of matroids and geometric lattices, Matroid applications, 1992, pp. 226-283. MR1165544
[5] Béla Bollobás and Oliver Riordan, A polynomial invariant of graphs on orientable surfaces, Proc. London Math. Soc. (3) 83 (2001), no. 3, 513-531. MR1851080
[6] Alexandre V. Borovik, Israel Gelfand, and Neil White, Coxeter matroids, Progress in Mathematics, vol. 216, Birkhäuser Boston, Inc., Boston, MA, 2003. MR1989953
[7] André Bouchet, Greedy algorithm and symmetric matroids, Math. Programming 38 (1987), no. 2, 147-159. MR904585
[8] André Bouchet, Representability of △-matroids, Combinatorics (Eger, 1987), 1988, pp. 167-182. MR1221555
[9] André Bouchet, Multimatroids. I. Coverings by independent sets, SIAM J. Discrete Math. 10 (1997), no. 4, 626-646. MR1477659
[10] André Bouchet, Multimatroids. II. Orthogonality, minors and connectivity, Electron. J. Combin. 5 (1998), Research Paper 8, 25. MR1491784
[11] André Bouchet and William H. Cunningham, Delta-matroids, jump systems, and bisubmodular polyhedra, SIAM J. Discrete Math. 8 (1995), no. 1, 17-32. MR1315956
[12] Petter Brändén and June Huh, Lorentzian polynomials, Ann. of Math. (2) 192 (2020), no. 3, 821-891. MR4172622
[13] Robert Brijder and Hendrik Jan Hoogeboom, Interlace polynomials for multimatroids and delta-matroids, European J. Combin. 40 (2014), 142-167. MR3191496
[14] R. Chandrasekaran and Santosh N. Kabadi, Pseudomatroids, Discrete Math. 71 (1988), no. 3, 205-217. MR959006
[15] Carolyn Chun, Iain Moffatt, Steven D. Noble, and Ralf Rueckriemen, Matroids, delta-matroids and embedded graphs, J. Combin. Theory Ser. A 167 (2019), 7-59. MR3938888
[16] Carolyn Chun, Iain Moffatt, Steven D. Noble, and Ralf Rueckriemen, On the interplay between embedded graphs and delta-matroids, Proc. Lond. Math. Soc. (3) 118 (2019), no. 3, 675-700. MR3932785
[17] Andreas Dress and Timothy F. Havel, Some combinatorial properties of discriminants in metric vector spaces, Adv. in Math. 62 (1986), no. 3, 285-312. MR866162
[18] Alain Duchamp, Delta matroids whose fundamental graphs are bipartite, Linear Algebra Appl. 160 (1992), 99-112. MR1137846
[19] F. D. J. Dunstan and Dominic Welsh, A greedy algorithm for solving a certain class of linear programmes, Math. Programming 5 (1973), 338-353. MR335311
[20] Joanna A. Ellis-Monaghan, Andrew J. Goodall, Iain Moffatt, Steven D. Noble, and Lluís Vena, Irreducibility of the Tutte polynomial of an embedded graph, Algebr. Comb. 5 (2022), no. 6, 1337-1351. MR4529927
[21] Christopher Eur, Alex Fink, Matt Larson, and Hunter Spink, Signed permutohedra, delta-matroids, and beyond. arXiv:2209.06752v2.
[22] Satoru Fujishige, Bisubmodular polyhedra, simplicial divisions, and discrete convexity, Discrete Optim. 12 (2014), 115-120. MR3189029
[23] Satoru Fujishige, Parametric bisubmodular function minimization and its associated signed ring family, Discrete Appl. Math. 227 (2017), 142-148. MR3661422
[24] Satoru Fujishige and Satoru Iwata, Bisubmodular function minimization, SIAM J. Discrete Math. 19 (2005), no. 4, 1065-1073. MR2206380
[25] Satoru Fujishige and Sachin Patkar, The box convolution and the Dilworth truncation of bisubmodular functions, 1994. Report No. 94823, Research Institute for Discrete Mathematics, University of Bonn.
[26] Takayuki Hibi, What can be said about pure O-sequences?, J. Combin. Theory Ser. A 50 (1989), no. 2, 319-322. MR989204
[27] Thomas Krajewski, Iain Moffatt, and Adrian Tanasa, Hopf algebras and Tutte polynomials, Adv. in Appl. Math. 95 (2018), 271-330. MR3759218
[28] John H. Mason, Matroids: unimodal conjectures and Motzkin's theorem, Combinatorics (Proc. Conf. Combinatorial Math., Math. Inst., Oxford, 1972), 1972, pp. 207-220. MR0349445
[29] Ada Morse, The interlace polynomial, Graph polynomials, 2017, pp. 1-23. MR3790909
[30] Ada Morse, Interlacement and activities in delta-matroids, European J. Combin. 78 (2019), 13-27. MR3906851
[31] Julius Ross, Hendrik Süss, and Thomas Wannerer, Dually Lorentzian polynomials. arXiv:2304.08399.
[32] Walter Wenzel, ∆-matroids with the strong exchange conditions, Appl. Math. Lett. 6 (1993), no. 5, 67-70. MR1349667

Department of Mathematics, Stanford University, 450 Jane Stanford Way, Stanford, CA 94305. Email address: [email protected]
Quantum calculation of axion-photon transition in electromagnetodynamics for cavity haloscope

Tong Li and Rui-Jia Zhang
School of Physics, Nankai University, Tianjin 300071, China

arXiv:2305.01344 (2 May 2023); https://export.arxiv.org/pdf/2305.01344v1.pdf

Abstract: The Witten effect implies the presence of an electric charge of the magnetic monopole and a possible relationship between the axion and the dyon. The axion-dyon dynamics can be reliably built based on quantum electromagnetodynamics (QEMD), which was developed by Schwinger and Zwanziger in the 1960's. A generic low-energy axion-photon effective field theory can also be realized in the language of "generalized symmetries" with higher-form symmetries and background gauge fields. In this work, we implement the quantum calculation of the axion-single photon transition rate inside a homogeneous electromagnetic field in terms of the new axion interaction Hamiltonian in QEMD. This quantum calculation clearly implies the enhancement of the conversion rate through a resonant cavity in axion haloscope experiments. We also show the promising potential of cavity searches for new axion-photon couplings in QEMD.
I. INTRODUCTION
It is well-known that the strong CP problem in quantum chromodynamics (QCD) arises from the source of CP violation in the QCD Lagrangian with θ G^a_{μν} G̃^{a μν}. The Peccei-Quinn (PQ) mechanism solves the strong CP problem by introducing a pseudo-Goldstone boson a, called the axion, after the spontaneous breaking of a QCD-anomalous U(1)_PQ global symmetry [1-12]. The chiral transformation of the quark fields with PQ charges also leads to the anomaly under QED and the coupling g_{aγγ} a F_{μν} F̃^{μν} between the axion and electromagnetic fields. In 1979, E. Witten showed that a CP-violating term θ F_{μν} F̃^{μν} with non-zero vacuum angle θ provides an electric charge −θe/2π for magnetic monopoles [13]. This so-called Witten effect implies a close relationship between the axion and the magnetic monopole due to the axion-photon coupling g_{aγγ} a E · B.
This axion-dyon dynamics was first derived by W. Fischler et al. within classical electromagnetism [14] and has been proposed to solve cosmological problems in recent years [15-19]. In those works, however, the magnetic monopoles were treated as quasi-classical external sources, and the quantization of electromagnetism was not complete. A reliable quantization in the presence of magnetic monopoles was developed by J. S. Schwinger and D. Zwanziger in the 1960's and is called quantum electromagnetodynamics (QEMD) [20-22]. Recently, based on the QEMD framework, Ref. [23] constructed a more generic axion-photon Lagrangian in the low-energy axion effective field theory (EFT). Besides the Witten effect term, additional anomalous axion-photon interactions and couplings arise, assuming the existence of heavy PQ-charged fermions with electric and magnetic charges. This is in contrast to the conventional axion EFT coupling g_{aγγ} a F_{μν} F̃^{μν} in the quantum electrodynamics (QED) framework. As a result of the above generic axion-photon Lagrangian, the conventional axion Maxwell equations [24] are further modified, and some consequent new detection strategies for the axion have been studied in recent years [25-28].
The key property of QEMD is to extend the U(1)_EM gauge group in the Standard Model (SM) by substituting for it two U(1) gauge groups, U(1)_E × U(1)_M, which inherently introduce both electric and magnetic charges. Then two four-potentials, A^μ and B^μ (instead of only one in QED), are introduced, corresponding to the two U(1) gauge groups, respectively. A non-trivial form of equal-time canonical commutation relations between them can be built [22]. It guarantees the preservation of the right degrees of freedom of the physical photon.
The other property of QEMD is that it seemingly acts like a non-local quantum field theory (QFT). To obtain the covariant Maxwell equations in the presence of a conserved electric current j_e and magnetic current j_m, one needs to introduce an arbitrary spacelike vector n^μ = (0, n). The electromagnetic field strength tensor F^{μν} and its dual tensor F_d^{μν} are given in this way:

F = ∂ ∧ A − (n · ∂)^{−1} (n ∧ j_m)^d , F_d = ∂ ∧ B + (n · ∂)^{−1} (n ∧ j_e)^d , (1)
where the integral operator (n · ∂) −1 satisfies n · ∂(n · ∂) −1 ( x) = δ( x). The second terms on the right-hand side of Eq. (1) induce likely non-local property in QEMD. This non-locality in QEMD can also be realized in the language of "Generalized Global Symmetries" with higher-form group symmetries [29][30][31]. We will show the action of higher-form symmetry transformation as a topological operator on the dynamical fields in a theory of two U (1) one-form symmetries. One can prove that the non-local part does not play any role in physical processes and the Lorentz invariance is not violated [32][33][34].
The QCD axion (see Ref. [35] for a recent review) can serve as a dark matter (DM) candidate through the misalignment mechanism [36, 37]. The conventional axion haloscope experiments such as ADMX [38, 39] are built on the resonant cavity technique to search for O(10) µeV axion DM. The cosmic axions resonantly convert into a monochromatic photon, with an enhancement from a high quality factor Q when the cavity resonant frequency is tuned to the axion mass m_a. The mean number of thermal photons in the cavity at finite temperature T is given by
n(ω_a, T) = 1/(e^{ω_a/k_B T} − 1) , (2)
where k_B is Boltzmann's constant. When m_a ∼ O(10) µeV and T ≈ 20 mK, the occupation number of thermal photons is quite low, and the cavity can be treated as a single-photon emitter. Although the electromagnetic power radiated in the cavity is usually calculated in classical field theory [24], the actual description is the quantum-mechanical process of axion-to-photon conversion [40]. To describe this axion-single photon conversion a → γ, the calculation of the transition rate should be performed at the quantum level [41, 42]. In this work, we implement the quantum calculation of the photon |0⟩ → |1⟩ transition rate inside a homogeneous electromagnetic field in terms of the new axion interaction Hamiltonian in QEMD. This quantum calculation clearly implies the enhancement of the conversion rate through resonance, which is not certain in the classical picture. Our work will show the basic method for the generic cavity search of the new axion-photon couplings in QEMD.
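As a quick numeric illustration of Eq. (2), a sketch using the benchmark values quoted above (the function and constant names are our own):

```python
import math

K_B = 8.617333262e-5   # Boltzmann constant in eV/K

def mean_thermal_photons(omega_ev, temp_kelvin):
    """Planck occupation number n(omega, T) = 1 / (exp(omega / k_B T) - 1),
    with the photon energy omega = m_a given in eV (natural units)."""
    return 1.0 / math.expm1(omega_ev / (K_B * temp_kelvin))

# m_a ~ 10 ueV and T ~ 20 mK: the cavity holds far fewer than one thermal
# photon on average, so it acts as a single-photon emitter for the resonant
# a -> gamma conversion.
n_thermal = mean_thermal_photons(10e-6, 0.020)
print(f"n(10 ueV, 20 mK) = {n_thermal:.4f}")
```

Using `math.expm1` avoids the loss of precision of `exp(x) - 1` when ω_a/k_B T is small, which matters for lighter axion masses where the occupation number is large.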
This paper is organized as follows. In Sec. II, we introduce the generic axion-photon interactions. We will show the realizations of this theory in both QEMD and generalized symmetry. In Sec. III, we perform a complete quantum calculation of a → γ transition rate based on the new axion interaction Hamiltonian in QEMD. The transition rates from different types of cavity mode are obtained under external magnetic or electric background. We also show the sensitivity of resonant cavity to axion-photon couplings in QEMD in Sec. IV. Our conclusions are drawn in Sec. V.
II. THE REALIZATIONS OF GENERIC AXION-PHOTON INTERACTIONS IN QEMD AND GENERALIZED SYMMETRY
A. The generic axion-photon interactions in QEMD
In the QEMD theory, the photon is described by two four-potentials A µ and B µ . Correspondingly, the gauge group of QEMD becomes U (1) E × U (1) M which inherently introduces both electric and magnetic charges. The equal-time canonical commutation relations between the two four-potentials were obtained [22]
[A^\mu(t,\vec{x}), B^\nu(t,\vec{y})] = i\,\epsilon^{\mu\nu}{}_{\kappa 0}\, n^\kappa (n\cdot\partial)^{-1}\, \delta^3(\vec{x}-\vec{y}) ,  (3)

[A^\mu(t,\vec{x}), A^\nu(t,\vec{y})] = [B^\mu(t,\vec{x}), B^\nu(t,\vec{y})] = -i\,(g^{\mu 0} n^\nu + g^{\nu 0} n^\mu)(n\cdot\partial)^{-1}\, \delta^3(\vec{x}-\vec{y}) .  (4)
As a result of Eq. (1), the electromagnetic field strength tensors F and F d are then introduced in the way that
n · F = n · (∂ ∧ A) , n · F d = n · (∂ ∧ B) ,(5)
where n µ = (0, n) is an arbitrary fixed spatial vector. Apparently, the two four-potentials have opposite parities. In the absence of electric and magnetic currents j e , j m , one has a simplified form
F = ∂ ∧ A = −(∂ ∧ B) d , F d = ∂ ∧ B = (∂ ∧ A) d .(6)
The n-related terms produce the non-locality in this theory. This non-local property can be realized by the two-particle irreducible representation in the QFT theory with both electric charge q and magnetic charge g. Each two-particle state (i, j) is characterized by the Dirac-Schwinger-Zwanziger (DSZ) quantization condition
q i g j − q j g i = 2πN , N ∈ Z .(7)
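As an aside, the integrality of N in Eq. (7) is automatic once all electric charges are integer multiples of e and all magnetic charges are integer multiples of g_0 = 2π/e; a minimal numerical check (the dyon charge assignments below are illustrative):

```python
import math

# DSZ quantization, Eq. (7): q_i g_j - q_j g_i = 2*pi*N with N an integer.
# With q = n*e and g = m*g0, g0 = 2*pi/e, one finds N = n_i m_j - n_j m_i.
e = math.sqrt(4 * math.pi / 137.036)  # unit electric charge from alpha ~ 1/137
g0 = 2 * math.pi / e                  # minimal magnetic charge

dyons = [(1, 0), (0, 1), (2, 3), (-1, 2)]  # illustrative (n, m) integer pairs
for (ni, mi) in dyons:
    for (nj, mj) in dyons:
        N = ((ni * e) * (mj * g0) - (nj * e) * (mi * g0)) / (2 * math.pi)
        assert abs(N - round(N)) < 1e-9  # N is always an integer
```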
Thus, the cluster decomposition principle is obviously violated by the irreducible two-particle states [34], and Lorentz invariance is seemingly violated in this QEMD theory. However, it was formally shown that the observables of QEMD are Lorentz invariant using the path-integral approach [32][33][34]. After all the quantum corrections are properly accounted for, the dependence on the spatial vector n^µ in the action S factorizes into an integer linking number L_n multiplied by the combination of charges in the DSZ quantization condition, q_i g_j − q_j g_i. This n-dependent part is then given by 2πN with N an integer. Since S contributes to the generating functional through the exponent e^{iS}, this Lorentz-violating part does not play any role in physical processes. The Lagrangian for the anomalous interactions between the axion a and the photon in QEMD is given by [23]
\mathcal{L} \supset -\frac{1}{4} g_{aAA}\, a\, \mathrm{tr}[(\partial\wedge A)(\partial\wedge A)^d] - \frac{1}{4} g_{aBB}\, a\, \mathrm{tr}[(\partial\wedge B)(\partial\wedge B)^d] - \frac{1}{2} g_{aAB}\, a\, \mathrm{tr}[(\partial\wedge A)(\partial\wedge B)^d] .  (8)
The first two dimension-five operators are CP-conserving axion interactions. Their couplings g_{aAA} and g_{aBB} are governed by the U(1)_{PQ}U(1)_E^2 and U(1)_{PQ}U(1)_M^2 anomalies, respectively. As A_µ and B_µ have opposite parities, the third operator is CP-violating and its coupling g_{aAB} is determined by the U(1)_{PQ}U(1)_E U(1)_M anomaly. The inclusion of this term accounts for the intrinsic CP violation in a dyon theory. It is analogous to the interaction φF_{µν}F^{µν} between the electromagnetic field and a scalar φ with positive parity [43]. In terms of classical electromagnetic fields, the above axion-photon Lagrangian becomes 2
\mathcal{L} \supset -\frac{1}{4}(g_{aAA} - g_{aBB})\, a\, F^{\mu\nu} F^d_{\mu\nu} + \frac{1}{2} g_{aAB}\, a\, F^{\mu\nu} F_{\mu\nu} = (g_{aAA} - g_{aBB})\, a\, \vec{H}\cdot\vec{E} + g_{aAB}\, a\, (\vec{H}^2 - \vec{E}^2) .  (9)
This is the form of interactions that we will use for the quantum calculation of a → γ transition rate below. Taking care of the above anomalies, one can calculate the coupling coefficients as
g_{aAA} = \frac{E e^2}{4\pi^2 v_{PQ}} , \qquad g_{aBB} = \frac{M g_0^2}{4\pi^2 v_{PQ}} , \qquad g_{aAB} = \frac{D e g_0}{4\pi^2 v_{PQ}} ,  (10)
where e is the unit of electric charge, g_0 is the minimal magnetic charge with g_0 = 2π/e from the DSZ quantization condition, and v_{PQ} is the U(1)_{PQ} symmetry breaking scale. E (M) is the electric (magnetic) anomaly coefficient and D is the mixed electric-magnetic CP-violating anomaly coefficient. They can be computed by integrating out heavy PQ-charged fermions with electric and magnetic charges. Ref. [23] performed the calculation of the anomaly coefficients by following Fujikawa's path integral method [44]. As the DSZ quantization condition implies g_0 ≫ e, we have the scaling of the axion-photon couplings g_{aBB} ≫ |g_{aAB}| ≫ g_{aAA}.
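The coupling hierarchy can be made concrete with a short sketch of Eq. (10); here the anomaly coefficients are set to E = M = D = 1 and the PQ scale v_PQ = 10⁸ GeV is an assumed illustrative value, not a prediction of the paper:

```python
import math

# Scaling of the three QEMD couplings, Eq. (10), with unit anomaly coefficients.
e = math.sqrt(4 * math.pi / 137.036)   # unit electric charge
g0 = 2 * math.pi / e                   # minimal magnetic charge (DSZ condition)
v_PQ = 1e8                             # GeV, illustrative PQ breaking scale

g_aAA = e**2 / (4 * math.pi**2 * v_PQ)
g_aBB = g0**2 / (4 * math.pi**2 * v_PQ)
g_aAB = e * g0 / (4 * math.pi**2 * v_PQ)

# Hierarchy g_aBB >> |g_aAB| >> g_aAA follows from g0 = 2*pi/e >> e:
print(g_aBB / g_aAA)       # (g0/e)^2, a few thousand
print(abs(g_aAB) / g_aAA)  # g0/e, a few tens
```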
B. Generalized symmetry realization
The QEMD theory describing monopole dynamics can also be realized in the language of higher-form symmetries as a topological QFT (TQFT) [29][30][31]. The above generic axion couplings then naturally arise when an axion-Maxwell theory couples to a TQFT [45]. Below we briefly show the spirit of generalized symmetry and the realization of above generic axion couplings in TQFT.
We consider a general p-form symmetry in d dimensions. The symmetry transformation, as an operator, is associated with a codimension-(p+1) manifold M^{(d-p-1)}:

U_g(M^{(d-p-1)}) ,  (11)
where g is an element of symmetry group G. The operators obey the multiplication rule
U_g(M^{(d-p-1)})\, U_{g'}(M^{(d-p-1)}) = U_{g''}(M^{(d-p-1)}) ,  (12)

where g'' = g g' ∈ G. The dependence of the operator U_g(M^{(d-p-1)}) on the manifold M^{(d-p-1)} is topological and remains unchanged unless the deformation of M^{(d-p-1)} crosses an operator V. The topological operator U_g(M^{(d-p-1)}) acts on a p-dimensional operator V of manifold C^{(p)} in the form of [31]

U_g(M^{(d-p-1)})\, V(C^{(p)}) = g(V)^{\langle M^{(d-p-1)},\, C^{(p)} \rangle}\, V(C^{(p)})\, U_g(M^{(d-p-1)}) ,  (13)

where g(V) is the representation of the group element g on V, and \langle M^{(d-p-1)}, C^{(p)} \rangle is the linking number of the manifolds M^{(d-p-1)} and C^{(p)}. It is then natural to couple the system to flat background gauge fields of the higher-form symmetry. Taking a d = 4 Maxwell theory for illustration, there are two one-form U(1) symmetries, i.e., U(1)_E × U(1)_M. The symmetries are generated by the integrals of the 2-form currents
U_E(M^{(2)}) = e^{i\alpha \int_{M^{(2)}} j_e} , \quad g_E = e^{i\alpha} \in U(1)_E ,  (14)

U_M(M^{(2)}) = e^{i\beta \int_{M^{(2)}} j_m} , \quad g_M = e^{i\beta} \in U(1)_M ,  (15)
where g_{E(M)} is the element of the group U(1)_{E(M)}, and the electric and magnetic charges are given by q(M^{(2)}) = \int_{M^{(2)}} j_e and g(M^{(2)}) = \int_{M^{(2)}} j_m, respectively. The representation then generally becomes
(g_{E(M)})^{Q\,\langle M^{(2)},\, C^{(1)} \rangle} ,  (16)
where Q denotes the conserved charge. These two operators act on the Wilson loop operator and the 't Hooft loop operator, respectively. Thus, we are able to introduce two two-form background gauge fields of the higher-form symmetries, i.e., A_µ and B_µ, as those in Zwanziger's QEMD theory. Inspired by this kind of higher-form symmetry realization, one can consider an axion-Maxwell theory coupled to a Z_n TQFT as shown in Ref. [45]. The gauge field A_µ is a one-form gauge field and F^{(2)}_A ≡ ∂ ∧ A is its two-form field strength for the U(1)_{EM} group in the SM. The action of the axion-Maxwell theory in this sector becomes
S_0 = \frac{1}{2g^2} F^{(2)}_A F^{(2)}_A - \frac{i K_A}{8\pi^2 f_a}\, a\, F^{(2)}_A (F^{(2)}_A)^d ,  (17)
where f a is the axion decay constant, and K A ∈ Z is a discrete coupling constant. This axion-Maxwell theory is considered to couple to a Z n TQFT. In this TQFT sector, the action of axion theory is
S_1 = \frac{i n}{2\pi} B^{(2)} (F^{(2)}_B)^d - \frac{i K_B}{4\pi^2 f_a}\, a\, F^{(2)}_B (F^{(2)}_B)^d - \frac{i K_{AB}}{8\pi^2 f_a}\, a\, F^{(2)}_A (F^{(2)}_B)^d ,  (18)
where F^{(2)}_B ≡ ∂ ∧ B^{(1)} is the two-form field strength of a one-form Z_n gauge field B^{(1)} associated with another U(1) gauge group, and B^{(2)} is a two-form gauge field associated with the one-form Z_n^{(1)} gauge symmetry. Then, the action of the theory with topological QFT couplings via the axion portal is given by S_0 + S_1 [45]. The second term in S_0 and the last two terms in S_1 agree with the generic QEMD axion interactions shown in Eq. (8).
III. QUANTUM CALCULATION OF AXION-PHOTON TRANSITION IN QEMD
In this section we perform the quantum calculations of axion-photon transition in QEMD under an external magnetic field or electric field as background.
A. Magnetic background
Suppose an external magnetic field H 0 along the z-direction, according to Eq. (9), the axion and photon interaction can be written as
\mathcal{L}_{a\gamma\gamma} = (g_{aAA} - g_{aBB})\, a\, \vec{E}\cdot\vec{H}_0 + g_{aAB}\, a\, \vec{H}\cdot\vec{H}_0 ,  (19)
where we have set the external electric field to zero, i.e., \vec{H}_0 = \hat{z}H_0 ≠ 0, \vec{E}_0 = 0. Due to the extremely light mass and low velocity of the axion DM, its de Broglie wavelength is of the order of 10^3/m_a. It is much larger than the typical size of the cavity in haloscope experiments, ∼ 1/m_a. Thus, the axion field inside the cavity can be approximately viewed as spatially independent and given in the form of a cosine oscillation: a(\vec{x}, t) ≈ a_0 \cos\omega_a t = (\sqrt{2\rho_a}/m_a)\cos\omega_a t. The Hamiltonian for the above interaction can be written as follows
H_I = -\int d^3x\, \mathcal{L}_{a\gamma\gamma} = \frac{\sqrt{2\rho_a}}{m_a}\, H_0 \cos(\omega_a t) \left[ (g_{aBB} - g_{aAA}) \int d^3x\, \hat{z}\cdot\vec{E} - g_{aAB} \int d^3x\, \hat{z}\cdot\vec{H} \right] .  (20)
We find that the key difference between the axion in QEMD and in QED lies in the axion-induced electromagnetic fields. In QEMD, the axion-induced fields \vec{E} and \vec{H} can occur simultaneously due to the presence of three couplings. As a result, the interactions between the axion field a and the electromagnetic fields are divided into two parts, g_{aBB} − g_{aAA} and g_{aAB}, corresponding to \vec{E} and \vec{H} respectively. However, in traditional axion QED, only the \hat{z}\cdot\vec{E} term appears, with the substitution (g_{aBB} − g_{aAA}) → −g_{aγγ}. Next, we will use a quantization approach to deal with the integrals \int d^3x\, \hat{z}\cdot\vec{E} and \int d^3x\, \hat{z}\cdot\vec{H}. In QEMD, the magnetic field and electric field are given by the curls of the two vector potentials \vec{A} and \vec{B}, respectively [22]
\vec{H} = \nabla\times\vec{A} , \qquad \vec{E} = -\nabla\times\vec{B} .  (21)
We can expand the vector potentials A and B in terms of creation and annihilation operators as well as mode functions u k (x)
\vec{A}(\vec{x},t) = \sum_k \frac{1}{\sqrt{2\omega_k V}}\left( a_k\, \vec{u}^{(A)}_k(\vec{x})\, e^{-i\omega_k t} + a^\dagger_k\, \vec{u}^{(A)*}_k(\vec{x})\, e^{i\omega_k t} \right) ,  (22)

\vec{B}(\vec{x},t) = \sum_k \frac{1}{\sqrt{2\omega_k V}}\left( a_k\, \vec{u}^{(B)}_k(\vec{x})\, e^{-i\omega_k t} + a^\dagger_k\, \vec{u}^{(B)*}_k(\vec{x})\, e^{i\omega_k t} \right) ,  (23)
where V is the volume of a cavity, and u k (x) satisfy the equations of motion with cavity-wall boundary conditions
\frac{n\cdot\partial}{n^2}\left( n\cdot\partial\, A^\mu - \partial^\mu\, n\cdot A - n^\mu\, \partial\cdot A - \epsilon^\mu{}_{\nu\rho\sigma}\, n^\nu \partial^\rho B^\sigma \right) = 0 ,  (24)

\frac{n\cdot\partial}{n^2}\left( n\cdot\partial\, B^\mu - \partial^\mu\, n\cdot B - n^\mu\, \partial\cdot B - \epsilon^\mu{}_{\nu\rho\sigma}\, n^\nu \partial^\rho A^\sigma \right) = 0 .  (25)
The operators a_k and a^\dagger_k annihilate and create physical single-photon states even though two vector potentials are introduced to describe the photon. The above equations of motion are two first-order differential equations. They constrain A and B together with the equation corresponding to the gauge-fixing condition
∂ 2 n · A + ∂ 2 n · B = 0 .(26)
They reduce the four degrees of A and B to the two degrees of freedom for a massless vector field. Furthermore, due to the equal-time commutation relations in Eq. (3) and Eq. (4), the total degrees of freedom of photon can be further reduced to two. Therefore, even if QEMD introduces two potentials A and B, the physical degrees of freedom of photon are preserved. Using the relations given by Eq. (21), the electromagnetic fields can be obtained. Although the exact forms of u (A,B) k are unknown, their curl in a cavity can be given by
\nabla\times\vec{u}^{(A)}_k = \omega_k\, \vec{u}^H_k , \qquad \nabla\times\vec{u}^{(B)}_k = \omega_k\, \vec{u}^E_k ,  (27)

where \vec{u}^H_k and \vec{u}^E_k are the actual electromagnetic field modes inside the cavity with the normalization \frac{1}{V}\int d^3x\, |\vec{u}^{E,H}_k|^2 = 1.
One can then calculate the |0⟩ → |1⟩ photon transition matrix element as well as the transition probability inside the cavity with an external magnetic field H_0 [42]. Up to first order, we have
P \approx \left| \langle 1 | \int_0^t dt\, H_I | 0 \rangle \right|^2 \approx \frac{\rho_a}{m_a^2}\, H_0^2\, V \left[ (g_{aBB} - g_{aAA})^2 \sum_k \omega_k C^E_k + g_{aAB}^2 \sum_k \omega_k C^H_k + 2 g_{aAB}(g_{aBB} - g_{aAA}) \sum_k \omega_k C^{EH}_k \right] \times \frac{\sin^2[(\omega_k - \omega_a)t/2]}{4[(\omega_k - \omega_a)/2]^2} ,  (28)
where the relations in Eq. (27) are plugged into the above result, and C^{E,H}_k, C^{EH}_k are the form factors that characterize the coupling strength of cavity mode k to axions
C^E_k = \frac{\left|\int d^3x\, \hat{z}\cdot\vec{u}^E_k\right|^2}{V\int d^3x\, |\vec{u}^E_k|^2} , \qquad C^H_k = \frac{\left|\int d^3x\, \hat{z}\cdot\vec{u}^H_k\right|^2}{V\int d^3x\, |\vec{u}^H_k|^2} , \qquad C^{EH}_k = \frac{\mathrm{Re}\left[\int d^3x\, \hat{z}\cdot\vec{u}^E_k \int d^3x\, \hat{z}\cdot\vec{u}^{H*}_k\right]}{V\left[\int d^3x\, |\vec{u}^E_k|^2 \int d^3x\, |\vec{u}^H_k|^2\right]^{1/2}} .  (29)
In practice, the emission process of a single photon is expected to take a long time t. The time factor can thus be approximated as

\frac{\sin^2[(\omega_k - \omega_a)t/2]}{4[(\omega_k - \omega_a)/2]^2} \approx \frac{\pi t}{2}\,\delta(\omega_k - \omega_a) .  (30)
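The delta-function limit of Eq. (30) can be verified numerically: integrated over ω_k − ω_a at large t, the time kernel reproduces the weight πt/2. Below is a pure-Python trapezoidal sketch; the value of t, the cutoff, and the grid are arbitrary choices of ours:

```python
import math

# Check of Eq. (30): integral over x = omega_k - omega_a of
# sin^2(xt/2)/(4(x/2)^2) = sin^2(xt/2)/x^2 tends to pi*t/2 at large t.
t = 200.0
n_steps = 400_000
x_max = 50.0
dx = 2 * x_max / n_steps

integral = 0.0
for i in range(n_steps + 1):
    x = -x_max + i * dx
    # limit value t^2/4 at x = 0; elsewhere the oscillating kernel
    f = (t**2 / 4) if x == 0 else math.sin(x * t / 2) ** 2 / x**2
    w = 0.5 if i in (0, n_steps) else 1.0  # trapezoidal end weights
    integral += w * f * dx

print(integral, math.pi * t / 2)  # both approximately 314
```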
Finally, the transition rate in the cavity can be obtained as
R = dP/dt = \frac{\pi}{2}\frac{\rho_a}{m_a^2}\, H_0^2\, V Q \left[ (g_{aBB} - g_{aAA})^2\, C^E_{\omega_a} + g_{aAB}^2\, C^H_{\omega_a} + 2 g_{aAB}(g_{aBB} - g_{aAA})\, C^{EH}_{\omega_a} \right] ,  (31)
where the discrete summation over the cavity modes k is converted into continuous integrals with

\sum_k C^{E,H}_k\, \omega_k\, \delta(\omega_k - \omega_a) \approx Q\, C^{E,H}_{\omega_a} , \qquad \sum_k C^{EH}_k\, \omega_k\, \delta(\omega_k - \omega_a) \approx Q\, C^{EH}_{\omega_a} .
For a given cavity in a haloscope experiment, Q is assumed to be universal for all terms at the same moment t, so we can factor Q out of the brackets. This result clearly shows the enhancement of the axion-photon transition by the cavity's quality factor when ω_k ≈ ω_a. The modes existing in a cylindrical cavity include TE modes (transverse electric modes with E_z = 0, H_z ≠ 0) and TM modes (transverse magnetic modes with H_z = 0, E_z ≠ 0), and there may also be some TEM modes (E_z = 0, H_z = 0) embedded in them. Based on the different types of cavity mode, the transition rate R in Eq. (31) can then be simplified as
R_{TE} = \frac{\pi}{2}\frac{\rho_a}{m_a^2}\, H_0^2\, V Q\, g_{aAB}^2\, C^H_{\omega_a} , \quad \text{with } C^E = C^{EH} = 0 ,

R_{TM} = \frac{\pi}{2}\frac{\rho_a}{m_a^2}\, H_0^2\, V Q\, (g_{aBB} - g_{aAA})^2\, C^E_{\omega_a} , \quad \text{with } C^H = C^{EH} = 0 .  (32)
This indicates that TEM modes have vanishing form factors for pure transverse fields, while only TE and TM modes induce non-zero form factors with electromagnetic field in longitudinal direction.
In a cavity, the distribution of the electromagnetic field is usually quite different from that in vacuum. A cylindrical microwave resonant cavity can be viewed as a circular waveguide of length L, short-circuited at both ends. Two movable bulk copper rods can be placed inside the cavity to tune the resonant frequency [39]. The internal distribution of the electromagnetic field must satisfy both the Helmholtz equation and the corresponding boundary conditions, including those at the ends and on the walls of the cavity. The Helmholtz equation is

\nabla^2 u(r,\phi,z) + k^2 u(r,\phi,z) = 0 , \qquad \vec{E}(r,\phi,z) \text{ or } \vec{H}(r,\phi,z) = \hat{z}\, u(r,\phi,z) ,  (33)
where only the z-component of the modes couples to the axion field, according to the definition of the form factors in Eq. (29). The solutions for the different mode types satisfy the following conditions.

TE modes (E_z = 0, H_z = u(r,\phi,z)):

r:\ \left.\frac{\partial u(r,\phi,z)}{\partial r}\right|_{r=a} = 0 , \quad \phi:\ u(r,\phi,z) = u(r,\phi+2\pi m,z) , \quad z:\ u(r,\phi,z)\big|_{z=0,L} = 0 ,  (34)
TM modes (H_z = 0, E_z = u(r,\phi,z)):

r:\ u(r,\phi,z)\big|_{r=a} = 0 , \quad \phi:\ u(r,\phi,z) = u(r,\phi+2\pi m,z) , \quad z:\ \left.\frac{\partial u(r,\phi,z)}{\partial z}\right|_{z=0,L} = 0 ,  (35)
where a is the radius of circular cross section, L is the length of cavity along the z-axis, and m = (0, ±1, ±2, · · · ) represents a series of integers required by the periodic boundary conditions.
The solutions of the above differential equations yield a series of possible electromagnetic resonant modes that can exist inside the cavity
\mathrm{TE}_{mnp}:\ \hat{z}\cdot\vec{u}^H_k(r,\phi,z) = H_z(r,\phi,z)_{mnp} = H_{mnp}\, J_m(k_\rho r) \left\{\begin{matrix}\cos m\phi \\ \sin m\phi\end{matrix}\right\} \sin\frac{p\pi z}{L} ,

\mathrm{TM}_{mnp}:\ \hat{z}\cdot\vec{u}^E_k(r,\phi,z) = E_z(r,\phi,z)_{mnp} = E_{mnp}\, J_m(k_\rho r) \left\{\begin{matrix}\cos m\phi \\ \sin m\phi\end{matrix}\right\} \cos\frac{p\pi z}{L} ,  (36)
where H mnp and E mnp are dimensionless coefficients to ensure the mode normalization.
From these solutions, it can be seen that the modes inside the cavity are labeled by three integers m, n and p under the fixed boundary conditions. For TE modes, they are m = (0, 1, 2, · · · ), n = (1, 2, 3, · · · ), p = (1, 2, 3, · · · ), and for TM modes m = (0, 1, 2, · · · ), n = (1, 2, 3, · · · ), p = (0, 1, 2, · · · ). The eigenvalue of the radial part of the Helmholtz equation is k_ρ = x'_{m,n}/a for TE modes or k_ρ = x_{m,n}/a for TM modes, with x_{m,n} and x'_{m,n} being the n-th zeros of the Bessel function J_m(x) and of its first derivative J'_m(x), respectively. In this way, the form factor can be obtained by plugging the above solutions into their definitions in Eq. (29) and integrating over the volume of the cavity. For the TM modes, the integrals in the z and φ directions are non-zero only when m = p = 0. Therefore, we only consider the TM_{010} mode. For the TE modes, unlike the TM modes, they satisfy the second boundary condition in the r direction, so the derivative of the field vanishes on the cavity wall. This leads to a vanishing radial integral, even though the integrals in the z and φ directions can be non-zero (m = 0, p = 1, 3, 5, · · · ). Fig. 1 shows the radial distributions of two modes, TM_{010} and TE_{011}. For the TM_{010} mode, the amplitude of the field strength E_{010} decreases from its maximum at the cavity center to zero on the cavity wall. The field strength H_{011} decreases to zero at the red circle and instead increases outside the circle, resulting in the cancellation of the cavity response to the axion.
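For the TM_{0n0} modes, the form factor reduces to the closed form C^E_{0n0} = 4/x_{0,n}^2; a minimal check using the standard tabulated zeros of J_0:

```python
# Form factor of the TM_0n0 modes: C^E_0n0 = 4 / x_{0,n}^2, where x_{0,n}
# are the zeros of the Bessel function J_0 (standard tabulated values).
x0n = [2.404826, 5.520078, 8.653728]  # first three zeros of J_0

C = [4.0 / x**2 for x in x0n]
print([round(c, 3) for c in C])  # [0.692, 0.131, 0.053] -> C_010 ~ 0.69
```

The leading value reproduces the C^E_{010} = 0.69 quoted for the TM_{010} mode, and higher radial modes couple progressively more weakly.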
Thus, one can conclude that in a cylindrical cavity the TE modes have no coupling with the axion, i.e., C^H = 0, and R_{TE} in Eq. (32) vanishes. This means that under an external magnetic field H_0, only the coupling (g_{aBB} − g_{aAA}) ≈ g_{aBB} can be measured, through the TM mode. For illustration, we show the transition rate R_{TM} in terms of practical units as given in Eq. (37).
Given this parameter setup, the axion cavity can be viewed as a device that emits single photons at a slow rate. The resolution of existing linear detectors is sufficient to detect this signal. This approach is exactly the same as that used to measure the conventional axion coupling g_{aγγ} in axion electrodynamics.
B. Electric background
When the external magnetic field is replaced by an external electric field E 0 , Eq. (19) can be rewritten as
\mathcal{L}_{a\gamma\gamma} = (g_{aAA} - g_{aBB})\, a\, \vec{H}\cdot\vec{E}_0 - g_{aAB}\, a\, \vec{E}\cdot\vec{E}_0 .  (38)
Similarly, the photon emission rates for TE and TM modes in the axion cavity can be obtained as
R_{TE} = \frac{\pi}{2}\frac{\rho_a}{m_a^2}\, E_0^2\, V Q\, (g_{aBB} - g_{aAA})^2\, C^H_{\omega_a} , \quad \text{with } C^E = C^{EH} = 0 ,

R_{TM} = \frac{\pi}{2}\frac{\rho_a}{m_a^2}\, E_0^2\, V Q\, g_{aAB}^2\, C^E_{\omega_a} , \quad \text{with } C^H = C^{EH} = 0 .  (39)
We find that replacing H_0 with E_0 exchanges the couplings constrained by the TE and TM modes. Now the sensitivity to g_{aAB} is still given by the transverse magnetic mode, which couples to the axion. Similar to Eq. (37), the transition rate R_{TM} under the external electric field is given by
R_{TM} = 0.41\ \mathrm{Hz}\ \left(\frac{\rho_a}{7.1\times 10^{-25}\ \mathrm{g/cm^3}}\right) \left(\frac{10^{-5}\ \mathrm{eV}}{m_a}\right)^2 \left(\frac{E_0}{10^3\ \mathrm{kV/m}}\right)^2 \left(\frac{V}{0.001\ \mathrm{m^3}}\right) \left(\frac{Q}{10^5}\right) \left(\frac{g_{aAB}}{10^{-12}\ \mathrm{GeV^{-1}}}\right)^2 \left(\frac{C^E_{\omega_a}}{1}\right) .  (40)
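Since Eq. (40) is a pure scaling relation, it is straightforward to evaluate for other parameter choices; a small helper (the function name and default arguments are ours):

```python
# Evaluate the scaling formula Eq. (40) for R_TM under an electric background.
# All inputs are expressed relative to the reference values quoted in Eq. (40).
def R_TM_hz(rho=7.1e-25, m_a=1e-5, E0=1e3, V=0.001, Q=1e5, g_aAB=1e-12, C=1.0):
    """Transition rate in Hz; rho in g/cm^3, m_a in eV, E0 in kV/m,
    V in m^3, g_aAB in GeV^-1, C the TM-mode form factor."""
    return (0.41 * (rho / 7.1e-25) * (1e-5 / m_a) ** 2 * (E0 / 1e3) ** 2
            * (V / 0.001) * (Q / 1e5) * (g_aAB / 1e-12) ** 2 * (C / 1.0))

print(R_TM_hz())                 # 0.41 Hz at the reference point
print(R_TM_hz(E0=1e4, C=0.69))  # stronger field E0 = 10^4 kV/m, TM_010 mode
```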
IV. SENSITIVITY OF RESONANT CAVITY TO AXION-PHOTON COUPLINGS IN QEMD
Based on the above arguments, we propose the following scheme for detecting the axion interactions of QEMD with a cavity haloscope. One can apply an external magnetic field H_0 to measure the (g_{aBB} − g_{aAA}) ≈ g_{aBB} coupling, or an external electric field E_0 to measure the g_{aAB} coupling. The corresponding sensitivities are determined by the experimental configuration and the form factor C^E_{ω_a} of the TM mode. For the TM_{0n0} modes, the form factor has a simple analytic result: C^E_{ω_a} = C^E_{0n0} = 4/x_{0,n}^2. Once the mode is fixed, the value of the form factor is determined and is independent of a and L. For TM_{010}, one gets C^E_{010} = 0.69 [40]. Note that the axion energy is determined by the eigenvalues k_ρ and k_z of the Helmholtz equation in Eq. (35), that is, ω_a = |k| = \sqrt{k_ρ^2 + k_z^2}, which simplifies to ω_a = x_{0,1}/a for the TM_{010} mode with x_{0,1} = 2.4. This means that the axion mass corresponding to the experimentally measured axion coupling is related only to the radius a of the cavity. When a is 5 cm, one has ω_a ≈ 9.5 × 10^{-6} eV. To reach other mass regions, tuning is required by changing the value of a. In practical experiments such as ADMX [39], two movable bulk copper rods are placed inside the cavity to achieve tuning. Here, we do not intend to explore the details of the techniques for adjusting the resonant frequency by changing the cavity structure, but only provide the cavity radius a required by the resonance condition when the axion mass is m_a = ω_a.
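The resonance relation ω_a = x_{0,1}/a can be turned into a radius-to-mass converter using ħc ≈ 1.973 × 10⁻⁷ eV·m (a unit-conversion sketch of ours):

```python
# Resonance condition of the TM_010 mode: omega_a = x_{0,1}/a, converted to
# laboratory units with hbar*c. x_{0,1} is the first zero of J_0.
HBARC = 1.97327e-7  # eV * m
X01 = 2.404826      # first zero of J_0

def axion_mass_eV(a_m):
    """Axion mass (eV) matched by a cylindrical cavity of radius a_m meters."""
    return X01 * HBARC / a_m

def cavity_radius_m(m_a_eV):
    """Cavity radius (m) needed to resonate at axion mass m_a (eV)."""
    return X01 * HBARC / m_a_eV

print(axion_mass_eV(0.05))     # a = 5 cm -> ~9.5e-6 eV, as quoted in the text
print(cavity_radius_m(1e-4))   # radius for the upper end of the mass range
```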
In an external magnetic field, the signal power is given by
P_{\mathrm{signal}} = m_a R_{TM} = \frac{\pi}{2}\frac{\rho_a}{m_a}\, H_0^2\, V Q\, g_{aBB}^2\, C^E_{010} ,  (41)
where the cavity volume V is regarded as a function of m_a, i.e., V = πa²L = πL(x_{0,1}/m_a)², to ensure that the m_a corresponding to each sensitivity point is always at resonance. Similarly, the signal power under an external electric field can be obtained by the replacement g_{aBB}H_0 → g_{aAB}E_0. A typical axion cavity detection device consists of a main cavity and an amplification chain. The main source of noise is the cryogenic high electron mobility transistor (HEMT). It contributes an effective noise temperature T_{eff} of around a few Kelvin, but this is further enhanced to above 10 K due to microwave loss in the channel as the signal is emitted from the cavity and then received by the HEMT [42]. Currently, many axion cavity experiments apply Josephson Parametric Amplifiers (JPA) to enhance the detection sensitivity. For example, ADMX can control the noise temperature at the order of O(10²) mK [39]. Therefore, the signal-to-noise ratio is given by
\mathrm{SNR} = \frac{P_{\mathrm{signal}}}{k_B T_{\mathrm{eff}}}\sqrt{\frac{t}{b}} ,  (42)
where k_B is the Boltzmann constant, t is the observation time, and b = f/Q, the ratio of frequency to Q factor, is the detector bandwidth. To estimate the sensitivity of the cavity experiment to g_{aBB} or g_{aAB}, we take Q = 10^5 and require SNR = 5. The resulting sensitivity bounds are shown in Fig. 2; existing limits from ADMX (2021) [39] and ADMX SLIC [46] are also shown for comparison. Assuming an observation time of 7 days and a cavity length of 1 m, we obtain the corresponding bounds on the couplings g_{aBB} and g_{aAB} in the external magnetic field H_0 = 10 T and electric field E_0 = 10^4 kV/m, respectively. For g_{aBB}, we assume that the effective temperature inside the cavity is T_{eff} = 0.5 K, and for g_{aAB} it is T_{eff} = 0.1 K. It should be noticed that m_a cannot be arbitrarily small or large, since the smaller m_a is, the larger the cavity radius a
is required to satisfy the resonance condition, and vice versa for large axion masses. On the other hand, when m_a is too small, the transition rate R increases sharply to the GHz level and exceeds the detector resolution. Thus, we only consider axions in the mass range 10^{-6} ∼ 10^{-4} eV. It turns out that the theoretical predictions for the new axion couplings can be probed in this mass range. Results from current cavity experiments such as ADMX, which measure the conventional axion coupling g_{aγγ}, can be reinterpreted to constrain the g_{aBB} coupling. The existing ADMX bound, obtained with H_0 = 7.5 T and T_{eff} = 0.5 K, has already excluded part of the parameter space of the g_{aBB} coupling. To measure g_{aAB}, a strong electric field E_0 and a lower temperature are both required.
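The Dicke-radiometer estimate of Eq. (42) can be sketched as follows; the power, frequency, and temperature inputs below are illustrative placeholders, not the paper's benchmark values:

```python
import math

# Radiometer estimate, Eq. (42): SNR = P_signal/(k_B T_eff) * sqrt(t/b),
# with detector bandwidth b = f/Q.
K_B = 1.380649e-23  # J/K

def snr(P_W, T_eff_K, t_s, f_Hz, Q):
    """Signal-to-noise ratio for signal power P_W (W) over time t_s (s)."""
    b = f_Hz / Q  # detector bandwidth in Hz
    return P_W / (K_B * T_eff_K) * math.sqrt(t_s / b)

base = snr(P_W=1e-23, T_eff_K=0.5, t_s=7 * 86400, f_Hz=2.4e9, Q=1e5)
print(base)
# Quadrupling the observation time doubles the SNR (sqrt(t) scaling):
print(snr(1e-23, 0.5, 4 * 7 * 86400, 2.4e9, 1e5) / base)  # ~2
```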
V. CONCLUSION
Motivated by the Witten effect, the axion-dyon dynamics can be reliably built on quantum electromagnetodynamics. We show that a generic low-energy axion-photon effective field theory can also be realized in the language of "generalized symmetries", with higher-form symmetries and gauge fields. Two U(1) gauge groups and two four-potentials A_µ and B_µ are introduced to describe the electric charge, the magnetic charge and the photon in this framework. As a result, three anomalous interactions between the axion and the photon arise, in contrast to the single conventional axion-photon coupling g_{aγγ}.
In this work, we provide a complete quantum calculation of axion-single photon transition rate inside a homogeneous electromagnetic field in terms of the new axion interaction Hamiltonian in QEMD. This quantum calculation can clearly imply the enhancement of conversion rate through resonant cavity in axion haloscope experiments. Our work provides the basic method for the generic cavity search of new axion-photon couplings in QEMD. We find that an external magnetic field H 0 can be set to measure (g aBB − g aAA ) ≈ g aBB coupling or an external electric field E 0 to measure g aAB coupling. The corresponding sensitivity bounds are given by the cavity experimental configuration and the form factor C E 010 in TM mode for the QEMD axion couplings.
FIG. 1. Radial distributions of TM_{010} (left) and TE_{011} (right), where the transverse cross-section of the TE mode is taken at z = L/2.
FIG. 2. The expected sensitivity to g_{aAB} (red solid line) and g_{aBB} (black solid line) in the cavity haloscope experiment. The dashed lines indicate the corresponding theoretical predictions (red for g_{aAB}, black for g_{aBB}).
Below we define X^d_{\mu\nu} \equiv \tilde{X}_{\mu\nu} = \epsilon_{\mu\nu\alpha\beta}X^{\alpha\beta}/2 as the Hodge dual of a tensor X_{\mu\nu}. Also, (Y\wedge Z)_{\mu\nu} \equiv Y_\mu Z_\nu - Y_\nu Z_\mu for any four-vectors Y and Z.
We use symbol "H" rather than "B" to denote magnetic field in order not to conflict with the four-potential B µ .
ACKNOWLEDGMENTS

We thank Anton V. Sokolov, Andreas Ringwald, Yu Gao and Qiaoli Yang for useful comments and discussions. T. L. is supported by the National Natural Science Foundation of China (Grants No. 11975129 and No. 12035008) and "the Fundamental Research Funds for the Central Universities", Nankai University (Grant No. 63196013).
[1] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977).
[2] R. D. Peccei and H. R. Quinn, Phys. Rev. D 16, 1791 (1977).
[3] S. Weinberg, Phys. Rev. Lett. 40, 223 (1978).
[4] F. Wilczek, Phys. Rev. Lett. 40, 279 (1978).
[5] V. Baluni, Phys. Rev. D 19, 2227 (1979).
[6] R. J. Crewther, P. Di Vecchia, G. Veneziano, and E. Witten, Phys. Lett. B 88, 123 (1979), [Erratum: Phys. Lett. B 91, 487 (1980)].
[7] J. E. Kim, Phys. Rev. Lett. 43, 103 (1979).
[8] M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Nucl. Phys. B 166, 493 (1980).
[9] M. Dine, W. Fischler, and M. Srednicki, Phys. Lett. B 104, 199 (1981).
[10] A. R. Zhitnitsky, Sov. J. Nucl. Phys. 31, 260 (1980).
[11] C. A. Baker et al., Phys. Rev. Lett. 97, 131801 (2006), arXiv:hep-ex/0602020.
[12] J. M. Pendlebury et al., Phys. Rev. D 92, 092003 (2015), arXiv:1509.04411 [hep-ex].
[13] E. Witten, Phys. Lett. B 86, 283 (1979).
[14] W. Fischler and J. Preskill, Phys. Lett. B 125, 165 (1983).
[15] M. Kawasaki, F. Takahashi, and M. Yamada, Phys. Lett. B 753, 677 (2016), arXiv:1511.05030 [hep-ph].
[16] Y. Nomura, S. Rajendran, and F. Sanches, Phys. Rev. Lett. 116, 141803 (2016), arXiv:1511.06347 [hep-ph].
[17] M. Kawasaki, F. Takahashi, and M. Yamada, JHEP 01, 053 (2018), arXiv:1708.06047 [hep-ph].
[18] N. Houston and T. Li, (2017), arXiv:1711.05721 [hep-ph].
[19] R. Sato, F. Takahashi, and M. Yamada, Phys. Rev. D 98, 043535 (2018), arXiv:1805.10533 [hep-ph].
[20] J. S. Schwinger, Phys. Rev. 144, 1087 (1966).
[21] D. Zwanziger, Phys. Rev. 176, 1489 (1968).
[22] D. Zwanziger, Phys. Rev. D 3, 880 (1971).
[23] A. V. Sokolov and A. Ringwald, (2022), arXiv:2205.02605 [hep-ph].
[24] P. Sikivie, Phys. Rev. Lett. 51, 1415 (1983), [Erratum: Phys. Rev. Lett. 52, 695 (1984)].
[25] T. Li, R.-J. Zhang, and C.-J. Dai, JHEP 03, 088 (2023), arXiv:2211.06847 [hep-ph].
[26] M. E. Tobar, C. A. Thomson, B. T. McAllister, M. Goryachev, A. Sokolov, and A. Ringwald, (2022), arXiv:2211.09637 [hep-ph].
[27] B. T. McAllister, A. Quiskamp, C. O'Hare, P. Altin, E. Ivanov, M. Goryachev, and M. Tobar, (2022), arXiv:2212.01971 [hep-ph].
[28] T. Li, C.-J. Dai, and R.-J. Zhang, (2023), arXiv:2304.12525 [hep-ph].
[29] A. Kapustin and N. Seiberg, JHEP 04, 001 (2014), arXiv:1401.0740 [hep-th].
[30] N. Seiberg, JHEP 07, 070 (2010), arXiv:1005.0002 [hep-th].
[31] D. Gaiotto, A. Kapustin, N. Seiberg, and B. Willett, JHEP 02, 172 (2015), arXiv:1412.5148 [hep-th].
[32] R. A. Brandt, F. Neri, and D. Zwanziger, Phys. Rev. Lett. 40, 147 (1978).
[33] R. A. Brandt, F. Neri, and D. Zwanziger, Phys. Rev. D 19, 1153 (1979).
[34] A. V. Sokolov and A. Ringwald, (2023), arXiv:2303.10170 [hep-ph].
[35] L. Di Luzio, M. Giannotti, E. Nardi, and L. Visinelli, Phys. Rept. 870, 1 (2020), arXiv:2003.01100 [hep-ph].
[36] J. Preskill, M. B. Wise, and F. Wilczek, Phys. Lett. B 120, 127 (1983).
[37] M. Dine and W. Fischler, Phys. Lett. B 120, 137 (1983).
[38] N. Du et al. (ADMX), Phys. Rev. Lett. 120, 151301 (2018), arXiv:1804.05750 [hep-ex].
[39] C. Bartram et al. (ADMX), Phys. Rev. Lett. 127, 261803 (2021), arXiv:2110.06096 [hep-ex].
[40] P. Sikivie, Rev. Mod. Phys. 93, 015004 (2021), arXiv:2003.02206 [hep-ph].
[41] M. Beutter, A. Pargner, T. Schwetz, and E. Todarello, JCAP 02, 026 (2019), arXiv:1812.05487 [hep-ph].
[42] Q. Yang, Y. Gao, and Z. Peng, (2022), arXiv:2201.08291 [hep-ph].
[43] C. M. Donohue, S. Gardner, and W. Korsch, (2021), arXiv:2109.08163 [hep-ph].
[44] K. Fujikawa, Phys. Rev. Lett. 42, 1195 (1979).
[45] T. D. Brennan, S. Hong, and L.-T. Wang, (2023), arXiv:2302.00777 [hep-ph].
[46] N. Crisosto, P. Sikivie, N. S. Sullivan, D. B. Tanner, J. Yang, and G. Rybka, Phys. Rev. Lett. 124, 241101 (2020), arXiv:1911.05772 [astro-ph.CO].
| [] |
Formation of secondary atmospheres on terrestrial planets by late disk accretion

Quentin Kral, Jeanne Davoult, Benjamin Charnay

LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Univ. Paris Diderot, Sorbonne Paris Cité, 5 place Jules Janssen, 92195 Meudon, France

DOI: 10.1038/s41550-020-1050-2 | arXiv:2004.02496

Abstract. Recently, gas disks have been discovered around main-sequence stars well beyond the usual protoplanetary disk lifetimes (i.e., 10 Myr), when planets have already formed 1-4. These gas disks, mainly composed of CO, carbon and oxygen 5-7, seem to be ubiquitous 3 in systems with planetesimal belts (similar to our Kuiper belt) and can last for hundreds of millions of years 8. Planets orbiting in these gas disks will accrete 9,10 a large quantity of gas that will transform their primordial atmospheres into new secondary atmospheres with compositions similar to that of the parent gas disk. Here, we quantify how large a secondary atmosphere can be created for a variety of observed gas disks and for a wide range of planet types. We find that gas accretion in this late phase is very significant, and an Earth's atmospheric mass of gas is readily accreted on terrestrial planets in very tenuous gas disks. In slightly more massive disks, we show that massive CO atmospheres can be accreted, forming planets with up to sub-Neptune-like pressures. Our new results demonstrate that new secondary atmospheres with high metallicities and high C/O ratios will be created in these late gas disks, resetting their primordial compositions inherited from the protoplanetary disk phase and providing a new birth to planets that lost their atmospheres to photoevaporation or giant impacts. We therefore propose a new paradigm for the formation of atmospheres on low-mass planets, which can be tested with future observations (JWST, ELT, ARIEL). We also show that this late accretion would leave a very clear signature in sub-Neptunes or cold exo-Jupiters. Finally, we find that accretion creates cavities in late gas disks, which could be used as a new planet detection method for low-mass planets from a few au to a few tens of au from their host stars.
The discovery of large amounts of gas around main-sequence stars is recent, with most detections occurring in the last few years 3,11. These late gas disks are observed in systems that host planetesimal belts and are older than 10 Myr, and they can last for hundreds of millions of years 8.
It is thought that the observed gas is released from volatile-rich planetesimals when they collide with each other in the system's belts 6,12. The gas then viscously evolves 4, spreading inwards and outwards 10,15. Hence the observed gas is likely secondary (rather than of primordial origin), and this late-disk (main-sequence) phase is distinct from the younger (<10 Myr) protoplanetary disks, which are much more massive and hydrogen-rich and in which giant planets form within a few million years 16.
These late gas disks are nearly ubiquitous around A-type stars: gas has been detected around more than 70% of systems with bright planetesimal belts 3. For other stellar types or lower-mass systems, the statistics are still based on too small a sample, as these gas disks are harder to detect, but gas evolution models 6 predict that all stars surrounded by planetesimal belts should have gas at a level that depends on the mass of solids in the system's belt. More than 25% of stars have planetesimal belts massive enough to be detected through their infrared excess 17, and it may be that most stars have belts below current detection limits (for instance, an exact equivalent of the Kuiper belt around a nearby star is not massive enough to be detectable with current facilities). These late gas disks might therefore not be the exception but rather the rule.
At >10 Myr, in these late gas disks, planets have already formed. For instance, we now observe massive planets as early as 5 Myr 18-20, and we know that terrestrial planets such as Mars formed very early (<10 Myr, from cosmochemical evidence 21); the Earth took slightly longer, but most of its mass was acquired within 10 Myr 22. The planets embedded in these disks will be able to accrete the disk gas in a similar way as planets accrete gas in younger protoplanetary disks, but over much longer timescales, because late gas disks can last for tens of millions of years. In this paper, we estimate how much gas can be accreted in this late phase onto the already-formed planets, and whether this gas can create new secondary atmospheres (with a composition similar to the source gas disk, i.e., rich in carbon and oxygen and depleted in hydrogen) that would replace their original atmospheres.
To compute the amount of gas accretion onto terrestrial or Super-Earth planets in these late disks, we used an accretion model first developed for protoplanetary disk environments 9, 10 that we adapted to work in the late disk phase studied in this paper (Methods). A planet embedded in a gas disk will quickly fill its Hill sphere and whether it is able to accrete mass from the Hill sphere to its atmosphere depends on how fast the planet can cool (radiate away energy), and then contract to eventually accrete some more gas into its Hill sphere 10 and grow an atmosphere. In the protoplanetary disk case, it is predicted that gas accretion should not depend strongly on disk density. However, in late gas disks (much more tenuous than protoplanetary disks), the mass available can be much lower than the mass that can be accreted from cooling/contraction and atmosphere growth is therefore limited by the gas disk mass (Methods).
We run the accretion model starting from an Earth-like planet of mass 1 M⊕ at 1 au from its host star, assuming that the planet's atmosphere has no, or very little, mass because of photoevaporation (i.e., it is in the desiccated part of the radius valley) 23 or desiccation after a large impact 24 (which should happen frequently in the late stages of planetary formation 25). Figure 1 shows the temporal evolution of the gas-to-core ratio (GCR) of the accreting planet for different gas crossing rates Ṁ from the disk at the planet's location (a more massive belt provides a higher Ṁ; see Methods). If the disk is at steady state, Ṁ equals the gas release rate in the planetesimal belt 10. For the most massive belts, the CO input rate can reach ∼10⁻¹ M⊕/Myr, but for less massive belts releasing less CO per unit time we expect values that can go below 10⁻⁷ M⊕/Myr 8; this is why we explore the effect of different Ṁ (10⁻⁸, 10⁻⁶, 10⁻⁴, 10⁻² M⊕/Myr) on the total atmospheric mass accreted by a planet. We assume that most of the accretion happens within 100 Myr (Methods). Mass accretion for different accretion times can be straightforwardly extrapolated from Figure 1.
We find that a planet starting with a low atmospheric mass very rapidly accretes gas, even at low gas input rates Ṁ. Indeed, within 1 Myr, all simulations with Ṁ ≥ 10⁻⁶ M⊕/Myr accreted more than an Earth's atmospheric mass. Given enough time, late accretion transforms a terrestrial planet (1 M⊕ at 1 au) with no atmosphere (or even one starting with an Earth-like or Venus-like atmosphere; see Extended data Figure 8) into a planet with a massive atmosphere, with a GCR up to >10⁻² for Ṁ ≥ 10⁻⁴ M⊕/Myr. We also show that this late accretion is very efficient on both larger (e.g., 5 M⊕ super-Earth) and smaller (e.g., Mars-size) planets, and for both closer-in (e.g., 0.1 au) and more distant (e.g., 10 au) planets (Fig. 2 and Extended data Figure 6). We note that in our accretion scenario there is no risk of runaway growth, because the gas the planet can accrete from is limited, and creating a Jupiter from a terrestrial-mass core in these low-mass disks is simply not possible (Methods).
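As a back-of-envelope check of the timescales above (assuming purely supply-limited accretion, i.e. the planet captures essentially all of the gas crossing its orbit), one can compare Ṁ·t with the mass of Earth's present atmosphere (∼5.1 × 10^18 kg, i.e. ∼8.6 × 10⁻⁷ M⊕):

```python
# Back-of-envelope check, assuming supply-limited accretion: how long does it
# take to accrete one Earth-atmosphere mass at a given gas crossing rate Mdot
# (in Earth masses per Myr)?
M_EARTH = 5.97e24         # kg
M_ATM_EARTH = 5.1e18      # kg, mass of Earth's present atmosphere
m_atm_in_mearth = M_ATM_EARTH / M_EARTH   # ~8.6e-7 Earth masses

def time_to_earth_atmosphere(mdot_mearth_per_myr):
    """Time (Myr) to accrete one Earth-atmosphere mass at rate Mdot."""
    return m_atm_in_mearth / mdot_mearth_per_myr

for mdot in (1e-8, 1e-6, 1e-4, 1e-2):
    print(f"Mdot = {mdot:.0e} M_E/Myr -> {time_to_earth_atmosphere(mdot):.2g} Myr")
```

Consistent with the text, Ṁ = 10⁻⁶ M⊕/Myr delivers an Earth atmosphere in under 1 Myr, and even Ṁ = 10⁻⁸ M⊕/Myr does so within the 100 Myr accretion window.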
In Figure 2, we show the effect of gas accretion on a planet's pressure and bulk density over time (Methods) and confirm that a variety of pressures, from Earth-like (1 bar) to sub-Neptune-like (>10⁴ bar), can be reached on low-mass planets formed in these disks (Fig. 2, left). Even though pressures as high as 10⁵ bar (Neptune-like) can be reached on these planets, their densities never reach values as low as Neptune's, because CO atmospheres have a much higher mean molecular weight than hydrogen-dominated atmospheres. We note that in the new accretion scenario proposed here, CO accretion does not depend much on the core mass (at least for planets with R_H/H > 1; Methods), so that the density is higher for higher-mass cores. We thus predict statistically that rocky planets (with R_H/H > 1 and excluding H₂-dominated atmospheres) with larger cores will have higher densities, in contrast with current models of planet formation; this may be in line with current observations, which see no clear correlation between core mass and GCR but rather a large spread in GCRs 20,26. We also note that, as accretion is very efficient, we expect the outermost planets to accrete most disk material before it has time to spread further in, leading to decreasing densities for increasing semi-major axes (for a given core mass; note that for planets with R_H/H < 1 accretion will be slightly less efficient, Methods). There is a competing but less important effect, whereby closer-in planets have higher temperatures and thus lower densities for a given accreted mass (Fig. 2). For the four outermost TRAPPIST-1 planets (e, f, g and h), the low densities of TRAPPIST-1 f, g and h could be explained by massive CO atmospheres of ∼10⁵ bar.
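As a rough consistency check of the quoted pressure range, the surface pressure implied by a given GCR on an Earth-mass, Earth-radius core is P ≈ M_atm g/(4πR²); this simple estimate neglects the variation of g through a massive atmosphere:

```python
import math

# Relate GCR to surface pressure on an Earth-mass, Earth-radius core,
# P ~ M_atm * g / (4 pi R^2). A GCR of 1e-2 indeed lands in the >1e4 bar,
# sub-Neptune-like range quoted in the text.
M_EARTH = 5.97e24   # kg
R_EARTH = 6.371e6   # m
G_SURF = 9.81       # m/s^2

def surface_pressure_bar(gcr, m_core=M_EARTH, r_core=R_EARTH, g=G_SURF):
    m_atm = gcr * m_core                                # atmosphere mass
    return m_atm * g / (4.0 * math.pi * r_core**2) / 1e5  # Pa -> bar

print(f"GCR = 1e-2 -> P ~ {surface_pressure_bar(1e-2):.2g} bar")
```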
Another strong prediction of this type of accretion is that planet atmospheres formed through this mechanism should have a high mean molecular weight and be mostly made of carbon and oxygen (rather than hydrogen), with a C/O ratio close to 1 (as CO is expected to be the main volatile released, although other molecules may also be released; Methods). Current HST observations of super-Earths revealed flat transit spectra, interpreted as the presence of atmospheres with high mean molecular weights and clouds 29-31. JWST and ARIEL will have the power (Methods) to take spectra of many more terrestrial planets (e.g., the temperate TRAPPIST-1 planets 32) and super-Earths, and to find out whether it is common for these planets to have such high metallicities (and high C/O ratios), to test whether this new phase of late accretion really happens widely.
We find that our late accretion scenario is much more efficient at delivering volatiles to a terrestrial planet than impacts, even compared with a heavy-bombardment-like episode. We also show that for existing (initially hydrogen-rich) sub-Neptunes or more massive planets, the accretion will also affect their atmospheres once these mix with the newly accreted gas (Supplementary information). In Figure 3 (left), we show that the metallicity in sub-Neptunes could reach >1000 times the solar metallicity, down to a factor of 1.25 for more massive Jupiter-like planets. There are now some direct measurements of atmospheric metallicities in Neptunes and sub-Neptunes 33,34. Some studies find near-solar metallicities (e.g., GJ 3470 b) 33, while others find super-solar metallicities (e.g., HAT-P-11 b, HAT-P-26 b or K2-18 b) 34-37, and more measurements will be welcome to test our scenario. The C/O ratio in these giant planets may also become close to 1 for a great variety of atmosphere masses and Ṁ, and even increase by 10% in some cases. Such measurements, especially for smaller sub-Neptune planets, will help to test our scenario.

We also make predictions for detecting ongoing accretion onto young Jupiter-like planets or more massive brown dwarfs in direct imaging (Supplementary information). When the gas accretes onto the outer envelope of a giant planet, it accumulates and diffuses inwards over time 13. We show that this accumulation will be detectable (see Supplementary data figure 3).

With our accretion model, we find that accretion onto planets from late gas disks is very efficient, and for most configurations a large fraction of the incoming gas is accreted rather than passed on further in (Methods). This means that these gas disks will often be very depleted inwards of a planet, and one could infer the presence of a planet from the gas distribution. As the gas extends into the inner regions of planetary systems, this new planet detection method could allow us to indirectly detect low-mass planets at a few au or further from their host star. Using the high spatial resolution of ALMA, it would be possible to pinpoint the planet's location. In Figure 4, we show a synthetic ALMA image of the carbon emission of a late gas disk with a terrestrial planet at 10 au from its host star, located 50 pc from Earth (Methods). The cavity is well resolved, and the ALMA sensitivity is high enough to detect the signal at 0.12" resolution. The carbon emission does not seem to extend all the way to the star for the few carbon gas disks observed with ALMA so far 15,44, but the resolution needs to be pushed further with ALMA before drawing strong conclusions.

Code availability. The scripts used for the analysis are written in Python and are available on reasonable request from the corresponding author.
Data availability. The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
Competing Interests
The authors declare no competing interests.
Correspondence. Correspondence and requests for materials should be addressed to Q.K. (email: [email protected]).
Reprints and permissions information is available at www.nature.com/reprints
Methods

Input rate of gas and lifetime of gas disks. The input rate of gas from the disk at the planet's location is denoted Ṁ in our study. This is the quantity of gas per unit time, integrated over the whole disk scale height, that is transferred radially inwards. These late gas disks are expected to viscously evolve 10,45 (maybe owing to the magnetorotational instability 4), and there will be a transfer of most of the gas mass inwards (and of angular momentum outwards). When steady state is reached, Ṁ becomes equal to the gas mass released per unit time in the planetesimal belt, which flows constantly inwards over time and accretes onto a planet or the central star. In this case, we can relate Ṁ to the gas surface density Σ 46 as Ṁ = 3πνΣ, where ν = α c_s H is the disk viscosity, parametrized with an α value 47, and c_s and H = c_s/Ω are the sound speed and disk scale height, respectively, with Ω the Keplerian frequency. The value of α has been estimated in a few studies by comparing the carbon quantity to what is expected from the measured gas input rate at the planetesimal belt location, and it can vary from 10⁻⁴ to 0.1 10,15,48. Using population synthesis of these gas disks, it seems that all observations so far are consistent with α being close to 0.1 49. An analytic study of the magnetorotational instability in these debris disks 4 shows that it could be very active owing to the high ionisation fraction in these disks (compared to protoplanetary disks) and
lead to α values close to 0.1, or indeed smaller if non-ideal effects such as ambipolar diffusion are at play.
How fast the disk spreads viscously is set by α, with the viscous timescale t_visc = r²Ω/(α c_s²), where r is the location of the planetesimal belt from which gas is released 46. In Extended data Figure 1, we plot t_visc as a function of α for different radial locations r of the belt and gas temperatures T. The viscous timescales are typically ≳1 Myr (even for the highest α values), meaning that the spreading is often slow, but steady state should be reached within the first tens of Myr of evolution.
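A minimal sketch of the viscous timescale t_visc = r²Ω/(α c_s²), for a solar-mass star and CO-dominated gas (µ = 28); the example values (r = 100 au, T = 30 K, α = 0.1) follow the range explored in Extended data Figure 1:

```python
import math

# Viscous spreading timescale t_visc = r^2 * Omega / (alpha * c_s^2) for a
# belt at radius r (au) around a star of mass m_star, with gas of mean
# molecular weight mu at temperature T (K).
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
K_B = 1.381e-23        # J/K
M_H = 1.673e-27        # kg (proton mass)
AU = 1.496e11          # m
MYR = 3.156e13         # s

def t_visc_myr(r_au, T, alpha, mu=28.0, m_star=M_SUN):
    r = r_au * AU
    omega = math.sqrt(G * m_star / r**3)   # Keplerian frequency
    cs2 = K_B * T / (mu * M_H)             # isothermal sound speed squared
    return r**2 * omega / (alpha * cs2) / MYR

print(f"t_visc(100 au, 30 K, alpha=0.1) = {t_visc_myr(100, 30, 0.1):.1f} Myr")
```

Even with the highest α value (0.1), the timescale is above 1 Myr, as stated in the text.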
[Extended data Figure 1: viscous timescale t_visc (Myr) as a function of α, for belt radii r = 50, 100 and 150 au and gas temperatures T = 10, 30 and 100 K.]

The release rate of gas in late gas disks is expected to decrease with time, because planetesimals get destroyed over time and less gas can be released 49-51. The Ṁ we use in this study is an average of the real, decreasing input rate over 100 Myr. We suppose that most of the accretion happens while the system is still young (as these belts are mostly observed in systems younger than 100 Myr 3), before its planetesimal belt loses too much mass 52; i.e., we do not follow the evolution beyond 100 Myr in our model, but it is possible that accretion still happens after 100 Myr at a lower rate. We can quantify the gas release rate as a function of time assuming that gas is released together with dust along the collisional cascade in Solar System comet proportions 6. It is well known that mass loss rates in collisional cascades decrease with time, and a simplified model of the evolution of the mass M_s and mass loss rate Ṁ_s of solids with time is given by 53,54

M_s(t) = M_init / (1 + t/t_col),   (1)

Ṁ_s(t) = M_s(t)² / (M_init t_col),   (2)

t_col = 1.4 × 10⁻⁹ r^{13/3} (dr/r) D_c Q_D^{5/6} e^{−5/3} M_*^{−4/3} M_tot^{−1}.   (3)

We let the disk mean radius r and the initial mass of solids M_init be free parameters. We vary r from 50 to 100 au, typical of extrasolar Kuiper-belt distances 55, and the initial total mass of solids M_init between a Kuiper-belt-like mass of 0.1 M⊕ and 100 M⊕, the maximum mass of solids available in protoplanetary progenitors of debris disks, inferred from sub-mm surveys 28. We then plot the temporal evolution of the gas release rate Ṁ in Extended data Figure 2, assuming that the gas release rate is 10% of the dust release rate 6, computed from the equations above.

[Extended data Figure 2: evolution of Ṁ (M⊕/Myr) with time, for belt locations of 50 and 100 au and initial belt masses of 0.1, 1, 10 and 100 M⊕.]
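Equations (1)-(2) can be sketched as follows; t_col is treated here as a free input (Eq. (3) sets it from the belt parameters), and the 10% gas-to-dust release ratio follows the text:

```python
# Minimal implementation of Eqs. (1)-(2): belt mass and mass loss rate along
# the collisional cascade. Times and t_col share the same unit; masses share
# the unit of m_init.
def belt_mass(t, m_init, t_col):
    """Eq. (1): M_s(t) = M_init / (1 + t/t_col)."""
    return m_init / (1.0 + t / t_col)

def dust_loss_rate(t, m_init, t_col):
    """Eq. (2): Mdot_s(t) = M_s(t)^2 / (M_init * t_col)."""
    return belt_mass(t, m_init, t_col) ** 2 / (m_init * t_col)

def gas_release_rate(t, m_init, t_col, gas_to_dust=0.1):
    """Gas release rate, taken as 10% of the dust release rate."""
    return gas_to_dust * dust_loss_rate(t, m_init, t_col)
```

Note that Eq. (2) is exactly −dM_s/dt of Eq. (1), so the belt mass budget is self-consistent.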
Model. The accretion model we used was first developed to explain the formation of Super-Earths in protoplanetary disks 9 . A good understanding of the way accretion works was obtained through analytical model fitting to numerical simulations 10 . It is shown that the accretion rate hardly depends on the gas density from which it accretes but rather on the planet's ability to cool (or radiate away its energy). The more a planet can cool, the more it contracts, emptying parts of its Hill sphere, which get refilled very rapidly and the same process can happen again on a Kelvin-Helmholtz time. We also note that this accretion model is valid even in low-gas density environments where planetary atmospheres are optically thin to incident starlight 57 . Therefore, a good estimate for the accretion timescale is given by the cooling timescale equal to
t_cool = |E| / L_cool,   (4)
where E is the atmosphere's energy and L_cool its luminosity 10. From that timescale, we calculate the gas-to-core ratio GCR = M_gas/M_core as a function of time t:

GCR(t) = [−bt + √(b²t² − 4abt)] / (2a),   (5)
where

a = 3 f_E k_B^{1+1/(γ−1)} (4π)^{(1/3)(3−1/(γ−1))} ρ_b^{(1/3)(3−1/(γ−1))} κ_rcb,

b = −4√2 π² G^{1+1/(γ−1)} σ T_rcb^{3−1/(γ−1)} (µ_rcb m_H)^{1+1/(γ−1)} ∇_ad^{1+1/(γ−1)} M_core^{(2/3)(1/(γ−1)−1)} / 3^{(1/3)(3−1/(γ−1))}.

The term f_E equals G(4πρ_b/3)^{1/3}, ρ_b is the bulk density of the core, M_core is the core mass, ∇_ad = (γ−1)/γ is the adiabatic gradient, and T_rcb, µ_rcb, κ_rcb are the temperature, mean molecular weight and atmospheric opacity at the radiative-convective boundary (rcb). Finally, m_H is the proton mass, k_B the Boltzmann constant, G the gravitational constant and σ the Stefan-Boltzmann constant. This new analytic expression given by Eq. 5 is general and does not make the assumption that GCR ≪ 1, as in previous work 10. Unless otherwise stated, we fix all values as given in the reference paper 10. To be realistic, we use real opacities derived for highly metal-enriched (100 times solar abundances) super-Earths 58, in which case GCR(t) must be obtained numerically (because the opacities then vary with the GCR). We fix γ = 1.4 and the mean molecular weight to 28, corresponding to an atmosphere accreting mostly CO, as would be the case in our scenario 15.
We note that in non-shielded disks (i.e., disks not massive enough for the carbon production to be sufficiently high that neutral carbon shields CO from photodissociating 15), µ is closer to 14, i.e., dominated by carbon and oxygen 8,10 rather than by CO, but our final results are not much affected by the value of µ (except for the very highest Ṁ values), as demonstrated in Extended data Figure 3. There could also be some other molecules released in small quantities in the belt, such as CN, but only CO is detected so far.
We find that GCR(t) can be simplified in the limits −a/(bt) ≫ 1 and −a/(bt) ≪ 1. For −a/(bt) ≫ 1, we find GCR = √(−bt/a), and when −a/(bt) ≪ 1, GCR = −bt/a. The latter regime has never been studied so far, as it only appears at large t and large µ, which is typical of late gas disks but not of protoplanetary disks.
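The closed form of Eq. (5), its two asymptotic regimes, and the constant-opacity rate of Eq. (6) can be checked numerically; a and b are toy values here, standing in for the physical constants defined above (a > 0, b < 0):

```python
import math

# Eq. (5) and a finite-difference check of Eq. (6) (constant opacity).
def gcr(t, a, b):
    """Eq. (5): GCR(t) = (-b t + sqrt(b^2 t^2 - 4 a b t)) / (2a)."""
    return (-b * t + math.sqrt(b * b * t * t - 4.0 * a * b * t)) / (2.0 * a)

def gcr_rate(t, a, b, m_core=1.0):
    """Eq. (6): Mdot_gas = d(GCR * M_core)/dt for constant opacity."""
    root = math.sqrt(b * b * t * t - 4.0 * a * b * t)
    return m_core / (2.0 * a) * (-b + 0.5 * (2.0 * b * b * t - 4.0 * a * b) / root)

a, b = 1.0, -1.0
t_small, t_large = 1e-8, 1e8
# -a/(bt) >> 1  ->  GCR ~ sqrt(-b t / a);  -a/(bt) << 1  ->  GCR ~ -b t / a
```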
The maximum rate at which a planet can accrete an atmosphere is given by Ṁ_gas, which we obtain by differentiating M_gas = GCR(t) M_core. We find

Ṁ_gas = (M_core/(2a)) × [−b + (1/2)(b²t² − 4abt)^{−1/2}(2b²t − 4ab)],   (6)
where a and b are defined above. In Eq. 6 we have assumed a constant opacity, but in reality the opacity varies with time as the GCR increases. In our code, we compute this derivative (Ṁ_gas) numerically to take this complexity into account. We also derive Ṁ_gas analytically for the case where κ depends on time. The opacity can be parametrized 58 as κ = κ₀ ρ_rcb^α T_rcb^β Z^δ, where
the time dependence enters through the density at the rcb, ρ_rcb, which is proportional to the GCR 10. We find that there are again two regimes depending on the value of −a/(bt): if −a/(bt) ≫ 1, then Ṁ_gas = M_core/(2+3α) √(−b/(at)), and when −a/(bt) ≪ 1, then Ṁ_gas = −M_core/(1+α) (b/a).

[Extended data Figure 3: evolution of the gas-to-core ratio GCR ≡ M_gas/M_core with varying µ (7, 14, 28), for Ṁ = 10⁻² and 10⁻⁴ M⊕/Myr. The GCR grows more slowly than expected for the cases µ = 7 and 14 when Ṁ = 10⁻² M⊕/Myr, because the theoretical cooling accretion rate becomes smaller than 10⁻² M⊕/Myr for lower values of µ (see Fig. 4). For lower values of µ, the accretion is also less efficient from the start, because the gas disk scale height is higher and less gas is accreted (see Fig. 5).]
For the case where −a/(bt) ≫ 1, we thus find

M_gas ∝ [T_rcb^{3−β−(1+α)/(γ−1)} (µ_rcb ∇_ad)^{1+(1+α)/(γ−1)} ρ_b^{−4/3−α+(1/3)(1+α)/(γ−1)} M_core^{(2/3)(1+α)/(γ−1)+4/3+2α} Z^{−δ} κ₀^{−1} t]^{1/(2(1+α))},

Ṁ_gas ∝ [T_rcb^{3−β−(1+α)/(γ−1)} (µ_rcb ∇_ad)^{1+(1+α)/(γ−1)} ρ_b^{−4/3−α+(1/3)(1+α)/(γ−1)} M_core^{(2/3)(1+α)/(γ−1)+4/3+2α} Z^{−δ} κ₀^{−1} t^{−1−2α}]^{1/(2(1+α))},   (7)

and for the case −a/(bt) ≪ 1, we find

M_gas ∝ [T_rcb^{3−β−(1+α)/(γ−1)} (µ_rcb ∇_ad)^{1+(1+α)/(γ−1)} ρ_b^{−4/3−α+(1/3)(1+α)/(γ−1)} M_core^{(2/3)(1+α)/(γ−1)+1/3+α} Z^{−δ} κ₀^{−1} t]^{1/(1+α)},

Ṁ_gas ∝ [T_rcb^{3−β−(1+α)/(γ−1)} (µ_rcb ∇_ad)^{1+(1+α)/(γ−1)} ρ_b^{−4/3−α+(1/3)(1+α)/(γ−1)} M_core^{(2/3)(1+α)/(γ−1)+1/3+α} Z^{−δ} κ₀^{−1} t^{−α}]^{1/(1+α)}.   (8)

Taking an opacity with α = 0.6, β = 2.2 and δ = 1, valid for dust-free atmospheres beyond 1 au 10, and γ = 1.4 (for CO atmospheres), we find for the case −a/(bt) ≫ 1 that

M_gas ∝ T_rcb^{−1} µ_rcb^{1.6} ρ_b^{−0.19} M_core^{1.6} Z^{−0.3} κ₀^{−0.6} t^{0.3},
Ṁ_gas ∝ T_rcb^{−1} µ_rcb^{1.6} ρ_b^{−0.19} M_core^{1.6} Z^{−0.3} κ₀^{−0.6} t^{−0.7},   (9)

and for the case −a/(bt) ≪ 1,

M_gas ∝ T_rcb^{−2} µ_rcb^{3.1} ρ_b^{−0.38} M_core^{2.3} Z^{−0.6} κ₀^{−0.6} t^{0.6},
Ṁ_gas ∝ T_rcb^{−2} µ_rcb^{3.1} ρ_b^{−0.38} M_core^{2.3} Z^{−0.6} κ₀^{−0.6} t^{−0.4}.   (10)
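The numerical exponents quoted in Eqs. (9)-(10) follow from the general scalings of Eqs. (7)-(8) with α = 0.6, β = 2.2, δ = 1 and γ = 1.4, which can be verified arithmetically:

```python
# Arithmetic check of the exponents in Eqs. (9)-(10). The outer exponent is
# 1/(2(1+alpha)) in the -a/(bt) >> 1 regime and 1/(1+alpha) otherwise.
alpha, beta, delta, gamma = 0.6, 2.2, 1.0, 1.4
g1 = 1.0 / (gamma - 1.0)              # 1/(gamma-1) = 2.5

# regime -a/(bt) >> 1 (Eq. 9)
out9 = 1.0 / (2.0 * (1.0 + alpha))    # = 0.3125
T9 = (3.0 - beta - (1.0 + alpha) * g1) * out9
mu9 = (1.0 + (1.0 + alpha) * g1) * out9
Mc9 = ((2.0 / 3.0) * (1.0 + alpha) * g1 + 4.0 / 3.0 + 2.0 * alpha) * out9
t9_rate = (-1.0 - 2.0 * alpha) * out9        # time exponent of Mdot_gas

# regime -a/(bt) << 1 (Eq. 10)
out10 = 1.0 / (1.0 + alpha)
T10 = (3.0 - beta - (1.0 + alpha) * g1) * out10
Mc10 = ((2.0 / 3.0) * (1.0 + alpha) * g1 + 1.0 / 3.0 + alpha) * out10
t10_rate = -alpha * out10

print(f"Eq. 9:  T^{T9:.2f}, mu^{mu9:.2f}, M_core^{Mc9:.2f}, t^{t9_rate:.2f}")
print(f"Eq. 10: T^{T10:.2f}, M_core^{Mc10:.2f}, t^{t10_rate:.2f}")
```

In particular, the rate exponents come out as t^{−0.69} and t^{−0.38}, matching the rounded t^{−0.7} and t^{−0.4} quoted in the text.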
In late gas disks, the total gas mass is much smaller than in protoplanetary disks, and it may happen that the gas available per unit time is smaller than Ṁ_gas. Therefore, in our code, at each time step we compare the gas crossing rate Ṁ to Ṁ_gas. For most cases studied in this paper (except the Mars-mass planet case; see Fig. 6), we find that Ṁ is indeed lower than Ṁ_gas, and the accretion is limited by the quantity available rather than by the planet's cooling. For the cases where Ṁ < Ṁ_gas, the mass that accumulates is lower than the theoretical mass M_gas given by Eq. 5, so that when computing Ṁ_gas one should take the theoretical accretion rate for the mass that has actually accumulated rather than for the theoretical mass, which is what our code does. For the regime where −a/(bt) ≫ 1, this means that at a given time t one should take the accretion rate at time t′ = t(Ṁt/M_gas)^{2+α}, which gives an accretion rate that is higher by a factor M_gas/(Ṁt). We show the theoretical Ṁ_gas in Extended data Figure 4, where we plot the temporal evolution of Ṁ_gas (taking into account that the opacity varies with time) for different values of the planet semi-major axis a_pl, atmospheric mean molecular weight µ and core mass M_core, corresponding to the typical range of values of interest in our study.
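The per-time-step bookkeeping described above (accrete min(Ṁ, Ṁ_cool), with the cooling-limited rate evaluated for the mass actually accumulated) can be sketched as follows; the cooling-limited rate is replaced here by a toy power law in the accumulated mass, purely for illustration:

```python
# Toy time-stepped accretion: at each step the planet gains
# min(supply rate, cooling-limited rate), with the cooling-limited rate
# evaluated from the mass accumulated so far. Here Mdot_cool = c * M_gas**(-p)
# is an illustrative stand-in for the model's cooling rate.
def accrete(mdot_supply, c=1.0, p=1.0, dt=1e-3, n_steps=100_000, m0=1e-6):
    m_gas = m0
    for _ in range(n_steps):
        mdot_cool = c * m_gas ** (-p)
        m_gas += min(mdot_supply, mdot_cool) * dt
    return m_gas

m_lo = accrete(mdot_supply=1e-4)   # supply-limited: ends with ~ mdot * t
m_hi = accrete(mdot_supply=1e3)    # cooling-limited almost everywhere
```

In the supply-limited case the final mass is simply Ṁ·t, whereas in the cooling-limited case it follows the (toy) cooling law, m ≈ √(2ct), independently of the supply rate.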
[Extended data Figure 4: temporal evolution of the theoretical Ṁ_gas for the cases in this paper where the theoretical accretion rate is higher than our Ṁ parameter (which is always the case for µ = 28, large M_core or distant planets). For the µ = 28 case, the curves become less steep at large t: as t increases, one reaches the second regime, for which −a/(bt) ≪ 1, where Ṁ_gas scales as t^{−0.4} instead of t^{−0.7} in the other regime (see Eqs. 9 and 10). For the case at 0.1 au (for which T = 1000 K), the opacity varies more slowly with T at high enough densities, and β becomes smaller 10.]

We also compare the Hill radius R_H to the disk scale height H [Extended data Figure 5: Hill-sphere-radius-to-scale-height ratio versus distance to the host star; for the lowest-mass planets, the Hill sphere radius can be smaller than the disk scale height, and gas can then flow inwards rather than being accreted]. Accounting for the fraction of the inflowing gas that passes above and below the Hill sphere, we obtain a new input rate that is ∼0.98 of the full Ṁ for the nominal case.
This value is therefore usually very close to 1 and does not make an important difference, even for extreme cases such as a Mars-mass (0.1 M⊕) planet at Mars's distance (1.5 au) or a 0.5 M⊕ planet at 10 au, for which we find R_H/H ∼ 0.37 in both cases; the new input rate Ṁ_new is then 0.53 of the full Ṁ, lowering the final accreted masses by a factor of 2 for such planets. We show the results for these two planets in Extended data Figure 6. We end up with GCR values very similar to the 1 M⊕ case of Figure 1 for the 0.5 M⊕ planet because, although the final gas mass is divided by 2, the core mass is also twice smaller, hence the GCR is similar. Owing to the much lower core mass of the Mars-mass case, its final GCR is 5 times higher than in the 1 M⊕ case, except for the highest input rates, for which the theoretical cooling accretion rate becomes smaller than Ṁ and the accreted mass becomes smaller, explaining the shallower slope of the GCR at large t. For cases with R_H/H > 1, e.g. planets with more massive cores, the results are similar to those in Figure 1 after rescaling the GCR to the new M_core value (i.e., the GCR goes down by a factor ∝ M_core), although in this case the hydrodynamics of the flow can be complicated by shock waves developing near the planet-disk interface 59.
To verify that the Hill radius is the relevant length scale for accretion, as assumed here, we also compute the Bondi radius (equal to 2GM_pl/c_s²) for a large variety of planet masses and semi-major axes. Extended data Figure 7 shows that the Bondi radius is indeed always greater than the Hill radius in our study, confirming that the Hill radius should be used in the previous computations.
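A single-point version of this comparison, for an Earth-mass planet at 1 au around a solar-mass star in CO gas (µ = 28) at an assumed illustrative temperature of 280 K:

```python
import math

# Compare the Bondi radius (2 G M_pl / c_s^2) with the Hill radius
# (a * (M_pl / (3 M_star))^(1/3)) for one example planet.
G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
M_EARTH = 5.97e24   # kg
K_B = 1.381e-23     # J/K
M_H = 1.673e-27     # kg
AU = 1.496e11       # m

def hill_radius(a_au, m_pl, m_star=M_SUN):
    return a_au * AU * (m_pl / (3.0 * m_star)) ** (1.0 / 3.0)

def bondi_radius(m_pl, T, mu=28.0):
    cs2 = K_B * T / (mu * M_H)   # isothermal sound speed squared
    return 2.0 * G * m_pl / cs2

r_h = hill_radius(1.0, M_EARTH)
r_b = bondi_radius(M_EARTH, 280.0)
print(f"R_H = {r_h:.2e} m, R_B = {r_b:.2e} m, R_B/R_H = {r_b / r_h:.1f}")
```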
To compute the planet's bulk density given its GCR and core mass, we improve upon previous work 60 and do not assume that the gravity g is constant with pressure, so that the thickness of the adiabatic convective region of the atmosphere is given by (when R_core > c_p/g_batm (T_batm − T_atm))

d_atm = [c_p/g_batm (T_batm − T_atm)] × 1/[1 − c_p/(R_core g_batm)(T_batm − T_atm)],   (11)
where P_atm = 0.1 bar is typical of the pressure at which the atmosphere transitions from adiabatic to isothermal 61, T_batm = T_atm (P_batm/P_atm)^κ, where κ = 2/(2+n), with n the number of degrees of freedom (equal to 5 for diatomic gases), and T_atm = T_* √(R_*/(2a)) (T_* and R_* are the temperature and radius of the host star). We calculate P_batm by integrating dm = 4π(R_core + z)² dP/g(P) from P_atm up to a value P for which the atmosphere mass is M_atm = GCR × M_core. In the integral, we use g(z) = g_batm/(1 + z/R_core)² and

z = [T_atm c_p/g_batm ((P_batm/P_atm)^κ − (P/P_atm)^κ)] / [1 − c_p T_atm/(R_core g_batm)((P_batm/P_atm)^κ − (P/P_atm)^κ)],

where c_p is the heat capacity, T_atm the atmosphere's surface temperature, and g_batm and T_batm the gravity and the temperature at the bottom of the atmosphere, respectively. Since z depends on P_batm, we calculate P_batm by iteration, starting from an initial value equal to M_atm g_batm/(4πR_core²). We then add the isothermal part of the atmospheric thickness from 100 to 20 mbar, R T_atm/(g(100 mbar) µ) ln(100/20) (where R is the universal gas constant), assuming that the transit radius is typically observed at around 20 mbar 60.
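Equation (11) can be evaluated numerically; the temperature contrast T_batm − T_atm below is an illustrative choice (not a fitted value), and g is taken constant for simplicity:

```python
# Minimal numeric sketch of Eq. (11): thickness of the adiabatic convective
# layer for a CO atmosphere (c_p = 7R/(2 mu) for a diatomic gas) on an
# Earth-like core. The input dT = T_batm - T_atm is illustrative.
R_GAS = 8.314      # J mol^-1 K^-1
MU_CO = 0.028      # kg/mol
R_CORE = 6.371e6   # m
G_BATM = 9.81      # m/s^2 (taken constant here for simplicity)

cp = 3.5 * R_GAS / MU_CO   # ~1039 J kg^-1 K^-1

def d_atm(dT, g=G_BATM, r_core=R_CORE):
    """Eq. (11) with dT = T_batm - T_atm; returns the thickness in metres."""
    lead = cp / g * dT
    return lead / (1.0 - cp / (r_core * g) * dT)

print(f"d_atm(dT=500 K) = {d_atm(500.0) / 1e3:.1f} km")
```

The correction factor in the denominator makes d_atm grow slightly faster than linearly with the temperature contrast, as expected when g is allowed to fall off with height.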
Accretion onto a pre-existing Earth-like atmosphere. In Extended data Figure 8, we show what happens when an atmosphere starts growing from a pre-existing Earth-mass atmosphere rather than from an empty atmosphere as in Figures 1 and 2. The results are the same, with all the curves simply shifted up, starting at the pre-existing level.
This example shows that the final atmosphere will be dominated by volatiles delivered from the late gas disk rather than by the pre-existing atmosphere.
ets (Supplementary information) suggest that accretion onto planets is very efficient and in most
cases, the rate of gas that can be accreted is higher than what is available (i.e. Ṁ). For this reason, when the gas disk spreads inwards and crosses a planet's orbit, it may not be able to spread further in, as all of the inflowing gas is accreted onto the planet that is being crossed. However, this depends on R_H/H. As shown in Extended Figure 7, for small or distant planets, R_H can be smaller than the disk scaleheight H and a certain fraction of the gas will flow inwards. We note that for atmospheres with low µ, high γ or accreting for a very long time, the theoretical cooling accretion rate may become smaller than Ṁ and gas would flow inwards. We also note that the gas model used for low-mass planets is 1-D and thus assumes an axisymmetric gas flow, but the gas flow geometry around the planet may be more complicated in 3-D 62 , which may lead to gas flowing inwards anyway 63 . It is still unclear how much these complications affect the overall 1-D accretion rates 64 , but recent 3-D simulations find gas accretion rates that are comparable to 1-D derivations 65 . In the end, we expect accretion to be efficient, and a gap in density should be seen in the gas distribution after crossing a planet, which would pinpoint the accreting planet's location.
Gas distribution with ALMA to infer the planet position. In Figure 4, we used carbon observations as a good tracer for these cavities instead of CO, for two main reasons. First, carbon emission in band 8 seems to be a better tracer of this gas than CO in either band 6 or 7, according to models 6 and to the first few observations of neutral carbon in these disks 7,15,44 . Second, carbon is always expected to spread into the inner region, given enough time, while CO photodissociates in about 100 yr in unshielded disks and remains colocated with the parent belt of planetesimals, implying that an observed CO cavity would then be due to photodissociation rather than to accreting planets. Only in the case of massive gas disks can CO be shielded by carbon 15 and have time to viscously spread further inwards than the planetesimal belt. In that case, for systems where CO had time to spread into the inner region where planets are located, CO cavities could also be used to infer planets, but carbon is much more general as it is not subject to photodissociation.
We now explain the details of how we produced the synthetic ALMA image shown in Figure 4. We first created a density profile for a late gas disk at steady state that scales as r −1 inwards 15 .
The center of the belt is at 50 au where the gas temperature is 20 K (scaling as r −0.5 ). An input rate of 10 −3 M ⊕ /Myr of CO has been chosen. We put the planet at 10 au and therefore impose that the gas density drops to zero within 10 au.
We then use the radiative transfer code RADMC-3D 66 to obtain the emission of the carbon fine structure line at 492.16 GHz, assuming an inclination of 30 deg and a position angle of 45 deg. We then use the CASA software to simulate an ALMA observation at a resolution of 0.12" in band 8 for an hour on source (we use the C43-5 configuration together with the C43-2 to recover more extended emission). Finally, we create a moment-0 image of the final cube and obtain Figure 4.
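The input gas model described above can be sketched as follows; the radial grid and the (arbitrary) density normalisation are illustrative assumptions, and the resulting structure is what would be passed to the radiative transfer and simulation steps.

```python
import numpy as np

# Radial grid and parameters (grid and normalisation are illustrative)
r = np.linspace(1.0, 100.0, 500)          # au
r_belt, r_planet = 50.0, 10.0             # au: belt centre, accreting planet

T_gas = 20.0 * (r / r_belt) ** -0.5       # K, 20 K at the belt, scaling as r^-0.5
sigma = r_belt / r                        # steady-state surface density ~ r^-1 (arb. units)
sigma[r < r_planet] = 0.0                 # planet accretes all inflowing gas: cavity

# sigma and T_gas would then be fed to RADMC-3D to raytrace the [CI] line at
# 492.16 GHz, and the resulting cube to CASA to simulate the ALMA observation.
```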
We see that the gas disk is well detected and the cavity within 10 au is well resolved.

A key result established over the last few years is that when the Hill radius of the planet is greater than H, the majority of the gas flowing in will be accreted by the planet 1, 2 . We can check in Extended data Fig. 5 that R_H > H is most likely the case for planets with masses greater than 10 M_⊕ (or > 0.03 M_Jup), and we therefore expect all gas mass flowing through such a planet to be accreted. The disk scaleheight in 49 Ceti, where we have the best constraints so far 3 , is lower than 0.04r, where r is the distance of the disk to the host star. This translates into a condition on the planet mass M_pl to have R_H > H, leading to M_pl > 6 × 10⁻⁵ M_⋆, or roughly that the planet mass must be greater than a few Neptune masses.
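The mass threshold above follows from a one-line geometric condition; in this sketch, the aspect ratio h ≈ 0.027 is our inference (it reproduces the quoted 6 × 10⁻⁵ threshold and sits below the h < 0.04 limit measured in 49 Ceti).

```python
# R_H > H, with R_H = a (M_pl/(3 M_star))^(1/3) and H = h a, gives a minimum
# planet-to-star mass ratio q = 3 h^3, where h = H/r is the disk aspect ratio.
M_EARTH_PER_SUN = 332946.0

def min_mass_ratio(h):
    """Smallest planet-to-star mass ratio for which the Hill radius exceeds H."""
    return 3.0 * h ** 3

# h ~ 0.027 (an inferred value, below the h < 0.04 limit) reproduces the
# quoted threshold of 6e-5 in stellar masses.
q = min_mass_ratio(0.027)
m_min_earth = q * M_EARTH_PER_SUN   # ~20 Earth masses around a solar-mass star
```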
Another possibility is that if the viscous spreading originates in the magnetorotational instability, as suggested recently 4 , the instability could be quenched by a rather low magnetic field of the order of 1 µG (because in this case the waves propagating the instability in these low density disks no longer fit within the disk scaleheight), which would be the case in the close vicinity of the planet.
This would mean that all gas would accumulate at the planet's location as it cannot spread inwards anymore (unless the planet is able to provide some viscosity to the gas disk by exciting density waves that transport angular momentum locally, which may also spread the disk for high enough surface densities 5, 6 ), which would also justify our initial assumption.
When CO gas accretes onto a Sub-Neptune or more massive planet, it can increase the average mean molecular weight of the existing atmosphere. After accretion and mixing, we can calculate the new metallicity Z of the atmosphere (assuming it is initially solar) as a function of time (plotted in Supplementary data Figure 1). To do so, we calculate

Z = ([O]/[H] + ([O]/[H])_solar) / ([O]/[H])_solar,   (12)

where ([O]/[H])_solar = 4.898 × 10⁻⁴ and [O]/[H] = (Ṁt/µ_CO)/(2M_atm H_abun/µ_proto), with H_abun = 0.912, µ_CO = 28 and µ_proto = 2.22 7, 8 . We also compute the evolution of the C/O ratio on the accreting planets, assuming that the initial C/O ratio 8 is 0.5496, as follows:

C/O = (0.5496 + (Z − 1)) / Z,   (13)

where we fix t = 100 Myr in Figure 3. This assumes that it is mostly CO that is released from the planetesimals, but it could be that CO is trapped within CO₂ ices 9 and some CO₂ is also released and quickly photodissociates into CO, hence lowering the final C/O ratio to a value between 0.5 and 1 (if water is also released, which is not likely 10 , it would also lower the final C/O ratio). In
Supplementary data Figure 2, we also show our results after 10 Myr of evolution to see what can be expected in young systems.
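Equations (12) and (13) can be evaluated directly; the input rate, time and atmosphere mass used below are illustrative choices, not specific cases from the figures.

```python
# Equations (12)-(13): metallicity and C/O of an atmosphere accreting pure CO.
O_H_SOLAR = 4.898e-4
H_ABUN, MU_CO, MU_PROTO = 0.912, 28.0, 2.22

def metallicity(Mdot, t, M_atm):
    """Eq. (12); Mdot in M_earth/Myr, t in Myr, M_atm in M_earth."""
    O_H = (Mdot * t / MU_CO) / (2.0 * M_atm * H_ABUN / MU_PROTO)
    return (O_H + O_H_SOLAR) / O_H_SOLAR

def c_to_o(Z):
    """Eq. (13): solar 0.5496 for Z = 1, tending to 1 (pure CO) for large Z."""
    return (0.5496 + (Z - 1.0)) / Z

# Illustrative case: a 1e-2 M_earth atmosphere accreting for 100 Myr
Z = metallicity(Mdot=1e-4, t=100.0, M_atm=1e-2)
ratio = c_to_o(Z)
```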
When material is accreted onto a Jupiter-like planet, gas first accretes in the outer envelope and then slowly diffuses inwards to a pressure P on a timescale t_diff(P) that depends on the eddy mixing coefficient K_zz(P) such that 11

t_diff = 2H_atm² / K_zz(P),   (14)
where H_atm = k_B T/(µ m_H g) is the atmospheric scaleheight, with g = GM_p/R_p², and T is taken to be the skin temperature 12 . We will follow the gas that can accumulate in the upper envelope (<0.1 bar) and use the following prescription 13 for K_zz:

K_zz = 10⁻² (H_atm/3) [R σ T_eff⁴/(µ ρ_atm c_p)]^{1/3},   (15)
where R is the universal gas constant, σ the Stefan-Boltzman constant, ρ atm the atmosphere density and c p the specific heat.
We then estimate the mass that accumulates at a pressure P < 0.1 bar (as most of the emission comes from the planet's photosphere close to 0.1 bar 13 ) as M_accu = Ṁ t_diff. Next, we estimate the gas mass M_pres that is already present at P < 0.1 bar assuming hydrostatic equilibrium, so that M_pres = 4πR_pl² P/g. Finally, we focus on the CO mass that can accumulate at the surface compared to the CO mass that is already present. To calculate the latter, we use the mixing ratio of CO in the upper atmosphere derived from models, which varies with temperature, as most of the carbon is in CO for warm planets (e.g. 1700 K) but in the form of CH₄ for the colder cases (e.g. 350 K).
Supplementary data figure 3 shows the CO mass accumulated over the CO mass present, calculated from our model as a function of temperature for varying Ṁ (assuming all accreted gas is CO). As the diffusion timescale is often smaller than about a year, the gas input rate can be much higher than the 100 Myr average, and it is reasonable to expect that bursts of CO ejection would lead to Ṁ values as large as 10 M_⊕/Myr, owing e.g. to a sudden change in viscosity at the planet location (due to a change in the magnetic field strength next to the planet or a different ionisation fraction 4 ), or because when the system is very young and indeed very massive the non-averaged Ṁ can be very large (see Extended data Figure 2). In Supplementary data Figure 3, we see that for the warm planet case, the amount of CO at the surface can only quadruple at most for a Jupiter-like planet. However, for a colder planet, as K_zz is smaller (hence it takes slightly longer for gas to diffuse inwards) and most carbon is originally in the form of CH₄ rather than CO (hence the contrast with the CO present is higher), the amount of CO at the surface can reach more than 100 times the amount of CO originally present for T = 350 K. We note that the ratio of CO accumulated to CO present does not vary with the mass of the planet but only with its radius (because g cancels out in the ratio M_accu/M_pres, and this ratio scales as R_pl⁻² because there is initially less CO present in smaller planets), and we find that more CO accumulates on Jupiter-like planets or brown dwarfs because their radii (calculated from mass and age 14 ), 1 and 0.95 R_Jup respectively, are smaller than that of a 10 M_Jup β Pic-like planet, equal to ∼ 1.46 R_Jup.
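The accumulation estimate can be sketched by chaining equations (14)-(15) with the hydrostatic CO reservoir above 0.1 bar. The atmospheric density ρ_atm and the CO mass mixing ratio x_co below are illustrative placeholders (in the paper the CO mixing ratio comes from models and varies with temperature), so only the qualitative behaviour should be read off.

```python
import math

# Physical constants (SI)
G, SIGMA_SB, R_GAS = 6.674e-11, 5.67e-8, 8.314
M_JUP, R_JUP = 1.898e27, 7.149e7
M_EARTH, MYR = 5.972e24, 3.156e13

def co_accumulation_ratio(T_eff, Mdot, M_p=10.0 * M_JUP, R_p=1.46 * R_JUP,
                          mu=2.3e-3, rho_atm=1e-2, x_co=1e-3):
    """M_accu/M_pres: CO delivered during one diffusion time (eqs 14-15) over
    the CO already above the ~0.1 bar photosphere (hydrostatic estimate).
    Mdot is in M_earth/Myr; mu is in kg/mol; rho_atm (kg m^-3) and the CO
    mass mixing ratio x_co are illustrative placeholders."""
    g = G * M_p / R_p**2
    H_atm = R_GAS * T_eff / (mu * g)                  # atmospheric scale height
    c_p = 3.5 * R_GAS / mu                            # diatomic-like specific heat
    w = (R_GAS * SIGMA_SB * T_eff**4 / (mu * rho_atm * c_p)) ** (1.0 / 3.0)
    K_zz = 1e-2 * (H_atm / 3.0) * w                   # eq. (15)
    t_diff = 2.0 * H_atm**2 / K_zz                    # eq. (14)
    M_accu = Mdot * M_EARTH / MYR * t_diff            # CO delivered during t_diff
    M_pres = 4.0 * math.pi * R_p**2 * 1e4 / g * x_co  # CO above P = 0.1 bar = 1e4 Pa
    return M_accu / M_pres

cold = co_accumulation_ratio(T_eff=500.0, Mdot=5.0)
hot = co_accumulation_ratio(T_eff=1700.0, Mdot=5.0)
```

With a fixed x_co, the hotter planet accumulates less relative to what is present because the gas diffuses inwards faster; the additional CO-versus-CH₄ chemistry contrast described in the text would widen the gap further.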
Detection of an accretion signature in giant planet spectra. We now estimate whether the excess of CO accumulated compared to the CO present (see Supplementary data figure 3) can be detectable in atmosphere spectra. For a planet as hot as β Pic b, with a temperature of T ∼1700 K 15 , the accumulation of gas in the outer layer of the atmosphere is not very efficient because the gas diffuses inwards very rapidly and, maximising the gas input rate, we can only double (or quadruple for a 1 or 50 M_Jup planet) the amount of CO present in the outer layer (<0.1 bar) compared to the amount of CO expected in a primordial atmosphere with a solar metallicity. For a low-temperature giant planet, however, the inward diffusion is slower and, assuming an Ṁ of 5 M_⊕/Myr, up to 20 (5, 125) times more CO can be accumulated compared to the CO already present at T = 500 K (700 K, 350 K). This is mostly because in these low-temperature Jupiter-like planets, carbon is mostly in the form of CH₄, and the presence of so much CO would clearly be a sign of late accretion from late gas disks. Supplementary data figure 4 shows the spectra of accreting planets with different masses (log(g) = 3.5 and 5) and temperatures (350, 500 and 700 K) in red, compared to their spectra with no ongoing accretion (in blue). We see that the extra accretion creates a significant extra absorption in the CO band between 4.5 and 5 microns that would be detectable with NIRSpec (IFU spectrograph) and NIRCam (imager with coronagraph)
on the JWST and instruments on ELTs (e.g. IFU spectrograph HARMONI on the E-ELT) for the 500 and 700 K cases 16 . We see that the 500 K case is the most favourable and the brown dwarf case (with log(g) = 5) shows the biggest effect owing to the initially lower amount of CO in these atmospheres. Some planets that would be interesting to observe to confirm our scenario would be the 51 Eri b planet at 700 K (there is a JWST/NIRCam GTO observation planned) or GJ 504 b at 500 K.
The spectra shown on Supplementary data figure 4 were computed using Exo-REM 13,17 combined with a line-by-line radiative transfer code. We produced spectra (R = 6000) using self-consistent silicate and iron clouds and non-equilibrium chemistry. Spectra between 2 and 5 microns are dominated by H 2 O, CH 4 and CO absorptions.
Maximum total mass available for accretion. Assuming that only CO is released from planetesimals in late gas disks (and no water or very little 10 ), we can work out the maximum CO mass available that can potentially be accreted on a planet by estimating all the mass in CO initially available in the protoplanetary disk. The protoplanetary disk mass is roughly 0.5% of the stellar mass 18 . Assuming an A-type star similar to β Pic, the total protoplanetary disk mass would be around 3000 M ⊕ . Assuming a standard CO-to-H 2 abundance ratio of 10 −4 , we find that the total CO mass would be around 4 M ⊕ . This is an upper limit of what could end up in the planetesimals of the belt as some of this CO gas will go into forming planets or will be lost through accretion/photoevaporation.
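The back-of-the-envelope estimate above can be written out explicitly; the β Pic-like stellar mass of 1.75 solar masses is our assumption for "an A-type star similar to β Pic".

```python
# Maximum CO mass available for accretion (numbers from the text)
M_EARTH_PER_SUN = 332946.0
M_star = 1.75                                  # solar masses, beta Pic-like A star (assumption)
M_disk = 0.005 * M_star * M_EARTH_PER_SUN      # disk ~0.5% of stellar mass, in Earth masses
co_to_h2 = 1e-4                                # standard CO-to-H2 number abundance
M_co = M_disk * co_to_h2 * (28.0 / 2.0)        # number ratio -> mass ratio (mu_CO / mu_H2)
# M_co comes out at roughly 4 Earth masses, the upper limit quoted in the text
```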
We conclude that a CO release rate of 0.1 M ⊕ /Myr is not ruled out in the early evolution of late gas disks (e.g. see Supplementary data figure 2) but would not be sustained for 100 Myr as all the initially available CO would be lost after a few tens of Myr at most. In this case where only CO is released, we cannot build planets as big as Saturn or Jupiter from an Earth-like core, unless water or other volatiles are abundant (but yet unseen) and released together with CO in these belts.
Comparing to observations. We have already presented a few key tests to corroborate our new mechanism to form atmospheres on terrestrial planets and Super-Earths. The most important of these would be observing the atmospheres of super-Earths and sub-Neptunes with JWST and ARIEL to look for atmospheres with high metallicities and high C/O ratios (∼1), or trying to detect the signature of ongoing late accretion in very cool Jupiter-like planets. We have also shown that, in contrast to the current planet formation paradigm for Super-Earths, we do not predict more massive cores to have higher GCRs, but rather the opposite, which needs further confirmation by observations. We now present a few more points that could be tested or that can help explain current observations.
Our work suggests that the outermost planet of a system would be accreting most material.
This means that given a core mass, the GCR of the outermost planet would be the highest and its density the smallest. Of course, planets within the same system do not always have the same core mass, but we should see an overall trend that outermost planets (with R_H/H > 1) must on average have lower densities. However, this only works for planets that are not H₂-dominated, for which the secondary late accretion of gas has a significant impact on the initial atmospheric mass (i.e., it is valid for desiccated planets after a giant impact or owing to photoevaporation, lying at the bottom of the radius valley 19 ). A recent study 20 found that there may indeed be a trend of decreasing bulk density with increasing orbital period, which would need to be further studied in the future with e.g. new TESS results.
Comparing to delivery from impacts. The new scenario of late disk accretion we propose here is different from the late veneer or late accretion 21 proposed for the young Earth, in which material was delivered through impacts rather than from gas disk accretion. It is thought that less than 1% of an Earth mass of material was delivered by late impacts 22 to Earth. Therefore, assuming that the impactors are asteroid-like and have a maximum of 1% of CO by mass 23,24 , we find that the total amount of CO that could have been delivered to Earth through impacts after the moon-forming collision is lower than 10⁻⁴ M_⊕ (or a GCR in CO of order 10⁻⁴). In our disk accretion scenario, the amount of CO delivered is at least equal to this for the least massive gas disks we considered, and can be orders of magnitude higher for more massive disks. One major difference between delivering volatiles through late disk accretion or impacts is that in the first scenario, most material delivered would likely be CO, whereas in the impact scenario, water (and refractories) would dominate, leading to a much lower C-to-O ratio for the latter.
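The impact-delivery upper limit is a two-factor product, spelled out here with the numbers from the text:

```python
# Upper limit on CO delivered to Earth by late impacts (numbers from the text)
late_veneer = 0.01                           # at most ~1% of an Earth mass from impacts
co_fraction = 0.01                           # asteroid-like impactors: at most ~1% CO by mass
co_delivered = late_veneer * co_fraction     # in Earth masses, i.e. a CO GCR of order 1e-4
```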
Some impacts could also happen later in the system's history (after a few 100 Myr, when late gas accretion may become much lower) due to instabilities perturbing an external belt and producing Large Heavy Bombardment-like (LHB-like) events 25 , or owing to long-term scattering of solids from an outer belt to the inner regions through a chain of planets 26 . In our Solar System, it is found that accretion due to an effective source of comets, such as a potential LHB leading to an accretion of ∼ 3 × 10⁻⁵ M_⊕ of solids 25 , would hardly change the Earth's atmospheric mass by more than 10% 27 . However, the amount of mass accreted by a planet in this late stage could be higher in extrasolar systems, and atmospheres typically grow in mass by ∼ 1% of the impactor mass accreted 27 .
It is also known that if a certain mass of solids is scattered inwards from an outer belt, there is a 0.1-1% accretion efficiency to reach planets in the terrestrial region 26 . This could result in atmospheres ∼100 times more massive than that on Earth for bombardment involving ∼ 1 to 10 M ⊕ of planetesimals scattered inwards, which is already more massive than the Kuiper belt by one to two orders of magnitude but could still be a fraction of a massive planetesimal belt. Assuming that 5% of the belt mass is scattered over a few Gyr 26 , it means that the total initial belt mass should be between 20 to 200 M ⊕ , which is very massive because in the most optimistic case we are left with 100 M ⊕ of solid material at the end of the protoplanetary disk stage 28 . This means that for volatile delivery from very late impacts to be as important as what can be readily delivered from late gas accretion in low-mass disks, one requires the most top heavy belts, which are only a few in our surroundings and are negligible in number compared to the whole population.
Moreover, there are also some limitations to the growth of an atmosphere from impacts.
First, there is a specific region in the planet mass Vs. distance parameter space where an atmosphere can effectively grow and not deplete from impacts, which is defined by the line where the escape velocity becomes greater than the local Keplerian velocity of the planet 29 . In contrast, in our scenario, all planets can grow. Second, there is a limit to atmospheric growth from impacts because when the atmosphere becomes too dense, atmospheric loss becomes more important and atmospheric growth might stall 30 . Last but not least, the impact scenario requires impactors to be ejected and then to impact on the given planet (usually after a few interactions with other planets in the system). This is not very efficient and many comets/asteroids are ejected outwards or passed inwards without impacting the planets in a given system 26 . In our scenario, once the gas is released, it will viscously diffuse inwards and will cross the planets' orbits, hence allowing accretion. This is why (in addition to stalling) atmospheres would never grow to reach Sub-Neptune pressure-like planets in an impact scenario 31 , which they can however do from late disk accretion.
Even worse, the scattering timescale of late impacts through scattering by a chain of planets can be very long. Generally speaking, the scattering timescale depends on the planet that is scattering solids inwards and on the chain of planets that will keep transferring the planetesimals inwards. It can be approximated by the cometary diffusion time 32 , t_scat, where M is the mass of the star in solar masses, M_pl the planet mass in Earth masses, a_pl the planet semimajor axis in au and t_scat is in Gyr 26 . This means that for planets with very low masses, the scattering timescale can be much longer than a Gyr and greater than the age of the system, therefore making it impossible to scatter particles inwards at a high rate. For example, particles scattered by a 5 M_⊕ planet at 50 au will evolve on timescales of ∼ 10 Gyr, which is far too long for scattering to have an effect on the atmospheric compositions of impacted planets in most systems. One would need a much more massive planet that scatters solids inwards, but if it is too massive (e.g. Jupiter-like), the planet is in the ejection regime 29 and most particles are ejected, meaning that the scattering rate goes down.
There is a strong compromise to find on the planet chain architecture for scattering to send solids in the inner regions. We conclude that our scenario is much more efficient than impacts and does not need any fine tuning of its planetary architecture or planet masses or positions for it to work.
Comparing to outgassing. We can also evaluate our scenario in terms of volatile delivery compared to outgassing on a potential Earth-like planet. For a plate-tectonic degassing planet, we use the outgassing rate on Earth as an upper bound, as plate tectonics is very active on our planet and we expect it to be similar or less efficient/active on other planets. Roughly 22 km³ of basaltic magmas are produced each year on Earth 33 . Therefore, we estimate the degassing rate 31 to be 6 × 10¹³ kg/yr (given the magma density of 2600 kg/m³). Assuming a typical 34 CO₂ content of 1 wt% and a perfect case of 100% efficient degassing (with no recycling into the planet's mantle), we find an upper limit of 10⁻⁷ M_⊕/Myr on the tectonically produced CO₂. This degassing rate is in the lower range of the CO input rates that a planet can accrete in our late gas disk accretion scenario, which can be orders of magnitude higher. Degassing from volcanism would also lead to a rather lower C-to-O ratio (as it is mostly water and CO₂ that are released) than in our newly proposed scenario.
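The outgassing upper limit follows from simple unit conversion, using the numbers quoted above:

```python
# Upper limit on tectonic CO2 outgassing (numbers from the text)
M_EARTH = 5.972e24               # kg
magma_volume = 22e9              # m^3 of basaltic magma produced per year on Earth
magma_density = 2600.0           # kg/m^3
co2_content = 0.01               # 1 wt% CO2, degassed with 100% efficiency

magma_rate = magma_volume * magma_density             # ~6e13 kg/yr
co2_rate = magma_rate * co2_content * 1e6 / M_EARTH   # in M_earth per Myr
# co2_rate is of order 1e-7 M_earth/Myr, the upper limit quoted in the text
```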
Figure 1: Formation of massive secondary atmospheres. Temporal evolution of the gas-to-core ratio (GCR) of an Earth-like (1 M_⊕ at 1 au) planet starting with no atmosphere and orbiting in a late gas disk, for disks with gas crossing rates Ṁ at the planet varying from 10⁻⁸ to 10⁻² M_⊕/Myr. The dashed line shows the Earth's GCR at 8.6 × 10⁻⁷.
Figure 2: Pressure and density evolution of an initially desiccated planet embedded in a late gas disk. Left: pressure Vs. GCR of an Earth-like (1 M_⊕ at 1 au) planet starting with no atmosphere for different gas input rates, evolving up to 100 Myr. Right: bulk density Vs. Ṁ for different planet masses and semi-major axes, plotted at 100 Myr up to a maximum GCR of 0.1.
Jupiter-like planet for large values of Ṁ (Figure 3, right). In Supplementary data Figure 2, we plot the metallicity and C/O ratio after 10 Myr of evolution to show that these effects can occur early in the planet's history. Measurements of C/O ratios are still scarce, but first results show super-solar (e.g. HR 8799 c) 38 as well as sub-solar values (e.g. β Pic b) 39 , and more measurements, especially
Figure 3: Signature of late gas accretion on giant planets. Temporal variation of metallicity (left) and C/O ratio (right) as accretion proceeds for 100 Myr from an initially hydrogen-rich primordial atmosphere, for different gas input rates and different initial atmosphere masses (Sub-Neptune up to Jupiter).

spectra observed with JWST-NIRCam or JWST-NIRSpec as well as instruments on ELTs in the M-band around 4.5-5 µm (see Supplementary data figure 4) for the coldest giant planets or brown dwarfs (<800 K), which would be a clear signature of this accretion. Only a few spectra of planets cooler than 800 K (e.g. GJ 504b 41 ) have been obtained so far in direct imaging, but none targeted the required CO bands 42,43 .
Figure 4: Cavity created by a low-mass planet in a late gas disk. ALMA synthesised [CI] image (at 492.16 GHz, band 8) of a late disk whose cavity is clearly resolved and carved by a planet at 0.2 arcsec, i.e. 10 au at 50 pc (with 5 hours on source and a beam of 0.12 arcsec).

32. Morley, C. V., Kreidberg, L., Rustamkulov, Z., Robinson, T., Fortney, J. J. Observing the Atmospheres of Known Temperate Earth-sized Planets with JWST. Astrophys. J. 850, 121 (2017).
33. Benneke, B., et al. A sub-Neptune exoplanet with a low-metallicity methane-depleted atmosphere and Mie-scattering clouds. Nature Astronomy 3, 813-821 (2019).
34. Fraine, J., et al. Water vapour absorption in the clear atmosphere of a Neptune-sized exoplanet. Nature 513, 526-529 (2014).
35. Wakeford, H. R., et al. HAT-P-26b: A Neptune-mass exoplanet with a well-constrained heavy element abundance. Science 356, 628-631 (2017).
36. Benneke, B., et al. Water Vapor and Clouds on the Habitable-zone Sub-Neptune Exoplanet K2-18b. Astrophys. J. 887, L14 (2019).
37. Tsiaras, A., Waldmann, I. P., Tinetti, G., Tennyson, J., Yurchenko, S. N. Water vapour in the atmosphere of the habitable-zone eight-Earth-mass planet K2-18 b. Nature Astronomy 3, 1086-1091 (2019).
38. Konopacky, Q. M., Barman, T. S., Macintosh, B. A., Marois, C. Detection of Carbon Monoxide and Water Absorption Lines in an Exoplanet Atmosphere. Science 339, 1398-1401 (2013).
39. GRAVITY Collaboration, et al. Peering into the formation history of beta Pictoris b with VLTI/GRAVITY long baseline interferometry. Astron. Astrophys. 633, A110 (2020).
Figure 1: Extended data - Typical viscous evolution timescales. We plot the viscous timescale t_ν as a function of the viscous parameter α for different belt locations (50, 100 and 150 au) and different gas temperatures (10, 30 and 100 K).
with t_col the initial collisional timescale of the largest planetesimals, where we took typical values 52 of largest planetesimal size D_c = 10 km, belt width dr = 0.5r, mean eccentricity e = 0.1, planetesimal strength Q_D = 330 J/kg and stellar mass M_⋆ = 1 M_⊙ to compute Extended data Figure 2.
We can see that Ṁ remains approximately constant over 100 Myr of evolution, except for the most massive belts (M_init = 100 M_⊕) that are closer in (50 au) and have a much faster collisional evolution. Our approximation of taking Ṁ constant over 100 Myr works in most cases. However, for the most massive belts, our value of Ṁ should be understood as the mean gas input rate integrated over time, and one can trace back the Ṁ value at a given time t using Extended data Figure 2 or the set of 3 equations
Figure 3: Extended data - GCR for different values of mean molecular weight µ. Temporal evolution of the gas-to-core ratio for different values of the mean molecular weight µ.
Figure 4: Extended data - Potential accretion rate on a planet Vs. available accretion rates. We plot the temporal evolution of the potential theoretical accretion rate on a planet Ṁ_gas (numerical derivative with κ varying with time) for different values of planet semi-major axes a_pl, atmospheric mean molecular weight µ and core mass M_core. The fiducial model is a_pl = 1 au, µ = 14, and M_core = 1 M_⊕. We overplot horizontal lines with different input rate values of our parameter Ṁ, including the case of 10⁻² M_⊕/Myr over 100 Myr, to verify whether in the cases studied in this

Figure 5: Extended data - Is a planet accreting all gas flowing through the disk? We plot the Hill radius R_H (equal to a_pl[M_pl/(3M_⋆)]^{1/3}) of the planet (of semimajor axis a_pl and mass M_pl) against the scale height H of the disk.
Figure 6: Extended data - GCR and pressures for a Mars-mass and a distant planet. Temporal evolution of the gas-to-core ratio (left) and pressure (right) for a Mars-mass (0.1 M_⊕ at 1.5 au) planet and a distant planet (10 au) with a core mass of 0.5 M_⊕, up to a GCR of 0.5. In the pressure plot (right), the thick solid line is for the Mars-like planet case and the thinner line is for the core of mass 0.5 M_⊕ at 10 au. We note that for the Mars-mass case, GCR grows more slowly than expected when Ṁ = 10⁻² M_⊕/Myr, which is because the theoretical cooling accretion rate becomes smaller than 10⁻² M_⊕/Myr in this case (see Fig. 4).
Figure 7: Extended data - Hill Vs. Bondi radii. Hill and Bondi radii Vs. planet mass for different planet semi-major axes.
Figure 8: Extended data - GCR and pressures for a planet starting with an Earth atmospheric mass. Temporal evolution of the gas-to-core ratio (left) and pressure (right), starting from a pre-existing atmosphere with an Earth atmospheric mass.

The final atmosphere is dominated by late gas accretion for all cases with Ṁ > 10⁻⁸ M_⊕/Myr, i.e. in all cases with belts more massive than the Kuiper belt (see Extended data figure 2), which is the least massive belt we know of so far. One could start with an even more massive atmosphere and just shift the lines up by the pre-existing mass or pressure to predict the final atmospheric masses and pressures. For instance, starting with a Venus-like atmosphere would still end up with an atmosphere dominated by late accretion after 100 Myr for Ṁ > 10⁻⁶ M_⊕/Myr.
45. Xie, J.-W., Brandeker, A., Wu, Y. On the Unusual Gas Composition in the β Pictoris Debris Disk. Astrophys. J. 762, 114 (2013).
46. Lynden-Bell, D., Pringle, J. E. The evolution of viscous discs and the origin of the nebular variables. Mon. Not. R. Astron. Soc. 168, 603-637 (1974).
47. Shakura, N. I., Sunyaev, R. A. Black holes in binary systems. Observational appearance. Astron. Astrophys. 24, 337-355 (1973).
48. Moór, A., et al. New Millimeter CO Observations of the Gas-rich Debris Disks 49 Cet and HD 32297. Astrophys. J. 884, 108 (2019).
49. Marino, S., et al. Population synthesis of exocometary gas around A stars. Mon. Not. R. Astron. Soc., in press.
50. Thébault, P., Augereau, J.-C. Collisional processes and size distribution in spatially extended debris discs. Astron. Astrophys. 472, 169 (2007).
51. Kral, Q., Thébault, P., Charnoz, S. LIDT-DD: A new self-consistent debris disc model that includes radiation pressure and couples dynamical and collisional evolution. Astron. Astrophys. 558, A121 (2013).
Figure 1: Supplementary data - Evolution of metallicity for a planet accreting from a late gas disk. Temporal evolution of the metallicity (in log) for different gas input rates and initial atmosphere masses. The metallicity is computed as

Z = ([O]/[H] + ([O]/[H])_solar) / ([O]/[H])_solar,   (12)

where ([O]/[H])_solar = 4.898 × 10⁻⁴ and [O]/[H] = (Ṁt/µ_CO)/(2M_atm H_abun/µ_proto), with H_abun = 0.912, µ_CO = 28 and µ_proto = 2.22 7, 8 (see also the metallicity heat-maps in Figure 3 computed at 100 Myr).
Figure 2: Supplementary data - Signature of late gas accretion on young giant planets. Temporal variation of metallicity (left) and C/O ratio (right) as accretion proceeds for 10 Myr (instead of 100 Myr in Figure 3) from an initially hydrogen-rich primordial atmosphere, for different gas input rates and different initial atmosphere masses (Sub-Neptune up to Jupiter).
Figure 3: Supplementary data - Gas accumulation in upper atmospheres of giant planets. CO mass accumulated through accretion in a late gas disk over CO mass already present in a giant planet Vs. temperature. The different colours show different input rates Ṁ, while the solid, dashed and dotted lines are for a 10, 1 and 50 Jupiter-mass planet, respectively.
Figure 4: Supplementary data - Signature of ongoing late gas accretion on giant planets. Spectra of planets with different temperatures (350, 500 and 700 K) and masses (log(g) = 3.5 and 5) suffering ongoing accretion of CO (in red), compared to the case with no late accretion (blue). In each case, we clearly see that the extra CO in the atmosphere creates a significant extra absorption in the M-band around 4.7 µm.
Even if an LHB-like event happens several 100s of Myr after the gas disk dissipated, the atmosphere would still be dominated by volatiles accreted by late gas accretion (Supplementary information). The impact and late gas accretion scenarios could be distinguished based on the final composition of the observed planets, in particular by looking at their C/O ratio (Supplementary information). We also find that our scenario of late accretion works for a wide range of planets, from Mars-like to Super-Earths, and for close-in as well as distant planets. It also works if the planet is not initially devoid of atmosphere and starts with an Earth-like or a Venus atmosphere, as it would replace the bulk of these atmospheres with gas coming from late disk accretion, even for
cases where the disk is not very massive (Methods). Finally, if the gas accretion happens only for
10 Myr or for a period longer than 100 Myr, we find that the quantity of gas accreted by planets
always remains considerable leading to planetary atmospheres with masses at least that of the
Earth's atmosphere for disks more massive than typical very-low-mass disks such as the Kuiper
belt (Methods).
It can happen that R_H < H (see Extended data figure 5) and, in this case, some gas crossing at a rate Ṁ cannot be accreted by the planet. We then recalculate a new Ṁ that can be accreted by only considering the Ṁ that crosses the planet's Hill sphere rather than the whole scaleheight. To calculate the quantity of gas that cannot be accreted, we assume a sphere of radius H on top of the sphere of radius R_H, and take out the parts of the large H sphere with a height greater than 0.9 R_H. This gives a new Ṁ_new value that is 3/2(R_H/H) − 1/2(R_H/H)^3 of the full Ṁ. Note that for cases that reach GCR values approaching 1, R_H can become larger and R_H/H becomes greater, hence enhancing accretion and leading to a higher Ṁ_gas. Therefore, we find that Ṁ_new scales roughly with R_H/H rather than with its square, as would be expected from Bondi accretion or from dividing the volumes of two tori of radii R_H and H, respectively. This is because the orbital timescale of the planet is much faster than the viscous drift timescale of the gas flow, so the new Ṁ will be approximately equal to the ratio of the cylindrical collisional cross section of the planet, 2πa_pl R_H, to the cylindrical flow cross section, 2πa_pl H, which equals R_H/H. Furthermore, we compare our typical accretion rate Ṁ with a Bondi-like accretion that may be relevant in the regime where R_H < H. Bondi-like accretion happens at a rate Ṁ_B = 4πR_H^2 Σ Ω, where Σ is the local secondary disk gas surface density (with Σ = Ṁ/(3πν) at steady state) and Ω is the Keplerian frequency. We find that Ṁ_B/Ṁ = 4/(3α)(R_H/H)^2, or Ṁ_B/Ṁ_new ∼ (1/α)(R_H/H), with α typically between 10^−4 and 0.1 as explained earlier (refs 4, 10). It means that in most cases, as R_H/H > 0.1, the Bondi accretion rate is higher than the rate at which gas is delivered, and one is still limited by Ṁ. For instance, for a 1 M ⊕ planet at 1 au (with R_H/H ∼
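As a quick numerical sketch of the two scalings above (the capture fraction Ṁ_new/Ṁ = 3/2(R_H/H) − 1/2(R_H/H)^3 and the Bondi-to-delivery ratio Ṁ_B/Ṁ = 4/(3α)(R_H/H)^2); the values of R_H/H and α used below are illustrative assumptions, not numbers taken from the paper:

```python
def capture_fraction(rh_over_h):
    """Fraction of the crossing gas flow Mdot that a planet with
    Hill radius R_H < H can capture, from the sphere-overlap
    geometry quoted in the text:
        Mdot_new / Mdot = 3/2 (R_H/H) - 1/2 (R_H/H)**3
    """
    x = rh_over_h
    return 1.5 * x - 0.5 * x**3

def bondi_to_delivery_ratio(rh_over_h, alpha):
    """Ratio of the Bondi-like rate to the delivered rate,
        Mdot_B / Mdot = 4/(3 alpha) (R_H/H)**2,
    using Sigma = Mdot/(3 pi nu) and nu = alpha c_s H as in the text."""
    return 4.0 / (3.0 * alpha) * rh_over_h**2

# Illustrative inputs (assumed, for demonstration only):
x, alpha = 0.5, 1e-3
print(f"captured fraction of Mdot : {capture_fraction(x):.3f}")
print(f"Mdot_B / Mdot             : {bondi_to_delivery_ratio(x, alpha):.1f}")
```

For these assumed inputs the Bondi rate exceeds the delivered rate by orders of magnitude, illustrating the text's point that accretion is limited by Ṁ rather than by Ṁ_B whenever R_H/H > 0.1.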
Acknowledgements We thank Giovanni Rosotti, Philippe Thebault and Andrew Shannon for discussions.

to input into the model, expertise in atmosphere observations and produced the synthetic spectra shown in the paper. All authors contributed to the interpretation of the results and commented on the paper.

A Supplementary Information

Accretion onto giants and gas accumulation in their outer envelopes. For giant planets, we assume that all gas that flows through the planet is accreted. This assumption is valid in two cases. First, the scaleheight H of late gas disks is found to be small compared to protoplanetary disks (because of lower temperatures and higher mean molecular weight), and a criterion that emerged
Dent, W. R. F., et al. Molecular Gas Clumps from the Destruction of Icy Bodies in the β Pictoris Debris Disk. Science 343, 1490-1492 (2014).
Lieman-Sifry, J., et al. Debris Disks in the Scorpius-Centaurus OB Association Resolved by ALMA. Astrophys. J. 828, 25 (2016).
Moór, A., et al. Molecular Gas in Debris Disks around Young A-type Stars. Astrophys. J. 849, 123 (2017).
Matrà, L., Öberg, K. I., Wilner, D. J., Olofsson, J., Bayo, A. On the Ubiquity and Stellar Luminosity Dependence of Exocometary CO Gas: Detection around M Dwarf TWA 7. Astron. J. 157, 117 (2019).
Cataldi, G., et al. Herschel/HIFI observations of ionised carbon in the β Pictoris debris disk. Astron. Astrophys. 563, A66 (2014).
Kral, Q., Matrà, L., Wyatt, M. C., Kennedy, G. M. Predictions for the secondary CO, C and O gas content of debris discs from the destruction of volatile-rich planetesimals. Mon. Not. R. Astron. Soc 469, 521-550 (2017).
Higuchi, A. E., et al. Detection of Submillimeter-wave [C I] Emission in Gaseous Debris Disks of 49 Ceti and β Pictoris. Astrophys. J. 839, L14 (2017).
Matrà, L., et al. Detection of Exocometary CO within the 440 Myr Old Fomalhaut Belt: A Similar CO+CO2 Ice Abundance in Exocomets and Solar System Comets. Astrophys. J. 842, 9 (2017).
Lee, E. J., Chiang, E., Ormel, C. W. Make Super-Earths, Not Jupiters: Accreting Nebular Gas onto Solid Cores at 0.1 AU and Beyond. Astrophys. J. 797, 95 (2014).
Lee, E. J., Chiang, E. To Cool is to Accrete: Analytic Scalings for Nebular Accretion of Planetary Atmospheres. Astrophys. J. 811, 41 (2015).
Kóspál, Á., et al. ALMA Observations of the Molecular Gas in the Debris Disk of the 30 Myr Old Star HD 21997. Astrophys. J. 776, 77 (2013).
Zuckerman, B., Song, I. A 40 Myr Old Gaseous Circumstellar Disk at 49 Ceti: Massive CO-rich Comet Clouds at Young A-type Stars. Astrophys. J. 758, 77 (2012).
Kral, Q., Latter, H. The magnetorotational instability in debris-disc gas. Mon. Not. R. Astron. Soc 461, 1614-1620 (2016).
Kral, Q., Wyatt, M., Carswell, R. F., Pringle, J. E., Matrà, L., Juhász, A. A self-consistent model for the evolution of the gas produced in the debris disc of β Pictoris. Mon. Not. R. Astron. Soc 461, 845-858 (2016).
Kral, Q., Marino, S., Wyatt, M. C., Kama, M., Matrà, L. Imaging [CI] around HD 131835: reinterpreting young debris discs with protoplanetary disc levels of CO gas as shielded secondary discs. Mon. Not. R. Astron. Soc 489, 3670-3691 (2019).
Testi, L., et al. Hunting for Planets in the HL Tau Disk. Astrophys. J. 812, L38 (2015).
Thureau, N. D., et al. An unbiased study of debris discs around A-type stars with Herschel. Mon. Not. R. Astron. Soc 445, 2558-2573 (2014).
Keppler, M., et al. Discovery of a planetary-mass companion within the gap of the transition disk around PDS 70. Astron. Astrophys. 617, A44 (2018).
Müller, A., et al. Orbital and atmospheric characterization of the planet within the gap of the PDS 70 transition disk. Astron. Astrophys. 617, L2 (2018).
Haffert, S. Y., et al. Two accreting protoplanets around the young star PDS 70. Nature Astronomy 3, 749-754 (2019).
Nimmo, F., Kleine, T. How rapidly did Mars accrete? Uncertainties in the Hf-W timing of core formation. Icarus 191, 497-504 (2007).
Jacobsen, S. B. The Hf-W Isotopic System and the Origin of the Earth and Moon. Annual Review of Earth and Planetary Sciences 33, 531-570 (2005).
Owen, J. E. Atmospheric Escape and the Evolution of Close-In Exoplanets. Annual Review of Earth and Planetary Sciences 47, 67-90 (2019).
Yalinewich, A., Schlichting, H. Atmospheric mass-loss from high-velocity giant impacts. Mon. Not. R. Astron. Soc 486, 2780-2789 (2019).
Quintana, E. V., Barclay, T., Borucki, W. J., Rowe, J. F., Chambers, J. E. The Frequency of Giant Impacts on Earth-like Worlds. Astrophys. J. 821, 126 (2016).
Lopez, E. D., Fortney, J. J. Understanding the Mass-Radius Relation for Sub-neptunes: Radius as a Proxy for Composition. Astrophys. J. 792, 1 (2014).
Lee, E. J., Chiang, E. Breeding Super-Earths and Birthing Super-puffs in Transitional Disks. Astrophys. J. 817, 90 (2016).
Grimm, S. L., et al. The nature of the TRAPPIST-1 exoplanets. Astron. Astrophys. 613, A68 (2018).
Kreidberg, L., et al. Clouds in the atmosphere of the super-Earth exoplanet GJ1214b. Nature 505, 69-72 (2014).
Charnay, B., Meadows, V., Misra, A., Leconte, J., Arney, G. 3D Modeling of GJ1214b's Atmosphere: Formation of Inhomogeneous High Clouds and Observational Implications. Astrophys. J. 813, L1 (2015).
Morley, C. V., et al. Thermal Emission and Reflected Light Spectra of Super Earths with Flat Transmission Spectra. Astrophys. J. 815, 110 (2015).
Charnay, B., Bézard, B., Baudino, J.-L., Bonnefoy, M., Boccaletti, A., Galicher, R. A Self-consistent Cloud Model for Brown Dwarfs and Young Giant Exoplanets: Comparison with Photometric and Spectroscopic Observations. Astrophys. J. 854, 172 (2018).
Janson, M., et al. Direct Imaging Detection of Methane in the Atmosphere of GJ 504 b. Astrophys. J. 778, L4 (2013).
Macintosh, B., et al. Discovery and spectroscopy of the young jovian planet 51 Eri b with the Gemini Planet Imager. Science 350, 64-67 (2015).
Skemer, A. J., et al. The LEECH Exoplanet Imaging Survey: Characterization of the Coldest Directly Imaged Exoplanet, GJ 504 b, and Evidence for Superstellar Metallicity. Astrophys. J. 817, 166 (2016).
Cataldi, G., et al. ALMA Resolves C I Emission from the β Pictoris Debris Disk. Astrophys. J. 861, 72 (2018).
Wyatt, M. C. Evolution of debris disks. Annual Review of Astron. Astrophys. 46, 339-383 (2008).
Dominik, C., Decin, G. Age Dependence of the Vega Phenomenon: Theory. Astrophys. J. 598, 626 (2003).
Wyatt, M. C., et al. Steady State Evolution of Debris Disks around A Stars. Astrophys. J. 663, 365 (2007).
Matrà, L., Marino, S., Kennedy, G. M., Wyatt, M. C., Öberg, K. I., Wilner, D. J. An Empirical Planetesimal Belt Radius-Stellar Luminosity Relation. Astrophys. J. 859, 72 (2018).
Williams, J. P., Cieza, L. A. Protoplanetary Disks and Their Evolution. Annual Review of Astron. Astrophys. 49, 67-117 (2011).
Lee, E. J., Chiang, E., Ferguson, J. W. Optically thin core accretion: how planets get their gas in nearly gas-free discs. Mon. Not. R. Astron. Soc 476, 2199 (2018).
Freedman, R. S., et al. Gaseous Mean Opacities for Giant Planet and Ultracool Dwarf Atmospheres over a Range of Metallicities and Temperatures. Astrophys. J. Supplement Series 214, 25 (2014).
Tanigawa, T., Ohtsuki, K., Machida, M. N. Distribution of Accreting Gas and Angular Momentum onto Circumplanetary Disks. Astrophys. J. 747, 47 (2012).
Dorn, C., Mosegaard, K., Grimm, S. L., Alibert, Y. Interior Characterization in Multiplanetary Systems: TRAPPIST-1. Astrophys. J. 865, 20 (2018).
Robinson, T. D., Catling, D. C. Common 0.1 bar tropopause in thick atmospheres set by pressure-dependent infrared transparency. Nature Geoscience 7, 12-15 (2014).
Ormel, C. W., Shi, J.-M., Kuiper, R. Hydrodynamics of embedded planets' first atmospheres - II. A rapid recycling of atmospheric gas. Mon. Not. R. Astron. Soc 447, 3512-3525 (2015).
Béthune, W., Rafikov, R. R. Envelopes of embedded super-Earths - II. Three-dimensional isothermal simulations. Mon. Not. R. Astron. Soc 488, 2365-2379 (2019).
Fung, J., Artymowicz, P., Wu, Y. The 3D Flow Field Around an Embedded Planet. Astrophys. J. 811, 101 (2015).
D'Angelo, G., Bodenheimer, P. Three-dimensional Radiation-hydrodynamics Calculations of the Envelopes of Young Planets Embedded in Protoplanetary Disks. Astrophys. J. 778, 77 (2013).
Dullemond, C. P., et al. RADMC-3D: A multi-purpose radiative transfer tool. Astrophysics Source Code Library, ascl:1202.015 (2012).
Young, M. D., Clarke, C. J. Binary accretion rates: dependence on temperature and mass ratio. Mon. Not. R. Astron. Soc 452, 3085-3091 (2015).
Ragusa, E., Lodato, G., Price, D. J. Suppression of the accretion rate in thin discs around binary black holes. Mon. Not. R. Astron. Soc 460, 1243-1253 (2016).
Hughes, A. M., et al. Radial Surface Density Profiles of Gas and Dust in the Debris Disk around 49 Ceti. Astrophys. J. 839, 86 (2017).
Kral, Q., Latter, H. The magnetorotational instability in debris-disc gas. Mon. Not. R. Astron. Soc 461, 1614-1620 (2016).
Goodman, J., Rafikov, R. R. Planetary Torques as the Viscosity of Protoplanetary Disks. Astrophys. J. 552, 793 (2001).
Fung, J., Chiang, E. Save the Planet, Feed the Star: How Super-Earths Survive Migration and Drive Disk Accretion. Astrophys. J. 839, 100 (2017).
Lodders, K. Solar System Abundances and Condensation Temperatures of the Elements. Astrophys. J. 591, 1220 (2003).
Asplund, M., Grevesse, N., Sauval, A. J., Scott, P. The Chemical Composition of the Sun. Annual Review of Astron. Astrophys. 47, 481-522 (2009).
Simon, A., Öberg, K. I., Rajappan, M., Maksiutenko, P. Entrapment of CO in CO2 Ice. Astrophys. J. 883, 21 (2019).
Kral, Q., Wyatt, M., Carswell, R. F., Pringle, J. E., Matrà, L., Juhász, A. A self-consistent model for the evolution of the gas produced in the debris disc of β Pictoris. Mon. Not. R. Astron. Soc 461, 845-858 (2016).
Bézard, B., Lellouch, E., Strobel, D., Maillard, J.-P., Drossart, P. Carbon Monoxide on Jupiter: Evidence for Both Internal and External Sources. Icarus 159, 95-111 (2002).
Seager, S. Exoplanet Atmospheres: Physical Processes. Princeton University Press (2010).
Charnay, B., Bézard, B., Baudino, J.-L., Bonnefoy, M., Boccaletti, A., Galicher, R. A Self-consistent Cloud Model for Brown Dwarfs and Young Giant Exoplanets: Comparison with Photometric and Spectroscopic Observations. Astrophys. J. 854, 172 (2018).
Burrows, A., Hubbard, W. B., Lunine, J. I., Liebert, J. The theory of brown dwarfs and extrasolar giant planets. Reviews of Modern Physics 73, 719-765 (2001).
Morzinski, K. M., et al. Magellan Adaptive Optics First-light Observations of the Exoplanet β Pic b. II. 3-5 µm Direct Imaging with MagAO+Clio, and the Empirical Bolometric Luminosity of a Self-luminous Giant Planet. Astrophys. J. 815, 108 (2015).
Beichman, C. A., Greene, T. P. A White Paper Submitted to The National Academy of Science's Committee on Exoplanet Science Strategy: Observing Exoplanets with the James Webb Space Telescope. arXiv e-prints, arXiv:1803.03730 (2018).
Baudino, J.-L., et al. Interpreting the photometry and spectroscopy of directly imaged planets: a new atmospheric model applied to β Pictoris b and SPHERE observations. Astron. Astrophys. 582, A83 (2015).
Andrews, S. M., Rosenfeld, K. A., Kraus, A. L., Wilner, D. J. The Mass Dependence between Protoplanetary Disks and their Stellar Hosts. Astrophys. J. 771, 129 (2013).
Fulton, B. J., et al. The California-Kepler Survey. III. A Gap in the Radius Distribution of Small Planets. Astron. J. 154, 109 (2017).
Lee, E. J., Chiang, E. Breeding Super-Earths and Birthing Super-puffs in Transitional Disks. Astrophys. J. 817, 90 (2016).
Morbidelli, A., Wood, B. J. Late Accretion and the Late Veneer. Washington DC American Geophysical Union Geophysical Monograph Series 212, 71-82 (2015).
Brasser, R., Mojzsis, S. J., Werner, S. C., Matsumura, S., Ida, S. Late veneer and late accretion to the terrestrial planets. Earth and Planetary Science Letters 455, 85-93 (2016).
Grady, M. M., Wright, I. P. Elemental and Isotopic Abundances of Carbon and Nitrogen in Meteorites. Space Science Reviews 106, 231-248 (2003).
Schaefer, L., Fegley, B. Chemistry of atmospheres formed during accretion of the Earth and other terrestrial planets. Icarus 208, 438-448 (2010).
Gomes, R., Levison, H. F., Tsiganis, K., Morbidelli, A. Origin of the cataclysmic Late Heavy Bombardment period of the terrestrial planets. Nature 435, 466-469 (2005).
Marino, S., Bonsor, A., Wyatt, M. C., Kral, Q. Scattering of exocomets by a planet chain: exozodi levels and the delivery of cometary material to inner planets. Mon. Not. R. Astron. Soc 479, 1651-1671 (2018).
Wyatt, M. C., Kral, Q., Sinclair, C. A. Susceptibility of planetary atmospheres to mass loss and growth by planetesimal impacts: the impact shoreline. Mon. Not. R. Astron. Soc 491, 782-802 (2020).
Williams, J. P., Cieza, L. A. Protoplanetary Disks and Their Evolution. Annual Review of Astron. Astrophys. 49, 67-117 (2011).
Wyatt, M. C., Bonsor, A., Jackson, A. P., Marino, S., Shannon, A. How to design a planetary system for different scattering outcomes: giant impact sweet spot, maximizing exocomets, scattered discs. Mon. Not. R. Astron. Soc 464, 3385-3407 (2017).
Shuvalov, V., Kührt, E., de Niem, D., Wünnemann, K. Impact induced erosion of hot and dense atmospheres. Planetary and Space Science 98, 120-127 (2014).
Kral, Q., et al. Cometary impactors on the TRAPPIST-1 planets can destroy all planetary atmospheres and rebuild secondary atmospheres on planets f, g, and h. Mon. Not. R. Astron. Soc 479, 2649-2672 (2018).
Brasser, R., Duncan, M. J., Levison, H. F. Embedded star clusters and the formation of the Oort cloud. III. Evolution of the inner cloud during the Galactic phase. Icarus 196, 274-284 (2008).
Crisp, J. A. Rates of magma emplacement and volcanic output. Journal of Volcanology and Geothermal Research 20, 177-211 (1984).
Lowenstern, J. Carbon dioxide in magmas and implications for hydrothermal systems. Mineralium Deposita 36, 490-502 (2001).
| [] |
[
"Prepared for submission to JCAP Relativistic Angular Redshift Fluctuations embedded in Large Scale Varying Gravitational Potentials"
] | [
"Adal Lima-Hernández \nInstituto de Astrofísica de Canarias (IAC)\nC/Vía Láctea, s/n, La LagunaE-38205TenerifeSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna\nAvenida Francisco Sánchez, s/n, La LagunaE-38205TenerifeSpain\n",
"Carlos Hernández-Monteagudo \nInstituto de Astrofísica de Canarias (IAC)\nC/Vía Láctea, s/n, La LagunaE-38205TenerifeSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna\nAvenida Francisco Sánchez, s/n, La LagunaE-38205TenerifeSpain\n",
"Jonás Chaves-Montero \nDonostia International Physics Center\nPaseo Manuel de Lardizábal 4E-20018Donostia-San SebastiánSpain\n"
] | [
"Instituto de Astrofísica de Canarias (IAC)\nC/Vía Láctea, s/n, La LagunaE-38205TenerifeSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna\nAvenida Francisco Sánchez, s/n, La LagunaE-38205TenerifeSpain",
"Instituto de Astrofísica de Canarias (IAC)\nC/Vía Láctea, s/n, La LagunaE-38205TenerifeSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna\nAvenida Francisco Sánchez, s/n, La LagunaE-38205TenerifeSpain",
"Donostia International Physics Center\nPaseo Manuel de Lardizábal 4E-20018Donostia-San SebastiánSpain"
] | [] | We compute the linear order, general relativistic corrections to angular redshift fluctuations (ARF), a new cosmological observable built upon density-weighted twodimensional (2D) maps of galaxy redshifts. We start with an existing approach for galaxy/source counts developed in the Newtonian gauge, and generalize it to ARF, modifying for this purpose a standard Boltzmann code. Our calculations allow us identifying the velocity terms as the leading corrections on large scales, emphasizing the sensitivity of ARF to peculiar, cosmological velocity fields. Just like for standard 2D clustering, the impact of gravitational lensing on ARF is dominant on small angular scales and for wide redshift shells, while the signatures associated to gravitational potentials are extremely small and hardly detectable. The ARF also present interesting correlation properties to anisotropies of the Cosmic Microwave Background (CMB): they are highly correlated to CMB lensing potential fluctuations, while also exhibiting a significant (S/N∼ 4-5) anti-correlation with the Integrated Sachs-Wolfe effect (ISW). This negative ARF×ISW signal is quite complementary to the standard 2D clustering×ISW correlation, since the former appears mostly at higher redshift (z ∼ 2) than the latter (z 1), and the combination of the two observables significantly increases the χ 2 statistics testing the null (no ISW) hypothesis. We conclude that ARF constitute a novel, alternative, and potentially powerful tool to constrain the nature of Dark Energy component that gives rise to the ISW. | 10.1088/1475-7516/2022/09/038 | [
"https://export.arxiv.org/pdf/2203.15008v1.pdf"
] | 247,778,920 | 2203.15008 | 951b59dfd4bd850582ac65937c4cc311bdbe4ba6 |
Prepared for submission to JCAP Relativistic Angular Redshift Fluctuations embedded in Large Scale Varying Gravitational Potentials
28 Mar 2022
Adal Lima-Hernández
Instituto de Astrofísica de Canarias (IAC)
C/Vía Láctea, s/n, La LagunaE-38205TenerifeSpain
Departamento de Astrofísica
Universidad de La Laguna
Avenida Francisco Sánchez, s/n, La LagunaE-38205TenerifeSpain
Carlos Hernández-Monteagudo
Instituto de Astrofísica de Canarias (IAC)
C/Vía Láctea, s/n, La LagunaE-38205TenerifeSpain
Departamento de Astrofísica
Universidad de La Laguna
Avenida Francisco Sánchez, s/n, La LagunaE-38205TenerifeSpain
Jonás Chaves-Montero
Donostia International Physics Center
Paseo Manuel de Lardizábal 4E-20018Donostia-San SebastiánSpain
Prepared for submission to JCAP Relativistic Angular Redshift Fluctuations embedded in Large Scale Varying Gravitational Potentials
28 Mar 20221 Corresponding author.
We compute the linear order, general relativistic corrections to angular redshift fluctuations (ARF), a new cosmological observable built upon density-weighted twodimensional (2D) maps of galaxy redshifts. We start with an existing approach for galaxy/source counts developed in the Newtonian gauge, and generalize it to ARF, modifying for this purpose a standard Boltzmann code. Our calculations allow us identifying the velocity terms as the leading corrections on large scales, emphasizing the sensitivity of ARF to peculiar, cosmological velocity fields. Just like for standard 2D clustering, the impact of gravitational lensing on ARF is dominant on small angular scales and for wide redshift shells, while the signatures associated to gravitational potentials are extremely small and hardly detectable. The ARF also present interesting correlation properties to anisotropies of the Cosmic Microwave Background (CMB): they are highly correlated to CMB lensing potential fluctuations, while also exhibiting a significant (S/N∼ 4-5) anti-correlation with the Integrated Sachs-Wolfe effect (ISW). This negative ARF×ISW signal is quite complementary to the standard 2D clustering×ISW correlation, since the former appears mostly at higher redshift (z ∼ 2) than the latter (z 1), and the combination of the two observables significantly increases the χ 2 statistics testing the null (no ISW) hypothesis. We conclude that ARF constitute a novel, alternative, and potentially powerful tool to constrain the nature of Dark Energy component that gives rise to the ISW.
Introduction
Cosmologists study the Universe using different, complementary observational avenues to improve the constraining power of each observable by itself. Sub-millimeter observations of the Cosmic Microwave Background (CMB) from ground- and space-based experiments such as Planck [51], ACTPol [49], SPT-3G [6], or BICEP3 [1] have provided an exquisite view of the primeval, quasi-homogeneous, and quasi-isotropic universe at z ∼ 1050, setting strict constraints on the parameters defining the standard cosmological model and the inflationary epoch immediately following the Big Bang. Furthermore, these observations are sensitive to the low-redshift universe, as CMB photons interact with matter on their way towards us, giving rise to a plethora of so-called secondary effects, including gravitational lensing, which consists in the bending of CMB geodesics by intervening matter [8,18], the Sunyaev-Zeldovich effect [64-66], which is caused by the scattering of CMB photons off free electrons, and the Integrated Sachs-Wolfe (ISW) effect [55], which is induced by the time-evolution of large-scale gravitational potentials.
On the other hand, cosmologists study the late universe using optical and infrared observations of galaxies and quasars, aiming to build three-dimensional maps of the underlying density field. Using these surveys of the Large Scale Structure (LSS) of the universe, cosmologists extract precise cosmological information from multiple observables, including the spatial/angular distribution of galaxies [25], the distortions of galaxy shapes caused by gravitational lensing [44], and the abundance and size function of under-dense regions [33] and clusters of galaxies [2]. Extracting cosmology from LSS surveys requires modeling observational systematics and multiple physical ingredients such as how galaxies trace the matter density field, baryonic effects, and nonlinearities induced by gravity, which present a larger impact on LSS studies than on CMB analyses. In fact, there is a significant ongoing effort to constrain, model and/or marginalize over the uncertainties associated to observational systematics [11,16,53,60] and the aforementioned physical phenomena [3,5,20,21,57,58,61,62,72].
Ideally, the cosmological constraints obtained from CMB or LSS surveys should be compatible and complementary. In practice, the analysis of data from the latest CMB and LSS cosmological surveys has given rise to tensions [52] in the values of key cosmological quantities such as the Hubble expansion parameter, the amplitude of linear matter perturbations, and the effective amplitude of gravitational lensing. Because of this, a greater focus has been put on issues like precise error computation, confirmation bias, the impact of known and unknown systematics, and consistency tests via alternative cosmological probes. Among the latter, several new observables have been proposed. The abundance, shape and size, and spatial distribution of voids have proven to be highly cosmology-sensitive and have been the subject of research in the last few years [12,27]. Cosmic chronometers, first introduced by [32], are also becoming more widely used in cosmological parameter estimation [45,46,50,63]. The use of Fast Radio Bursts (FRBs) [26,31,39,40,48], intensity mapping [4,15,42], and redshift drifts [41,56] has also been suggested in a cosmological context.
This work focuses on the study of "density-weighted Angular Redshift Fluctuations" (ARF), a novel cosmological observable first introduced by [29] (Letter I hereafter) that refers to fluctuations in the redshift field sampled by galaxies selected under a particular redshift window. As shown in Letter I, fluctuations around the average redshift of sources selected under a window are sensitive to the growth rate of perturbations, large-scale galaxy bias, and the level of primordial non-Gaussianity. The constraining power of ARF was first shown in [30] (Letter II hereafter), where the authors leveraged the sensitivity of ARF to peculiar radial velocities to set strict constraints on the nature of gravity using galaxies from the Baryon Oscillation Spectroscopic Survey [23]. The constraining power of ARF for the upcoming LSS surveys Dark Energy Spectroscopic Instrument (DESI) [22] and Euclid [34] will be even more significant, and the combination of ARF and galaxy clustering has the potential to deliver ten times stricter constraints on the time-evolution of dark energy than galaxy clustering on its own [36]. Furthermore, [17] used the cross-correlation of ARF maps built upon state-of-the-art spectroscopic surveys and CMB maps from the Planck experiment to extract the highest significance detection (S/N ∼ 11) of the kinetic Sunyaev-Zeldovich effect [66] to date and characterize the distribution of intergalactic gas from z ∼ 0.1 to 5.
In this work, we further investigate the potential of the ARF by studying the linear order corrections from General Relativity (GR) [24]. Following the same strategy as [14] for density fluctuations, we derive first-order GR terms for ARF. Then, we modify the Boltzmann code CAMB sources [13] to estimate the ARF auto-angular power spectra and the cross-correlation of ARF and primordial CMB anisotropies. Finally, we investigate the potential of using ARF to measure the ISW effect, finding that the cross-correlation of ARF and CMB observations is sensitive to this effect at redshifts for which the sensitivity of the cross-correlation of galaxy clustering and CMB observations is low. These results let us conclude that the combination of ARF and galaxy clustering yields significantly more precise measurements of the ISW effect than galaxy clustering alone. About a week before the submission of this work, [43] presented an independent computation of the GR linear corrections to ARF: although the two works use different gauges, the comparison of their correction terms with ours shows very good agreement (as will be discussed below). Their forecasts also seem to be in good agreement with those of [36], at least for the cases where the parameter configurations of both works match each other.
In Sect. 2 we review the post-Newtonian derivation of the ARF, while in Sect. 3 we present the GR description of the ARF in the Newtonian gauge. This section first summarises very briefly the work by [14] (CL11 hereafter), of which a more detailed description can be found in Appendix A, and then outlines the basic steps required to obtain the ARF transfer functions from the ADF ones; a more detailed description of this derivation can be found in Appendix B. In Sect. 4 we present our results for the ARF auto power spectra and their cross-correlation with CMB observables (T, the E-mode of polarization, and the φ deflection potential). Finally, in Sect. 5 we discuss our results in the context of recent results and ISW science, and conclude.
Throughout this work, we shall consider a flat ΛCDM scenario compatible with Planck DR3 cosmology: Ω b = 0.049117, Ω Λ = 0.684857, Ω m = 0.315143, n s = 0.963 and h = 0.6726. Unless otherwise specified, we assume galaxy bias equal to unity (b g = 1). We use Greek indices running from 0 to 3 to denote spacetime variables, and Latin indices running from 1 to 3 to refer to the spatial part of a four-tensor.
Post-Newtonian description of Angular Redshift Fluctuations
The ARF observable
In this Section, we briefly introduce the reader to the ARF first presented in Letter I. In that work, the ARF field was defined as
\bar z + (\delta z)^{I}(\hat n) = \frac{\sum_{j\in\hat n} z_j\, W_j}{\sum_{j\in\hat n} W_j}, \qquad (2.1)
where the superscript I stands for this first ARF definition, z̄ denotes the average (monopole) redshift over the footprint, and W_j corresponds to a Gaussian weight given by W_j ≡ exp[−(z_obs − z_j)²/(2σ_z²)]. The sum index j runs through all galaxies falling in a sky region pointing to n̂. The weight W_j measures the distance of the redshift of the j-th galaxy to a Gaussian redshift shell of centre z_obs and width σ_z. Both z_obs and σ_z are chosen conveniently by the observer, and motivate the tomographic character of the ARF.
However, the denominator of Eq. 2.1 can be noisy (and even zero) for surveys with sparse sampling, and for practical reasons a second definition was proposed in Letter II:
(\delta z)^{II}(\hat n) = \frac{\sum_{j\in\hat n} (z_j - \bar z)\, W_j}{\big\langle \sum_{j\in\hat n} W_j \big\rangle_{\hat n}}, \qquad (2.2)
where the ensemble average ⟨...⟩_n̂ runs over all pixels n̂ in the survey's footprint, i.e., it is an area average. It can be shown that both definitions yield the same expressions under linear theory of cosmological perturbations, although their sensitivity to potential systematics biasing the observed number of tracers is not the same: it can easily be seen that (δz)^I is robust against multiplicative systematics, unlike (δz)^II, while both of them are robust against additive systematics that do not vary appreciably within the redshift shell. For the sake of simplicity, we shall hereafter adopt the second definition of the ARF, Eq. 2.2.
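For concreteness, the two estimators above can be sketched in a few lines of numpy; the pixelisation scheme, the synthetic catalogue, and all parameter values below are illustrative choices of ours, not the paper's:

```python
import numpy as np

def arf_maps(pix, z, z_obs=0.5, sigma_z=0.1):
    """Toy version of the two ARF estimators (Eqs. 2.1 and 2.2).

    pix : integer sky-pixel index of each galaxy (stand-in for its direction n-hat)
    z   : observed redshift of each galaxy
    """
    w = np.exp(-(z_obs - z) ** 2 / (2.0 * sigma_z ** 2))       # Gaussian weights W_j
    npix = pix.max() + 1
    sum_w = np.bincount(pix, weights=w, minlength=npix)        # sum_j W_j per pixel
    sum_wz = np.bincount(pix, weights=w * z, minlength=npix)   # sum_j z_j W_j per pixel
    zbar = sum_wz.sum() / sum_w.sum()                          # monopole redshift over the footprint
    dz_I = sum_wz / sum_w - zbar                               # Eq. 2.1: per-pixel denominator
    dz_II = (sum_wz - zbar * sum_w) / sum_w.mean()             # Eq. 2.2: area-averaged denominator
    return dz_I, dz_II

# synthetic catalogue: 10^4 galaxies spread over 4 pixels, redshifts near the shell centre
rng = np.random.default_rng(0)
pix = rng.integers(0, 4, size=10_000)
z = rng.normal(0.5, 0.05, size=10_000)
dz1, dz2 = arf_maps(pix, z)
```

For dense sampling the two estimators coincide to first order; note how a pixel-dependent multiplicative factor in the weights would cancel in dz_I but not in dz_II, as stated in the text.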
Post-Newtonian theoretical derivation
In the post-Newtonian limit, we can define the observed redshift of the j-th observed galaxy as
z_j = z_H + (1 + z_H)\,\mathbf v\cdot\hat n, \qquad (2.3)
where z H stands for the Hubble drift redshift 1 + z H = 1/a, a is the expansion factor, and v denotes the (physical/proper) peculiar velocity vector in units of the speed of light. Applying either of the two definitions of the ARF above, we obtain
\bar z + \delta z(\hat n) = \mathcal F[z_H] + \mathcal F\big[b_g\,\delta_m\,(z_H - \mathcal F[z_H])\big] + \mathcal F\left[\big(z_\phi + \mathbf v\cdot\hat n\,(1 + z_H)\big)\left(1 - \frac{d\log W}{dz}\,(z_H - \mathcal F[z_H])\right)\right] + \mathcal O_{\rm 2nd}, \qquad (2.4)
where b_g is the linear galaxy bias and δ_m is the matter density contrast. Note that we use linear perturbation theory to derive the previous equation, i.e. it only includes terms at first order in δ_m and v. Here we have defined the normalised functional
\mathcal F[Y] = \frac{\int dr\, r^2\, \bar n(r)\, W(z_{\rm obs} - z_H[r])\, Y(r)}{\int dr\, r^2\, \bar n(r)\, W(z_{\rm cen} - z_H[r])} = \frac{1}{\bar n_{\rm ang}} \int dr\, r^2\, \bar n(r)\, W(z_{\rm cen} - z_H[r])\, Y(r), \qquad (2.5)
with n̄_ang the average angular number density of galaxies under the considered Gaussian shell W(z) centred upon z_cen, and r the comoving radial distance. Note that z_φ here refers to redshift fluctuations of gravitational origin.
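The functional of Eq. 2.5 is simply a shell-weighted radial average, which can be checked numerically; the Hubble law, density profile, and grid below are toy assumptions of ours:

```python
import numpy as np

def F(Y, r, nbar, zH, z_obs=0.5, sigma_z=0.1):
    """Normalised functional of Eq. 2.5, evaluated as a Riemann sum on a uniform r grid."""
    W = np.exp(-0.5 * ((z_obs - zH) / sigma_z) ** 2)  # Gaussian redshift shell
    kern = r ** 2 * nbar * W                          # r^2 nbar(r) W(z_obs - z_H[r])
    return np.sum(kern * Y) / np.sum(kern)

# toy setup: low-redshift Hubble law z_H(r) = r / (c/H0) and a constant comoving density
r = np.linspace(1.0, 4000.0, 4000)   # comoving distance grid, Mpc/h (illustrative)
zH = r / 3000.0                      # c/H0 ~ 3000 Mpc/h
nbar = np.ones_like(r)

zbar = F(zH, r, nbar, zH)            # F applied to z_H itself: the shell's mean redshift
```

By construction F[1] = 1, while F[z_H] returns a value slightly above the shell centre because of the r² volume weighting.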
General Relativistic derivation of the ARF
In this section, we derive the linear order, general relativistic corrections to ARF in a flat universe, and we compute their sensitivity to different aspects of cosmological physics.
The framework for source number counts
Our starting point will be the observed number counts of galaxies per unit redshift and solid angle, given by n(n̂, z) dz dΩ, to which we shall refer hereafter as "angular density fluctuations" or ADF. As mentioned above, there is abundant literature computing the GR corrections to this observable [10,68,69,71], although in this work we shall follow closely the approach of CL11. We next briefly revisit the main findings of CL11 that we need to build upon in order to compute the GR corrections to ARF, and refer the reader to Appendix A for a more detailed derivation.
CL11 work in the Newtonian gauge in a flat Friedmann-Lemaître-Robertson-Walker (FLRW) Universe described by the metric
ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu = a^2(\eta)\left[-(1 + 2\psi)\, d\eta^2 + (1 - 2\phi)\,\delta_{ij}\, dx^i dx^j\right], \qquad (3.1)
where a(η) is the cosmological scale factor, ψ and φ are the scalar potentials perturbing this otherwise homogeneous metric, and where vector and tensor perturbations are ignored. CL11 compute the linear-order estimation of the redshift measured by an observer from a source emitting light at conformal time η:
1 + z(\eta) = \frac{a_o}{a(\eta)}\left(1 + \big[v_i e^i - \psi\big]^{\eta}_{o} + \int_{\eta_o}^{\eta}(\dot\psi + \dot\phi)\, d\eta\right) = \frac{a_o}{a(\eta)}\left(1 + \psi_o - \psi + \hat n\cdot[\mathbf v - \mathbf v_o] + \int_{\eta_o}^{\eta}(\dot\phi + \dot\psi)\, d\eta\right), \qquad (3.2)
where dotted variables denote derivatives with respect to η, and the vector v is the spatial part of the four-velocity of a comoving observer (the subscript "o" denoting the observer's position in space-time). CL11 map observed redshifts (which depend upon the metric perturbations) into perturbed radial coordinates of the sources. In this way, setting η = η_s + δη for a source at observed redshift z_s, such that 1 + z_s = a_0/a(η_s), the perturbation to the conformal time (δη) assigned to that source reads
H(\eta_s)\,\delta\eta \equiv \Delta z(\eta_s) = \psi_o - \psi + \hat n\cdot[\mathbf v - \mathbf v_o] + \int_{\eta_o}^{\eta_s}(\dot\phi + \dot\psi)\, d\eta + H_0\,\delta\eta_0, \qquad (3.3)
and this can be translated into the radial, comoving distance assigned to that source as
r(\hat n, z_s) = r_s + \delta r = \eta_o - \eta_s - \delta\eta - \int_{\eta_o}^{\eta_s}(\phi + \psi)\, d\eta. \qquad (3.4)
With this in mind, CL11 computed the angular power spectra associated to ADF as an integral of the curvature power spectrum P(k) and the squared ADF transfer function,
C_\ell^{\rm ADF} = \frac{2}{\pi} \int dk\, k^2\, \mathcal P(k)\, \big|\Delta_\ell^{\rm ADF,\,W}(k)\big|^2, \qquad (3.5)
where the ADF transfer function is given by
\Delta^{\rm ADF,\,W}_{\ell}(k) = \int_0^{\eta_o} d\eta\, \Bigg\{ W(\eta)\left[\delta_N\, j_\ell(kr) + \frac{kv}{\mathcal H}\, j''_\ell(kr)\right] + W_{\delta\eta}(\eta)\left[\psi\, j_\ell(kr) + v\, j'_\ell(kr)\right] + (\dot\psi + \dot\phi)\, j_\ell(kr) \int_0^{\eta} W_{\delta\eta}(\eta')\, d\eta' + (\phi + \psi)\, j_\ell(kr)\left[\int_0^{\eta} (2 - 5s)\, \frac{W(\eta')}{r'}\, d\eta' + \frac{\ell(\ell+1)}{2} \int_0^{\eta} \frac{r' - r}{r\, r'}\,(2 - 5s)\, W(\eta')\, d\eta'\right] + W(\eta)\, j_\ell(kr)\left[\frac{\dot\phi}{\mathcal H} + \psi + (5s - 2)\,\phi\right] \Bigg\}. \qquad (3.6)
This line-of-sight integral is performed with the window function W(η) = W(z)(1 + z)H, where W(z) is the observed redshift window function of the sources, and contains the metric scalar potentials ψ and φ, among other quantities (such as the lensing magnification bias parameter s) that are defined in CL11 and in Appendix A. In order to compute the ARF angular power spectra, and/or the cross-correlation of the ARF with any other cosmological field (e.g., the CMB temperature anisotropy field), we must compute the ARF transfer function, for which Eq. 3.6 is our starting point.
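Schematically, once a transfer function is available, Eq. 3.5 reduces to a one-dimensional k-integral. The sketch below uses a density-only, Gaussian-shell stand-in for the full Eq. 3.6 (all grids and amplitudes are illustrative choices of ours):

```python
import numpy as np
from scipy.special import spherical_jn

# toy line-of-sight setup (all numbers illustrative, not the paper's)
k = np.linspace(2e-3, 0.2, 1500)              # h/Mpc
dk = k[1] - k[0]
r = np.linspace(600.0, 2400.0, 241)           # Mpc/h, radial grid under the shell
dr = r[1] - r[0]
W = np.exp(-0.5 * ((r - 1500.0) / 300.0) ** 2)
W /= W.sum() * dr                             # normalised radial window

Pk = 2.1e-9 * np.ones_like(k)                 # ~ scale-invariant curvature amplitude

def cl(ell):
    # density-only stand-in for Eq. 3.6: Delta_l(k) = int dr W(r) j_l(kr)
    jl = spherical_jn(ell, np.outer(k, r))    # shape (nk, nr)
    delta = (jl * W).sum(axis=1) * dr
    # Eq. 3.5: C_l = (2/pi) int dk k^2 P(k) |Delta_l(k)|^2 (Riemann sum)
    return (2.0 / np.pi) * np.sum(k ** 2 * Pk * delta ** 2) * dk

cls = [cl(ell) for ell in (10, 50, 100)]
```

The same quadrature applies to the ARF spectra once the window is replaced by the modified kernel described in the next subsection.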
Modifications required by the ARF
For ARF the sources of the anisotropy are built not upon the number of matter probes (galaxies) only, but upon number-weighted redshift fluctuations with respect to an average redshift (z̄) computed under a redshift shell that, for simplicity, we take to be symmetric and Gaussian (Letter I). We thus modify the redshift window function present in Eq. 3.6 as
\mathcal W(z, \bar z) \equiv W(z)\,(z - \bar z), \qquad (3.7)
with z̄ = ∫ dz W(z) z for a normalized source window function W(z).
In practice, when implementing these modifications in the code CAMB sources, we had to further modify the integrals involving the spherical Bessel function derivatives (j'_ℓ(kr), j''_ℓ(kr)): these derivatives do not appear in the code, since they are removed via integration by parts, and this step had to be repeated under the new window function W(z, z̄). Since the code is written in the so-called CDM gauge (or zero-acceleration frame), we had to transform the equations to this frame, for which we used the sympy-based symbolic module of the Python wrapper of CAMB sources. A detailed description of these changes can be found in Appendix B.
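Independently of the CAMB sources internals, the effect of Eq. 3.7 can be illustrated directly: reweighting the source window by (z − z̄) yields a compensated kernel that integrates to zero (the window centre and width below are arbitrary choices of ours):

```python
import numpy as np

z = np.linspace(0.0, 3.0, 3001)
dz = z[1] - z[0]

# normalised Gaussian source window W(z), centred on z_obs (values illustrative)
z_obs, sigma_z = 1.0, 0.2
W = np.exp(-0.5 * ((z - z_obs) / sigma_z) ** 2)
W /= W.sum() * dz                  # int dz W(z) = 1

zbar = np.sum(W * z) * dz          # mean redshift under the window
W_arf = W * (z - zbar)             # Eq. 3.7: the ARF window W(z, zbar)
```

Because W_arf is odd about z̄, the ARF respond only to redshift-antisymmetric structure under the shell, which is why redshift-symmetric contributions (such as a slowly varying potential) largely cancel.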
Results
In Fig. 1 we display illustrative examples of the ARF auto angular power spectra and the ARF cross-power spectra with primordial CMB fluctuations for redshift windows with different central redshifts and widths, adopting s = 0 in this case. For the auto ARF power spectra (left panel), we find shapes very similar to those found in Letter I, with a flat profile at low multipoles and a decay on smaller angular scales (high multipoles). Only for the intermediate width (σ_z = 0.1) do we find some sensitivity of the ARF to the baryonic acoustic oscillations (BAO), since for this width the ARF are sensitive to the radial gradient of the source density on scales close to the BAO scale. The same argument applies to the turn-over peak of the matter power spectrum P_m(k), which becomes visible only for redshift widths close to the linear scale (∼ 1/k_peak) at which P_m(k) shows its maximum (σ_z = 0.35, or 1/k ∼ 1 h(z) h⁻¹ Gpc, with h(z) the dimensionless Hubble parameter h(z) ≡ H(z)/H_0).
The ARF cross-correlation with the CMB intensity/temperature is shown in the right panel of Fig. 1 and, depending on the shell width and central redshift, may flip sign at different angular scales. For low central redshifts of the shells the ARF×CMB cross-correlation is small and unmeasurable. Only at high redshifts (z > 1.5) and large widths (σ_z > 0.3) does an anti-correlation between the ARF and the CMB become measurable. As will be shown below in Sect. 4.2, this anti-correlation (which is enhanced by the lensing term) is built on top of the ISW effect in the CMB field, since it vanishes when the ISW term is dropped from the CMB anisotropy computation. The ARF thus provide an alternative window on dark energy via an ISW anti-correlation, although this requires probing the density field at high redshifts (1 < z < 3).
ARF angular auto-spectra
The ratio of the angular power spectrum with all relativistic corrections (C_ℓ^C) to the one considering only the redshift and lensing contributions (C_ℓ^{PN+len}), as usually considered in the literature, is displayed in Fig. 2, with results for ADF and ARF shown in dashed and solid lines, respectively. It shows the importance of other corrections, which we will see are velocity-related, at low multipoles for both ADF and ARF. Such GR corrections are, however, more important for ARF than for ADF in most cases, although restricted to very large angular scales (ℓ ≲ 10). As we move to higher ℓ, the lensing terms start to dominate over the other corrections for both ADF and ARF.

In what follows we show the relative amplitudes of the different terms in Eq. A.15 when neglecting all other contributions: RSD refers to redshift space distortions (the radial derivative of the velocity, quoted as the redshift term in Appendix B), lensing to the convergence term, velocity to the terms proportional to v·n̂ without the radial displacement (included in radial), and we break the terms of gravitational origin into the ISW term, the (Shapiro) time delay term, and a potential term for all remaining contributions from potentials. We again consider s = 0.
For a deeper insight into this behaviour, we study the different contributions to the linear-order ARF/ADF angular power spectrum separately. In Fig. 3 we single out the different terms appearing in Eq. A.15, and show the fractional difference with respect to the full result, ∆C_ℓ/C_ℓ^C, for the ARF (solid) and ADF (dashed) angular power spectra. Here, ∆C_ℓ ≡ |C_ℓ^{(i)} − C_ℓ^{NC}| refers to the absolute difference between the power spectrum obtained after adding only the correction of the i-th term and the uncorrected case (C_ℓ^{NC}).
The redshift term refers to the radial derivative of the peculiar velocity (∂v/∂r), and is usually included in a post-Newtonian approach (like, e.g., the one in Letter I). This term makes a major contribution for both ADF and ARF. Interestingly, ARF seem to be significantly more sensitive than ADF to all other velocity-related corrections (namely the velocity and the radial terms), at least for narrow shells. On the other hand, the relative importance of the lensing term (given by the green curves in Fig. 3) seems comparable for both ADF and ARF: for both observables the impact of lensing increases with the shell width, and becomes the most important linear correction on small angular scales/high multipoles. For wide shells, the ARF lensing term is below that of ADF at low redshifts, although both become similar at intermediate (z = 1) and high (z = 2) redshifts.
Finally, all terms related to gravitational potentials (time delay, ISW, and potentials) are generally smaller than the previous ones, just as CL11 found for ADF: the amplitude of these terms increases with the shell width (like for the lensing term). Note that these corrections for ARF are significantly smaller than for ADF, for reasons we discuss below.
ARF×CMB angular cross-spectra
In this sub-section we study the cross-correlation of the ARF with the primordial CMB temperature (T), E-mode polarization (E), and lensing deflection (φ) maps. Let X be any of the three CMB observables T, E, and φ, and Y either the ADF (δ_g) or the ARF (δ_z) field. We use the usual harmonic decomposition of an arbitrary function f(n̂) defined on the 2D sphere,

a^{f}_{\ell m} = \int d\hat n\; Y^{*}_{\ell m}(\hat n)\, f(\hat n),

where the a^f_{ℓm} are the multipoles of the function f(n̂), the Y_{ℓm}(n̂) are the usual spherical harmonics, and the star (*) denotes complex conjugation. The cross power spectrum between the fields X and Y is then defined as C_ℓ^{X,Y} = ⟨a^X_{ℓm}(a^Y_{ℓm})*⟩, with ⟨...⟩ denoting ensemble averages, and whose exclusive dependence upon ℓ follows from the (assumed) statistical isotropy of the fields. For any pair of the X and Y fields, we define χ²_ℓ as
\chi^2_\ell \equiv \frac{\big(C_\ell^{X,Y}\big)^2\,(2\ell + 1)\, f_{\rm sky}}{C_\ell^{XX}\, C_\ell^{YY} + \big(C_\ell^{X,Y}\big)^2}, \qquad (4.1)
where all non-cosmological sources of uncertainty (instrumental noise or foreground signal in the CMB fields, shot noise in ADF/ARF) are ignored, and where the factor (2ℓ + 1)f_sky denotes the number of degrees of freedom per multipole ℓ. In what follows we shall assume that f_sky = 1. For Gaussian X and Y fields, the ratio χ²_ℓ gives, per multipole, the squared signal-to-noise (S/N) ratio of the cross angular spectrum multipole over its uncertainty, i.e., (C_ℓ^{X,Y})²/σ²_{C_ℓ^{X,Y}}. As shown in Fig. 4, the ARF display non-trivial correlations with CMB observables. In particular, the left panel of this plot shows that for large widths (σ_z = 0.8), the ARF show a similar (albeit lower) level of (anti-)correlation with CMB anisotropies than the standard ADF observable. It turns out that this anti-correlation of the ARF with CMB temperature anisotropies (already shown in the right panel of Fig. 1) is triggered by the ISW. The dominant, density term of the ARF overlaps with the evolving potentials at z ≲ 2. Unlike ADF, the (z − z̄)δ_g term in the ARF kernel makes this observable sensitive to radial/redshift/time asymmetries under the redshift shell, and since only those potentials in the low-redshift wing of the Gaussian shell (z < z̄) are actually evolving, typically negative amplitudes of the ARF field will correlate with regions giving rise to positive ISW amplitudes, thus producing this anti-correlation. The sum of Eq. 4.1 over all multipoles (χ² ≡ Σ_{ℓ=2}^{ℓ_max} χ²_ℓ) equals, for z = 1 and cross-correlations with CMB temperature, χ² = (0.8)², (1.8)², and (5.6)² for ADF and σ_z = 0.05, 0.2, and 0.8, respectively, while the corresponding figures for ARF and z = 2 are χ² = (0.4)², (0.5)², and (4.9)². This shows that the ARF contain additional sensitivity to the ISW effect for large widths at high redshifts, as we shall explore further below.
The level of correlation of both ADF and ARF with the E-mode of CMB polarization is rather low (see middle panel of Fig. 4), giving rise to negligible cumulative values of χ²: the sum of this statistic over all multipoles up to ℓ_max = 2000 never reaches unity. On the contrary, the ARF are highly correlated with the lensing potential φ, although typically not at such a high level as the ADF. When integrating these linear-theory predictions up to ℓ_max = 2000, the cumulative S/N for the ADF/ARF×φ cross-correlations reaches or exceeds the level of one hundred for σ_z ≥ 0.1.
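The per-multipole statistic of Eq. 4.1 and its cumulative sum can be evaluated as follows; the power-law spectra and the 30% correlation coefficient are placeholders of ours, not the paper's spectra:

```python
import numpy as np

def chi2_per_ell(ell, cl_xy, cl_xx, cl_yy, fsky=1.0):
    """Eq. 4.1: per-multipole squared S/N of a cross angular power spectrum."""
    return cl_xy ** 2 * (2 * ell + 1) * fsky / (cl_xx * cl_yy + cl_xy ** 2)

# toy auto/cross spectra with a fixed 30% correlation coefficient at every ell
ell = np.arange(2, 301)
cl_xx = 1.0 / ell ** 2
cl_yy = 2.0 / ell ** 2
cl_xy = 0.3 * np.sqrt(cl_xx * cl_yy)

chi2_l = chi2_per_ell(ell, cl_xy, cl_xx, cl_yy)
chi2_total = chi2_l.sum()   # cumulative chi^2 = sum_{ell=2}^{ell_max} chi^2_ell
```

For a constant correlation coefficient ρ, Eq. 4.1 reduces to χ²_ℓ = ρ²(2ℓ+1)/(1+ρ²), so the cumulative χ² simply counts the available modes.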
A new window to the ISW effect
Given the sensitivity of the ARF to the ISW on large angular scales, we conduct here a more detailed study of how ISW constraints can be improved by adding ARF to cross-correlation studies.
In Fig. 5 we consider one single redshift shell of varying width (σ_z = 0.05, 0.2, and 0.8) placed at the central redshifts given by the abscissa axis. We also study the impact of galaxy bias by including a constant, linear bias value of b_g = 2 (depicted by dashed lines). Since we are looking at the ISW-induced ADF/ARF cross-correlation with CMB temperature anisotropies, we limit the cumulative sum of χ²_ℓ to ℓ_max = 300. This plot shows a different but complementary sensitivity of ARF compared to ADF: the latter show larger levels of ISW-induced correlation at intermediate to low redshifts (z ∈ [0.5, 1.5]), from intermediate-width to thick shells (σ_z = 0.2, 0.8). ARF instead show significant (χ² ∼ 20) angular ISW-induced (anti-)correlations at higher redshifts (typically z ≳ 2), and only for larger widths (σ_z = 0.8). At very high redshifts (z > 2.5), when the χ² for ARF and σ_z = 0.8 starts to fall, the corresponding statistic for ADF grows again: we have checked this is due to an anti-correlation between the ADF and the ISW anisotropies, extending up to high redshifts that cannot be easily accessed by upcoming LSS surveys due to the impact of reasonable levels of shot noise and values of the magnification bias parameter s. For the ARF×T anti-correlation under a shell centred upon z = 2 and σ_z = 0.8, the shot noise of a survey with average galaxy density n̄_ang ∼ 10⁷ sr⁻¹ (or ∼ 3000 galaxies per square degree) should degrade the ideal χ² from ∼ 5.1² down to ∼ 4.3² (b_g = 1). In Table 1 we show how the ISW S/N degrades in ADF/ARF×T cross-correlation analyses for different levels of shot noise and luminosity function slope s. However, detailed predictions for a given LSS survey should include precise estimations of both the number density and the bias versus redshift.

Figure 5. Cumulative χ² for the ADF/ARF×T angular cross-correlation for one single shell, versus its central redshift. We consider different shell widths, which scale with the thickness of the lines. Blue (green) colours refer to ADF (ARF), and dashed lines consider a linear, constant bias of b_g = 2. All multipoles from ℓ = 2 up to ℓ = 300 have been included when computing χ² = Σ_{ℓ_min}^{ℓ_max} χ²_ℓ.

We have verified that these cross-correlation amplitudes are built upon the ISW effect, since this χ² statistic falls to the level of unity when switching off the ISW contribution in our modified version of the CAMB sources Boltzmann code. This also motivates the choice of ℓ_max = 300, since most of the ISW signal is contained at low multipoles (ℓ < 100).

We can also see from Fig. 5 that the bias impacts the ADF×T more strongly than the ARF×T cross-correlation χ² statistics: this is due to the different scaling of the ADF/ARF auto- and cross-angular power spectra when introducing a bias greater than one. Both terms increase, but to a different extent that depends on the cross-talk between the density and lensing terms, which are dominant at these relatively large widths.
Table 1. We approximate the shot-noise angular power spectra as a constant given by C^SN = 1/n̄_ang and C^SN = σ_z²/n̄_ang for ADF and ARF, respectively. For both ADF/ARF observables, we adopt a single Gaussian shell of width σ_z = 0.8, with central redshifts located at z = 3 and z = 2 for ADF and ARF, respectively. These are roughly the central redshifts at which the corresponding χ² shows a local maximum for each probe. We consider different values for the magnification bias parameter s, including negative ones (see discussion).

We next address the question of how much sensitivity to the ISW can be gained by adding the ARF to the ADF in a cross-correlation analysis. We consider a data vector d containing cross angular power spectra of the type C^{ADF,T}, C^{ARF,T}, or (C^{ADF,T}, C^{ARF,T}), where the boldface indicates vectors containing all multipoles from ℓ_min = 2 up to ℓ_max, for all redshift shells under consideration. The Gaussian shell configuration is such that the shell centres sample a redshift interval [z_min, z_max], where all equally-spaced Gaussian redshift shells have widths given by the abscissa axis of Fig. 6. Wide shells with centres close to z_max will incorporate galaxies at redshifts above z_max. We adopt 3σ_z for the separation between adjacent redshift shells, so the number of shells is given by N_shells = (z_max − z_min)/(3σ_z). For our choices of σ_z, this means that the number of shells ranges from 50 down to one. After neglecting all non-cosmological sources of uncertainty (shot noise, CMB foreground emission, survey systematics, etc.) we compute the covariance matrix C ≡ ⟨d dᵗ⟩ − ⟨d⟩⟨dᵗ⟩ (here dᵗ denotes the transposed version of the d array). Our covariance matrix thus accounts for the non-zero correlation of ADF and ARF in all redshift shells under study. This way we define the χ² statistic as
\chi^2 \equiv \mathbf d\,\mathsf C^{-1}\,\mathbf d^{\,t}. \qquad (4.2)
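A minimal numerical sketch of Eq. 4.2: stack two cross-spectra into a single data vector, build the covariance C = ⟨d dᵗ⟩ − ⟨d⟩⟨dᵗ⟩ from realisations (here synthetic Gaussian draws), and contract. All inputs below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "measured" cross-spectra: n_sim realisations of a data vector that
# stacks two cross-spectra (e.g. ADF x T and ARF x T) over a few multipoles
n_sim, n_ell = 5000, 20
signal = np.concatenate([np.linspace(1.0, 0.5, n_ell),      # stand-in C_l^{ADF,T}
                         np.linspace(-0.6, -0.2, n_ell)])   # stand-in C_l^{ARF,T}
noise = rng.normal(size=(n_sim, 2 * n_ell))                 # independent here; correlated in general
d_sims = signal + noise

# covariance C = <d d^t> - <d><d^t>, then chi^2 = d C^-1 d^t (Eq. 4.2)
d_mean = d_sims.mean(axis=0)
cov = np.cov(d_sims, rowvar=False)
chi2 = d_mean @ np.linalg.solve(cov, d_mean)
```

With a non-diagonal C this contraction automatically accounts for the ADF-ARF cross-talk between shells mentioned above.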
In the left panel of Fig. 6 we take ℓ_max = 100 in our cross-correlation analysis, and adopt z_min = 0, z_max = 3. The estimated values of χ² for ADF are displayed by the blue line and blue circles: this line displays a rather flat behaviour for low values of σ_z (σ_z = 0.02-0.1, N_shells = 12-50), but decreases significantly when considering broader redshift shells (down to χ² ∼ 12 for σ_z = 0.8). Interestingly, the same statistic for the ARF×T cross-correlation shows a distinct pattern: as the green circles show, it has a minimum around σ_z = 0.1, but then increases up to χ² ∼ 16 for σ_z = 0.8. This agrees with our previous result in Fig. 5 pointing to a higher ARF sensitivity to the ISW for wide shells at z ∼ 2-3. When we combine both observables (ADF and ARF) in this cross-correlation analysis, we obtain the χ² statistics given by the red line: for a thin-shell configuration the typical gain of adding the ARF is about 10% in χ², i.e. ∼ 5% in S/N, although this improvement increases to ∼ 150% in χ² for wide redshift shells (σ_z = 0.8).
Provided that many LSS surveys may not be able to sample the z ∼ 3 universe densely, in the middle panel of Fig. 6 we decrease z_max to 2, finding that the main pattern found in the left panel is not significantly affected: the χ² for the ADF×T correlation decreases slightly (particularly when losing the wide shells at z > 2), but the one for ARF×T remains roughly unchanged. The χ² for the joint ADF+ARF analysis reflects the changes on the ADF side. Finally, in the right panel of Fig. 6 we increase ℓ_max to 300, finding no visible differences with the previous choice of ℓ_max = 100 (as expected for ISW cross-correlations, [28]). In this panel we again estimate the impact of galaxy bias by adopting b_g = 2 (dashed lines). As in Fig. 5, we find that b_g has a stronger impact on ADF than on ARF. Its impact on the joint ADF+ARF χ² statistic is rather modest (around 5%). Figs. 5 and 6 show that the ARF provide a different view of the ISW, alternative to that provided by ADF: by studying ARF on wide shells at z ∼ 2 we should be able to constrain the ISW with about half the accuracy (in units of χ²) achieved by cross-correlation analyses with ADF/2D clustering at z ≲ 1.
Discussion and Conclusions
Our analyses have shown that relativistic corrections for ARF are particularly relevant for all terms involving peculiar velocities (redshift, radial, and velocity). Lensing is also important, especially for wide shells on small angular scales, while the other gravitational-potential terms are negligible. These results match the expectations derived from the post-Newtonian approach to the ARF already presented in Letter I: given the extra factor (z − z̄) present in the ARF kernel relative to the ADF kernel, this observable is typically sensitive to those cosmological fields that vary significantly within the redshift shell width, thus becoming ideal for probing cosmic densities and velocities for shell widths of the order σ_z ∼ 0.01-0.1. Gravitational potentials typically vary on yet larger scales, and would a priori require wider shells: we find that indeed the contribution from gravitational-potential terms (time delay, ISW, potentials) increases with the redshift shell width (e.g., one can easily compare the amplitudes of these terms in the left and right columns of Fig. 3), but their intrinsically low amplitude (compared to typical density and velocity contributions) makes their detection challenging. The fact that the (z − z̄) kernel makes the ARF sensitive only to radial/redshift gradients of the sources is also reflected in the right panel of Fig. 4: ARF only pick up the radial-gradient part of the φ×δ_g cross-correlation (missing the constant or redshift-symmetric part), and for this reason χ²_ℓ is lower for ARF×φ than for ADF×φ.
As in Letter I, we find that corrections involving the velocity field are typically higher for ARF than for ADF, particularly for narrow (σ_z ∼ 0.01-0.02) redshift shells, for which opposite-sign cancellations of line-of-sight integrals are less likely. The ARF thus constitute a sensitive tool to measure peculiar velocities and the nature of gravity on cosmological scales, as already demonstrated in Letter II. Our efforts will next focus on the estimation of ARF statistics in the mildly non-linear regime, and for that we shall resort to cosmological numerical simulations.
During the final editing stage of this work, [43] presented a parallel work where the ARF GR corrections at first order are computed in an arbitrary gauge. These authors are concerned about the gauge dependence of redshift fluctuations as defined in Eq. A.5. In their work they argue that redshift fluctuations must be defined with respect to an average redshift z̄, estimated from the observed redshifts of the galaxy sample under W(z). They conclude with a definition of their "redshift-weighted number counts" which coincides with the ARF definition provided at the beginning of Letter I.⁵ Their computation of the linear-order GR corrections coincides with ours in the formulation (the addition of the new modified window function 𝒲(z) ≡ (z − z̄)W(z)), and also in the numerical results: when comparing with their Fig. 1, the amplitudes of the total contribution to the angular power spectrum are in full agreement with ours. We also find perfect agreement if we identify their "large-scale gravitational potential terms" with the joint contribution of our time delay, ISW, and potential corrections. Likewise, their RSD angular power spectrum coincides with the sum of our redshift, radial, and velocity terms. Finally, after adopting the value s = 0.81 (motivated by their Eq. 4.2 for s(z) evaluated at z ∼ 1), we obtain a lensing contribution to the C_ℓ's whose amplitude and shape are practically indistinguishable from theirs.
A relevant aspect of our analysis involves the ARF×ISW cross-correlation. The ISW constitutes one of the very few windows we have on the nature of Dark Energy, and a very distinct one, since it is the only observable measuring the impact of the universe's expansion rate on the evolution of gravitational potentials. However, it involves such large scales that it is heavily affected by cosmic variance, and so far the statistical significance of the ISW is relatively weak (∼ 4σ if one includes the CMB lensing map, ∼ 3σ if one restricts the cross-correlation analysis to the CMB temperature map and LSS surveys, [19]). This type of analysis also involves the largest observable scales in the current universe, where GR effects should be more important but whose sampling is most difficult. In this context, we have shown that the ARF's sensitivity to the ISW complements nicely that of the traditional approaches based on ADF: ARF must be strongly anti-correlated with CMB temperature anisotropies when probing the LSS at z ∼ 2 under wide redshift shells, and this is a significantly different universe from that where ADF are most sensitive to the ISW (z ∈ [0.5, 1]). An interesting aspect lies in the dependence of the statistical significance of the ARF×ISW cross-correlation on the slope s of the source luminosity function. A priori, one should always expect s to remain positive for a flux/magnitude-limited sample, and this is indeed the case in the forecasts for flux-limited surveys like Euclid and SKA ([9, 37]). However, the value of s can be negative for galaxy samples selected according to other criteria; for instance, s is well below zero for BOSS galaxies (see [35]). This is because the BOSS survey aimed to observe the reddest, brightest galaxies, and at fixed redshift the fraction of red galaxies decreases with magnitude.
This suggests the possibility of exploring, within large photometric samples, new sub-sample definitions for which s(z) becomes negative, hence enhancing the S/N of the ARF×ISW cross-correlation.
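The slope s = ∂log₁₀N̄(z, L > L*)/∂m* (defined in Appendix A) can be estimated directly from the magnitude counts of a sample. A toy sketch, assuming Euclidean cumulative counts N(< m) ∝ 10^{0.6m} (for which s = 0.6; a colour-selected sample like BOSS would instead yield a negative value); the mock catalogue and magnitude range are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# mock apparent magnitudes with Euclidean cumulative counts N(<m) ∝ 10^{0.6 m},
# drawn by inverse-transform sampling on m ∈ [18, 24]
m_lo, m_hi = 18.0, 24.0
norm = 10**(0.6 * m_hi) - 10**(0.6 * m_lo)
u = rng.random(200_000)
m = np.log10(u * norm + 10**(0.6 * m_lo)) / 0.6

def slope_s(mags, m_star, dm=0.2):
    """Finite-difference estimate of s = d log10 N(<m*) / d m*."""
    n_hi = np.sum(mags < m_star + dm)
    n_lo = np.sum(mags < m_star - dm)
    return (np.log10(n_hi) - np.log10(n_lo)) / (2.0 * dm)

s_hat = slope_s(m, m_star=22.0)   # should recover the Euclidean value 0.6
```

For a real sample, one would apply the same finite difference to the observed cumulative counts above the flux limit, per redshift bin, to obtain s(z).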
In this work we have explored the ARF in the context of general relativistic linear-order perturbation theory. We have followed the approach of CL11 and generalized their expressions for the source counts in the Newtonian gauge to the ARF, by including the observed redshift in the total observable under study. We have computed the linear-order general relativistic corrections, and found that all terms related to velocity introduce measurable effects. The corrections due to lensing are also detectable, and are actually dominant on small angular scales for intermediate-width to wide (σ_z ≳ 0.1) redshift shells. All other terms, related to the gravitational potentials, introduce negligibly small corrections in all cases.
We have also studied the ARF cross-correlation with CMB intensity, polarization, and lensing potential fluctuations. Similarly to ADF, the ARF show negligible correlation with the E-type CMB polarization, but a high level of correlation with the CMB lensing potential field φ. This correlation for ARF×φ is not as high as for ADF×φ, since the ARF only pick up the time/radial gradient of the density-potential correlation due to the (z − z̄) part of the kernel. The ARF present alternative correlation properties with CMB temperature fluctuations in the presence of the ISW effect. The density term in the ARF kernel gives rise to an anti-correlation with the ISW temperature anisotropies if the redshift shells are wide (σ_z ≳ 0.5) and are placed at a high redshift (z ∼ 2). The ideal S/N of this anti-correlation lies at the 4-5σ level, comparable to (albeit lower than) that of the ADF×T cross-correlation, although it arises at complementary redshift and shell-width ranges. By combining ADF and ARF in cross-correlation studies with the CMB intensity maps, the χ² statistic against the null hypothesis (no ISW) is increased by about ∼ 5-10 % when conducting LSS tomography using narrow shells in z ∈ [0, 2], and by ∼ 150 % when resorting to a few wide (σ_z ∼ 0.8) redshift shells in the same redshift range. The ARF thus provide a novel, potentially powerful, and complementary tool to test the nature of dark energy in the late universe.
Bertacca, and Dr. Chema Diego for useful discussions. CHM acknowledges partial support from the Spanish Ministry of Science, Innovation and Universities (MCIU/AEI/FEDER, UE) through the grant PGC2018-097585-B-C21. JCM likewise acknowledges partial support from the partner grant PGC2018-097585-B-C22.
A General relativistic derivation of the source number counts
This section outlines the main findings of CL11 on the GR first-order corrections to source number counts (or ADF), and is included here as a quick reference for the main text.
We shall consider a flat Friedmann-Lemaître-Robertson-Walker (FLRW) Universe described by the metric
ds² = g_{µν} dx^µ dx^ν = a²(η)[(1 + 2ψ)dη² − (1 − 2φ)δ_{ij} dx^i dx^j], (A.1)
written in the conformal-Newtonian gauge⁶, with a the scale factor at conformal time η, and φ and ψ the two scalar potentials [47].
A.1 Redshift perturbations
We must express n(n̂, z)(z − z̄) dz dΩ in terms of covariant quantities which transform under coordinate transformations as dictated by the space-time metric. The observed redshift can be trivially expressed as a function of the contraction of covariant 4-vectors following its definition
1 + z_s = (k_µ u^µ)_s / (k_µ u^µ)_o, (A.2)
where k^µ is the photon null momentum and u^µ denotes the 4-velocity of galaxies, and subscripts s and o refer to quantities evaluated at the source's and the observer's positions, respectively. For a photon moving along a geodesic x^µ(λ), with λ an affine parameter along the geodesic, we can define its null momentum k^µ = dx^µ/dλ as
k^0 = −(ν̄/a)(1 + δν),  k^i = (ν̄/a)(e^i + δe^i), (A.3)
in the observer's rest frame⁷. Here we define ν̄ as the photon frequency⁸ and e^i as the photon propagation direction measured by the observer in a homogeneous universe (ē ≡ n̂); δν and δe^i are their respective dimensionless corrections as we expand the null vector to first order in perturbations, which can be related to the metric perturbations φ, ψ by solving the null and geodesic equations. For this choice of coordinates we can define the 4-velocity of a comoving observer as
u^µ = dx^µ/√(−ds²) = dx^µ/dτ, (A.4)
with components u^0 = a⁻¹(1 − ψ) and u^i = a⁻¹v^i. Here τ is the proper time along the observer's worldline and v^i(η, n̂) ≪ 1 is the physical, peculiar velocity in units of the speed of light. As previously stated, the redshift at conformal time η along the line of sight is given by the ratio of photon energies at the source's and observer's positions (Eq. A.2), so up to linear order
1 + z(η) = (a_o/a(η)) [1 + ψ_o − ψ + n̂·(v − v_o) + ∫_η^{η_o} (φ̇ + ψ̇) dη'], (A.5)
where dotted variables denote derivatives with respect to the conformal time η. Furthermore, in a homogeneous and isotropic universe the scale factor at the observer's position, a(η_o) = a_o, is a constant. However, taking into account perturbations in the conformal time at the observer's position due to local gravitational potential effects,
a_o = a(η_o + δη_o) ≈ a(η_o) + ȧ_o δη_o = a(η_o)(1 + H_o δη_o), (A.6)
with H_o = ȧ(η_o)/a(η_o) the conformal Hubble parameter⁹. This way we can express the redshift to linear order as
1 + z(η) = (a_o/a(η))(1 + Δz), (A.7)
with Δz depending on the gravitational potentials and peculiar velocities at the observer's and source's positions, and on the evolution of these potentials.¹⁰ Following again CL11, we can map these perturbations of the observed redshift to perturbations of the conformal time we assign to sources. Setting η = η_s + δη for a source at observed redshift z_s, such that 1 + z_s = a_o/a(η_s), following Eq. (A.7) one can express, to linear order,

7 All subsequent calculations will be performed in the observer's rest frame.
8 In this case, the bar denotes a background quantity and not a mean.
9 The addition of this constant term only impacts the monopole and is thus unobservable.
10 Although we have taken the spatial part of the metric to be g_{ij} = a²(η)(1 − 2φ)δ_{ij}, the result holds for the more general choice g_{ij} = a²(η)(1 − 2φ)ḡ_{ij}, where ḡ_{ij} is the 3-space background metric.
H(η_s) δη = Δz(η_s) = ψ_o − ψ + n̂·(v − v_o) + ∫_{η_s}^{η_o} (φ̇ + ψ̇) dη + H_o δη_o, (A.8)
and in the same way CL11 write the (perturbed) radial position of a photon at z_s as
r(n̂, z_s) = r_s + δr = η_o − η_s − δη − ∫_{η_s}^{η_o} (φ + ψ) dη. (A.9)
Note that in this expression the lensing-induced transversal shift in the apparent position of a source does not appear, since it does not affect its radial coordinate. We conclude this subsection by stressing that the perturbations present in the observed redshift are mapped into the (perturbed) radial coordinates assigned to observed sources.
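The background part of the mapping between redshift and conformal-time perturbations in Eq. (A.8) can be verified numerically. A minimal sketch with a toy matter-dominated background (a ∝ η², so H = 2/η; all numbers illustrative):

```python
import numpy as np

def a(eta, eta_o=1.0):
    # toy matter-dominated background: a ∝ eta^2, normalized so a(eta_o) = 1
    return (eta / eta_o)**2

def Hc(eta):
    # conformal Hubble rate a'/a for a ∝ eta^2
    return 2.0 / eta

eta_o, eta_s = 1.0, 0.5
dz = 1e-4                      # a small fractional redshift perturbation Delta z
deta = dz / Hc(eta_s)          # Eq. (A.8): H(eta_s) * delta_eta = Delta z
# the shifted conformal time eta_s + deta together with the perturbation
# Delta z must reproduce the same observed redshift at first order, Eq. (A.7):
lhs = a(eta_o) / a(eta_s)                        # 1 + z_s, background
rhs = a(eta_o) / a(eta_s + deta) * (1.0 + dz)    # perturbed, Eq. (A.7)
assert abs(lhs - rhs) / lhs < 1e-7               # agreement up to O(dz^2)
```

The residual is quadratic in Δz, confirming that H δη = Δz is the correct first-order mapping.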
A.2 Corrections to the angular source number counts
We again follow the approach of CL11 for expressing the galaxy angular number counts n(n̂, z) in a covariant form; hereafter we shall refer to the anisotropies of these standard 2D, angular source number counts as "angular density fluctuations" or ADF. The determinant of the Jacobi map, det D [38, 54, 59], relates the solid angle subtended at the observer to the area covered at the source's position (Eqs. A.10-A.11). With it, CL11 compute the linear-order perturbations δ_n(n̂, z, m < m*) to the observed angular number counts n(n̂, z, m < m*) = n̄(z, m < m*)[1 + δ_n(n̂, z, m < m*)] per unit angle and redshift interval, providing the following expression, in which terms are sorted according to their amplitude/importance:
δ_n(n̂, z, m < m*) = δ_N^{L>L*} − (1/H) n̂·(∂v/∂r) + (5s − 2)[κ − (1/r)∫_η^{η_o} (φ + ψ) dη']
+ [(2 − 5s)/(Hr) + 5s − ∂ln(a³N̄^{L>L*})/(H∂η) + Ḣ/H²][ψ + ∫_η^{η_o} (φ̇ + ψ̇) dη' − n̂·v]
+ (1/H)φ̇ + ψ + (5s − 2)φ. (A.12)
This expression neglects uninteresting monopole and dipole terms, and refers to the relative source-count perturbations for sources brighter than an apparent magnitude m* (or, conversely, more luminous than a threshold luminosity L*). In the expression above s is defined as s = ∂log₁₀N̄(z, L > L*)/∂m*, with N̄(z, L > L*) the background physical (proper) number density of sources above the luminosity threshold L*. The convergence κ appears in the determinant of the Jacobi map D defined above (see e.g. [38]), and is defined as¹¹
κ(n̂, η) = −(1/2) ∇²_{n̂} ∫_η^{η_o} dη' [(η' − η)/((η_o − η)(η_o − η'))] (ψ + φ), (A.13)
where ∇²_{n̂} is the Laplacian on the 2D sphere, and η_o refers to the conformal time at the observer's position. In Eq. A.12, the first term (δ_N) constitutes the intrinsic, Newtonian-gauge source number perturbation, which is followed by the so-called redshift-space distortion (RSD) term expressing the radial gradient of the source's peculiar velocity (H⁻¹ n̂·∂v/∂r). This term is followed by the convergence (κ) lensing term, plus other (usually sub-dominant) terms associated with the Shapiro time delay, number-count evolution, the integrated Sachs-Wolfe effect (ISW), and other potential terms.
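The geometric weight (η' − η)/((η_o − η)(η_o − η')) in Eq. (A.13) equals (r − r')/(r r') in terms of comoving distances, the combination that reappears in the switched-order lensing term of Eq. (A.15). A toy sketch tabulating that window-integrated weight (units with c = 1, η_o = 1; window and grid purely illustrative):

```python
import numpy as np

eta_o, s = 1.0, 0.0
eta = np.linspace(1e-3, 0.999, 1500)        # conformal-time grid (c = 1)
deta = eta[1] - eta[0]
r = eta_o - eta                              # comoving distance to time eta
W = np.exp(-0.5 * ((eta - 0.6) / 0.05)**2)   # toy source window W(eta)
W /= W.sum() * deta                          # unit normalization

# switched-order lensing weight of Eq. (A.15): at each lens time eta[i],
# integrate (r - r')/(r r') (2 - 5s) W(eta') over sources with eta' < eta[i]
g = np.zeros_like(eta)
for i in range(1, len(eta)):
    g[i] = np.sum((r[i] - r[:i]) / (r[i] * r[:i])
                  * (2.0 - 5.0 * s) * W[:i]) * deta

# the weight vanishes before the photon has passed the sources ...
assert np.abs(g[eta < 0.3]).max() < 1e-6
# ... and is non-zero (negative, with these sign conventions) afterwards
assert g[-1] < 0.0
```

The sign simply reflects the conventions of Eqs. (A.12)-(A.13); what matters physically is that the weight only builds up between the sources and the observer.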
A.3 Transfer function for ADF
In order to compute the angular power spectrum of the relative source-count fluctuations, CL11 integrate Eq. A.12 along the line of sight for every mode in Fourier space, and then project the result of that integral in k-space against the primordial curvature power spectrum:
C_ℓ^{ADF} = (2/π) ∫ dk k² P(k) |Δ_ℓ^{ADF,W}(k)|². (A.14)
In this equation, P(k) is the primordial curvature power spectrum, and Δ_ℓ^{ADF,W}(k) is the transfer function containing the line-of-sight integral of Eq. A.12 after expanding the plane waves in spherical Bessel functions (j_ℓ(x)'s):
Δ_ℓ^{ADF,W}(k) = ∫_0^{η_o} dη { W(η)[δ_N j_ℓ(kr) + (kv/H) j_ℓ''(kr)] + W_δη(η)[ψ j_ℓ(kr) + v j_ℓ'(kr)]
+ (ψ̇ + φ̇) j_ℓ(kr) ∫_0^η W_δη(η') dη'
+ (φ + ψ) j_ℓ(kr) [∫_0^η (2 − 5s) (W(η')/r') dη' + (ℓ(ℓ+1)/2) ∫_0^η ((r − r')/(r r')) (2 − 5s) W(η') dη']
+ W(η) j_ℓ(kr) [(1/H)φ̇ + ψ + (5s − 2)φ] }. (A.15)
Here W(η) = W(z)(1 + z)H, with W(z) the observed redshift window function of the sources, whereas this other window function
W_δη(η) ≡ [(2 − 5s)/(Hr) + 5s − ∂ln(a³N̄^{L>L*})/(H∂η) + Ḣ/H²] W(η) (A.16)

11 In the literature different definitions of the convergence have been adopted, including different terms that CL11 include separately. See [7, 70].
contains the terms accounting for the source evolution with redshift (∝ ∂(a³N̄)/∂η). In Eq. A.15, the µ ≡ n̂·k̂ terms have been integrated by parts (giving rise to derivatives of the spherical Bessel functions), and the order of the double line-of-sight integrals has been switched.
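Eq. (A.14) can be evaluated directly once a transfer function is tabulated. A toy sketch keeping only the density term of Eq. (A.15) (δ_N = 1, all other terms dropped), with an illustrative window and a scale-invariant P(k) ∝ k⁻³; scipy is assumed available:

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import simpson

eta_o = 1.0
eta = np.linspace(0.55, 0.85, 200)            # line-of-sight grid (c = 1)
r = eta_o - eta                                # comoving distance
W = np.exp(-0.5 * ((eta - 0.7) / 0.05)**2)     # toy window W(eta)
W /= simpson(W, x=eta)                         # unit normalization

k = np.logspace(-1, 2, 400)
Pk = 1.0 / k**3                                # toy scale-invariant spectrum

cl = []
for ell in (5, 20, 50):
    # density-only transfer function: Delta_ell(k) = ∫ d_eta W(eta) j_ell(k r)
    jl = spherical_jn(ell, np.outer(k, r))
    delta = simpson(W[None, :] * jl, x=eta, axis=1)
    # Eq. (A.14): C_ell = (2/pi) ∫ dk k^2 P(k) |Delta_ell(k)|^2
    cl.append((2.0 / np.pi) * simpson(k**2 * Pk * delta**2, x=k))
```

A production calculation (as in CAMB sources) would of course include all the terms of Eq. (A.15) and a realistic primordial spectrum; the structure of the computation, however, is the one above.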
B Transfer functions in the CDM frame
Defining the Newtonian velocity as a gauge-invariant quantity v_N = v + σ (from which one can infer that σ = 0 in the Newtonian gauge), and accounting for the fact that in the zero-acceleration frame the CDM velocity v = 0, one finds that v_N ≡ σ in the CDM frame. Moreover, in this frame ψ = φ. Hence, we can rewrite Eq. (A.12) in this frame and explicitly integrate by parts to eliminate the spherical Bessel function derivatives. As done in the CAMB sources code, we present this result distinguishing among the different terms included in Eq. (A.12), describing the exact changes introduced towards the computation of the ARF angular power spectrum. Among all relativistic corrections, we neglect the one associated with the evolution of sources, since we assume that the effective source redshift window function is actually a sub-sample of the total, underlying galaxy sample with a Gaussian shape.
Density term
This term refers to the intrinsic, underlying perturbation in the background number density of sources under the Gaussian shell with luminosity exceeding L*. Following its definition in the Newtonian gauge in CL11,
δ_n = b δ_m^{syn} + (ṅ_s/n_s)(v/k) = b δ_m^{syn} + [d ln(a³n̄_s)/dη − 3H](v/k), (B.1)
it is readily expressed in the CDM frame after substituting v with σ, as the bias (as classically defined) is defined in an orthogonal, synchronous and comoving (v/k = 0) gauge [67]. After multiplication by the kernel (z − z̄), such that 𝒲(η) ≡ (z − z̄)W(η), we obtain the equivalent expression for the density term in the ARF transfer function:
Δ_ℓ^{Den}(k) = ∫_0^{η_o} dη 𝒲(η) [b δ_m^{syn} + (d ln(a³n̄_s)/dη − 3H)(σ/k)] j_ℓ(kr[η]). (B.2)
We stress that for ARF computations we shall assume we are taking a sub-sample of the observed galaxy sample following a Gaussian redshift distribution, and thus we shall always ignore the intrinsic time evolution of the comoving density of sources (d ln(a³n̄_s)/dη) present in the square bracket of Eq. B.2.
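The Gaussian sub-sample window and the corresponding ARF kernel 𝒲(z) = (z − z̄)W(z) are straightforward to build; a minimal sketch (with illustrative numbers), showing that the kernel has zero area, i.e., that ARF are blind to the redshift-constant part of any field:

```python
import numpy as np

z = np.linspace(0.0, 3.0, 3001)
dz = z[1] - z[0]
W = np.exp(-0.5 * ((z - 1.0) / 0.2)**2)   # Gaussian sub-sample window
W /= W.sum() * dz                          # unit normalization
zbar = (z * W).sum() / W.sum()             # mean redshift under W(z)
calW = (z - zbar) * W                      # ARF kernel (z - zbar) W(z)

# the ARF kernel integrates to zero: a field constant in redshift contributes
# nothing, so ARF pick up radial/redshift gradients only
assert abs(calW.sum() * dz) < 1e-9
```

In a density-weighted implementation z̄ would itself be estimated from the observed galaxy redshifts under W(z), but the zero-area property of the kernel is the same.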
Redshift term or redshift space distortion term
This term refers to the radial gradient of the line-of-sight velocity, and is expressed in Eq. (A.15) through the second derivative of the spherical Bessel function j_ℓ(kr). In order to obtain it we first note that CAMB sources neglects the contribution of the anisotropic stress Π in the source integration. Hence, for a flat universe with zero anisotropic stress, we can express the evolution equations for the scalar shear σ as

σ̇ = −2Hσ + η_k,  σ̈ = (−2Ḣ + 4H²)σ − 2Hη_k + q̄/2, (B.3)

with η_k = k η_s for η_s the usual synchronous-gauge scalar perturbation, and q̄ = 8πGa²q the total heat flux, so that η̇_k = q̄/2. We also rewrite the Friedmann equation in terms of the rescaled total density ρ̄ = 8πGa²ρ and pressure p̄ = 8πGa²p.
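The two expressions in Eq. (B.3) are mutually consistent: differentiating the first and using η̇_k = q̄/2 reproduces the second. A short symbolic check (sympy assumed available):

```python
import sympy as sp

eta = sp.symbols('eta', positive=True)
H = sp.Function('H')(eta)          # conformal Hubble rate
sigma = sp.Function('sigma')(eta)  # scalar shear
eta_k = sp.Function('eta_k')(eta)  # eta_k = k * eta_s
qbar = sp.Function('qbar')(eta)    # total heat flux, qbar = 8 pi G a^2 q

# first equation of (B.3): sigma' = -2 H sigma + eta_k
sigma_dot = -2*H*sigma + eta_k
# differentiate it, substituting sigma' back in and eta_k' = qbar/2
sigma_ddot = sp.diff(sigma_dot, eta).subs({sp.diff(sigma, eta): sigma_dot,
                                           sp.diff(eta_k, eta): qbar/2})
# second equation of (B.3): sigma'' = (-2 H' + 4 H^2) sigma - 2 H eta_k + qbar/2
expected = (-2*sp.diff(H, eta) + 4*H**2)*sigma - 2*H*eta_k + qbar/2
assert sp.simplify(sigma_ddot - expected) == 0
```

This confirms that only the first equation and the constraint η̇_k = q̄/2 are independent inputs.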
Radial term
The so-called radial term corresponds to the term ∝ [v/(Hr)] j_ℓ'(kr) present in Eq. A.15. Formally, according to this equation, this term is included in W_δη v j_ℓ'(kr), but CL11 prefer to code it separately given its ∝ 1/r dependence, which should make it relevant only for relatively nearby sources. For ADF, CL11 find
The velocity term
This term describes the product W_δη v j_ℓ'(kr) in the ADF case and, according to Eq. A.15, it includes the previous, radial term, which must be subtracted. CL11 write it in CAMB sources as in Eqs. (B.9)-(B.10), where 𝒲_δη(η) ≡ (z − z̄)W_δη(η).
Time delay term
The extension of the time delay (or Shapiro) term from ADF to ARF is trivially found by substituting W(η) with 𝒲(η), after noting that φ is taken equal to ψ in the CDM frame (this also applies to the terms below, namely lensing, ISW, and potentials):
Δ_ℓ^{time delay, ARF}(k) ≡ ∫_0^{η_o} dη j_ℓ(kr[η]) S_{time delay, ARF}(k, η). (B.11)
Lensing term
The lensing term contains the convergence κ (which involves another line-of-sight integral). The counterpart for ARF is again found by introducing the modified window function 𝒲(η):

Δ_ℓ^{lensing, ARF}(k) ≡ ∫_0^{η_o} dη j_ℓ(kr[η]) S_{lensing, ARF}(k, η). (B.12)

The ISW term
The ISW term accounts for the time evolution of the gravitational potentials [55] along the photon's path towards the observer. After the usual substitution of the window function, we write

Δ_ℓ^{ISW, ARF}(k) ≡ ∫_0^{η_o} dη j_ℓ(kr[η]) S_{ISW, ARF}(k, η). (B.13)
Figure 1. Left panel: angular power spectrum of the ARF for a Gaussian window function at different redshifts, with widths σ_z = 0.01 (solid), σ_z = 0.1 (dashed) and σ_z = 0.35 (dot-dashed). Right panel: cross-correlation power spectrum of ARF with CMB temperature for windows with widths σ_z = 0.25 (solid), σ_z = 0.5 (dashed) and σ_z = 0.8 (dot-dashed). Both cases assume s = 0. Thin and thick lines display negative and positive results, respectively.
Figure 2. Relative importance of GR corrections on ARF (solid) and ADF (dashed) power spectra at different redshifts for s = 0. Each panel displays the result for a different Gaussian width.
Figure 3. Fractional difference with respect to the full result for the ARF (solid lines) and ADF (dashed lines) angular power spectra, for z = 0.25, 1 and 2.
Figure 4. Signal-to-noise ratio per multipole (χ²_ℓ) of the ARF (solid) and ADF (dashed) cross-correlation with CMB temperature (T, left panel), the E-mode of CMB polarization (middle panel), and the lensing deflection potential φ (right panel), at z = 1.0, for σ_z = 0.05, 0.2, and 0.8. In this and all subsequent cases we consider s = 0.
Figure 6. Cumulative χ² statistics (defined in Eq. 4.2) accounting for the cross-correlation of the ADF, ARF, and ADF+ARF observables with CMB intensity anisotropies (T) for different configurations of Gaussian redshift shells. All redshift shells are taken to have the same width σ_z, given on the X-axis, and sweep the redshift range given in the title at the top of each panel with a regular shell separation of 3σ_z. Blue, green, and red curves and symbols refer to d = C_ℓ^{ADF,T}, C_ℓ^{ARF,T}, or (C_ℓ^{ADF,T}, C_ℓ^{ARF,T}), respectively. The differences between the middle and left panels show the (small [ARF] to moderate [ADF]) impact of decreasing z_max from 3 to 2 on the wide-shell configurations. The right panel considers the case of ℓ_max = 300, leaving previous results virtually unchanged, and also studies the inclusion of a galaxy bias b_g = 2 (dashed lines). In all cases we adopt s = 0.
Figure 7. Same as in Fig. 6, but for s = −0.2. See the discussion section for the possibility of having negative values of s in samples defined under colour cuts.
is used to relate the area covered by a unit of solid angle at the observer's and at the source's positions: a ray bundle within solid angle dΩ_o at the observer's position projects an invariant area dΩ_o det D_o. Thus the volume element sampled by the ray bundle at the source's position equals dΩ_o det D_o (k_µ u^µ)_s dλ when the affine parameter changes by an amount dλ, making the wavefront advance by an amount (k_µ u^µ)_s. The quantity J^µ = n_s u_s^µ expresses the source 4-current, with n_s the source proper number density as seen in the source rest frame defined by u_s^µ. CL11 next write the number of galaxies swept by the volume element at the source's position as dN(n̂, z) = dΩ_o n(n̂, z) (dz/dλ) dλ = dΩ_o det D_o n_s (k_µ u_s^µ) dλ, (A.10) leading to the general result n(n̂, z) = det D_o k_a J^a (dλ/dz). (A.11)
with ρ̄ = 8πGa²ρ and p̄ = 8πGa²p the total density and pressure parameters in a flat universe with no anisotropic stress. Having this present, CL11 code the following result for the redshift term in the ADF transfer function in the CDM frame: Δ_ℓ^{Redshift, ADF}(k) ≡ ∫_0^{η_o} dη j_ℓ''(kr[η]) S_{Redshift, ADF}(k, η), (B.5) where W̃(η) ≡ Ẇ(η)/H and the dot again denotes a derivative with respect to η. Here S_{Redshift, ADF}(k, η) is the redshift source term for the ADF, and so for the ARF Δ_ℓ^{Redshift, ARF}(k) ≡ ∫_0^{η_o} dη j_ℓ''(kr[η]) S_{Redshift, ARF}(k, η). (B.6)
Δ_ℓ^{Radial, ADF}(k) ≡ ∫_0^{η_o} dη j_ℓ'(kr[η]) S_{Radial, ADF}(k, η). For the ARF, we need to introduce the modified window function 𝒲(η), yielding Δ_ℓ^{Radial, ARF}(k) ≡ ∫_0^{η_o} dη j_ℓ'(kr[η]) S_{Radial, ARF}(k, η).
Δ_ℓ^{velocity, ADF}(k) = ∫_0^{η_o} dη W_δη(η) σ j_ℓ'(kr) − Δ_ℓ^{Radial, ADF}(k) ≡ ∫_0^{η_o} dη j_ℓ(kr[η]) S_{velocity, ADF}(k, η). (B.9) The extension for ARF trivially reads Δ_ℓ^{velocity, ARF}(k) = ∫_0^{η_o} dη 𝒲_δη(η) σ j_ℓ'(kr) − Δ_ℓ^{Radial, ARF}(k) ≡ ∫_0^{η_o} dη j_ℓ(kr[η]) S_{velocity, ARF}(k, η), (B.10)
The residual potentials term

The rest of the terms in Eq. A.15 involving gravitational potentials are merged together. The substitution is again trivial, changing the window functions and setting ψ = φ: Δ_ℓ^{potentials, ARF}(k) ≡ ∫_0^{η_o} dη j_ℓ(kr[η]) S_{potentials, ARF}(k, η). (B.14)
Corresponding author.
As in all previous work, throughout this paper we shall refer to Gaussian windows exclusively; this is done for the sake of simplicity. We stress, however, that ARF can naturally be defined under any redshift window as long as it has "ends", i.e. it is confined to a redshift interval.
Henceforth we shall refer as "galaxies" to any type of extragalactic luminous matter tracer, be it galaxies or quasars.
3 Negligible in this approach; we shall compute their amplitude explicitly in Sect. 4.
See Appendix B for further details on these terms.
Even if they rename this observable as "redshift-weighted galaxy number counts" rather than the "density-weighted angular redshift fluctuations" introduced in Letter I (which we have thereafter dubbed "ARF" for the sake of simplicity), they use exactly the same observable introduced in Letter I: the (galaxy) density-weighted redshift fluctuations with respect to an average redshift z̄ estimated from a galaxy sample under a given redshift window function W(z).
The choice of gauge coincides with that of CL11 in their work (although it differs from the gauge used in the code CAMB sources).
Acknowledgments

We thank Dr. Juan Betancourt, Dr. Evencio Mediavilla, and Dr. Jordi Cepa for the useful and insightful comments that they provided as members of the committee for the undergraduate master thesis that gave rise to this work. We also thank Prof. Roy Marteens, Dr. Daniele
Ade P. A. R., et al., 2022, The Astrophysical Journal, 927, 77, doi:10.3847/1538-4357/ac4886
Allen S. W., Evrard A. E., Mantz A. B., 2011, Annual Review of Astronomy and Astrophysics, 49, 409, doi:10.1146/annurev-astro-081710-102514
Aricò G., Angulo R. E., Hernández-Monteagudo C., Contreras S., Zennaro M., Pellejero-Ibañez M., Rosas-Guevara Y., 2019, arXiv:1911.08471 [astro-ph]
Battye R. A., Davies R. D., Weller J., 2004, Monthly Notices of the Royal Astronomical Society, 355, 1339, doi:10.1111/j.1365-2966.2004.08416.x
Behroozi P., Wechsler R., Hearin A., Conroy C., 2019, Monthly Notices of the Royal Astronomical Society, 488, 3143, doi:10.1093/mnras/stz1182
Benson B. A., et al., 2014, Proc. SPIE, 9153, 91531P, doi:10.1117/12.2057305 (arXiv:1407.2973), https://ui.adsabs.harvard.edu/abs/2014SPIE.9153E..1PB
Bernardeau F., Bonvin C., Vernizzi F., 2010, Physical Review D, 81, 083002, doi:10.1103/PhysRevD.81.083002
Blanchard A., Schneider J., 1987, Astronomy and Astrophysics, 184, 1
Bonaldi A., Harrison I., Camera S., Brown M. L., 2016, MNRAS, 463, 3686, doi:10.1093/mnras/stw2104
Bonvin C., Durrer R., 2011, Phys. Rev. D, 84, 063505, doi:10.1103/PhysRevD.84.063505
Bridle S., King L., 2007, New Journal of Physics, 9, 444, doi:10.1088/1367-2630/9/12/444
Cai Y.-C., Padilla N., Li B., 2015, Monthly Notices of the Royal Astronomical Society, 451, 1036, doi:10.1093/mnras/stv777
Challinor A., Lewis A., 2011a, CAMB Sources: Number Counts, Lensing & Dark-age 21cm Power Spectra (ascl:1105.013)
Challinor A., Lewis A., 2011b, Physical Review D, 84, 043516, doi:10.1103/PhysRevD.84.043516
Chang T.-C., Pen U.-L., Peterson J. B., McDonald P., 2008, Physical Review Letters, 100, 091303, doi:10.1103/PhysRevLett.100.091303
Chaves-Montero J., Angulo R. E., Hernández-Monteagudo C., 2018, MNRAS, 477, 3892, doi:10.1093/mnras/sty924
Chaves-Montero J., Hernández-Monteagudo C., Angulo R. E., Emberson J. D., 2021, Monthly Notices of the Royal Astronomical Society, 503, 1798, doi:10.1093/mnras/staa3782
Cole S., Efstathiou G., 1989, Monthly Notices of the Royal Astronomical Society, 239, 195, doi:10.1093/mnras/239.1.195
Planck Collaboration et al., 2016, Astronomy and Astrophysics, 594, A21, doi:10.1051/0004-6361/201525831
Contreras S., Angulo R. E., Zennaro M., 2021, Monthly Notices of the Royal Astronomical Society, 508, 175, doi:10.1093/mnras/stab2560
Croton D. J., et al., 2006, Monthly Notices of the Royal Astronomical Society, 365, 11, doi:10.1111/j.1365-2966.2005.09675.x
DESI Collaboration et al., 2016, The DESI Experiment Part I: Science, Targeting, and Survey Design, https://ui.adsabs.harvard.edu/abs/2016arXiv161100036D
Dawson K. S., et al., 2013, AJ, 145, 10, doi:10.1088/0004-6256/145/1/10
Einstein A., 1916, Annalen der Physik, 354, 769, doi:10.1002/andp.19163540702
Eisenstein D. J., et al., 2005, ApJ, 633, 560, doi:10.1086/466512
Gao H., Li Z., Zhang B., 2014, The Astrophysical Journal, 788, 189, doi:10.1088/0004-637X/788/2/189
Hamaus N., Pisani A., Choi J.-A., Lavaux G., Wandelt B. D., Weller J., 2020, Journal of Cosmology and Astroparticle Physics, 2020, 023, doi:10.1088/1475-7516/2020/12/023
Hernandez-Monteagudo C., 2008, Astronomy & Astrophysics, 490, 15, doi:10.1051/0004-6361:200809871
Hernández-Monteagudo C., Chaves-Montero J., Angulo R. E., 2021a, Monthly Notices of the Royal Astronomical Society, 503, L56, doi:10.1093/mnrasl/slaa172
Hernández-Monteagudo C., Chaves-Montero J., Angulo R. E., Aricò G., 2021b, Monthly Notices of the Royal Astronomical Society, 503, L62, doi:10.1093/mnrasl/slab021
Jaroszynski M., 2019, Monthly Notices of the Royal Astronomical Society, 484, 1637, doi:10.1093/mnras/sty3529
Jimenez R., Loeb A., 2002, The Astrophysical Journal, 573, 37, doi:10.1086/340549
Krause E., Chang T.-C., Doré O., Umetsu K., 2013, The Astrophysical Journal, 762, L20, doi:10.1088/2041-8205/762/2/L20
Laureijs R., et al., 2011, preprint (arXiv:1110.3193)
Leauthaud A., et al., 2016, MNRAS, 457, 4021, doi:10.1093/mnras/stw117
Legrand L., Hernández-Monteagudo C., Douspis M., Aghanim N., Angulo R. E., 2021, Astronomy and Astrophysics, 646, A109, doi:10.1051/0004-6361/202039049
Lepori F., et al., 2021, arXiv e-prints, p. arXiv:2110.05435
Lewis A., Challinor A., 2006, Physics Reports, 429, 1, doi:10.1016/j.physrep.2006.03.002
Li Z.-X., Gao H., Ding X.-H., Wang G.-J., Zhang B., 2018, Nature Communications, 9, 3833, doi:10.1038/s41467-018-06303-0
Li Z., Gao H., Wei J.-J., Yang Y.-P., Zhang B., Zhu Z.-H., 2019, The Astrophysical Journal, 876, 146, doi:10.3847/1538-4357/ab18fe
Loeb A., 1998, The Astrophysical Journal, 499, L111, doi:10.1086/311375
Loeb A., Wyithe J. S. B., 2008, Physical Review Letters, 100, 161301, doi:10.1103/PhysRevLett.100.161301
Matthewson W., Stock D., Durrer R., 2022, arXiv:2203.07414 [astro-ph]
Miralda-Escude J., 1991, ApJ, 380, 1, doi:10.1086/170555
Moresco M., Jimenez R., Verde L., Pozzetti L., Cimatti A., Citro A., 2018, arXiv:1804.05864 [astro-ph], doi:10.3847/1538-4357/aae829
Moresco M., Jimenez R., Verde L., Cimatti A., Pozzetti L., 2020, arXiv:2003.07362 [astro-ph], doi:10.3847/1538-4357/ab9eb0
Mukhanov V. F., Feldman H. A., Brandenberger R. H., 1992, Physics Reports, 215, 203, doi:10.1016/0370-1573(92)90044-Z
Muñoz J. B., Kovetz E. D., Dai L., Kamionkowski M., 2016, Physical Review Letters, 117, 091301, doi:10.1103/PhysRevLett.117.091301
Niemack M. D., et al., 2010, Proc. SPIE, 7741, 77411S, doi:10.1117/12.857464 (arXiv:1006.5049), https://ui.adsabs.harvard.edu/abs/2010SPIE.7741E..1SN
. R C Nunes, S Pan, E N Saridakis, physics:gr-qc10.1103/PhysRevD.94.023508arXiv:1605.01712astro-phNunes R. C., Pan S., Saridakis E. N., 2016, arXiv:1605.01712 [astro-ph, physics:gr-qc 10.1103/PhysRevD.94.023508
. 10.1051/0004-6361/201321529A&A. 5711Planck Collaboration et al., 2014, A&A, 571, A1
. 10.1051/0004-6361/201833910A&A. 6416Planck Collaboration et al., 2020, A&A, 641, A6
. A J Ross, 10.1093/mnras/stw2372MNRAS. 4641168Ross A. J., et al., 2017, MNRAS, 464, 1168
. R Sachs, 10.1098/rspa.1961.0202Proceedings of the Royal Society of London Series A. 264309Sachs R., 1961, Proceedings of the Royal Society of London Series A, 264, 309
. R K Sachs, A M Wolfe, 10.1086/148982ApJ. 14773Sachs R. K., Wolfe A. M., 1967, ApJ, 147, 73
. A Sandage, 10.1086/147385The Astrophysical Journal. 136319Sandage A., 1962, The Astrophysical Journal, 136, 319
. J Schaye, 10.1093/mnras/stu2058MNRAS. 446521Schaye J., et al., 2015, MNRAS, 446, 521
. A Schneider, R Teyssier, 10.1088/1475-7516/2015/12/049Journal of Cosmology and Astroparticle Physics. 49Schneider A., Teyssier R., 2015, Journal of Cosmology and Astroparticle Physics, 2015, 049
P Schneider, J Ehlers, E E Falco, 10.1007/978-3-662-03758-4.Gravitational Lenses. Schneider P., Ehlers J., Falco E. E., 1992, Gravitational Lenses, doi:10.1007/978-3-662-03758-4. , https://ui.adsabs.harvard.edu/abs/1992grle.book.....S
. E S Sheldon, E Huff, 10.3847/1538-4357/aa704bThe Astrophysical Journal. 84124Sheldon E. S., Huff E. M., 2017, The Astrophysical Journal, 841, 24
. R S Somerville, P F Hopkins, T J Cox, B E Robertson, L Hernquist, 10.1111/j.1365-2966.2008.13805.xMonthly Notices of the Royal Astronomical Society. 391481Somerville R. S., Hopkins P. F., Cox T. J., Robertson B. E., Hernquist L., 2008, Monthly Notices of the Royal Astronomical Society, 391, 481
. V Springel, 10.1093/mnras/stx3304MNRAS. 475676Springel V., et al., 2018, MNRAS, 475, 676
. D Stern, R Jimenez, L Verde, M Kamionkowski, S A Stanford, arXiv:0907.3149[astro-ph10.1088/1475-7516/2010/02/008Stern D., Jimenez R., Verde L., Kamionkowski M., Stanford S. A., 2009, arXiv:0907.3149 [astro-ph 10.1088/1475-7516/2010/02/008
. R A Sunyaev, Y B Zeldovich, 10.1007/BF00653471Ap&SS. 73Sunyaev R. A., Zeldovich Y. B., 1970, Ap&SS, 7, 3
. R A Sunyaev, Y B Zeldovich, Comments on Astrophysics and Space Physics. 4173Sunyaev R. A., Zeldovich Y. B., 1972, Comments on Astrophysics and Space Physics, 4, 173
. R A Sunyaev, I B Zeldovich, MNRAS. 190413Sunyaev R. A., Zeldovich I. B., 1980, MNRAS, 190, 413
. D Wands, A Slosar, 10.1103/PhysRevD.79.123507Physical Review D. 79123507Wands D., Slosar A., 2009, Physical Review D, 79, 123507
. J Yoo, 10.1103/PhysRevD.79.023517Phys. Rev. D. 7923517Yoo J., 2009, Phys. Rev. D, 79, 023517
. J Yoo, 10.1103/PhysRevD.82.083508Phys. Rev. D. 8283508Yoo J., 2010, Phys. Rev. D, 82, 083508
. J Yoo, A L Fitzpatrick, M Zaldarriaga, 10.1103/PhysRevD.80.083514Physical Review D. 8083514Yoo J., Fitzpatrick A. L., Zaldarriaga M., 2009a, Physical Review D, 80, 083514
. J Yoo, A L Fitzpatrick, M Zaldarriaga, 10.1103/PhysRevD.80.083514Phys. Rev. D. 8083514Yoo J., Fitzpatrick A. L., Zaldarriaga M., 2009b, Phys. Rev. D, 80, 083514
. Z Zheng, 10.1086/466510ApJ. 633791Zheng Z., et al., 2005, ApJ, 633, 791
Structure Constants of Diagonal Reduction Algebras of gl Type

Sergei Khoroshkin (Institute of Theoretical and Experimental Physics, 117218 Moscow, Russia; Higher School of Economics, 20 Myasnitskaya Str., 101000 Moscow, Russia)

Oleg Ogievetsky (J.-V. Poncelet French-Russian Laboratory, UMI 2615 du CNRS, Independent University of Moscow, 11 B. Vlasievski per., 119002 Moscow, Russia; Centre de Physique Théorique, 13288 Luminy, Marseille, France; on leave of absence from the Theoretical Department, P.N. Lebedev Physical Institute, 53 Leninsky Prospekt, 119991 Moscow, Russia)

SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) 7 (2011), 064; doi:10.3842/SIGMA.2011.064; arXiv:1101.2647
Received January 14, 2011, in final form June 27, 2011
Keywords: reduction algebra; extremal projector; Zhelobenko operators
2010 Mathematics Subject Classification: 16S30; 17B35

Abstract. We describe, in terms of generators and relations, the reduction algebra related to the diagonal embedding of the Lie algebra gl_n into gl_n ⊕ gl_n. Its representation theory is related to the theory of decompositions of tensor products of gl_n-modules.
Introduction
This paper completes the work [7]: it contains a derivation of basic relations for the diagonal reduction algebras of gl type, their low dimensional examples and properties.
Let g be a Lie algebra, k ⊂ g a reductive Lie subalgebra, and V an irreducible finite-dimensional g-module which decomposes, as a k-module, into a direct sum of irreducible k-modules V_i with certain multiplicities m_i,
V ≈ ⊕_i V_i ⊗ W_i.  (1.1)
Here W_i = Hom_k(V_i, V) are the multiplicity spaces, m_i = dim W_i. While the multiplicities m_i are purely combinatorial data, the multiplicity spaces W_i themselves may exhibit a 'hidden structure' of modules over certain special algebras [4]. The well-known example is the Olshanski centralizer construction [9], where g = gl_{n+m}, k = gl_m, and the spaces W_i carry the structure of irreducible modules over the Yangian Y(gl_n). In general, the multiplicity spaces W_i are irreducible modules over the centralizer U(g)^k of k in the universal enveloping algebra U(g) [8]. However, these centralizers have a rather complicated algebraic structure and are hardly convenient for applications. Besides, under the above assumptions, the direct sum W = ⊕_i W_i becomes a module over the reduction (or Mickelsson) algebra, which is defined as follows. Suppose k is given with a triangular decomposition
k = n_− + h + n.  (1.2)
Denote by I_+ the left ideal of A := U(g) generated by elements of n, I_+ := An. Then the reduction algebra S_n(A), related to the pair (g, k), is defined as the quotient Norm(I_+)/I_+ of the normalizer of the ideal I_+ by I_+. It is equipped with a natural structure of associative algebra. By definition, for any g-module V the space V^n of vectors annihilated by n is a module over S_n(A). Since V is finite-dimensional, V^n is isomorphic to ⊕_i W_i, so the latter space can be viewed as an S_n(A)-module as well; the zero-weight component of S_n(A), which contains a quotient of the centralizer U(g)^k, preserves each multiplicity space W_i. The representation theory of the reduction algebra S_n(A) is closely related to the theory of branching rules g ↓ k for the restriction of representations of g to k.
The reduction algebra simplifies after localization with respect to the multiplicative set generated by the elements h_γ + k, where γ ranges over the set of roots of k and k ∈ Z; here h_γ is the coroot corresponding to γ. Let U(h) denote the localization of the universal enveloping algebra U(h) of the Cartan subalgebra h of k at this multiplicative set. The localized reduction algebra Z_n(A) is an algebra over the commutative ring U(h); the principal part of the defining relations is quadratic, but the relations may also contain linear or degree-zero terms, see [10, 6].
Besides, the reduction algebra admits another description, as the (localized) double coset space n_−A\A/An endowed with the multiplication defined by means of the insertion of the extremal projector [6] of Asherova–Smirnov–Tolstoy [3]. The centralizer A^k is a subalgebra of Z_n(A).
It was shown in [7] that the general reduction algebra Z_n(A) admits a presentation over U(h) in which the defining relations are ordering relations for the generators, taken in an arbitrary order compatible with the natural partial order on h*. The set of ordering relations for the general reduction algebra Z_n(A) was also shown in [7] to be "algorithmically efficient", in the sense that any expression in the algebra can be ordered with the help of this set.
The structure constants of the reduction algebra are in principle determined by the extremal projector P, or by the tensor J studied by Arnaudon, Buffenoir, Ragoucy and Roche [1]. However, an explicit description of the algebra is hardly achievable directly.
In the present paper we are interested in a special restriction problem: g is the direct sum of two copies of a reductive Lie algebra a, and k is the diagonally embedded a. The resulting reduction algebra for the symmetric pair (a ⊕ a, a) we call the diagonal reduction algebra DR(a) of a. The theory of branching rules for a ⊕ a ↓ a is the theory of decompositions of tensor products of a-modules into direct sums of irreducible a-modules.
We restrict ourselves here to the Lie algebra a = gl_n of the general linear group. In this situation the finite-dimensional irreducible g-modules are tensor products of two irreducible gl_n-modules, the decomposition (1.1) is the decomposition of the tensor product into a direct sum of irreducible gl_n-modules, and the multiplicities m_i are the Littlewood–Richardson coefficients.
For brevity, the reduction algebra DR(gl_n) will be denoted by Z_n. In [7] we suggested a set R of relations for the algebra Z_n and demonstrated that the set R is equivalent, over U(h), to the set of defining ordering relations, provided that all relations from the set R are valid.
The main goal of the present paper is the verification of all relations from the system R. There are two principal tools in our derivation. First, we use the braid group action by the Zhelobenko automorphisms of reduction algebras [10, 6]. Second, we employ the stabilization phenomenon, discovered in [7], for the multiplication rule and for the defining relations with respect to the standard embeddings gl_n → gl_{n+1}; stabilization provides a natural way of extending relations for Z_n to relations for Z_{n+1} (note that Z_n is not a subalgebra of Z_{n+1}). We perform the necessary calculations for small n (at most n = 4); the braid group action and the stabilization law then extend the results to general n.
As an illustration, we write down the complete lists of defining relations, in the form of ordering relations, for the reduction algebras DR(sl_3) and DR(sl_2). Although for any finite n the derivation of the set of defining (ordering) relations for DR(sl_n) can be completed in finite time, it is useful to have the lists of relations for small n at hand.
We return to the stabilization and cut phenomena and make more precise statements, now concerning the embedding of the Lie algebra gl_n ⊕ gl_1 into the Lie algebra gl_{n+1} (more generally, of gl_n ⊕ gl_m into gl_{n+m}). As a consequence we find that cutting preserves centrality: the cut of a central element of the algebra Z_{n+m} is central in the algebra Z_n ⊗ Z_m. We also show that, similarly to the Harish-Chandra map, the restriction of the cut to the center is a homomorphism. As an example, we derive the Casimir operators of the algebra DR(sl_2) by cutting the Casimir operators of the algebra DR(sl_3).
The relations in the diagonal reduction algebra have a quadratic part and a degree-zero part. The algebra defined by the homogeneous quadratic part of the relations tends, in a quite simple regime, to a commutative algebra (the homogeneous algebra can thus be considered a "dynamical" deformation of a commutative algebra; "dynamical" here means that the left and right multiplications by elements of the ring U(h) differ). This observation about the limit is used in the proof in [7] of the completeness of the set of derived relations over the field of fractions of U(h): we prove completeness by establishing the equivalence between the set of derived relations and the set of ordering relations.
The stabilization law enables one to define the reduction "algebra" Z_∞, related to the diagonal embedding of the inductive limit gl_∞ of the gl_n into gl_∞ ⊕ gl_∞ (strictly speaking, Z_∞ is not an algebra: some of its relations have infinitely many terms).
We also discuss the diagonal reduction algebra for the special linear Lie algebra sl n ; it is a direct tensor factor in Z n .
Such a precise description as the one we give for Z_n is known for only a few reduction algebras; the best known is related to the embedding of gl_n into gl_{n+1} [10]. Its representation theory was used to derive explicit formulas for the action of the generators of gl_n on the Gelfand–Zetlin basis vectors [2]. The reduction algebra for the pair (gl_n, gl_{n+1}) is based on the root embedding gl_n ⊂ gl_{n+1} of Lie algebras. In contrast to this example, the diagonal reduction algebra DR(a) is based on the diagonal embedding of a into a ⊕ a, which is not a root embedding of reductive Lie algebras.
Notation
Let E_ij, i, j = 1, ..., n, be the standard generators of the Lie algebra gl_n, with the commutation relations
[E_ij, E_kl] = δ_jk E_il − δ_il E_kj,
where δ_jk is the Kronecker symbol. We shall also use the root notation H_α, E_α, E_−α, ... for elements of gl_n.
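As a quick sanity check (ours, not part of the paper), these relations can be verified by realizing the generators E_ij as elementary matrices:

```python
# Sanity check (not from the paper): realize E_ij as the elementary n x n
# matrices and verify [E_ij, E_kl] = d_jk E_il - d_il E_kj.
from itertools import product

def unit(n, i, j):
    """Elementary matrix with a single 1 in row i, column j (1-based)."""
    return [[int(r == i - 1 and c == j - 1) for c in range(n)] for r in range(n)]

def mul(a, b):
    n = len(a)
    return [[sum(a[r][m] * b[m][c] for m in range(n)) for c in range(n)] for r in range(n)]

def sub(a, b):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def scal(c, a):
    return [[c * x for x in row] for row in a]

def check_gl(n):
    d = lambda a, b: int(a == b)
    for i, j, k, l in product(range(1, n + 1), repeat=4):
        lhs = sub(mul(unit(n, i, j), unit(n, k, l)), mul(unit(n, k, l), unit(n, i, j)))
        rhs = sub(scal(d(j, k), unit(n, i, l)), scal(d(i, l), unit(n, k, j)))
        if lhs != rhs:
            return False
    return True

print(check_gl(3))  # True
```

The check is exhaustive over all index quadruples for the chosen n.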
Let E^(1)_ij and E^(2)_ij, i, j = 1, ..., n, be the standard generators of the two copies of the Lie algebra gl_n in g := gl_n ⊕ gl_n,
[E^(a)_ij, E^(b)_kl] = δ_ab (δ_jk E^(a)_il − δ_il E^(a)_kj).
Set
e_ij := E^(1)_ij + E^(2)_ij,   E_ij := E^(1)_ij − E^(2)_ij.
The elements e_ij span the diagonally embedded Lie algebra k ≃ gl_n, while the E_ij form an adjoint k-module p. The Lie algebra k and the space p constitute a symmetric pair, that is, [k, k] ⊂ k, [k, p] ⊂ p, and [p, p] ⊂ k:
[e_ij, e_kl] = δ_jk e_il − δ_il e_kj,   [e_ij, E_kl] = δ_jk E_il − δ_il E_kj,   [E_ij, E_kl] = δ_jk e_il − δ_il e_kj.
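These brackets can also be checked concretely (our check, not in the paper): realize gl_n ⊕ gl_n as block-diagonal 2n × 2n matrices and form e_ij = E^(1)_ij + E^(2)_ij and E_ij = E^(1)_ij − E^(2)_ij:

```python
# Check (not from the paper) of the symmetric-pair brackets, with gl_n + gl_n
# realized as block-diagonal 2n x 2n matrices.
from itertools import product

def unit2(n, i, j, copy):
    """E^(copy)_ij, copy in {1, 2}, as a 2n x 2n block-diagonal matrix."""
    off = 0 if copy == 1 else n
    m = [[0] * (2 * n) for _ in range(2 * n)]
    m[off + i - 1][off + j - 1] = 1
    return m

def mul(a, b):
    n = len(a)
    return [[sum(a[r][m] * b[m][c] for m in range(n)) for c in range(n)] for r in range(n)]

def add(a, b): return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
def sub(a, b): return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
def scal(c, a): return [[c * x for x in row] for row in a]
def br(a, b): return sub(mul(a, b), mul(b, a))

def e(n, i, j): return add(unit2(n, i, j, 1), unit2(n, i, j, 2))   # diagonal copy k
def E(n, i, j): return sub(unit2(n, i, j, 1), unit2(n, i, j, 2))   # complement p

def check_pair(n):
    d = lambda a, b: int(a == b)
    for i, j, k, l in product(range(1, n + 1), repeat=4):
        ok = (br(e(n, i, j), e(n, k, l)) == sub(scal(d(j, k), e(n, i, l)), scal(d(i, l), e(n, k, j)))
              and br(e(n, i, j), E(n, k, l)) == sub(scal(d(j, k), E(n, i, l)), scal(d(i, l), E(n, k, j)))
              and br(E(n, i, j), E(n, k, l)) == sub(scal(d(j, k), e(n, i, l)), scal(d(i, l), e(n, k, j))))
        if not ok:
            return False
    return True

print(check_pair(2))  # True
```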
In the sequel, h_a means the element e_aa of the Cartan subalgebra h of the subalgebra k ⊂ gl_n ⊕ gl_n, and h_ab the element e_aa − e_bb. Let {ε_a} be the basis of h* dual to the basis {h_a} of h, ε_a(h_b) = δ_ab. We shall also use the root notation h_α, e_α, e_−α for elements of k, and H_α, E_α, E_−α for elements of p.
The Lie subalgebra n in the triangular decomposition (1.2) is spanned by the root vectors e_ij with i < j, and the Lie subalgebra n_− by the root vectors e_ij with i > j. Let b_+ and b_− be the corresponding Borel subalgebras, b_+ = h ⊕ n and b_− = h ⊕ n_−. Denote by Δ_+ and Δ_− the sets of positive and negative roots in the root system Δ = Δ_+ ∪ Δ_− of k: Δ_+ consists of the roots ε_i − ε_j with i < j, and Δ_− of the roots ε_i − ε_j with i > j. Let Q be the root lattice, Q := {γ ∈ h* | γ = Σ_{α∈Δ_+} n_α α, n_α ∈ Z}. It contains the positive cone Q_+,
Q_+ := {γ ∈ h* | γ = Σ_{α∈Δ_+} n_α α, n_α ∈ Z, n_α ≥ 0}.
For λ, µ ∈ h*, the notation
λ > µ  (2.1)
means that the difference λ − µ belongs to Q_+, λ − µ ∈ Q_+. This is a partial order on h*. We fix the following action of the cover of the symmetric group S_n (the Weyl group of the diagonal k) on the Lie algebra gl_n ⊕ gl_n by automorphisms:
σ́_i(x) := Ad exp(e_{i,i+1}) Ad exp(−e_{i+1,i}) Ad exp(e_{i,i+1}) (x),
so that
σ́_i(e_kl) = (−1)^{δ_ik + δ_il} e_{σ_i(k) σ_i(l)},   σ́_i(E_kl) = (−1)^{δ_ik + δ_il} E_{σ_i(k) σ_i(l)}.
Here σ_i = (i, i + 1) is an elementary transposition in the symmetric group. We extend this action of the cover of S_n naturally to an action by automorphisms of the associative algebra A ≡ A_n := U(gl_n) ⊗ U(gl_n). The restriction of this action to h coincides with the natural action σ(h_k) = h_{σ(k)}, σ ∈ S_n, of the Weyl group on the Cartan subalgebra.
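For gl_2 the triple product of adjoint exponentials can be evaluated directly (a check of ours, not in the paper): exp(e_12) and exp(−e_21) are unipotent 2 × 2 matrices, and conjugation by their product permutes the matrix units with exactly the stated signs:

```python
# Check (not from the paper): in gl_2, Ad exp(e_12) Ad exp(-e_21) Ad exp(e_12)
# acts on matrix units as sigma(e_kl) = (-1)^(d_1k + d_1l) e_{s(k) s(l)}, s = (1 2).
def mul(a, b):
    n = len(a)
    return [[sum(a[r][m] * b[m][c] for m in range(n)) for c in range(n)] for r in range(n)]

def unit(i, j):
    return [[int(r == i - 1 and c == j - 1) for c in range(2)] for r in range(2)]

# exp of a nilpotent matrix unit is 1 + (the unit itself)
exp_e12 = [[1, 1], [0, 1]]
exp_me21 = [[1, 0], [-1, 1]]
g = mul(mul(exp_e12, exp_me21), exp_e12)        # equals [[0, 1], [-1, 0]]
ginv = [[0, -1], [1, 0]]
assert mul(g, ginv) == [[1, 0], [0, 1]]

s = {1: 2, 2: 1}
for k in (1, 2):
    for l in (1, 2):
        sign = (-1) ** ((k == 1) + (l == 1))
        lhs = mul(mul(g, unit(k, l)), ginv)      # Ad_g(e_kl)
        rhs = [[sign * x for x in row] for row in unit(s[k], s[l])]
        assert lhs == rhs
print("sign rule verified")
```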
Besides, we use the shifted action of S_n on the polynomial algebra U(h) (and its localizations) by automorphisms; the shifted action is defined by
σ • h_k := h_{σ(k)} + k − σ(k),   k = 1, ..., n,   σ ∈ S_n.  (2.2)
It becomes the usual action in the variables
h̃_k := h_k − k,   h̃_ij := h̃_i − h̃_j;  (2.3)
by (2.2), for any σ ∈ S_n we have
σ • h̃_k = h̃_{σ(k)},   σ • h̃_ij = h̃_{σ(i)σ(j)}.
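The passage to the shifted variables can be spot-checked mechanically (our sketch, not from the paper): encode a linear expression in the h_k as a dictionary and apply the shifted action (2.2):

```python
# Check (not from the paper): the shifted action sigma . h_k = h_{sigma(k)} + k - sigma(k)
# is a group action and becomes the plain permutation action on h~_k = h_k - k.
from itertools import permutations

def act(sigma, expr):
    """Apply the shifted action to a linear expression {index: coeff, 'c': constant};
    sigma is a tuple with 1-based values, sigma[m-1] = sigma(m)."""
    out = {'c': expr.get('c', 0)}
    for m, a in expr.items():
        if m == 'c':
            continue
        s = sigma[m - 1]
        out[s] = out.get(s, 0) + a
        out['c'] += a * (m - s)
    return out

def h(k):  return {k: 1, 'c': 0}
def ht(k): return {k: 1, 'c': -k}        # h~_k = h_k - k

def comp(s, t):                           # (s o t)(i) = s(t(i))
    return tuple(s[t[i] - 1] for i in range(len(t)))

n = 4
perms = list(permutations(range(1, n + 1)))
for sigma in perms:
    for k in range(1, n + 1):
        assert act(sigma, ht(k)) == ht(sigma[k - 1])   # sigma . h~_k = h~_{sigma(k)}
for sigma in perms[:6]:
    for tau in perms[:6]:
        for k in range(1, n + 1):
            assert act(comp(sigma, tau), h(k)) == act(sigma, act(tau, h(k)))
print("shifted action OK")
```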
It will sometimes be convenient to denote the commutator [a, b] of two elements a and b of an associative algebra by
â(b) := [a, b].  (2.4)
Reduction algebra Z n
In this section we recall the definition of reduction algebras, in particular the diagonal reduction algebras of gl type. We introduce the order for which the ordering relations of the algebra Z_n will be discussed, give the formulas for the Zhelobenko automorphisms of the algebra Z_n, and present some basic facts about the standard involution, anti-involution and central elements of Z_n at the end of the section.
1. Let Ū(h) and Ā be the rings of fractions of the algebras U(h) and A with respect to the multiplicative set generated by the elements
h̃_ij + l,   l ∈ Z,   1 ≤ i < j ≤ n.
Define Z_n to be the double coset space of Ā by its left ideal I_+ := Ān, generated by elements of n, and its right ideal I_− := n_− Ā, generated by elements of n_−:
Z_n := Ā/(I_+ + I_−).
The space Z_n is an associative algebra with respect to the multiplication map
a ⋄ b := a P b.  (3.1)
Here P is the extremal projector [3] for the diagonal gl_n. It is an element of a certain extension of the algebra U(gl_n) satisfying the relations e_ij P = P e_ji = 0 for all i and j such that 1 ≤ i < j ≤ n. The algebra Z_n is a particular example of a reduction algebra; in our context, Z_n is defined by the coproduct (the diagonal inclusion) U(gl_n) → A.
2. The main structure theorems for reduction algebras are given in [7, Section 2].
In the sequel we choose a weight linear basis {p_K} of p (p is the k-invariant complement of k in g, g = k + p) and equip it with a total order ≺. The total order ≺ is required to be compatible with the partial order < on h*, see (2.1), in the sense that µ_K < µ_L ⇒ p_K ≺ p_L. We shall sometimes write I ≺ J instead of p_I ≺ p_J. For an arbitrary element a ∈ Ā we keep the same symbol for its image in the reduction algebra; in particular, p_K denotes the image in the reduction algebra of the basis vector p_K ∈ p.
3. In our situation we choose the set of vectors E_ij, i, j = 1, ..., n, as a basis of the space p. The weight of E_ij is ε_i − ε_j. The compatibility of a total order ≺ with the partial order < on h* amounts to the condition
E_ij ≺ E_kl if i − j > k − l.
The order within each subset {E_ij | i − j = a}, for a fixed a, can be chosen arbitrarily. For instance, we can set
E_ij ≺ E_kl if i − j > k − l, or i − j = k − l and i > k.  (3.2)
Denote the images of the elements E_ij in Z_n by z_ij. We also use the notation t_i for the elements z_ii and t_ij := t_i − t_j for the elements z_ii − z_jj. The order (3.2) induces an order on the generators z_ij of the algebra Z_n:
z_ij ≺ z_kl ⇔ E_ij ≺ E_kl.
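The order (3.2) is easy to encode as a sort key, and the required compatibility with the partial order < of (2.1) can be tested exhaustively for small n (our check, not from the paper):

```python
# Check (not from the paper): the total order (3.2) as a sort key, and its
# compatibility with the partial order < of (2.1), for n = 3.
from itertools import product

def prec_key(gen):
    i, j = gen
    return (-(i - j), -i)          # larger i - j comes first; ties: larger i first

n = 3
gens = sorted(product(range(1, n + 1), repeat=2), key=prec_key)

def weight(gen):
    v = [0] * n
    v[gen[0] - 1] += 1
    v[gen[1] - 1] -= 1
    return v

def lt(u, v):
    """u < v in (2.1): v - u is a nonzero nonnegative integer combination of the
    simple roots, i.e. all prefix sums of v - u are >= 0 and the total sum is 0."""
    d = [y - x for x, y in zip(u, v)]
    if all(x == 0 for x in d) or sum(d) != 0:
        return False
    run = 0
    for x in d:
        run += x
        if run < 0:
            return False
    return True

for g1, g2 in product(gens, repeat=2):
    if lt(weight(g1), weight(g2)):
        assert prec_key(g1) < prec_key(g2)    # mu_K < mu_L implies p_K precedes p_L
print(sorted(product((1, 2), repeat=2), key=prec_key))  # [(2, 1), (2, 2), (1, 1), (1, 2)]
```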
The statement (d) of [7, Section 2] implies the existence of structure constants B_{(ab),(cd),(ij),(kl)} ∈ U(h) and D_{(ab),(cd)} ∈ U(h) such that for any a, b, c, d = 1, ..., n we have
z_ab ⋄ z_cd = Σ_{i,j,k,l : z_ij ⪯ z_kl} B_{(ab),(cd),(ij),(kl)} z_ij ⋄ z_kl + D_{(ab),(cd)}.  (3.3)
In particular, the algebra Z_n (in general, the reduction algebra related to a symmetric pair (k, p), g := k + p) is Z_2-graded; the degree of z_ab is 1 and the degree of any element of U(h) is 0. The relations (3.3), together with the weight conditions
[h, z_ab] = (ε_a − ε_b)(h) z_ab,   h ∈ h,
are the defining relations of the algebra Z_n. Note that the denominators of the structure constants B_{(ab),(cd),(ij),(kl)} and D_{(ab),(cd)} are products of linear factors of the form h̃_ij + ℓ, i < j, where ℓ ≥ −1 is an integer, see [7].
4. The algebra Z_n can be equipped with the action of the Zhelobenko automorphisms [6]. Denote by q̌_i the Zhelobenko automorphism q̌_i : Z_n → Z_n corresponding to the transposition σ_i ∈ S_n. It is defined as follows [6]. First we define a map q̌_i : A → Ā/I_+ by
q̌_i(x) := Σ_{k≥0} ((−1)^k / k!) ê^k_{i,i+1}(σ_i(x)) e^k_{i+1,i} Π_{a=1}^{k} (h_{i,i+1} − a + 1)^{−1}  mod I_+.  (3.4)
Here ê_{i,i+1} stands for the adjoint action of the element e_{i,i+1}, see (2.4). The operator q̌_i has the property
q̌_i(h x) = (σ_i • h) q̌_i(x)  (3.5)
for any x ∈ A and h ∈ h; σ • h is defined in (2.2). With the help of (3.5), the map q̌_i can be extended to a map (denoted by the same symbol) q̌_i : Ā → Ā/I_+ by setting q̌_i(a(h) x) = (σ_i • a(h)) q̌_i(x) for any x ∈ A and a(h) ∈ Ū(h). One can further prove that q̌_i(I_+) = 0 and q̌_i(I_−) ⊂ (I_− + I_+)/I_+, so that q̌_i can be viewed as a linear operator q̌_i : Z_n → Z_n. Due to [6], this is an algebra automorphism satisfying (3.5).
The operators q̌_i satisfy the braid group relations [10]:
q̌_i q̌_{i+1} q̌_i = q̌_{i+1} q̌_i q̌_{i+1},   q̌_i q̌_j = q̌_j q̌_i,   |i − j| > 1,
and the inversion relation [6]:
q̌_i^2(x) = (h_{i,i+1} + 1)^{−1} σ_i^2(x) (h_{i,i+1} + 1),   x ∈ Z_n.  (3.6)
In particular, q̌_i^2(x) = x if x is of zero weight.
5. The Chevalley anti-involution ǫ of U(gl_n ⊕ gl_n), ǫ(e_ij) := e_ji, ǫ(E_ij) := E_ji, induces an anti-involution ǫ of the algebra Z_n:
ǫ(z_ij) = z_ji,   ǫ(h_k) = h_k.  (3.7)
Besides, the outer automorphism of the Dynkin diagram of gl_n induces the involutive automorphism ω of Z_n,
ω(z_ij) = (−1)^{i+j+1} z_{j′i′},   ω(h_k) = −h_{k′},  (3.8)
where i′ = n + 1 − i. The operations ǫ and ω commute, ǫω = ωǫ.
Central elements of the subalgebra U(gl_n) ⊗ 1 ⊂ A, generated by the n Casimir operators of degrees 1, ..., n, as well as central elements of the subalgebra 1 ⊗ U(gl_n) ⊂ A, project to central elements of the algebra Z_n. In particular, the central elements of degree 1 project to the central elements
I^{(n,h)} := h_1 + ··· + h_n  (3.9)
and
I^{(n,t)} := t_1 + ··· + t_n  (3.10)
of the algebra Z_n. The difference of the central elements of degree two projects to the central element
Σ_{i=1}^{n} (h_i − 2i) t_i  (3.11)
of the algebra Z_n. The images of the other Casimir operators are more complicated.
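The compatibility ǫω = ωǫ and the involutivity of ǫ and ω stated above can be verified directly on the generators z_ij (our check, not part of the paper), representing a signed generator as a triple (sign, i, j):

```python
# Check (not from the paper): on the generators z_ij, the anti-involution eps
# of (3.7) and the involution omega of (3.8) commute, and both square to the identity.
from itertools import product

def eps(g):
    s, i, j = g
    return (s, j, i)                    # eps(z_ij) = z_ji

def omega(g, n):
    s, i, j = g                          # omega(z_ij) = (-1)^(i+j+1) z_{j'i'}, i' = n+1-i
    return (s * (-1) ** (i + j + 1), n + 1 - j, n + 1 - i)

for n in (2, 3, 4, 5):
    for i, j in product(range(1, n + 1), repeat=2):
        g = (1, i, j)
        assert eps(eps(g)) == g
        assert omega(omega(g, n), n) == g
        assert eps(omega(g, n)) == omega(eps(g), n)
print("eps/omega OK")
```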
Main results
This section contains the principal results of the paper. We first give preliminary information on the new basis in which the defining relations of the algebra Z_n can be written down in an economical fashion. The braid group action on the new generators is then given explicitly in Subsection 4.2. The complete set of defining relations for the algebra Z_n is written down in Subsection 4.3. The regime in which both the set of derived defining relations and the set of defining ordering relations have a controllable "limiting behavior" is introduced in Subsection 4.4. Subsection 4.5 deals with the diagonal reduction algebra for sl_n; the quadratic Casimir operator for DR(sl_n), as well as for the diagonal reduction algebra of an arbitrary semi-simple Lie algebra k, is given there. Subsection 4.6 is devoted to the stabilization and cut phenomena with respect to the embedding of the Lie algebra gl_n ⊕ gl_m into the Lie algebra gl_{n+m}; the theorem about the behavior of the centers of the diagonal reduction algebras under cutting is proved there.
New variables
We shall use the following elements of U(h):
A_ij := h̃_ij/(h̃_ij − 1),   A′_ij := (h̃_ij − 1)/h̃_ij,   B_ij := (h̃_ij − 1)/(h̃_ij − 2),
B′_ij := (h̃_ij − 2)/(h̃_ij − 1),   C′_ij := (h̃_ij − 3)/(h̃_ij − 2);
the variables h̃_ij are defined in (2.3). Note that A_ij A′_ij = B_ij B′_ij = 1.
Define elements t̊_1, ..., t̊_n ∈ Z_n by
t̊_1 := t_1,   t̊_2 := q̌_1(t_1),   t̊_3 := q̌_2 q̌_1(t_1),   ...,   t̊_n := q̌_{n−1} ··· q̌_2 q̌_1(t_1).  (4.1)
Using (3.4) we find the relations
q̌_i(t_i) = −(1/(h̃_{i,i+1} − 1)) t_i + (h̃_{i,i+1}/(h̃_{i,i+1} − 1)) t_{i+1},
q̌_i(t_{i+1}) = (h̃_{i,i+1}/(h̃_{i,i+1} − 1)) t_i − (1/(h̃_{i,i+1} − 1)) t_{i+1},
q̌_i(t_k) = t_k,   k ≠ i, i + 1,
which can be used to convert the definition (4.1) into a linear, over the ring Ū(h), change of variables:
t̊_l = t_l Π_{j=1}^{l−1} A_jl − Σ_{k=1}^{l−1} t_k (1/(h̃_kl − 1)) Π_{j=1}^{k−1} A_jl,
t_l = t̊_l Π_{j=1}^{l−1} A′_jl + Σ_{k=1}^{l−1} t̊_k (1/h̃_kl) Π_{j=1, j≠k}^{l−1} A′_jk.  (4.2)
For example,
t̊_2 = −(1/(h̃_12 − 1)) t_1 + (h̃_12/(h̃_12 − 1)) t_2,   t_2 = (1/h̃_12) t̊_1 + ((h̃_12 − 1)/h̃_12) t̊_2,
t̊_3 = −(1/(h̃_13 − 1)) t_1 − (h̃_13/((h̃_13 − 1)(h̃_23 − 1))) t_2 + ((h̃_13 h̃_23)/((h̃_13 − 1)(h̃_23 − 1))) t_3,
t_3 = ((h̃_12 + 1)/(h̃_12 h̃_13)) t̊_1 + ((h̃_12 − 1)/(h̃_12 h̃_23)) t̊_2 + (((h̃_13 − 1)(h̃_23 − 1))/(h̃_13 h̃_23)) t̊_3.
In terms of the new variables t̊, the linear-in-t central element (3.10) reads
Σ_i t_i = Σ_i t̊_i Π_{a: a≠i} (h̃_ia + 1)/h̃_ia.
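The two lines of (4.2) can be verified to be mutually inverse by exact rational arithmetic (our check, not part of the paper; the h̃_k are given arbitrary well-separated integer values so that no denominator h̃_kl or h̃_kl − 1 vanishes):

```python
# Check (not from the paper): round trip of the change of variables (4.2),
# with exact rationals at generic integer values of h~_1, ..., h~_n.
from fractions import Fraction as F
import random

random.seed(7)
n = 4
hval = random.sample(range(100, 1000, 7), n)      # well separated: h~_kl not in {0, 1}
ht = lambda k, l: F(hval[k - 1] - hval[l - 1])
A  = lambda i, j: ht(i, j) / (ht(i, j) - 1)
Ap = lambda i, j: (ht(i, j) - 1) / ht(i, j)

assert all(A(i, j) * Ap(i, j) == 1
           for i in range(1, n + 1) for j in range(1, n + 1) if i != j)

def prod(factors):
    out = F(1)
    for f in factors:
        out *= f
    return out

def t_ring(t, l):      # first line of (4.2): t-ring_l from t_1, ..., t_n
    return (t[l] * prod(A(j, l) for j in range(1, l))
            - sum(t[k] * prod(A(j, l) for j in range(1, k)) / (ht(k, l) - 1)
                  for k in range(1, l)))

def t_back(tr, l):     # second line of (4.2): t_l from t-ring_1, ..., t-ring_n
    return (tr[l] * prod(Ap(j, l) for j in range(1, l))
            + sum(tr[k] * prod(Ap(j, k) for j in range(1, l) if j != k) / ht(k, l)
                  for k in range(1, l)))

t = {l: F(random.randint(-50, 50)) for l in range(1, n + 1)}
tr = {l: t_ring(t, l) for l in range(1, n + 1)}
assert all(t_back(tr, l) == t[l] for l in range(1, n + 1))
print("change of variables (4.2) inverts correctly")
```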
Braid group action
Since q̌_i^2(x) = x for any element x of zero weight, the braid group acts through its symmetric group quotient on the space of weight-0 elements. It follows from (4.1) and q̌_i(t_1) = t_1 for all i > 1 that
q̌_σ(t̊_i) = t̊_{σ(i)}  (4.3)
for any σ ∈ S_n.
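The relations after (4.1), together with the twisted linearity (3.5), also allow a check of the inversion relation (3.6) on the span of t_i, t_{i+1} (our check, not part of the paper): q̌_i acts there by the symmetric matrix M(h̃) below, the coefficient twist sends h̃ = h̃_{i,i+1} to −h̃, so q̌_i squared acts by M(−h̃) M(h̃), which must be the identity since these elements have zero weight:

```python
# Check (not from the paper): q-check_i acts on the pair (t_i, t_{i+1}) by M(h)
# below, with h = h~_{i,i+1}; the shifted action flips h to -h, so the square
# acts by M(-h) M(h), and on zero-weight elements this must be the identity.
from fractions import Fraction as F

def M(h):
    return [[F(-1) / (h - 1), h / (h - 1)],
            [h / (h - 1), F(-1) / (h - 1)]]

def matmul(a, b):
    return [[sum(a[r][m] * b[m][c] for m in range(2)) for c in range(2)] for r in range(2)]

I = [[F(1), F(0)], [F(0), F(1)]]
for hv in (F(5), F(-3), F(17, 2), F(100)):     # any h not in {1, -1}
    assert matmul(M(-hv), M(hv)) == I
print("q-check squared acts as identity on t_i, t_{i+1}")
```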
The action of the Zhelobenko automorphisms (see Section 3) on the generators z_kl looks as follows:
q̌_i(z_ik) = −z_{i+1,k} A_{i,i+1},   q̌_i(z_ki) = −z_{k,i+1},   k ≠ i, i + 1,
q̌_i(z_{i+1,k}) = z_{i,k},   q̌_i(z_{k,i+1}) = z_{k,i} A_{i,i+1},   k ≠ i, i + 1,  (4.4)
q̌_i(z_{i,i+1}) = −z_{i+1,i} A_{i,i+1} B_{i,i+1},   q̌_i(z_{i+1,i}) = −z_{i,i+1},
q̌_i(z_{j,k}) = z_{j,k},   j, k ≠ i, i + 1.
Denote, as before, i′ = n + 1 − i. The braid group action (4.4) is compatible with the anti-involution ǫ and the involution ω (note that ω(h̃_ij) = h̃_{j′i′}), see (3.7) and (3.8), in the following sense:
ǫ q̌_i = q̌_i^{−1} ǫ,  (4.5)
ω q̌_i = q̌_{i′−1} ω.  (4.6)
Let w_0 be the longest element of the Weyl group of gl_n, the symmetric group S_n. Similarly to the squares of the transformations corresponding to the simple roots, see (3.6), the action of q̌^2_{w_0} is the conjugation by a certain element of Ū(h).
Lemma 1. We have
q̌^2_{w_0}(x) = S^{−1} x S,  (4.7)
where
S = Π_{i,j: i<j} h̃_ij.  (4.8)
The proof shows that the formula (4.7) holds for an arbitrary reductive Lie algebra, with S = Π_{α∈Δ_+} h̃_α.
Proposition 2. The action of q̌_{w_0} on the generators reads
q̌_{w_0}(z_ij) = (−1)^{i+j} z_{i′j′} Π_{a: a<i′} A_{ai′} Π_{b: b>j′} A_{j′b},  (4.9)
q̌_{w_0}(t̊_i) = t̊_{i′}.  (4.10)
The proofs of Lemma 1 and Proposition 2 are in Section 5.
Defining relations
To save space we omit in this section the symbol ⋄ for the multiplication in the algebra Z_n. This should not lead to any confusion since no other multiplication is used in this section. Each relation which we will derive has a certain weight, equal to a sum of two roots. On general grounds, the upper estimate for the number of terms in a quadratic relation of weight λ = α + β is the number |λ| of quadratic combinations z_{α′} z_{β′} with α′ + β′ = λ. Excluding the trivial case λ = 2(ε_i − ε_j), |λ| = 1, there are several types:
1. λ = ±(2ε_i − ε_j − ε_k), where i, j and k are pairwise distinct. Then |λ| = 2.
2. λ = ε_i − ε_j + ε_k − ε_l with pairwise distinct i, j, k and l. Then |λ| = 4.
3. λ = ε_i − ε_j, i ≠ j. For z_{α′} z_{β′} there are 2(n − 2) possibilities (subtype 3a) with α′ = ε_i − ε_k, β′ = ε_k − ε_j or α′ = ε_k − ε_j, β′ = ε_i − ε_k, k ≠ i, j, and 2n possibilities (subtype 3b) with α′ = 0, β′ = ε_i − ε_j or α′ = ε_i − ε_j, β′ = 0. Thus |λ| = 4(n − 1).
4. λ = 0. There are n^2 possibilities (subtype 4a) with α′ = 0, β′ = 0, and n(n − 1) possibilities (subtype 4b) with α′ = ε_i − ε_j, β′ = ε_j − ε_i, i ≠ j. Here |λ| = n(2n − 1).
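These counts are easy to confirm by brute force (our check, not part of the paper), enumerating ordered pairs of generators z_ab whose weights sum to λ:

```python
# Check (not from the paper): count ordered pairs of generators z_ab, z_cd
# whose weights add up to lambda, for each of the four types.
from itertools import product

def count(n, lam):
    gens = [(a, b) for a in range(1, n + 1) for b in range(1, n + 1)]
    def w(g):
        v = [0] * n
        v[g[0] - 1] += 1
        v[g[1] - 1] -= 1
        return v
    return sum(1 for g1, g2 in product(gens, repeat=2)
               if [x + y for x, y in zip(w(g1), w(g2))] == lam)

n = 5
eps = lambda i: [int(m == i - 1) for m in range(n)]
add = lambda u, v: [x + y for x, y in zip(u, v)]
sub = lambda u, v: [x - y for x, y in zip(u, v)]

lam1 = sub(sub(add(eps(1), eps(1)), eps(2)), eps(3))   # 2e1 - e2 - e3
lam2 = add(sub(eps(1), eps(2)), sub(eps(3), eps(4)))   # e1 - e2 + e3 - e4
lam3 = sub(eps(1), eps(2))                             # e1 - e2
lam0 = [0] * n

assert count(n, lam1) == 2
assert count(n, lam2) == 4
assert count(n, lam3) == 4 * (n - 1)
assert count(n, lam0) == n * (2 * n - 1)
print("all counts match")
```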
Below we write down the relations for each type (and subtype) separately. The relations of types 1 and 2 have a simple form in terms of the original generators z_ij. To write the relations of types 3 and 4, it is convenient to renormalize the generators z_ij with i ≠ j. Namely, we set
z̃_ij := z_ij Π_{k=1}^{i−1} A_ki.  (4.11)
In terms of the generators z̃_ij, the formulas (4.4) for the action of the automorphisms q̌_i translate as follows:
q̌_i(z̃_ik) = −z̃_{i+1,k},   q̌_i(z̃_{i+1,k}) = z̃_{i,k} A_{i+1,i},   k ≠ i, i + 1,
q̌_i(z̃_ki) = −z̃_{k,i+1},   q̌_i(z̃_{k,i+1}) = z̃_{k,i} A_{i,i+1} = A′_{i+1,i} z̃_{k,i},   k ≠ i, i + 1,
q̌_i(z̃_{i,i+1}) = −A′_{i+1,i} z̃_{i+1,i},   q̌_i(z̃_{i+1,i}) = −z̃_{i,i+1} A_{i+1,i},
q̌_i(z̃_{j,k}) = z̃_{j,k},   j, k ≠ i, i + 1.
1. The relations of type 1 are:
z_ij z_ik = z_ik z_ij A_kj,   z_ji z_ki = z_ki z_ji A′_kj,   for j < k, i ≠ j, k.  (4.12)
2. Denote
D_ijkl := 1/h̃_ik − 1/h̃_jl.
Then, for any four pairwise distinct indices i, j, k and l, we have the following relations of type 2:
[z_ij, z_kl] = z_kj z_il D_ijkl,   i < k, j < l,
z_ij z_kl − z_kl z_ij A′_jl A′_lj = z_kj z_il D_ijkl,   i < k, j > l.  (4.13)
3a. Let i, k, l be pairwise distinct. Denote
Ẽ_ikl := −( (t̊_i − t̊_k) (h̃_il + 1)/(h̃_ik h̃_il) + (t̊_k − t̊_l) (h̃_il − 1)/(h̃_kl h̃_il) ) z̃_il + Σ_{a: a≠i,k,l} z̃_al z̃_ia B_ai/(h̃_ka + 1).
With this notation the first group of relations of type 3 is:
z̃_ik z̃_kl A′_ik − z̃_kl z̃_ik B_ki = Ẽ_ikl,   i < k < l,
z̃_ik z̃_kl A′_ik A′_lk B_lk − z̃_kl z̃_ik B_ki = Ẽ_ikl,   i < l < k,
z̃_ik z̃_kl A_ki − z̃_kl z̃_ik B_ki = Ẽ_ikl,   k < i < l,  (4.14)
z̃_ik z̃_kl A_ki A_li B′_li − z̃_kl z̃_ik B_ki = Ẽ_ikl,   k < l < i,
z̃_ik z̃_kl A′_ik A′_lk B_lk A_li B′_li − z̃_kl z̃_ik B_ki = Ẽ_ikl,   l < i < k,
z̃_ik z̃_kl A_ki A′_lk B_lk A_li B′_li − z̃_kl z̃_ik B_ki = Ẽ_ikl,   l < k < i.
The relations (4.14) can be written in a more compact way with the help of both systems of generators, z_ij and z̃_ij. Let now

E_ikl := −( (t̃_i − t̃_k) (ĥ_il + 1)/(ĥ_ik ĥ_il) + (t̃_k − t̃_l) (ĥ_il − 1)/(ĥ_kl ĥ_il) ) z_il + Σ_{a: a≠i,k,l} z̃_al z_ia B_ai/(ĥ_ka + 1).

Then

z̃_ik z̃_kl A′_ik − z̃_kl z_ik B_ki = E_ikl,   k < l,
z̃_ik z̃_kl A′_ik A′_lk B_lk − z̃_kl z_ik B_ki = E_ikl,   l < k.   (4.15)
Moreover, after the extra redefinition z̃_kl → z̃_kl B_lk for k > l, the left hand side of the second line in (4.15) becomes, up to a common factor, the same as the left hand side of the first line; namely, it reads (z̃_ik z̃_kl A′_ik − z̃_kl z_ik B_ki) A′_lk.

3b. Let i ≠ j ≠ k ≠ i.
The second group of relations of the type 3 reads:

z̃_ij t̃_i = t̃_i z̃_ij C′_ji − t̃_j z̃_ij 1/(ĥ_ij + 2) − Σ_{a: a≠i,j} z̃_aj z̃_ia 1/(ĥ_ia + 2),

z̃_ij t̃_j = −t̃_i z̃_ij C′_ji/(ĥ_ij − 1) + t̃_j z̃_ij A_ij A′_ji B_ji + Σ_{a: a≠i,j} z̃_aj z̃_ia A_ij A′_ji B_ai/(ĥ_ja + 1),   (4.16)

z̃_ij t̃_k = t̃_i z̃_ij (ĥ_ij + 3) B_ji/((ĥ_ik² − 1)(ĥ_jk − 1)) + t̃_j z̃_ij (ĥ_ij + 1) B_ji/((ĥ_ik − 1)(ĥ_jk − 1)²) + t̃_k z̃_ij A_ik A_ki A_jk B′_jk − z̃_kj z̃_ik (ĥ_ij + 1) B_ki/((ĥ_ik − 1)(ĥ_jk − 1)) − Σ_{a: a≠i,j,k} z̃_aj z̃_ia (ĥ_ij + 1)/((ĥ_ik − 1)(ĥ_jk − 1)) · B_ai/(ĥ_ka + 1).
4a. The relations of weight zero (the type 4) are also divided into two groups. This is the first group of relations:

[t̃_i, t̃_j] = 0.   (4.17)
As follows from the proof, the relations (4.17) hold for the diagonal reduction algebra for an arbitrary reductive Lie algebra: the images of the generators, corresponding to the Cartan subalgebra, commute. 4b. Finally, the second group of the relations of the type 4 is
[z̃_ij, z̃_ji] = ĥ_ij − (1/ĥ_ij)(t̃_i − t̃_j)² + Σ_{a: a≠i,j} ( (1/(ĥ_ja + 1)) z̃_ai z̃_ia − (1/(ĥ_ia + 1)) z̃_aj z̃_ja ),   (4.18)
where i ≠ j.

Main statement. Denote by R the system (4.12), (4.13), (4.14), (4.16), (4.17) and (4.18) of relations. The derivation of the system R of relations is given in Section 5. The validity in Z_n of the relations from the set R, together with the results from [7], completes the proof of Theorem 3 (Section 5.4).
Limit
Let R^≺ be the set of ordering relations (3.3). Denote by R_0 the homogeneous (quadratic) part of the system R and by R_0^≺ the homogeneous part of the system R^≺. 1. Placing the coefficients from U(h) in all relations from R_0 on the same side (to the right, for example) of the monomials p_L ⋄ p_M, one can give arbitrary numerical values to the variables h_α (the α's are roots of k).
The structure of the extremal projector P or the recurrence relation (5.4) implies that the system R 0 admits, for an arbitrary reductive Lie algebra, the limit at h α i = c i h, h → ∞ (α i ranges through the set of simple positive roots of k and c i are generic positive constants). Moreover, this homogeneous algebra becomes the usual commutative (polynomial) algebra in this limit; so this limiting behavior of the system R 0 , used in the proof, generalizes to a wider class of reduction algebras, related to a pair (g, k) as in the introduction.
2. The limiting procedure from paragraph 1 establishes the bijection between the set of relations and the set of unordered pairs (L, M ), where L, M are indices of basic vectors of p. The proof in [7] shows that over D(h) the system R can be rewritten in the form of ordering relations for an arbitrary order on the set { p L } of generators. Here D(h) is the field of fractions of the ring U(h).
By definition, the relations from R^≺ are labeled by pairs (L, M) with L > M. The above bijection therefore induces a bijection between the sets R and R^≺.

4.5 sl_n

1. Denote by Y_n the subalgebra of Z_n generated by the two central elements (3.9) and (3.10); the algebra Y_n is isomorphic to Z_1.
Since the extremal projector for sl n is the same as for gl n , the diagonal reduction algebra DR(sl n ) for sl n is naturally a subalgebra of Z n . The subalgebra DR(sl n ) is complementary to Y n in the sense that Z n = Y n ⊗ DR(sl n ).
The algebra DR(sl_n) is generated by z_ij, i, j = 1, . . . , n, i ≠ j, and t_{i,i+1} := t_i − t_{i+1}, i = 1, . . . , n − 1 (and the Cartan subalgebra h, generated by the h_{i,i+1}, of the diagonally embedded sl_n). The elements t_{i,i+1} form a basis in the space of "traceless" combinations Σ_m c_m t_m (traceless means that Σ_m c_m = 0), c_m ∈ U(h). 2. The action of the braid group restricts onto the traceless subspace:
q̌_i(t_{i−1,i}) = t_{i−1,i} + ĥ_{i,i+1}/(ĥ_{i,i+1} − 1) · t_{i,i+1},   q̌_i(t_{i+1,i+2}) = ĥ_{i,i+1}/(ĥ_{i,i+1} − 1) · t_{i,i+1} + t_{i+1,i+2},
q̌_i(t_{i,i+1}) = −(ĥ_{i,i+1} + 1)/(ĥ_{i,i+1} − 1) · t_{i,i+1},   q̌_i(t_{k,k+1}) = t_{k,k+1},   k ≠ i − 1, i, i + 1.
The traceless subspace with respect to the generators t_i and the traceless subspace with respect to the generators t̃_i (that is, the space of linear combinations Σ_m c_m t̃_m, c_m ∈ U(h), with Σ_m c_m = 0) coincide. Indeed, in the expression of t_l as a linear combination of the t̃_k's (the second line in (4.2)), we find, calculating residues and the value at infinity, that the sum of the coefficients is 1:

∏_{j=1}^{l−1} A′_jl + Σ_{k=1}^{l−1} (1/ĥ_kl) ∏_{j=1, j≠k}^{l−1} A′_jk = 1.

Therefore, in the decomposition of the difference t_i − t_j as a linear combination of the t̃_k's, the sum of the coefficients vanishes, so it is traceless with respect to the t̃_k's; t_{l,l+1} is a linear combination of t̃_12, t̃_23, . . . , t̃_{l,l+1} (and vice versa). It should however be noted that, in contrast to (4.2), the coefficients in these combinations do not factorize into products of linear monomials; the lowest example is t̃_34:
t̃_34 = ( ĥ_34/(ĥ_14 − 1) − 1/(ĥ_13 − 1) ) t_12 − ( ĥ_14(ĥ_13 − 1) + ĥ_23(ĥ_24 − 1) )/( (ĥ_13 − 1)(ĥ_23 − 1)(ĥ_24 − 1) ) t_23 + ĥ_14 ĥ_24/( (ĥ_24 − 1)(ĥ_34 − 1) ) t_34.
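The sum rule for the coefficients stated above, ∏_{j<l} A′_jl + Σ_k ĥ_kl^{−1} ∏_{j≠k} A′_jk = 1, indeed follows from the residues-and-value-at-infinity argument, and it can also be verified symbolically for small l. The normalization A′_ij = (ĥ_ij − 1)/ĥ_ij used below is an assumption (suggested by comparing (4.12) with (5.12); it is not restated in this excerpt):

```python
import sympy as sp

def check_identity(l):
    # h[i] plays the role of the (shifted) Cartan element; only the
    # differences hd(i, j) = h_i - h_j enter the identity
    h = sp.symbols(f'h1:{l + 1}')
    hd = lambda i, j: h[i - 1] - h[j - 1]
    Ap = lambda i, j: (hd(i, j) - 1) / hd(i, j)   # assumed form of A'_{ij}
    total = sp.Integer(1)
    for j in range(1, l):
        total *= Ap(j, l)                          # prod_j A'_{jl}
    for k in range(1, l):
        term = 1 / hd(k, l)                        # 1 / h_{kl}
        for j in range(1, l):
            if j != k:
                term *= Ap(j, k)                   # prod_{j != k} A'_{jk}
        total += term
    return sp.simplify(total)

for l in (2, 3, 4):
    assert check_identity(l) == 1
```

Equivalently, the rational function g(x) = ∏_j (u_j − x − 1)/(u_j − x) tends to 1 at infinity and has only simple poles, and evaluating its partial-fraction expansion at x = u_l gives the identity.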
3. One can directly see that the commutation relations between the z_ij and the differences t_k − t_l close. The renormalization (4.11) is compatible with the sl-condition and, as we have seen, the set {t_{i,i+1}} of generators can be replaced by the set {t̃_{i,i+1}}. Therefore, one can work with the generators z̃_ij, i, j = 1, . . . , n, i ≠ j, and t̃_{i,i+1} := t̃_i − t̃_{i+1}, i = 1, . . . , n − 1. A direct look at the relations (4.12), (4.13), (4.14), (4.16), (4.17) and (4.18) shows that the only non-trivial verification concerns the relations (4.16); one has to check here the following assertion: when z̃ moves through t̃_{i,i+1}, only traceless combinations of the t̃_l's appear in the right hand side. Write a relation from the list (4.16) in the form z̃_ij t̃_l = Σ_m χ_m^{(i,j,l)} t̃_m z̃_ij + · · · , χ_m^{(i,j,l)} ∈ U(h), where the dots stand for terms with z̃z̃. The assertion follows from the direct observation that for all i and j the sum Σ_m χ_m^{(i,j,l)} of the coefficients does not depend on l.
4. Consider the element

Σ_{i=1}^n (ĥ_i − 2i) t_i − (1/n) ( Σ_{i=1}^n ĥ_i − n − 1 ) Σ_{j=1}^n t_j.
It clearly depends only on the differences h i − h j and belongs therefore to the center of the subalgebra DR(sl n ). One can write this central element in the form
Σ_{u,v=1}^{n−1} C_uv ĥ_{u,u+1} t_{v,v+1} + Σ_{v=1}^{n−1} (n − v) v t_{v,v+1} = Σ_{u,v=1}^{n−1} C_uv (ĥ_{u,u+1} + 1) t_{v,v+1},   (4.19)
where C uv is the inverse Cartan matrix of sl n . In general, let k be a semi-simple Lie algebra of rank r with the Cartan matrix a ij . Let b ij be the symmetrized Cartan matrix and ( , ) the scalar product on h * induced by the invariant non-degenerate bilinear form on k, so that
a ij = d i b ij , b ij = (α i , α j ), d i = 2/(α i , α i ).
For each i = 1, . . . , r let α ∨ i be the coroot vector corresponding to the simple root α i , so that
α j (α ∨ i ) = a ij . Let d ij be the matrix, inverse to c ij = d i b ij d j .
Let ρ ∈ h * be the half-sum of all positive roots. Write
ρ = (1/2) Σ_{i=1}^r n_i α_i,
where the n_i are nonnegative integers. Let t_{α_i} be the images of H_{α_i} = α_i^{∨(1)} − α_i^{∨(2)} in the diagonal reduction algebra DR(k) and let ĥ_{α_i} = α_i^{∨(1)} + α_i^{∨(2)} be the coroot vectors of the diagonally embedded Lie algebra k. The generalization of the central element (4.19) to the reduction algebra DR(k) reads

Σ_{i,j=1}^r d_ij ĥ_{α_i} t_{α_j} + Σ_{i=1}^r n_i (α_i, α_i) t_{α_i}.
Stabilization and cutting
In [7] we discovered the stabilization and cutting phenomena, which are heavily used in our derivation of the set of defining relations for the diagonal reduction algebras of gl-type. The considerations in [7] use the standard embedding (by the first coordinates) of gl_n into gl_{n+1}. In this subsection we make several more precise statements about stabilization and cutting, considering now the embedding of gl_n ⊕ gl_1 into gl_{n+1} (more generally, of gl_n ⊕ gl_m into gl_{n+m}). These refinements are needed to establish the behavior of the center of the diagonal reduction algebra: namely, we shall see that cutting preserves centrality. Notation: h in this subsection denotes the Cartan subalgebra of gl_{n+m}.

Consider the embedding of gl_n ⊕ gl_m into gl_{n+m} given by the assignment e_ij → e_ij, i, j = 1, . . . , n, and e_ab → e_{n+a,n+b}, a, b = 1, . . . , m, where the e_kl in the source are the generators of gl_n ⊕ gl_m and the target e_kl are in gl_{n+m}. This rule, together with the similar rule E_ij → E_ij and E_ab → E_{n+a,n+b}, defines an embedding of the Lie algebra (gl_n ⊕ gl_m) ⊕ (gl_n ⊕ gl_m) into the Lie algebra gl_{n+m} ⊕ gl_{n+m} and of the enveloping algebras A_n ⊗ A_m = U(gl_n ⊕ gl_n) ⊗ U(gl_m ⊕ gl_m) into A_{n+m} = U(gl_{n+m} ⊕ gl_{n+m}). This embedding clearly maps nilpotent subalgebras of gl_n ⊕ gl_m to the corresponding nilpotent subalgebras of gl_{n+m} and thus defines an embedding ι_{n,m} : Z_n ⊗ Z_m → Z_{n+m} of the corresponding double coset spaces. However, the map ι_{n,m} is not a homomorphism of algebras, because the multiplication maps are defined with the help of projectors, which are different for gl_n ⊕ gl_m and gl_{n+m}.
However, as we will explain now we can control certain differences between the two multiplication maps. Let V n,m be the left ideal of the algebra Z n+m generated by elements z ia with i = 1, . . . , n and a = n + 1, . . . , n + m; let V ′ n,m be the right ideal of the algebra Z n+m generated by elements z ai with i = 1, . . . , n and a = n + 1, . . . , n + m.
Write any element λ ∈ Q_+ (the positive cone of the root lattice of gl_{n+m}) in the form λ = Σ_{k=1}^{n+m} λ_k ε_k. The element λ can be presented as a sum

λ = λ′ + λ″,   (4.20)

where λ′ is an element of the root lattice of gl_n ⊕ gl_m and λ″ is proportional to the simple root ε_n − ε_{n+1}: λ′ = Σ_{k=1}^{n+m} λ′_k ε_k with Σ_{k=1}^n λ′_k = Σ_{k=n+1}^{n+m} λ′_k = 0, and λ″ = c(ε_n − ε_{n+1}).
Lemma 4.
The left ideal V_{n,m} ⊂ Z_{n+m} consists of the images in Z_{n+m} of sums Σ_{ia} X_ia E_ia with X_ia ∈ Ā_{n+m}, i = 1, . . . , n and a = n + 1, . . . , n + m. The right ideal V′_{n,m} ⊂ Z_{n+m} consists of the images in Z_{n+m} of sums Σ_{ai} E_ai Y_ai with Y_ai ∈ Ā_{n+m}, i = 1, . . . , n and a = n + 1, . . . , n + m.
Proof . Present the projector P for the Lie algebra gl n+m as a sum of terms
ξ e_{−γ_1} · · · e_{−γ_t} e_{γ′_1} · · · e_{γ′_{t′}},   (4.21)

where ξ ∈ U(h), and γ_1, . . . , γ_t and γ′_1, . . . , γ′_{t′} are positive roots of gl_{n+m}. For any λ ∈ Q_+ denote by P_λ the sum of the above terms with γ_1 + · · · + γ_t = γ′_1 + · · · + γ′_{t′} = λ. Then P = Σ_{λ∈Q_+} P_λ. For any X, Y ∈ Ā define the element X ⋄_λ Y as the image of X P_λ Y in the reduction algebra. We have X ⋄ Y = Σ_{λ∈Q_+} X ⋄_λ Y.
For any X ∈ Ā_{n+m}, i = 1, . . . , n and a = n + 1, . . . , n + m consider the product X ⋄_λ z_ia. The product X ⋄_λ z_ia is zero if λ″ ≠ 0 (the component λ″ is defined by (4.20)). Indeed, in this case in each summand of P_λ one of the e_{γ′_{k′}} is equal to some e_jb, j = 1, . . . , n and b = n + 1, . . . , n + m. Choose an ordered basis of n_+ which ends with all such e_jb (ordered arbitrarily); any element of U(n_+) can be written as a sum of ordered monomials, that is, monomials in which all such e_jb stand on the right. Since [e_jb, E_ia] = 0 for any i, j = 1, . . . , n and a, b = n + 1, . . . , n + m, the product e_{γ′_{k′}} E_ia belongs to the left ideal I_+ and thus X ⋄_λ z_ia = 0 in Z_{n+m}.
If λ″ = 0, then the generators of n_+ in the monomials entering the decomposition of P_λ are among the elements e_ij, 1 ≤ i < j ≤ n, and e_ab, n + 1 ≤ a < b ≤ n + m, and thus their adjoint action leaves the space spanned by all E_ia, i = 1, . . . , n, a = n + 1, . . . , n + m, invariant; so X ⋄_λ z_ia can be presented as the image of a sum Σ_{jb} X_jb E_jb with X_jb ∈ Ā_{n+m}, j = 1, . . . , n, b = n + 1, . . . , n + m. Thus the left ideal generated by all the z_ia is contained in the vector space of images in Z_{n+m} of sums Σ_{jb} X_jb E_jb.
Moreover, for any X ∈ Ā_{n+m} the element X ⋄ z_ia is the image of X E_ia + Σ_{j,b: j<i, b>a} X^{(jb)} E_jb for some X^{(jb)}, and a double induction on i and a proves the inverse inclusion.
The second part of the lemma is proved similarly.
Corollary 5. We have the following decomposition of the free left (and right) U(h)-modules:
Z n+m = I n,m ⊕ U(h) · ι n,m (Z n ⊗ Z m ), (4.22)
where I n,m := V n,m + V ′ n,m .
Proof . The double coset space Z n+m is a free left and right U(h)-module with a basis consisting of images of ordered monomials on elements E ij , i, j = 1, . . . , n + m; recall that we always use orders compatible with the partial order < on h * , see (c) in Section 3, paragraph 2. We can choose an order for which all ordered monomials are of the form XY Z, where X is a monomial on E ai with i = 1, . . . , n and a = n + 1, . . . , n + m, Z is a monomial on E ia with i = 1, . . . , n and a = n + 1, . . . , n + m while Y is a monomial on E ij with i, j = 1, . . . , n or i, j = n + 1, . . . , n + m. Then we apply the lemma above.
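As a minimal illustration of Corollary 5, not taken from the paper, one can spell out the smallest case n = m = 1 (the description of Z_2 by the generators z_12, z_21, t_1, t_2 over U(h) is assumed here):

```latex
% n = m = 1: the embedding gl_1 \oplus gl_1 \subset gl_2.
% Z_2 is a free U(h)-module on the ordered monomials
% z_{21}^{\,a}\, t_1^{\,b} t_2^{\,c}\, z_{12}^{\,d}.
% Monomials with d>0 lie in V_{1,1}, monomials with a>0 lie in V'_{1,1},
% so I_{1,1}=V_{1,1}+V'_{1,1} collects all monomials with a>0 or d>0, and
Z_2 \;=\; I_{1,1}\;\oplus\; U(\mathfrak h)\cdot\iota_{1,1}(Z_1\otimes Z_1),
\qquad
U(\mathfrak h)\cdot\iota_{1,1}(Z_1\otimes Z_1)
   \;=\;\operatorname{span}_{U(\mathfrak h)}\{\,t_1^{\,b} t_2^{\,c}\,\}.
```

This is exactly the decomposition (4.22) with the order on monomials chosen as in the proof above.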
For a moment denote for each k > 0 the multiplication map in Z k by ⋄ (k) : Z k ⊗ Z k → Z k (instead of the default notation ⋄, see (3.1)); denote also for each k, l > 0 by ⋄ (k,l) the multiplication map ⋄ (k) ⊗ ⋄ (l) in Z k ⊗ Z l . Let h n and h m be the Cartan subalgebras of gl n and gl m , respectively. Denote the space Z n ⊗ U(hn) U(h) ⊗ U(hm) Z m by U(h) · (Z n ⊗ Z m ). The composition law ⋄ (n,m) naturally extends to the space U(h) · (Z n ⊗ Z m ) equipping it with an associative algebra structure (we keep the same symbol ⋄ (n,m) for the extended composition law in U(h) · (Z n ⊗ Z m )). Also, the map ι n,m admits a natural extension to a map ι n,m : U(h) · (Z n ⊗ Z m ) → Z n+m denoted by the same symbol and defined by the rule ι n,m (ϕx) := ϕ ι n,m (x) for any ϕ ∈ U(h) and x ∈ Z n ⊗ Z m . The statement of Proposition 6 remains valid for this extension as well, that is, one can take x, y ∈ U(h) · (Z n ⊗ Z m ) in the formulation.
Proof of Proposition 6. Denote by P n,m := P n ⊗P m the projector for the Lie algebra gl n ⊕gl m .
It is sufficient to prove the following statement. Suppose X and Y are (non-commutative) polynomials in the generators E_ij with i, j = 1, . . . , n or i, j = n + 1, . . . , n + m, and let x and y be their images in Z_n ⊗ Z_m. Then the product of x and y in Z_{n+m} coincides with the image in Z_{n+m} of X P_{n,m} Y modulo the left ideal V_{n,m} and the right ideal V′_{n,m}. Due to the structure of the projector, the condition λ″ = 0, see (4.20), implies that the product X ⋄_λ Y related to gl_n ⊕ gl_m coincides with the product X ⋄_λ Y related to gl_{n+m}.
Let now λ ′′ = 0. Then each monomial e γ ′ 1 · · · e γ ′ t ′ in the decomposition of P λ , see (4.21), contains generators e ia with i ∈ {1, . . . , n} and a ∈ {n+1, . . . , n+m}; these e ia can be assumed to be right factors of the corresponding monomial (like in the proof of Lemma 4). The commutator of any such generator e ia with every factor in Y is a linear combination of the elements E jb with j ∈ {1, . . . , n} and b ∈ {n + 1, . . . , n + m}. Moving the resulting E jb to the right we see that the product X ⋄ λ Y is the image in Z n+m of an element of the form s X s Y s where each Y s belongs to the left ideal ofĀ n+m generated by E jb with j ∈ {1, . . . , n} and b ∈ {n + 1, . . . , n + m} (one can say more: each Y s can be written in a form j,b Y (jb) s E jb where each Y (jb) s ∈Ā n+m does not involve generators E ck with k ∈ {1, . . . , n} and c ∈ {n + 1, . . . , n + m}; we don't need this stronger form). Thus, due to Lemma 4, X ⋄ λ Y ∈ V n,m .
Similarly, each X s participating in the sum s X s Y s , see above, belongs to the right ideal ofĀ n+m generated by the elements E bj with j ∈ {1, . . . , n} and b ∈ {n + 1, . . . , n + m}. So, again by Lemma 4, X ⋄ λ Y ∈ V ′ n,m .
Suppose that we have a relation

Σ_k a_k ⋄^{(n,m)} b_k = 0,   (4.23)

where the a_k and b_k are elements of Z_n ⊗ Z_m. Then in Z_{n+m} we have

Σ_k ā_k ⋄^{(m+n)} b̄_k = z,   (4.24)

where ā_k = ι_{n,m}(a_k), b̄_k = ι_{n,m}(b_k) and z ∈ J_{n,m} = V_{n,m} ∩ V′_{n,m}. On the other hand, suppose we have the following relation in Z_{n+m}:
Σ_k ā_k ⋄^{(m+n)} b̄_k = u,   (4.25)
where all the a_k and b_k are elements of Z_n ⊗ Z_m, ā_k = ι_{n,m}(a_k), b̄_k = ι_{n,m}(b_k), and u ∈ I_{n,m} = V_{n,m} + V′_{n,m}. Then the elements a_k and b_k satisfy the relation (4.23) and u ∈ J_{n,m}. Indeed, suppose that the relation (4.25) is satisfied and Σ_k a_k ⋄^{(n,m)} b_k = v for some v ∈ Z_n ⊗ Z_m. It follows from Proposition 6 that Σ_k ā_k ⋄^{(m+n)} b̄_k − v̄ belongs to J_{n,m}; here v̄ = ι_{n,m}(v). Then (4.25) implies that v̄ ∈ I_{n,m} and thus v̄ = 0 due to Corollary 5. Thus v = 0, since the map ι_{n,m} is an inclusion, and u ∈ J_{n,m}.
We refer to the implication (4.23) ⇒ (4.24) as stabilization. Call cutting the (almost inverse) implication (4.25) ⇒ (4.23) which can be understood as a procedure of getting relations in Z n ⊗ Z m from relations in Z n+m ; we say that (4.23) is the cut of (4.25). Clearly all relations in Z n ⊗ Z m can be obtained by cutting appropriate relations in Z n+m .
Let π_{n,m} : Z_{n+m} → U(h) · (Z_n ⊗ Z_m) be the composition of the projection π̄_{n,m} of Z_{n+m} onto ι_{n,m}(U(h) · (Z_n ⊗ Z_m)) = U(h) · ι_{n,m}(Z_n ⊗ Z_m) along I_{n,m}, see (4.22), and of the inverse of the inclusion ι_{n,m}: π_{n,m} = ι_{n,m}^{−1} ∘ π̄_{n,m}.
We have the following consequence of Proposition 6 and Corollary 5.

Proposition 7. Let x be a central element of Z_{n+m}. Then π_{n,m}(x) is a central element of the algebra U(h) · (Z_n ⊗ Z_m).

Proof. Denote X = π_{n,m}(x). Then, by definition, x = ι_{n,m}(X) + z, where z ∈ I_{n,m}. Since x is central, it is of zero weight; so X and z are of zero weight as well. Thus each monomial entering the decomposition of z contains both types of generators, E_ai and E_ia, where i ∈ {1, . . . , n} and a ∈ {n + 1, . . . , n + m}, which implies that z ∈ J_{n,m} = V_{n,m} ∩ V′_{n,m}. Take any Y ∈ Z_n ⊗ Z_m. We now prove that X ⋄^{(n,m)} Y − Y ⋄^{(n,m)} X = 0. Denote y = ι_{n,m}(Y). Due to Proposition 6,
ι n,m (X ⋄ (n,m) Y − Y ⋄ (n,m) X) = (x − z) ⋄ (m+n) y − y ⋄ (m+n) (x − z) + z ′ , (4.26)
where z ′ ∈ J n,m = V n,m ∩ V ′ n,m . Since x is central in Z n+m , the right hand side of (4.26) is equal to
y ⋄ (m+n) z − z ⋄ (m+n) y + z ′ ,
which is an element of I n,m = V n,m ⊕ V ′ n,m since z, z ′ ∈ J n,m . On the other hand, the left hand side of (4.26) belongs to U(h) · ι n,m (Z n ⊗ Z m ). Thus, by Corollary 5, both sides of (4.26) are equal to zero and X ⋄ (n,m) Y − Y ⋄ (n,m) X = 0 since the map ι n,m is injective.
The map π n,m obeys properties similar to those of the Harish-Chandra map U(g) h → U(h) (U(g) h is the space of elements of zero weight). For instance, its restriction to the center of Z n+m is a homomorphism. More precisely, if x is a central element of Z n+m , then π n,m (x ⋄ (m+n) y) = π n,m (x) ⋄ (n,m) π n,m (y) and π n,m (y ⋄ (m+n) x) = π n,m (y) ⋄ (n,m) π n,m (x) (4.27)
for any y ∈ Z n+m . Indeed, let X = π n,m (x), Y = π n,m (y). Then
x = ι n,m (X) − z, y = ι n,m (Y ) − u,
where u ∈ I_{n,m} while, as it was noted in the proof of Proposition 7, z ∈ J_{n,m}. Moreover, it is clear that z can be written in the form z = Σ_a z′_a z_a, where z_a ∈ V_{n,m} and z′_a ∈ V′_{n,m} (for instance, use the order as in the proof of Corollary 5). Write u = z̃′ + z̃ with z̃ ∈ V_{n,m} and z̃′ ∈ V′_{n,m}. Then (dropping for brevity the multiplication symbol ⋄^{(m+n)}) we have

ι_{n,m}(X) ι_{n,m}(Y) = (x + z)(y + u) = ( x + Σ_a z′_a z_a )( y + z̃′ + z̃ ) = xy + Σ_a z′_a z_a (y + z̃′ + z̃) + x z̃ + z̃′ x ≡ xy mod I_{n,m}.   (4.28)

In the last equality we used the centrality of x. Due to Proposition 6, (4.28) is precisely equivalent to the first part of (4.27). The second part of (4.27) is proved similarly.
Proofs
Tensor J
The multiplication map ⋄ in Z n (we return to the original notation) is given by the prescription (3.1), as in any reduction algebra. It can be formally expanded into a series over the root lattice of certain bilinear maps as follows. Set
Ū(b_±) := Ū(h) ⊗_{U(h)} U(b_±),   Ū_12(b) := Ū(b_−) ⊗_{Ū(h)} Ū(b_+).
All these are associative algebras. Besides, both algebras U(b ± ) are U(h)-bimodules. The algebra U 12 (b) admits three commuting actions of U(h). Two of them are given by the assignments
X(Y ⊗ Z) := XY ⊗ Z, (Y ⊗ Z)X := Y ⊗ ZX,
for any X ∈ U(h), Y ∈ U(b − ) and Z ∈ U(b + ). The third action associates to any X ∈ U(h),
Y ⊗ Z ∈ U 12 (b) the element Y X ⊗ Z = Y ⊗ XZ ∈ U 12 (b).
Present the projector P in an ordered form:

P = Σ_{γ,i} F̀_{γ,i} È_{γ,i} H̀_{γ,i} = Σ_{γ,i} H̀_{γ,i} F̀_{γ,i} È_{γ,i},   (5.1)

where the summation is over γ ∈ Q_+ and i ∈ Z_{≥0}; every F̀_{γ,i} is an element of U(n_−) of weight −γ, every È_{γ,i} is an element of U(n_+) of weight γ, and H̀_{γ,i} ∈ U(h). Let J be the following element of U_12(b):

J := Σ_{γ,i} F̀_{γ,i} ⊗ È_{γ,i} H̀_{γ,i} = Σ_{γ,i} H̀_{γ,i} F̀_{γ,i} ⊗ È_{γ,i},   γ ∈ Q_+, i ∈ Z_{≥0}.
Due to the PBW theorem in U(gl_n), the tensor J is uniquely defined by the projector P; it is of total weight zero: hJ = Jh for any h ∈ h. We have the weight decomposition of J with respect to the adjoint action of h in the second tensor factor of U_12(b):

J = Σ_{λ∈Q_+} J_λ,

where J_λ consists of all the terms corresponding to F̀_{λ,i} È_{λ,i} H̀_{λ,i} in (5.1) (contributing to λ ∈ Q_+ in the summation):

J_λ := Σ_i F̀_{λ,i} ⊗ È_{λ,i} H̀_{λ,i}.
By the definition of J, the multiplication ⋄ in the double coset space Z_n can be described by the relation

a ⋄ b = m((a ⊗ 1) J (1 ⊗ b)),   (5.2)

where m(Σ_i c_i ⊗ d_i) is the image in Z_n of the element Σ_i c_i d_i. Moreover, in (5.2) we can replace all products È_{γ,i} b in the second tensor factor by the adjoint action of È_{γ,i} on b (in fact, for È_{γ,i} = e_{γ_m} · · · e_{γ_1}, we can replace È_{γ,i} b by [È_{γ,i}, b] or by ê_{γ_m} · · · ê_{γ_1}(b), see (2.4)), and likewise all products a F̀_{γ,i} in the first tensor factor by the opposite adjoint action of F̀_{γ,i} on a. We have a decomposition of the product ⋄ into a sum over Q_+:

a ⋄ b = Σ_{λ∈Q_+} a ⋄_λ b,   where a ⋄_λ b := m((a ⊗ 1) J_λ (1 ⊗ b)).   (5.3)
If a and b are weight elements of Z n of weights ν(a) and ν(b), then the product a ⋄ λ b is the image in Z n of the sum i a i b i , where the weight of each b i is ν(b) + λ, and the weight of each a i is ν(a) − λ.
The tensor J satisfies the Arnaudon-Buffenoir-Ragoucy-Roche (ABRR) difference equation [1]; see also [5] for the translation of the results of [1] to the language of reduction algebras. To describe the equation, let ϑ = (1/2) Σ_{k=1}^n ĥ_k² ∈ U(h); for any positive root γ ∈ ∆_+, denote by T_γ the following linear operator on the vector space U_12(b):

T_γ(X ⊗ Y) := X e_{−γ} ⊗ e_γ Y.
The ABRR equation means the relation [1, 5]:
[1 ⊗ ϑ, J] = −Σ_{γ∈∆_+} T_γ(J).
This relation is equivalent to the following system of recurrence relations for the weight components J λ :
J_λ · ( ĥ_λ + (λ, λ)/2 ) = −Σ_{γ∈∆_+} T_γ(J_{λ−γ}),   (5.4)
where ĥ_λ := Σ_k λ_k ĥ_k for λ = Σ_k λ_k ε_k. The recurrence relations (5.4), together with the initial condition J_0 = 1 ⊗ 1, uniquely determine all the weight components J_λ. It should be noted that the recurrence relations (5.4) provide less information about the structure of the denominators (from U(h)) of the summands of the extremal projector P than the product formula (see [3]) for the extremal projector.
Using (5.4) we get in particular:
J_α = −(ĥ_α + 1)^{−1} e_{−α} ⊗ e_α,   α = ε_i − ε_{i+1},   (5.5)

J_{α+β} = (ĥ_{α+β} + 1)^{−1} ( −e_{−α−β} ⊗ e_{α+β} + (ĥ_α + 1)^{−1} e_{−α} e_{−β} ⊗ e_β e_α + (ĥ_β + 1)^{−1} e_{−β} e_{−α} ⊗ e_α e_β ),   α = ε_{i−1} − ε_i, β = ε_i − ε_{i+1},   (5.6)

J_{ε_i−ε_j+ε_k−ε_l} = J_{ε_i−ε_j} · J_{ε_k−ε_l},   i < j < k < l.   (5.7)
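As a consistency check (a sketch using only (5.4) and the initial condition J_0 = 1 ⊗ 1), the first of these formulas is recovered in one step. For λ = α a simple root, the only γ ∈ ∆_+ with λ − γ ∈ Q_+ is γ = α, and (α, α) = 2, so

```latex
J_\alpha\cdot\bigl(\hat h_\alpha+1\bigr)
   \;=\;-\,T_\alpha(J_0)\;=\;-\,e_{-\alpha}\otimes e_\alpha,
\qquad\text{hence}\qquad
J_\alpha \;=\; -\,e_{-\alpha}\otimes e_\alpha\,(\hat h_\alpha+1)^{-1}
        \;=\; -(\hat h_\alpha+1)^{-1}\,e_{-\alpha}\otimes e_\alpha .
```

The last equality uses the commutation rules (ĥ_α + 1)^{−1} e_{−α} = e_{−α} (ĥ_α − 1)^{−1} and (ĥ_α − 1)^{−1} e_α = e_α (ĥ_α + 1)^{−1} together with the middle U(h)-action on U_12(b), so the Cartan factor passes from one side to the other unchanged.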
Braid group action
The proof of the relations (4.1) and (4.4) consists of the following arguments, valid for any reduction algebra. Let α be any simple root of gl_n, α = ε_i − ε_{i+1}, and g_α the corresponding sl_2-subalgebra of gl_n. It is spanned by the elements e_α = e_{i,i+1}, e_{−α} = e_{i+1,i} and h_α = h_i − h_{i+1}. Let σ̌_α = σ̌_i be the corresponding automorphism of the algebra A and q̌_α = q̌_i the Zhelobenko automorphism of Z_n. Assume that Y ∈ A belongs, with respect to the adjoint action of g_α, to an irreducible finite-dimensional g_α-module of dimension 2j + 1, j ∈ {0, 1/2, 1, . . . }. Assume further that Y is homogeneous of weight 2m, [h_α, Y] = 2mY. Identify Y with its image in Z_n. Then q̌_α(Y) coincides with the image in Z_n of the element

∏_{i=1}^{j+m} (ĥ_α + i + 1) · σ̌_α(Y) · ∏_{i=1}^{j+m} (ĥ_α − i + 1)^{−1}.
This can be checked directly using [6,Proposition 6.5].
In the realization of irreducible sl_2-modules as the spaces of homogeneous polynomials in two variables u and v,

e_α → u ∂/∂v,   h_α → u ∂/∂u − v ∂/∂v,   e_{−α} → v ∂/∂u,

the operator σ̌_α becomes (σ̌_α f)(u, v) = f(−v, u), or, in the basis |j, k⟩ := u^{j+k} v^{j−k} (j labels the representation),

σ̌_α : |j, −j + k⟩ → (−1)^k |j, j − k⟩,   k = 0, 1, . . . , 2j.
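The sign rule σ̌_α : |j, −j + k⟩ → (−1)^k |j, j − k⟩ can be checked mechanically in this polynomial realization. A small sketch (sympy is used here for convenience; it is not part of the paper):

```python
import sympy as sp

u, v = sp.symbols('u v')

def sigma(f):
    # the operator (sigma_alpha f)(u, v) = f(-v, u)
    return sp.expand(f.subs({u: -v, v: u}, simultaneous=True))

def check(j2):
    # j2 = 2j; |j, -j+k> is realized as u^k v^(2j-k), k = 0, 1, ..., 2j
    for k in range(j2 + 1):
        vec = u**k * v**(j2 - k)
        expected = sp.expand((-1)**k * u**(j2 - k) * v**k)  # (-1)^k |j, j-k>
        assert sigma(vec) == expected

for j2 in range(5):          # j = 0, 1/2, 1, 3/2, 2
    check(j2)
```

Indeed, u^k v^{2j−k} goes to (−v)^k u^{2j−k} = (−1)^k u^{2j−k} v^k under (u, v) → (−v, u).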
Proof of Lemma 1, Subsection 4.2. To see this, write a reduced expression for q̌_{w_0}: q̌_{w_0} = q̌_{α_{i_1}} · · · q̌_{α_{i_M}} with α_{i_1}, . . . , α_{i_M} simple roots. Then q̌_{w_0} = q̌_{α_{i_M}} · · · q̌_{α_{i_1}} as well. Writing, for q̌²_{w_0}, the second expression after the first one, we get squares of the q̌_{α_{i_s}}'s (which are conjugations by the h̊_{α_{i_s}}^{−1}'s; they thus commute) one after another. Moving these conjugations to the left through the remaining q̌'s, we produce, exactly as in the construction of a system of all positive roots from a reduced expression for the longest element of the Weyl group of a reductive Lie group, the conjugation by the product (4.8) over all positive roots.
Proof of Proposition 2, Subsection 4.2. Only formula (4.9) needs a proof (formula (4.10) is a particular case of (4.3)).
For a moment, denote the longest element of the symmetric group S_n by q̌^{(n)}_{w_0}. Let ψ_j := q̌_j q̌_{j−1} · · · q̌_1 (the product in descending order). We have q̌^{(n+1)}_{w_0} = q̌^{(n)}_{w_0} ψ_n and q̌^{(n+1)}_{w_0} = ψ_1 ψ_2 · · · ψ_n (the product in ascending order).
For j < n it follows from (4.4) that ψ_j(z_{n+1,1}) = (−1)^j z_{n+1,j+1} (say, by induction on j). So ψ_n(z_{n+1,1}) = q̌_n ψ_{n−1}(z_{n+1,1}) = (−1)^{n−1} q̌_n(z_{n+1,n}) = (−1)^n z_{n,n+1}, again by (4.4). Next, ψ_k ψ_{k+1} · · · ψ_{n−1}(z_{n,n+1}) = z_{k,n+1} by induction on n − k and again (4.4). Thus

q̌_{w_0}(z_{n+1,1}) = (−1)^n z_{1,n+1},   (5.8)

establishing (4.9) for i = n + 1 and j = 1. We now prove (4.9) for i > j (positions below the main diagonal) by induction backwards on the height i − j of a negative root; the formula (5.8) serves as the induction base. Assume that (4.9) is verified for a given level i − j, with i − j − 1 > 0 (so that the positions (i, j + 1) and (i − 1, j) are still under the main diagonal). By (4.4),
z_{i,j+1} = −q̌_j(z_ij), therefore

q̌_{w_0}(z_{i,j+1}) = −q̌_{w_0}(q̌_j(z_ij)) = −q̌_{j′−1}(q̌_{w_0}(z_ij)) = (−1)^{i+j+1} q̌_{j′−1}( z_{i′j′} ∏_{a: a<i′} A_{ai′} ∏_{b: b>j′} A_{j′b} )
 = (−1)^{i+j+1} z_{i′,j′−1} A_{j′−1,j′} ∏_{a: a<i′} A_{ai′} ∏_{b: b>j′} A_{j′−1,b}
 = (−1)^{i+j+1} z_{i′,(j+1)′} ∏_{a: a<i′} A_{ai′} ∏_{b: b>(j+1)′} A_{(j+1)′,b}.
In the second equality we used the identity q̌_{w_0} q̌_j = q̌_{j′−1} q̌_{w_0} in the braid group; the third equality is the induction assumption; in the fourth equality we used that i′ ≠ j′ − 1 (since i − j − 1 > 0) and then (4.4); in the fifth equality we replaced j′ − 1 by (j + 1)′. The calculation for q̌_{w_0}(z_{i−1,j}) is similar; it uses z_{i−1,j} = q̌_{i−1}(z_ij). The proof of the formula (4.9) for positions below the main diagonal is finished. The proof of (4.9) for i < j (positions above the main diagonal) now follows from Lemma 1.
Derivation of relations
The set of defining relations in Z_n divides into several different types, see Section 4.3. We prove the necessary number of relations of each type and obtain the rest by applying transformations from the braid group as well as the anti-involution ǫ, see (3.7).
We never use the automorphism ω, defined in (3.8), in the derivation of relations. However, the involution ω is compatible with our set of relations in the sense explained in Section 5.4.
In the following we denote by the symbol ≡ equalities of elements of Ā modulo the sum (I_− + I_+) of the two ideals I_− and I_+ defined in the beginning of Section 3. Moreover, for any two elements X and Y of the algebra Ā we may regard the expressions X ⋄ Y and X ⋄_λ Y as the sums of elements of Ā defined in (5.2) and (5.3). The sum X ⋄_λ Y is finite. By construction, all but a finite number of terms in the product X ⋄ Y belong to (I_− + I_+). Unlike the system of notation adopted in Section 3, our proof of each relation in Z_n will use equalities in Ā taken modulo (I_− + I_+).

1. We first prove in Z_n the relation

z_12 ⋄ z_13 = z_13 ⋄ z_12 ĥ_23/(ĥ_23 + 1).   (5.9)

The elements z_12 and z_13 are the images in Z_n of E_12 and E_13. Consider the product E_12 ⋄_λ E_13. Since the adjoint action of gl_n preserves the space p, see Section 2, this product is a sum of monomials E_ij E_kl, with coefficients in U(h), such that (i): the weight ε_k − ε_l of E_kl is equal to the weight ε_1 − ε_3 of E_13 plus λ ∈ Q_+, while (ii): the weight ε_i − ε_j of E_ij is equal to the weight ε_1 − ε_2 of E_12 minus λ. Assume that E_12 ⋄_λ E_13 ≠ 0. By (i), λ = −ε_1 + ε_3 + ε_k − ε_l, and it can be positive only if k = 1 and l ≥ 3. So the condition (i) implies that either λ = 0 or λ = ε_3 − ε_l with l > 3. The possibility λ = ε_3 − ε_l, l > 3, is excluded by the condition (ii). Therefore λ = 0 and
E_12 ⋄ E_13 ≡ E_12 E_13.   (5.10)
Similarly, for λ ∈ Q_+ which can non-trivially contribute to the product E_13 ⋄ E_12, the analogue of the condition (i) on the weight λ gives the restriction λ = 0 or λ = ε_2 − ε_k, k > 2; the analogue of the condition (ii) further restricts λ: λ = 0 or λ = ε_2 − ε_3. So we have

E_13 ⋄ E_12 ≡ E_13 E_12 + E_13 ⋄_{ε_2−ε_3} E_12,   E_13 ⋄_{ε_2−ε_3} E_12 ≡ E_12 E_13 (1/ĥ_23),

since J_{ε_2−ε_3} = −e_32 ⊗ e_23 (ĥ_23 + 1)^{−1}, as follows from the ABRR equation, see (5.5), or from the precise explicit expression for the projector P, see [3]. Thus, since E_12 and E_13 commute in the universal enveloping algebra,

E_13 ⋄ E_12 ≡ E_13 E_12 + E_13 ⋄_{ε_2−ε_3} E_12 = E_12 E_13 (1 + 1/ĥ_23).   (5.11)
Comparing (5.10) and (5.11) we find (5.9). Applying to (5.9) the anti-involution ǫ, see (3.7), we get the relation

z_21 ⋄ z_31 = z_31 ⋄ z_21 (ĥ_23 + 1)/ĥ_23.   (5.12)
The rest of the relations (4.12) are obtained from (5.9) and (5.12) by applying different transformations q̌_w from the Weyl group.
2. Now we prove in Z n the relation
z_13 ⋄ z_24 − z_24 ⋄ z_13 = ( 1/ĥ_12 − 1/ĥ_34 ) z_23 ⋄ z_14.   (5.13)
We begin with the proof of this relation in Z_4. We proceed in the same manner as for the derivation of the relation (5.9):

E_13 ⋄ E_24 ≡ E_13 E_24 + E_13 ⋄_{ε_1−ε_2} E_24, . . .

Combining the three latter equalities we obtain (5.13) in Z_4. The difference of the left and right hand sides of (5.13) in Z_n is a linear combination of monomials in the z_ij of the total weight ε_1 + ε_2 − ε_3 − ε_4. The weight is non-trivial, so the monomials can only be quadratic. Due to the stabilization phenomenon, each monomial should contain a z_ij with i > 4 or j > 4; but, by the weight argument, there is no such non-zero possibility, which completes the proof of the relation (5.13) in Z_n.
The rest of the relations (4.13) are then obtained by applying transformations from the braid group.
3a. We continue and derive in Z_4 the relation (recall the notation t_ij := z_ii − z_jj, see Section 3, and H_ij = E_ii − E_jj):

z_23 ⋄ z_12 − z_12 ⋄ z_23 = t_12 ⋄ z_13 (1/ĥ_12) + t_23 ⋄ z_13 (1/ĥ_23) − z_43 ⋄ z_14 (ĥ_34 + 1)/(ĥ_34 ĥ_24).   (5.14)
Using (5.5)-(5.7), we calculate, to obtain the result for Z_4:

E_12 ⋄ E_23 ≡ E_12 E_23 + E_12 ⋄_{ε_1−ε_2} E_23 + E_12 ⋄_{ε_1−ε_2+ε_3−ε_4} E_23 ≡ E_12 E_23 − H_12 E_13 (1/ĥ_12),   (5.15)

E_23 ⋄ E_12 ≡ E_23 E_12 + E_23 ⋄_{ε_2−ε_3} E_12 + E_23 ⋄_{ε_2−ε_4} E_12 ≡ E_23 E_12 + H_23 E_13 (1/ĥ_23) − E_43 E_14 (ĥ_23 − 1)/(ĥ_23 ĥ_24).   (5.16)

Combining the above equalities and taking into account that [E_12, E_23] = e_13 ≡ 0, we get (5.14) in Z_4. We could apply here the stability arguments (as we shall do in the sequel), but we give some more details at this point, to give a flavor of how such derivations of relations work. For the same calculations as (5.15)-(5.19) in Z_n, the analogues of the conditions (i) and (ii), see paragraph 1 of this subsection, restrict λ to be of the form ε_1 − ε_2 + ε_3 − ε_k, k ≥ 3, for (5.15); ε_2 − ε_k, k ≥ 2, for (5.16); ε_3 − ε_k, k ≥ 3, for (5.17) and (5.18); ε_4 − ε_k, k ≥ 4, for (5.19). It follows, for, say, n = 5, that the right hand sides of (5.15)-(5.19) might be modified only by the addition of a term proportional to E_53 E_15; and this will be compensated by the addition of a term proportional to z_53 ⋄ z_15 to the right hand side of (5.14), since E_53 ⋄ E_15 ≡ E_53 E_15; the proportionality coefficient is uniquely defined. This pattern clearly continues and we conclude that there is a relation in Z_n of the form

z_23 ⋄ z_12 − z_12 ⋄ z_23 = t_12 ⋄ z_13 (1/ĥ_12) + t_23 ⋄ z_13 (1/ĥ_23) − Σ_{k>3} z_k3 ⋄ z_1k X_k,   (5.20)
with certain, uniquely defined, coefficients X_k ∈ U(h), k = 4, …, n, and already known X_4 = (h̃_34 + 1) h̃_34^{−1} h̃_24^{−1}. To find X_5, …, X_n, we apply to (5.20) the automorphisms q̌_k, k = 4, …, n − 1, which leave invariant the left hand side and the first two terms in the right hand side of (5.20). The uniqueness of the relation of the form (5.20), together with the equality q̌_k(z_k3 ⋄ z_1k) = z_{k+1,3} ⋄ z_{1,k+1} (h̃_{k,k+1} + 1) h̃_{k,k+1}^{−1}, imply the recurrence relation X_{k+1} = q̌_k(X_k) · (h̃_{k,k+1} + 1) h̃_{k,k+1}^{−1}
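The recurrence can be solved in closed form. Assuming only that q̌_k acts on the Cartan part by the permutation of the (shifted) indices k and k + 1 — so that q̌_k(h̃_{jk}) = h̃_{j,k+1} for j < k — a one-line induction from the initial value X_4 gives:

```latex
X_4=\frac{\tilde h_{34}+1}{\tilde h_{34}\,\tilde h_{24}},\qquad
X_{k+1}=\check q_k(X_k)\,\frac{\tilde h_{k,k+1}+1}{\tilde h_{k,k+1}}
\quad\Longrightarrow\quad
X_k=\frac{1}{\tilde h_{2k}}\prod_{j=3}^{k-1}\frac{\tilde h_{jk}+1}{\tilde h_{jk}}\,.
```

Indeed, applying q̌_k to the product form replaces every index k by k + 1, and the extra factor appends the j = k term of the product; this matches the closed expression for X_k recovered below.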
and we find the closed expression for X_k displayed below. Here only the calculation of E_12 ⋄ E_21 deserves a little explanation; by the analogues of the conditions (i) and (ii), see paragraph 1 of this subsection, non-trivial contributions to E_12 ⋄ E_21 from the weights 0, ε_1 − ε_2, 2(ε_1 − ε_2), ε_1 − ε_3 and 2ε_1 − ε_2 − ε_3 are possible. By the ABRR equation, J_{2(ε_1−ε_2)}(2h_12 + 4) = −T_{ε_1−ε_2}(J_{ε_1−ε_2}) and J_{2ε_1−ε_2−ε_3}(2h_1 − h_2 − h_3 + 3) = −T_{ε_1−ε_2}(J_{ε_1−ε_3}) − T_{ε_1−ε_3}(J_{ε_1−ε_2}) − T_{ε_2−ε_3}(J_{2(ε_1−ε_2)}). We leave further details to the reader. By the stabilization law in Z_4 we have a relation

z_12 ⋄ z_21 = h_12 − t_12 ⋄ t_12 (1/(h_12 − 1)) + Σ_{1≤i<j≤n} z_ji ⋄ z_ij X_ij ,  X_ij ∈ U(h),   (5.35)

with n = 4, which differs from (5.29) by a presence of the terms z_43 ⋄ z_34, z_42 ⋄ z_24, z_41 ⋄ z_14, with coefficients in U(h). Consider in Z_4 the products z_12 ⋄ z_21, t_12 ⋄ t_12 and z_ji ⋄ z_ij, 1 ≤ i < j ≤ 4. The weights (ε_3 − ε_4) − (ε_i − ε_j) do not belong to the cone Q_+ if 1 ≤ i < j < 4. Thus in the decomposition
E_ji ⋄ E_ij ≡ Σ_{k<l} E_lk E_kl a_kl ,  a_kl ∈ U(h),  1 ≤ i < j < 4,
the term with E_43 E_34 has a zero coefficient, a_34 = 0. The same statement holds for the products E_41 ⋄ E_14 and E_42 ⋄ E_24 since the weights (ε_3 − ε_4) − (ε_i − ε_4) do not belong to Q_+ for i = 1, 2. Consider the product E_12 ⋄ E_21. Here the term with E_43 E_34 is equal to E_12 ⋄_{ε_1−ε_2+ε_3−ε_4} E_21. By (5.7), J_{ε_1−ε_2+ε_3−ε_4} = e_43 e_21 ⊗ e_12 e_34 (h_12 + 1)^{−1}(h_34 + 1)^{−1} and E_12 ⋄_{ε_1−ε_2+ε_3−ε_4} E_21 ≡ 0, as computed below.
t_α ⋄ z_{−α−β} = z_{−α−β} ⋄ t_α ,

z_α t_1 = t_1 z_α ((h_α + 3)/(h_α + 2)) − t_2 z_α (1/(h_α + 2)) − z_{−β} z_{α+β} ((h_β + 2)/((h_β + 1)(h_α + h_β + 3))) ,

z_α t_2 = −t_1 z_α … ,

q̌_α(z_α) = −z_{−α} ((h_α + 1)/(h_α − 1)),  q̌_α(z_{−α}) = −z_α,  q̌_α(z_β) = z_{α+β},  q̌_α(z_{α+β}) = −z_β ((h_α + 1)/h_α),   (6.32)

q̌_α(z_{−α−β}) = −z_{−β},  q̌_α(z_{−β}) = z_{−α−β} ((h_α + 1)/h_α).
The set of ordering relations (6.2)-(6.29) is covariant with respect to the braid group generated by q̌_α and q̌_β. "Covariant" means that the elements of the braid group map a relation to a linear over U(h) combination of relations. For example, the operator q̌_α, up to multiplicative factors from U(h), transforms the relation (6.27) into itself and permutes the relations (6.28) and (6.29). Due to the choice (6.1) of the order, the set of relations (6.2)-(6.29) is invariant with respect to the anti-involution ǫ. The set of relations (6.2)-(6.29) is covariant under the involution ω as well.
4. Central elements of DR(sl 3 ). The degree 1 and degree 2 (in generators z ij ) central elements of the reduction algebra DR(sl 3 ) are:
C_{DR(sl_3),1} = t_α (2h_α + h_β + 6) + t_β (h_α + 2h_β + 6),
C_{DR(sl_3),2} = (1/3)(t_α ⋄ t_α + t_β ⋄ t_β + t_α ⋄ t_β + h²_α + h²_β + h_α h_β) + z_{−α} ⋄ z_α ((h_α + 3)/(h_α + 2)) + z_{−β} ⋄ z_β ((h_β + 3)/(h_β + 2)) + z_{−α−β} ⋄ z_{α+β} ((h_α + h_β + 4)/(h_α + h_β + 3)) (1 + 1/(h_α + 1) + 1/(h_β + 1)) + 2(h_α + h_β).
Both Casimir operators, C_{DR(sl_3),1} and C_{DR(sl_3),2}, arise from the quadratic Casimir operator C_{sl_3,2} of the Lie algebra sl_3, whose ordered form is

C_{sl_3,2} = (E_{−α} E_α + E_{−β} E_β + E_{−α−β} E_{α+β}) + (1/3)(H²_α + H²_β + H_α H_β) + H_α + H_β.
Theorem 3. The relations R are the defining relations for the weight generators z_ij and t_i of the algebra Z_n. In particular, the set (3.3) of ordering relations follows over U(h) from (and is equivalent to) R.
With the help of the central elements (3.9), (3.10) and (3.11) one can build a unique linear in t's traceless combination:
Proposition 6. For any x, y ∈ Z_n ⊗ Z_m we have ι_{n,m}(x) ⋄^{(n+m)} ι_{n,m}(y) = ι_{n,m}(x ⋄^{(n,m)} y) + z, where z is some element of J_{n,m} := V_{n,m} ∩ V′_{n,m}.
a_k and b_k are elements of Z_n ⊗ Z_m. Then, due to Proposition 6, we have the following relation in Z_{n+m}:

Σ_k ā_k ⋄^{(m+n)} b̄_k = z.   (4.24)
Proposition 7. Let x be a central element of Z_{n+m}. Then π_{n,m}(x) is a central element of U(h) · (Z_n ⊗ Z_m).
−E_13 e_32 e_23 (1/(h_23 + 1)) E_12 ≡ −E_13 e_32 e_23 E_12 (1/h̃_23) ≡ E_12 E_13 (1/h̃_23),
≡ E_13 E_24 − E_13 e_21 e_12 (1/(h_12 + 1)) E_24 ≡ E_13 E_24 + E_23 E_14 (1/h̃_12),  E_24 ⋄ E_13 ≡ E_24 E_13 + E_24 ⋄_{ε_3−ε_4} E_13 ≡ E_24 E_13 − E_24 e_43 e_34 (1/(h_34 + 1)) E_13 ≡ E_13 E_24 + E_23 E_14 (1/h̃_34),  E_23 ⋄ E_14 ≡ E_23 E_14.
H_12 ⋄ E_13 ≡ H_12 E_13 + H_12 ⋄_{ε_3−ε_4} E_13 ≡ H_12 E_13,   (5.17)

H_23 ⋄ E_13 ≡ H_23 E_13 + H_23 ⋄_{ε_3−ε_4} E_13 ≡ H_23 E_13 + E_43 E_14 (1/h̃_34),   (5.18)

E_43 ⋄ E_14 ≡ E_43 E_14.   (5.19)
(the first term in the right hand side is h_12, without hat). Solving the recurrence with this initial condition, we find

X_k = (1/h̃_2k) ∏_{j=3}^{k−1} (h̃_jk + 1)/h̃_jk .

The relation (5.29) in Z_3 reads

z_12 ⋄ z_21 = h_12 − t_12 ⋄ t_12 (1/(h_12 − 1)) + z_31 ⋄ z_13 ((h_12 − 1)(h_13 + 2))/(h_12 h̃_23 (h_13 + 1)) − z_32 ⋄ z_23 ((h̃_23 + 2)/((h_23 + 1) h̃_13)).   (5.29)

The relation (5.29) is a corollary of the following calculations for Z_3, together with the commutation relation [E_12, E_21] = h_12:

E_12 ⋄ E_21 ≡ E_12 E_21 − H²_12 (1/(h_12 − 1)) + E_21 E_12 (2/((h_12 − 1) h_12)) − E_32 E_23 ((h̃_12 − 2)/((h_12 − 1) h̃_13)) + E_31 E_13 ((h̃_12 − 2)/((h_12 − 1) h_12 h̃_13)),   (5.30)

H_12 ⋄ H_12 ≡ H²_12 − E_21 E_12 (4/(h_12 + 1)) − E_32 E_23 (1/(h_23 + 1)) + E_31 E_13 (−1 + 1/(h_23 + 1) + 4/(h_12 + 1)) (1/(h_13 + 1)),   (5.31)

E_21 ⋄ E_12 ≡ E_21 E_12 − E_31 E_13 (1/h_23),   (5.32)

E_32 ⋄ E_23 ≡ E_32 E_23 − E_31 E_13 (1/h_12),   (5.33)

E_31 ⋄ E_13 ≡ E_31 E_13.   (5.34)

Continuing the computation of the weight ε_1 − ε_2 + ε_3 − ε_4 contribution, E_12 ⋄_{ε_1−ε_2+ε_3−ε_4} E_21 = E_12 e_43 e_21 e_12 e_34 E_21 (1/((h_12 − 1)(h_34 + 1))) ≡ 0, since [e_34, E_21] = 0 (and [E_12, e_43] = 0). In the similar manner, the term with E_43 E_34 in H_12 ⋄ H_12 equals H_12 ⋄_{ε_3−ε_4} H_12 and vanishes since [e_34, H_12] = 0.
Unité Mixte de Recherche (UMR 6207) du CNRS et des Universités Aix-Marseille I, Aix-Marseille II et du Sud Toulon-Var; Laboratoire Affilié à la FRUMAM (FR 2291)
t̄_12 = (h̃_12/(h_12 − 1)) t_12 ,  t̄_23 = (h̃_23/(h_13 − 1) − 1/(h_12 − 1)) t_12 + (h̃_13/(h_23 − 1)) t_23 ,
2. Relations for DR(gl_3). The ordering relations for the reduction algebra DR(gl_3) are easily restored from the list (6.2)-(6.29): the gl(3) generators t_1, t_2 and t_3, with t_α = t_1 − t_2 and t_β = t_2 − t_3, can be written as

t_1 = (1/3)(2t_α + t_β + I_{(3,t)}),  t_2 = (1/3)(−t_α + t_β + I_{(3,t)}),  t_3 = (1/3)(−t_α − 2t_β + I_{(3,t)}),

where I_{(3,t)} is the image of the central generator of gl(3), I_{(3,t)} = t_1 + t_2 + t_3. Since I_{(3,t)} is central, one immediately writes relations for DR(gl_3). We illustrate it on the example of relations between the generator z_α and the gl(3) generators t_1, t_2 and t_3:
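A quick numerical sanity check of this linear change of variables (plain Python, not part of the paper's formalism): starting from arbitrary values of t_1, t_2, t_3, form t_α = t_1 − t_2, t_β = t_2 − t_3 and I = t_1 + t_2 + t_3, and verify that the three displayed formulas recover t_1, t_2, t_3 exactly.

```python
from fractions import Fraction as F

def to_sl3_basis(t1, t2, t3):
    """(t1, t2, t3) -> (t_alpha, t_beta, I) with t_alpha = t1 - t2, t_beta = t2 - t3."""
    return t1 - t2, t2 - t3, t1 + t2 + t3

def from_sl3_basis(ta, tb, i):
    """Invert the change of variables; exact arithmetic via Fraction."""
    third = F(1, 3)
    t1 = third * (2 * ta + tb + i)
    t2 = third * (-ta + tb + i)
    t3 = third * (-ta - 2 * tb + i)
    return t1, t2, t3

t = (F(5), F(-2), F(7))
assert from_sl3_basis(*to_sl3_basis(*t)) == t   # round-trip is exact
```

The round-trip confirms that the stated coefficients 1/3, ±1, ±2 are the correct inverse of the map (t_1, t_2, t_3) ↦ (t_α, t_β, I_{(3,t)}).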
Acknowledgments

We thank Loïc Poulain d'Andecy for computer calculations for the algebra Z_4. We thank Elena Ogievetskaya for the help in preparation of the manuscript. The present work was partly done during visits of S.K. to CPT and CIRM in Marseille. The authors thank both Institutes. S.K. was supported by the RFBR grant 11-01-00962, joint CNRS-RFBR grant 09-01-93106-NCNIL, and by Federal Agency for Science and Innovations of Russian Federation under contract 14.740.11.0347.

After the renormalization (4.11) and the change of variables (4.2), the derived relation becomes one of the relations in the first line of (4.14). Applying the transformations from the braid group, we obtain the rest of the relations from the list (4.14).

3b. We have the following equalities in Z_3:

z_12 ⋄ t_1 = t_1 ⋄ z_12 ((h_12 + 2)/(h_12 + 1)) − t_2 ⋄ z_12 (1/(h_12 + 1)) − z_32 ⋄ z_13 ((h_23 + 1)/(h̃_23 (h_13 + 1))),   (5.21)

z_12 ⋄ t_2 = −t_1 ⋄ z_12 (1/(h_12 + 1)) + t_2 ⋄ z_12 ((h_12 + 2)/(h_12 + 1)) + z_32 ⋄ z_13 ((h_13 + 2)/(h̃_23 (h_13 + 1))),   (5.22)

and the equalities (5.23), (5.24) in Z_4. We shall make a comment about the line (5.24) only. Here one might expect, by the analogues of the conditions (i) and (ii), see paragraph 1 of this subsection, non-trivial contributions to E_12 ⋄ H_4 from the weights 0, ε_1 − ε_2, ε_1 − ε_3 and ε_1 − ε_4. So we need, in addition to (5.5)-(5.7), some information about J_{ε_1−ε_4}. It follows from the ABRR equation. Since e_13 and e_12 commute with H_4, the parts T_{ε_2−ε_4}(J_{ε_1−ε_2}) and T_{ε_3−ε_4}(J_{ε_1−ε_3}) do not contribute; e_42 and e_43 commute with E_12, so the parts T_{ε_1−ε_2}(J_{ε_2−ε_4}) and T_{ε_1−ε_3}(J_{ε_3−ε_4}) do not contribute either; J_{ε_1−ε_2+ε_3−ε_4} = J_{ε_3−ε_4} J_{ε_1−ε_2} does not contribute again since e_12 commutes with H_4. Thus the only contribution is from T_{ε_1−ε_4}(J_0) and we quickly arrive at (5.24).
Applying the automorphism q̌_3 of the algebra Z_4 to the relation (5.23), see (4.1) and (4.4), we find (5.25). The stabilization arguments for (5.21), (5.22) and (5.25) imply the existence of the following relations in Z_n, (5.26)-(5.28), where all X^{(i)}_k belong to U(h) and the initial X^{(i)}_k are known:

X^{(i)}_3 = (h_13 + 2)/(h̃_23 (h_13 + 1)), …

By the braid group transformation laws, the remaining X^{(i)}_k are determined. After the renormalization (4.11) and the change of variables (4.2), the relations (5.26)-(5.28) turn into the relations (4.16) for i = 1, j = 2 and k = 3. Applying the transformations from the braid group, we obtain the rest of the relations from the list (4.16).

4a. We now prove the relations (4.17) using the arguments similar to [10, Subsection 6.1.2]. Consider the products H_k ⋄_λ H_l and H_l ⋄_λ H_k with λ ≠ 0. These products are linear combinations, over U(h), of monomials a_{kl;γ} := H_k e_{−γ_1} ··· e_{−γ_m} e_{γ_m} ··· e_{γ_1} H_l and a_{lk;γ} := H_l e_{−γ_1} ··· e_{−γ_m} e_{γ_m} ··· e_{γ_1} H_k, respectively; here m ≥ 0 and γ := {γ_1, …, γ_m}. By construction, the coefficient, from U(h), of the monomial a_{kl;γ} in H_k ⋄_λ H_l equals the coefficient of a_{lk;γ} in H_l ⋄_λ H_k. The expressions a_{kl;γ} and a_{lk;γ} are both equal in Z_n to (γ_1, ε_k)(γ_1, ε_l) E_{−γ_1} e_{−γ_2} ··· e_{−γ_m} e_{γ_m} ··· e_{γ_2} E_{γ_1}. Thus H_k ⋄_λ H_l ≡ H_l ⋄_λ H_k for any λ ≠ 0. In the zero weight part ⋄_0 of the product ⋄ we have the equality H_k H_l = H_l H_k as well. Therefore, the relations (4.17) follow.

4b. The last group (4.18) of relations is left. Like above, we first explicitly derive the following relation in Z_3: (5.29). On the other hand, the product E_43 ⋄ E_34 definitely contains E_43 E_34 = E_43 ⋄_0 E_34. We thus conclude that the term z_43 ⋄ z_34 is absent in (5.35), that is, X_34 = 0. For n > 4, again by the stabilization law, we have a unique relation of the form (5.35). By uniqueness, it is invariant with respect to the transformations q̌_3, q̌_4, …, q̌_{n−1} which do not change the product z_12 ⋄ z_21. Since X_34 = 0, we find, applying q̌_4, q̌_5, …
, q̌_{n−1}, that X_{3j} = 0, j > 3, wherefrom we further conclude, applying q̌_3, q̌_4, …, q̌_{j−2}, that X_{ij} = 0, 2 < i < j. We get finally the following relation in Z_n, (5.36), with known initial coefficients. Applying to (5.36) the transformations q̌_3, q̌_4, …, q̌_{n−1}, we find the remaining coefficients by uniqueness. After the renormalization (4.11) and the change of variables (4.2), the relation (5.36) with the obtained X_{1k} and X_{2k} turns into the relation (4.18) for i = 1 and j = 2. Applying the transformations from the braid group, we obtain the rest of the relations from the list (4.18).

Proof of Theorem 3

For the proof of Theorem 3 we just apply the results of [7], which state that the system R is the system of defining relations once it is satisfied in the algebra Z_n.

Remark 1. An attentive look shows that the system R is closed under the anti-involution ǫ; that is, ǫ transforms any relation from R into a linear over U(h) combination of relations from R. Moreover, R and ǫ(R) are equivalent over U(h). Indeed, all relations in Section (5.3) were derived in three steps: first we derive a relation in Z_n with n ≤ 4; next by the stabilization principle we extend the derived relation to Z_n with arbitrary n; and then we find the whole list of relations of a given (sub)type by applying the braid group transformations (products of the generators q̌_i). Due to (3.6) one could use q̌_i^{−1} instead of q̌_i equivalently over U(h). A straightforward calculation establishes the equivalence of the extended to arbitrary n lists R and ǫ(R) over U(h) for Z_n with n ≤ 4 (this verification is lengthy for some relations). Then with the help of (4.6) we finish the check of the equivalence of R and ǫ(R) over U(h) for Z_n with arbitrary n.

Similar arguments establish the equivalence of R and ω(R) over U(h); here ω is the involution defined in (3.8). In [7] this equivalence was obtained differently, as a by-product of the equivalence, over U(h), of the system R and the system (3.3) of ordering relations.
In this section we write down the complete list of ordering relations for the diagonal reduction algebras DR(sl_3) and DR(sl_2). For completeness we include the formulas for the action of the braid group generators and the expressions for the central elements.

We first give the list of relations for sl_3. It is straightforward to give the list for sl_2 directly; we comment however on how the list of relations and the expressions for the central elements for sl(2) can be obtained by the cut procedure. The list of relations for gl_3 follows immediately from the list for sl_3.

1. Relations for DR(sl_3). We write the ordering relations for the natural set of generators z_ij, without redefinitions. We use here the following notation for sl_3: z_α := z_12, z_β := z_23, z_{α+β} := z_13, z_{−α} := z_21, z_{−β} := z_32, z_{−α−β} := z_31, t_α := t_12, t_β := t_23, h_α := h_12, h_β := h_23. The relations are given for the following order ≻ (this order was used in the proof in [7] of the completeness of the set of relations): (6.1).

Due to the established in Theorem 3 and remarks in Section 4.4 bijection between the set R and the set R_≺ of defining relations, one can divide the ordering relations into the types, in the same way as we divided the defining relations from the list R.

The relations of type 1 are immediately rewritten as ordering relations. The relations of type 2 are absent for sl_3. The ordering relations corresponding to the relations of type 3 we collect according to their weights. For each weight there is one relation of subtype (3a) and two relations of subtype (3b).

Weight α + β: (6.9), (6.10).
Weight α: …
Weight β: (6.14).
Weight −α: …
Weight −α − β: …

Finally, we rewrite the relations of the type 4, that is, of weight 0, in the form of ordering relations.
In addition to the general commutativity relation (subtype (4a)) we have three relations of subtype (4b): (6.27)-(6.29), where in one factor in the numerator of the last coefficient we returned to the notation h̃_α = h_α + 1 and h̃_β = h_β + 1 to make the expression fit into the line.

3. Braid group action. There are two braid group generators, q̌_α and q̌_β, for the diagonal reduction algebra DR(sl_3). Given the action of q̌_α, the action of q̌_β on DR(sl_3) can be reconstructed by using the automorphism ω, see (3.8), arising from the outer automorphism of the root system of sl_3, which exchanges the roots α and β. The action of the automorphism ω on the Cartan subalgebra h_α, h_β of the diagonal Lie algebra sl_3 and on the generators of the reduction algebra DR(sl_3) reads (6.30). The action of the braid group generator q̌_α on the Cartan subalgebra h_α, h_β of the diagonal Lie algebra sl_3 reads (6.31). This action reduces to the standard action of the Weyl group for the shifted generators h̃_α = h_α + 1 and h̃_β = h_β + 1. The action of q̌_α on the zero weight generators {t_α, t_β} of the diagonal reduction algebra DR(sl_3) is given by the displayed formulas; finally, the action of q̌_α on the rest of the generators is (6.32).

We calculate C_{sl_3,2} ⊗ 1 + 1 ⊗ C_{sl_3,2} and replace the multiplication by the product ⋄. Here one needs, in addition to (5.30)-(5.34), the expression for H_23 ⋄ H_23, which is obtained by applying the involution ω to (5.31), and the equality (in the notation of Section 5): … The central elements C_{DR(sl_3),1} and C_{DR(sl_3),2} are invariant with respect to the anti-involution ǫ and the involution ω as well.

5. Diagonal reduction algebra DR(sl_2).
For the diagonal reduction algebra of sl_2 we use the following notation: …

The cut provides the following description of the algebra Z_2 with generators z_+, z_− and t: (6.33)-(6.35).

The Casimir operators for DR(sl_2) are

C_{DR(sl_2),1} := (h + 2)t,   (6.36)

and C_{DR(sl_2),2}, (6.37). Both operators, C_{DR(sl_2),1} and C_{DR(sl_2),2}, arise from the quadratic Casimir operator C_{sl_2,2} of the Lie algebra sl_2.

The Casimir operators can be obtained by the cutting also, as explained in Subsection 4.6, see Proposition 7. One replaces the sl_3 generators by the gl_3 generators in the Casimir operators for sl_3, then cuts and rewrites, using the notation (3.9) and (3.10), the result according to the gl_2 formulas. The cut of C_{DR(sl_3),1} is (6.38) and the cut of C_{DR(sl_3),2} is (6.39). As expected, the coefficients of (t_3)^{⋄i} h_3^j for all i and j in the expressions (6.38) and (6.39) are central elements of the algebra Z_2.

Due to (6.30), (6.31) and (6.32), the action of the braid group generator reads (6.40). It should be noted that q̌ can be included in a family of more general automorphisms of the reduction algebra DR(sl_2).

Lemma 8. The most general automorphism of the reduction algebra DR(sl_2), transforming the weights of elements in the same way as the operator q̌ and linear over U(h) in the generators z_+, z_− and t, is (6.41), where β = ±1 is a constant and γ(h) is an arbitrary function.

Proof. We are looking for an invertible transformation which preserves the relations (6.33)-(6.35) and has the form (6.42)-(6.45), where c is a constant. Applying the transformation (6.42) to the relation (6.34) and collecting the free term and the terms with t ⋄ t and z_− ⋄ z_+, we find (after simplifications)

1 + h f_1(h) + 3 G(h) f_1(h) + 2 f_1(h) + 1 = 0,   (6.46)

where G(h) := f_3(h) f_4(h − 2). Excluding G from the system (6.46), (6.47), we obtain (6.49) with β² = 1. The substitution of (6.45) and (6.49) into (6.43) leads to c = −2, and it then follows from (6.46) that … The remaining relation (6.48) is now automatically satisfied.
The proof is finished.

The Casimir operator C_{DR(sl_2),2} is invariant under the general automorphism (6.41). The Casimir operator C_{DR(sl_2),1} is invariant under the automorphism (6.41) iff β = −1. The map q̌ defined by (6.40) is a particular choice of (6.41), corresponding to β = −1 and γ(h) = −1/(h + 1). The map (6.40) is not an involution (but it squares to the identity on the weight zero subspace of the algebra). However, the general map (6.41) squares to the identity on the whole algebra iff the function γ is odd, γ(−h) = −γ(h).
We also use the notation H_i for the element E_ii ∈ A and H_ij = H_i − H_j = E_ii − E_jj.
Arnaudon D., Buffenoir E., Ragoucy E., Roche Ph., Universal solutions of quantum dynamical Yang-Baxter equations, Lett. Math. Phys. 44 (1998), 201-214, q-alg/9712037.

Asherova R.M., Smirnov Yu.F., Tolstoy V.N., Projection operators for simple Lie groups. II. General scheme for construction of lowering operators. The groups SU(n), Teoret. Mat. Fiz. 15 (1973), 107-119 (English transl.: Theoret. and Math. Phys. 15 (1973), 392-401).

Asherova R.M., Smirnov Yu.F., Tolstoy V.N., Description of a certain class of projection operators for complex semi-simple Lie algebras, Mat. Zametki 26 (1979), 15-25 (English transl.: Math. Notes 26 (1979), 499-504).

Cherednik I., Quantum groups as hidden symmetries of classical representation theory, in Differential Geometric Methods in Theoretical Physics (Chester, 1988), Editor A.I. Solomon, World Sci. Publ., Teaneck, NJ, 1989, 47-54.

Khoroshkin S., An extremal projector and dynamical twist, Teoret. Mat. Fiz. 139 (2004), 158-176 (English transl.: Theoret. and Math. Phys. 139 (2004), 582-597).

Khoroshkin S., Ogievetsky O., Mickelsson algebras and Zhelobenko operators, J. Algebra 319 (2008), 2113-2165, math.QA/0606259.

Khoroshkin S., Ogievetsky O., Diagonal reduction algebras of gl type, Funktsional. Anal. i Prilozhen. 44 (2010), no. 3, 27-49 (English transl.: Funct. Anal. Appl. 44 (2010), 182-198), arXiv:0912.4055.

Lepowsky J., McCollum G.W., On the determination of irreducible modules by restriction to a subalgebra, Trans. Amer. Math. Soc. 176 (1973), 45-47.

Olshanski G., Extension of the algebra U(g) for infinite-dimensional classical Lie algebras g, and the Yangians Y(gl(m)), Dokl. Akad. Nauk SSSR 297 (1987), 1050-1054 (English transl.: Soviet Math. Dokl. 36 (1988), 569-573).

Zhelobenko D., Representations of reductive Lie algebras, Nauka, Moscow, 1994 (in Russian).
| [] |
[
"Procedure for improving cross-resonance noise resistance using pulse-level control",
"Procedure for improving cross-resonance noise resistance using pulse-level control"
] | [
"David Danin \nDepartment of Physics\nof Great Britain and Northern\nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom, Ireland\n",
"Felix Tennie \nDepartment of Physics\nof Great Britain and Northern\nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom, Ireland\n\nDepartment of Aeronautics\nImperial College London\nSouth Kensington CampusSW7 2AZLondonUnited Kingdom of Great Britain and Northern Ireland\n"
] | [
"Department of Physics\nof Great Britain and Northern\nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom, Ireland",
"Department of Physics\nof Great Britain and Northern\nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUnited Kingdom, Ireland",
"Department of Aeronautics\nImperial College London\nSouth Kensington CampusSW7 2AZLondonUnited Kingdom of Great Britain and Northern Ireland"
] | [] | Current implementations of superconducting qubits are often limited by the low fidelities of multiqubit gates. We present a reproducible and runtime-efficient pulse-level approach for calibrating an improved cross-resonance gate CR(θ) for arbitrary θ. This CR(θ) gate can be used to produce a wide range of other two-qubit gates via the application of standard single-qubit gates. By performing an interleaved randomised benchmarking experiment, we demonstrate that our approach leads to a significantly higher noise resistance than the circuit-level approach currently used by IBM. Hence, our procedure provides a genuine improvement for applications where noise remains a limiting factor. | null | [
"https://export.arxiv.org/pdf/2303.12771v1.pdf"
] | 257,663,370 | 2303.12771 | 3e2e64f1d6d9c5881508d7943596f22041d37808 |
Procedure for improving cross-resonance noise resistance using pulse-level control
22 Mar 2023
David Danin
Department of Physics, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom of Great Britain and Northern Ireland
Felix Tennie
Department of Physics, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom of Great Britain and Northern Ireland
Department of Aeronautics, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom of Great Britain and Northern Ireland
Procedure for improving cross-resonance noise resistance using pulse-level control
(Dated: March 23, 2023)
Quantum computers promise to provide unprecedented computational power in applications such as optimisation or simulation by exploiting the fact that information is not encoded in classical but in quantum systems [1,2]. In recent years, the field has seen the rapid development of improved quantum hardware [3]. However, the practical benefit of commercially available quantum computers based on superconducting qubits remains limited by the relatively low fidelities of multi-qubit interactions [4].
In addition to the improvement of hardware components, it is possible to enhance gate fidelities using optimised control approaches [5]. With the introduction of Qiskit Pulse [6], it is now possible to precisely control real quantum hardware via the IBM Quantum Lab [7]. As explained in Ref. [8], one can specify the amplitude, frequency and phase of the physical microwave pulses that drive the qubits to implement custom single-qubit and multi-qubit gates [9]. Hence, Qiskit Pulse allows for designing and testing control approaches on the level of physical operations instead of logical operations [10].
The cross-resonance gate (CR) is a particularly important two-qubit interaction on superconducting qubits, as it combines various desirable features and enables the construction of the Controlled-NOT gate [11] which is the standard entangling operation of universal gate sets [2]. As demonstrated in Ref. [12], one can implement a high-fidelity CR gate using the pulse sequence schematically illustrated in Fig. 1. When successfully calibrated, this corresponds to implementing the interaction H I ≈ g(Z ⊗ X) with g some coupling constant that depends on the hardware components and the drive amplitude [12,13]. The time evolution operator generated by this Hamiltonian reads U (t) = cos(gt)1 ⊗ 1 − i sin(gt)Z ⊗ X [9]. Setting gt = π/4 by changing the amplitude or duration of the pulses, a CR(π/2) gate is implemented [12].
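Since (Z ⊗ X)² = 1, the stated evolution operator can be checked numerically in a few lines (plain NumPy, outside any Qiskit machinery):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
ZX = np.kron(Z, X)      # the Z (x) X interaction term
I4 = np.eye(4, dtype=complex)

def U(gt):
    """Evolution generated by H_I ~ g (Z (x) X); closed form holds since (ZX)^2 = 1."""
    return np.cos(gt) * I4 - 1j * np.sin(gt) * ZX

cr = U(np.pi / 4)                         # CR(pi/2): set gt = pi/4
assert np.allclose(cr.conj().T @ cr, I4)  # U is unitary
assert np.allclose(cr @ cr, -1j * ZX)     # two applications give a full ZX rotation
```

The composition law U(s)U(t) = U(s + t) follows from the same identity, which is why tuning the product gt alone fixes the rotation angle θ = 2gt.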
By using Qiskit Pulse on publicly available quantum backends via the IBM Quantum Lab, the method presented in Ref. [12] can be extended to significantly improve the noise resistance of multi-qubit gates. Specifically, we describe a pulse-level approach for calibrating a set of cross-resonance gates and demonstrate that they achieve significantly higher noise resistances than their circuit-level implementations used by IBM. Crucially, the procedure we present is straightforwardly replicated. Accordingly, we provide a powerful extension to the set of high-fidelity, multi-qubit gates on currently available quantum computers based on superconducting qubits.

FIG. 1. The schematic adapted from Ref. [12] illustrates the pulse schedule for a CR(π/2) cross-resonance gate. The control qubit Q_C is driven at the resonant frequency ω_T of the target qubit Q_T. Unwanted terms in the interaction Hamiltonian H_I are suppressed by the echo sequence on Q_C (i.e. the upper two drive lines) and cancellation tones on Q_T (i.e. the lower drive line), such that H_I ≈ g(Z ⊗ X) follows.
First, we introduce a runtime-efficient procedure for calibrating a Z ⊗X cross-resonance interaction CR(θ) via the IBM Quantum Lab, thereby extending the approach presented in Ref. [12] to values of θ other than π/2. Second, we describe how this CR(θ) gate can be used to straightforwardly implement a range of other two-qubit interactions. And finally, we demonstrate that our pulselevel implementation achieves significantly higher noise resistances, compared to the circuit-level implementation which IBM currently uses, by performing a modified interleaved randomised benchmarking experiment [14].
We begin by presenting our procedure for calibrating a CR(θ) gate. The method is adapted from Ref. [12] but differs in two important respects. First, we generalise the procedure to values of θ other than π/2. And second, we streamline the procedure to make it more runtimeefficient. This enables us to perform the full calibration procedure on publicly available quantum backends via the IBM Quantum Lab, even under runtime constraints.
First, we need to determine the correct amplitude for the CR(θ) pulse between our control qubit Q_C and target qubit Q_T. For this, we define a flat-top pulse with Gaussian edges and some real amplitude A. The width and Gaussian rise time of the pulse are inherited from the CR(π/2) pulse that forms part of the standard Controlled-NOT implementation between Q_C and Q_T. While testing these parameters might lead to a more precise calibration, we adopt this assumption to significantly reduce the calibration runtime. We note that this assumption is self-consistent since it leads to a high-fidelity CR(θ) gate as shown in the subsequent experiments. Then, we sweep through different real amplitude values A and measure Q_T in the computational basis to calculate the Pauli expectation value ⟨Z⟩(A). We repeat the experiment with Q_C initialised in |0⟩ and |1⟩. Assuming that the Z ⊗ X or Z ⊗ Y component in H_I is much larger than the other contributions, we find ⟨Z⟩ ≈ cos(θ), as for an ideal CR(θ) gate we have ⟨Z⟩ = cos(θ). Hence, for a given θ, we can find the amplitude A_θ that leads to the correct value of ⟨Z⟩ and use this amplitude for our pulse. Note that the assumption made here is consistent with the results of the subsequent tomography experiments.

FIG. 2. Results for the amplitude calibration experiment with θ = π/5. We determine the correct pulse amplitude by reading off A_θ, which is defined by ⟨Z⟩(A_θ) = cos(θ). Using this amplitude will lead to a CR(π/5) gate which has the same duration as the CR(π/2) pulse that forms part of the standard Controlled-NOT implementation between Q_C and Q_T. The error bars are smaller than the marker size.
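The read-off of A_θ from the sweep can be automated with a simple interpolation. The snippet below uses a synthetic, hypothetical response ⟨Z⟩(A) = cos(cA) purely for illustration; in practice the values come from the measured counts:

```python
import numpy as np

theta = np.pi / 5
target = np.cos(theta)             # <Z> value an ideal CR(theta) produces

# Hypothetical sweep: toy model <Z>(A) = cos(c * A) for some rate c.
c = 4.0
amps = np.linspace(0.0, 0.6, 61)
z_of_a = np.cos(c * amps)          # stand-in for measured expectation values

# <Z> decreases monotonically over this range, so invert by interpolation
# (np.interp needs an increasing x-grid, hence the reversed arrays).
a_theta = np.interp(target, z_of_a[::-1], amps[::-1])

assert abs(np.cos(c * a_theta) - target) < 1e-3   # A_theta reproduces cos(theta)
```

Any monotone section of the measured sweep works the same way; only the reversal trick depends on whether ⟨Z⟩ rises or falls with A.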
Second, we need to determine the correct phase for the CR(θ) pulse. For this, we sweep through different pulse widths with our flat-top Gaussian pulse using the real amplitude that we previously determined. We repeat the experiment with Q C initialised in |0 and |1 . By measuring the expectation values X , Y , and Z on the target qubit, we reconstruct the coefficients of the terms in the cross-resonance interaction Hamiltonian H I . For details regarding the Hamiltonian tomography experiment, we refer the reader to Ref. [12] and Ref. [15].
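As a toy illustration of the idea behind such a tomography fit (synthetic data, not the actual fitting procedure of Ref. [15]): the measured ⟨X⟩(t), ⟨Y⟩(t), ⟨Z⟩(t) traces oscillate at a rate set by the interaction strength, so even a dominant-frequency estimate recovers the coefficient magnitude:

```python
import numpy as np

# Hypothetical synthetic trace: <Z>(t) = cos(2*pi*f*t) for an interaction
# strength f; a real experiment would supply measured expectation values.
f_true = 1.25                      # assumed value for the demo (arbitrary units)
t = np.arange(0.0, 8.0, 0.01)
z_trace = np.cos(2 * np.pi * f_true * t)

# Dominant-frequency estimate via the discrete Fourier transform.
spectrum = np.abs(np.fft.rfft(z_trace))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
f_est = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

assert abs(f_est - f_true) < freqs[1]        # accurate to one frequency bin
```

The full Hamiltonian tomography of Ref. [15] fits all three Bloch-vector components jointly, which additionally resolves the signs and the 1 ⊗ P versus Z ⊗ P character of each term.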
Hence, we can determine the coefficients C ZX and C ZY of the cross-resonance Z ⊗ X and Z ⊗ Y components in H I , respectively. Recognising that C ZX ∝ cos(φ − φ 0 ) and C ZY ∝ sin(φ − φ 0 ), where φ is the phase of the cross-resonance pulse [11], we can set the phase of the
pulse to φ_0 = −tan⁻¹(C_ZY/C_ZX) such that the Z ⊗ Y component in H_I vanishes. Thereby, we can calibrate the phase of the cross-resonance pulse in a single experiment. This provides a far more efficient method than sweeping through phases as described in Ref. [8] and Ref. [12].

FIG. 3. Results of the Hamiltonian tomography experiment using the fully calibrated cross-resonance pulse sequence. We fit the data as described in Ref. [15] to extract the coefficients of the contributions in the interaction Hamiltonian H_I and find the values indicated at the bottom of the figure. The coefficient C_ZX of the Z ⊗ X term is significantly larger than all other coefficients, which indicates a successful calibration. The error bars are smaller than the marker size.
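The phase choice can be checked with a two-line numerical model (illustrative coefficient values only): shifting the pulse phase by φ rotates the pair (C_ZX, C_ZY), so φ_0 = −tan⁻¹(C_ZY/C_ZX) — implemented more robustly with atan2 — sends the rotated C_ZY to zero:

```python
import math

# Hypothetical coefficients measured at pulse phase phi = 0 (arbitrary units).
c_zx, c_zy = 0.82, -0.35

# Phase that cancels the Z (x) Y component; atan2 also handles c_zx <= 0.
phi0 = -math.atan2(c_zy, c_zx)

# Shifting the pulse phase by phi rotates (C_ZX, C_ZY) by phi.
c_zx_new = c_zx * math.cos(phi0) - c_zy * math.sin(phi0)
c_zy_new = c_zx * math.sin(phi0) + c_zy * math.cos(phi0)

assert abs(c_zy_new) < 1e-12                            # Z (x) Y term vanishes
assert math.isclose(c_zx_new, math.hypot(c_zx, c_zy))   # full weight on Z (x) X
```

The rotation also shows why the full interaction strength is preserved: the nulled C_ZY is absorbed into C_ZX, whose new value is the original magnitude √(C_ZX² + C_ZY²).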
Third, we need to determine the correct phase and amplitude for the cancellation pulse, which is a resonant flat-top Gaussian pulse on the target qubit with the same duration and Gaussian rise times as the cross-resonance pulse. The purpose of the cancellation pulse is to neutralise the 1 ⊗ X and 1 ⊗ Y components in H_I. The correct phase for the cancellation tone can be inferred from the Hamiltonian tomography experiment we already performed. By reading off the coefficients C_1X and C_1Y of the 1 ⊗ X and 1 ⊗ Y components in H_I, we can calculate φ_1 = −tan⁻¹(C_1Y/C_1X). As the phase of the cross-resonance pulse is set to φ_0, the correct phase for the cancellation tone is φ_0 − φ_1, as presented in Ref. [12].
To determine the correct amplitude, we perform two Hamiltonian tomography experiments for the full pulse sequence in Fig. 1. In the first experiment, the cancellation tone amplitude is set to zero, while in the second experiment, we set it to some value A_0. The correct order of magnitude for A_0 can be estimated from the cancellation tone of the CR(π/2) pulse that forms part of the Controlled-NOT implementation between Q_C and Q_T. Hence, we can extract the values C¹_1X and C¹_1Y as well as C²_1X and C²_1Y from the two experiments. Assuming a linear relationship between the cancellation tone amplitude and the coefficients as seen in Ref. [12], we find A_X = A_0 C¹_1X/(C¹_1X − C²_1X) and A_Y = A_0 C¹_1Y/(C¹_1Y − C²_1Y). If the value of φ_1 is calibrated correctly, then we find A_X ≈ A_Y as the unique solution for the correct amplitude of the cancellation tone [12]. Hence, to calibrate the full cross-resonance pulse sequence, we only require four Hamiltonian tomography experiments, which provides a far more efficient procedure than the calibration methods described in Ref. [8] and Ref. [12]. Furthermore, it is now possible to calibrate the pulse sequence such that it implements a CR(θ) gate for values of θ other than π/2.

FIG. 4. Circuit identities converting the ZX(θ) gate into other two-qubit gates using single-qubit gates (S, S†, H) on the control qubit Q_C and target qubit Q_T: ZY(θ), ZZ(θ), XX(θ), XY(θ), and YY(θ).
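The two-point formula for the cancellation amplitude is a root-finding step for an assumed linear coefficient model; the numbers in the sketch below are illustrative only.

```python
def cancellation_amplitude(a0, c1, c2):
    """Linear extrapolation to the root: with coefficient c1 measured at
    cancellation amplitude 0 and c2 at amplitude a0, the zero crossing
    sits at a0 * c1 / (c1 - c2)."""
    return a0 * c1 / (c1 - c2)

# Hypothetical linear model C_1X(A) = c1x_0 + slope * A for the 1(x)X term.
c1x_0, slope = 0.8, -2.0   # arbitrary illustrative values
a0 = 0.1
c1 = c1x_0                  # measured with cancellation amplitude 0
c2 = c1x_0 + slope * a0     # measured with cancellation amplitude a0
ax = cancellation_amplitude(a0, c1, c2)
```

By construction, ax lands exactly on the zero of the assumed linear model, which is the point where the single-qubit term is fully cancelled.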
We have implemented this calibration procedure using the seven-qubit IBM Quantum backend ibm_oslo with qubit 2 and qubit 1 as the control qubit and target qubit, respectively. The resonance frequency and anharmonicity of the control qubit are f_2 = 4.962 GHz and δ_2 = −0.344 GHz, and f_1 = 5.046 GHz and δ_1 = −0.343 GHz for the target qubit [16]. To illustrate our generalised procedure by way of example, we calibrate a CR(θ) gate for θ = π/5. In all calibration experiments, we use at least 4 000 repetitions per circuit such that the statistical errors become negligibly small. We also mitigate readout errors using the method described in Ref. [17]. The results of the amplitude calibration experiment are illustrated in Fig. 2 and the results of an experiment that verifies the calibration are displayed in Fig. 3. For these two experiments, we have used 20 000 repetitions per circuit. Setting the pulse width to the inherited width as described above, we obtain the CR(π/5) gate. This demonstrates that we can use our runtime-efficient, pulse-level procedure to calibrate a CR(θ) gate for θ other than π/2.
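A minimal single-qubit sketch of confusion-matrix readout mitigation, with hypothetical error rates; the actual experiments use the multi-qubit method of Ref. [17], of which this is only the simplest instance.

```python
def mitigate(p_meas, eps01, eps10):
    """Invert the assignment matrix M = [[1-eps01, eps10], [eps01, 1-eps10]],
    where eps01 = P(read 1 | prepared 0) and eps10 = P(read 0 | prepared 1)."""
    det = (1 - eps01) * (1 - eps10) - eps01 * eps10
    p0 = ((1 - eps10) * p_meas[0] - eps10 * p_meas[1]) / det
    p1 = (-eps01 * p_meas[0] + (1 - eps01) * p_meas[1]) / det
    return [p0, p1]

# Hypothetical readout error rates and true outcome distribution.
eps01, eps10 = 0.02, 0.05
p_true = [0.7, 0.3]

# Forward model: what the noisy readout would report on average.
p_meas = [(1 - eps01) * p_true[0] + eps10 * p_true[1],
          eps01 * p_true[0] + (1 - eps10) * p_true[1]]
p_corr = mitigate(p_meas, eps01, eps10)
```

Applying the inverse assignment matrix recovers the underlying distribution exactly in this noiseless-statistics limit; with finite shot counts the correction is applied to estimated frequencies instead.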
Having implemented the CR(θ) gate with H ∝ Z ⊗ X, it is straightforward to implement a range of other two-qubit interactions. As illustrated in Fig. 4, we can use standard single-qubit gates on Q_T and Q_C to convert the Z ⊗ X interaction into any A ⊗ B interaction with A, B ∈ {X, Y, Z}. Note that we have written the gate that corresponds to the Hamiltonian H = (−θ/2) A ⊗ B as AB(θ) for ease of notation. The relations in Fig. 4 are easily proven using standard gate identities [2]. Finally, the XZ(θ), YZ(θ), and YX(θ) gates can either be implemented by circuit identities as used in Fig. 4, or alternatively by swapping the control and target qubit in the calibration procedure. Hence, we conclude that having calibrated the ZX(θ) gate, it is straightforward to implement any of the nine AB(θ) gates with A, B ∈ {X, Y, Z}.

FIG. 6. Results of the interleaved randomised benchmarking experiments for six different two-qubit gates in the custom pulse-level and standard circuit-level implementation on a real IBM Quantum backend. Each point indicates the fractional ground state population averaged over ten random gate sequences, while error bars show the standard deviations. We observe an exponential decay to the fully mixed state, indicated by the dashed line at a fractional ground state population of 0.25. For each gate, F_S and F_C characterise the respective noise resistances of the standard and custom implementation as discussed in the main text. In all cases, we find that F_C ≥ F_S marks a significant improvement in the noise resistances of the gates.
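One such conversion can be checked numerically: with the convention AB(θ) = exp(i(θ/2) A ⊗ B), conjugating the target qubit by the phase gate S maps X to Y, so applying S†, then ZX(θ), then S on the target realises ZY(θ). The pure-Python verification below (no external libraries) uses the fact that (A ⊗ B)² = 1 to build the matrix exponential from a cosine and sine term.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    # Kronecker product: row index 2*i + k, column index 2*j + l.
    return [[A[i][j] * B[k][l] for j in range(len(A)) for l in range(len(B))]
            for i in range(len(A)) for k in range(len(B))]

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
S = [[1, 0], [0, 1j]]
Sdg = [[1, 0], [0, -1j]]

def two_qubit_rot(A, B, theta):
    """AB(theta) = exp(i*theta/2 * A(x)B); valid because (A(x)B)^2 = 1."""
    K = kron(A, B)
    return [[math.cos(theta / 2) * (1 if i == j else 0)
             + 1j * math.sin(theta / 2) * K[i][j] for j in range(4)]
            for i in range(4)]

theta = math.pi / 5
zx = two_qubit_rot(Z, X, theta)
zy = two_qubit_rot(Z, Y, theta)
# Circuit S†(target) -> ZX(theta) -> S(target), i.e. matrix (1(x)S)·ZX·(1(x)S†):
conv = mat_mul(kron(I2, S), mat_mul(zx, kron(I2, Sdg)))
```

Up to floating-point error, conv coincides with zy entrywise; the remaining identities in Fig. 4 can be checked the same way.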
Using pulse-level methods, we can further extend the set of easily implemented two-qubit gates. Note that the S and S† gates in Fig. 4 correspond to virtual phase shifts with Δφ = ±π/2 on the relevant qubit, respectively [18]. As Qiskit Pulse allows us to directly specify a phase shift, we can also implement values of Δφ other than π/2. For instance, by shifting the phase of the cross-resonance pulse and cancellation tone by Δφ_0, we can convert the ZX(θ) gate into a Z(cos(Δφ_0)X + sin(Δφ_0)Y)(θ) gate. While this treatment is not exhaustive, it nicely illustrates that using circuit-level and pulse-level methods, a range of two-qubit interactions are straightforwardly implemented once we have calibrated the ZX(θ) gate.
Any of these gates can also be implemented using circuit-level methods with at most three Controlled-NOT gates [19]. However, the advantage of our pulse-level implementation of the ZX(θ) gate is that we require fewer cross-resonance pulses, as illustrated in Fig. 5. As two-qubit interactions on superconducting qubits are susceptible to noise [9], minimising the number of cross-resonance pulses should increase the noise resistance of the ZX(θ) gate. Further, as single-qubit gates achieve near-perfect fidelities [20] while virtual phase gates have perfect fidelities [18], converting our ZX(θ) gate into other two-qubit gates as in Fig. 4 should lead to similar improvements for the noise resistances of these gates.
We test both hypotheses by performing an interleaved randomised benchmarking experiment similar to those in Ref. [14] and Ref. [21]. To measure the noise resistance, we proceed as follows. We define a set of gate sequence lengths {m_1, ..., m_j} with Δ = m_{i+1} − m_i some fixed positive integer and m_j = N. Using the StandardRB method in Qiskit Pulse, we sample a set of random gate sequences {C_1, ..., C_N} and define a new set of gate sequences R = {R_{m_1}, ..., R_{m_j}} with R_{m_i} = C_1 ⋯ C_{m_i} C̃_{m_i}, where C̃_{m_i} inverts the previous operations such that the action of each R_{m_i} is just the identity operation. Then, to measure the noise resistance of the standard and custom ZX(θ) gate, we interleave ZX(θ)ZX(−θ) after every C_l in each R_{m_i}, giving two sets of gate sequences R_S and R_C, respectively. We run each of the gate sequences in R, R_S and R_C, and measure the fractional ground state population. As the sequences in R_S and R_C are identity operations that acquire an additional error due to the interleaved cross-resonance gates, we expect the ground state population to decay faster by a factor of F^{2m_i}, where F ≤ 1 characterises the additional error introduced by the ZX(θ) gate, in comparison to the case of non-interleaved gate sequences in R. By fitting the data to an exponential decay, we find the values of F_S and F_C.
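The extraction of F from the two decays can be sketched on synthetic data: if the reference sequences decay as A p^m + 1/4 and the interleaved ones as A (pF²)^m + 1/4, then F = sqrt(p′/p). The decay parameters below are invented for illustration and are not the measured values.

```python
import math

def decay_rate(ms, pops, floor=0.25):
    """Estimate the per-step decay p from populations A * p**m + floor by
    averaging ratios of successive excess populations."""
    rs = []
    for (m0, p0), (m1, p1) in zip(zip(ms, pops), zip(ms[1:], pops[1:])):
        rs.append(((p1 - floor) / (p0 - floor)) ** (1.0 / (m1 - m0)))
    return sum(rs) / len(rs)

# Synthetic reference and interleaved decays with hypothetical parameters.
p_ref, F_true, A = 0.99, 0.97, 0.75
ms = [5 + 7 * k for k in range(10)]
ref = [A * p_ref ** m + 0.25 for m in ms]
inter = [A * (p_ref * F_true ** 2) ** m + 0.25 for m in ms]

p1 = decay_rate(ms, ref)
p2 = decay_rate(ms, inter)
F_est = math.sqrt(p2 / p1)
```

On noisy data one would instead fit A p^m + B by least squares, but the ratio construction makes the role of F^{2m} in the interleaved decay explicit.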
We have implemented the interleaved benchmarking procedure using the same quantum backend and qubits as described above. The gates ZX, ZY, ZZ, XY, XX and YY are tested using m_1 = 5, Δ = 7, and N = 68 as benchmarking parameters. For the custom implementation we have used the ZX(π/5) gate implemented above, while for the standard implementation we have used circuit-level methods in Qiskit [22], employing the circuit identities in Fig. 4 where necessary. For each gate, we repeat the experiment with ten random gate sequences using 20 000 repetitions per circuit. The results are shown in Fig. 6.
We must be careful in interpreting F_S and F_C, as they do not characterise the total gate error but rather the error associated with performing an identity operation by using the ZX(θ) gate and its inverse. In general, we expect this error to come from coherent errors and noise. Performing additional Hamiltonian tomography experiments for the ZX(θ) gate and its inverse, both in the custom and standard implementation, we measure similar coefficients for all terms in the respective Hamiltonians. This allows us to rule out the possibility that the large discrepancy between F_S and F_C is due to coherent errors.
Hence, we can interpret F_S and F_C as characterising the error from noise for the standard and custom ZX(θ) gate implementations, respectively. With this interpretation, we can verify both hypotheses. First, for the ZX(θ) gate, we observe that F_C is significantly larger than F_S in Fig. 6, indicating that our custom implementation, requiring fewer cross-resonance pulses, is more resistant to noise. And second, we see similar improvements for the noise resistances of the other gates that we tested in Fig. 6, as expected due to high single-qubit gate fidelities. Therefore, our pulse-level implementation of the ZX(θ) gate provides us with a wide range of two-qubit gates with significantly improved noise resistances. Since the overall pulse schedule time is significantly shorter than the coherence time of the qubits [16], we conjecture that this improvement is due to the simplified pulse architecture we developed, rather than the reduced gate time.
Finally, we comment on the relevance of this result for practical quantum computing. While generally advantageous, improved noise resistances for two-qubit operations are particularly useful in Hamiltonian Simulation. This often requires the repeated application of multi-qubit gates, for instance in Trotterisation approaches [2], and thus remains limited by the low noise resistances of multi-qubit interactions. By calibrating a gate using our pulse-level approach, the noise resistance can be significantly improved. This can enable Hamiltonian Simulation, as we will present in a subsequent paper [23]. In this sense, our procedure is not just interesting from an engineering but also from a physics perspective, as we can use the improved gates to simulate interesting physical systems on publicly available IBM quantum backends.
To conclude, we provide a powerful extension to the set of high-fidelity, multi-qubit gates on currently available quantum computers based on superconducting qubits. With our runtime-efficient and reproducible pulse-level approach, one can calibrate a CR(θ) cross-resonance gate for a given value of θ which is extended to a wide range of other two-qubit gates by applying single-qubit gates. We have demonstrated that this pulse-level approach, requiring fewer two-qubit pulses than the circuit-level approach currently used by IBM, significantly improves the noise resistances of the CR(θ) gate and related interactions.
While providing a compelling proof of principle, we were limited to performing experiments on publicly available IBM Quantum backends. Future work should focus on repeating our experiments on those IBM Quantum backends which are currently not available to the general public. Further, our pulse-level approach should be tested in quantum computing applications to demonstrate the practical usefulness of the improvement. This will be explored for Hamiltonian Simulation in a subsequent paper.
FIG. 5. (a) Schedule for the ZX(θ) gate using IBM's circuit-level implementation. The four yellow pulses are the cross-resonance pulses which form part of the two required Controlled-NOT gates. The gate duration is 497.8 ns. (b) Schedule for the ZX(θ) gate using our pulse-level approach, which only requires two cross-resonance pulses, halving the number of two-qubit pulses in comparison with the circuit-level approach. The gate duration is reduced to 206.2 ns.
ACKNOWLEDGMENTS

We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. DD acknowledges support from the Studienstiftung des Deutschen Volkes. FT acknowledges support from the UKRI New Horizons Grant EP/X017249/1.
[1] A. Montanaro, npj Quantum Inf. 2, 15023 (2016).
[2] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition (Cambridge University Press, 2010).
[3] M. Brooks, "What's next for quantum computing," MIT Technology Review (2023), https://www.technologyreview.com/2023/01/06/1066317/whats-next-for-quantum-computing/.
[4] M. Kjaergaard, M. E. Schwartz, J. Braumüller, P. Krantz, J. I.-J. Wang, S. Gustavsson, and W. D. Oliver, Annu. Rev. Condens. Matter Phys. 11, 369 (2020).
[5] S. J. Glaser, U. Boscain, T. Calarco, C. P. Koch, W. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, D. Sugny, and F. K. Wilhelm, Eur. Phys. J. D 69, 279 (2015).
[6] "Qiskit: An open-source framework for quantum computing," (2021).
[7] "IBM Quantum," https://quantum-computing.ibm.com (2023).
[8] T. Alexander, N. Kanazawa, D. J. Egger, L. Capelluto, C. J. Wood, A. Javadi-Abhari, and D. C. McKay, Quantum Sci. Technol. 5, 044006 (2020).
[9] P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver, Appl. Phys. Rev. 6, 021318 (2019).
[10] J. M. Gambetta, J. M. Chow, and M. Steffen, npj Quantum Information 3, 2 (2017).
[11] C. Rigetti and M. Devoret, Phys. Rev. B 81, 134507 (2010).
[12] S. Sheldon, E. Magesan, J. M. Chow, and J. M. Gambetta, Phys. Rev. A 93, 060302 (2016).
[13] E. Magesan and J. M. Gambetta, Phys. Rev. A 101, 052308 (2020).
[14] E. Magesan, J. M. Gambetta, B. R. Johnson, C. A. Ryan, J. M. Chow, S. T. Merkel, M. P. da Silva, G. A. Keefe, M. B. Rothwell, T. A. Ohki, M. B. Ketchen, and M. Steffen, Phys. Rev. Lett. 109, 080505 (2012).
[15] The Qiskit Team, "Hamiltonian Tomography," https://qiskit.org/textbook/ch-quantum-hardware/hamiltonian-tomography.html (2022).
[16] "IBM Quantum Resources," https://quantum-computing.ibm.com/services/resources (2023).
[17] The Qiskit Team, "Readout Mitigation," https://qiskit.org/documentation/experiments/tutorials/readout_mitigation.html (2022).
[18] D. C. McKay, C. J. Wood, S. Sheldon, J. M. Chow, and J. M. Gambetta, Phys. Rev. A 96, 022330 (2017).
[19] F. Vatan and C. Williams, Phys. Rev. A 69, 032315 (2004).
[20] S. Sheldon, L. S. Bishop, E. Magesan, S. Filipp, J. M. Chow, and J. M. Gambetta, Phys. Rev. A 93, 012301 (2016).
[21] The Qiskit Team, "Randomized Benchmarking," https://qiskit.org/documentation/experiments/tutorials/randomized_benchmarking.html (2022).
[22] The Qiskit Team, "Summary of Quantum Operations," https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html (2022).
[23] F. Tennie, D. Danin, and T. Farrow, (2023), (forthcoming).
FLAT MITTAG-LEFFLER MODULES, AND THEIR RELATIVE AND RESTRICTED VERSIONS

Jan Trlifaj

March 23, 2023

arXiv:2303.12549

2020 Mathematics Subject Classification. Primary: 16D40, 18G05. Secondary: 13B40, 13D07, 14F06.
Key words and phrases. Flat module, relative and restricted Mittag-Leffler modules, tilting modules, precovering class, Zariski locality.
Research supported by GAČR 23-05148S.

Abstract. Assume that R is a non-right perfect ring. Then there is a proper class of classes of (right R-) modules closed under transfinite extensions lying between the classes P_0 of projective modules, and F_0 of flat modules. These classes can be defined as variants of the class FM of absolute flat Mittag-Leffler modules: either as their restricted versions (lying between P_0 and FM), or their relative versions (between FM and F_0). In this survey, we will deal with applications of these classes in relative homological algebra and algebraic geometry.

The classes P_0 and F_0 are known to provide for approximations, and minimal approximations, respectively. We will show that the classes of restricted flat relative Mittag-Leffler modules, and flat relative Mittag-Leffler modules, have rather different approximation properties: the former classes always provide for approximations, but the latter do not, except for the boundary case of F_0.

The notion of an (infinite dimensional) vector bundle is known to be Zariski local for all schemes, the key point of the proof being that projectivity ascends and descends along flat and faithfully flat ring homomorphisms, respectively. We will see that the same holds for the properties of being a κ-restricted flat Mittag-Leffler module for each cardinal κ ≥ ℵ_0, and also a flat Q-Mittag-Leffler module whenever Q is a definable class of finite type. Thus, as in the model case of vector bundles, Zariski locality holds for flat quasi-coherent sheaves induced by each of these classes of modules. Moreover, we will see that the notion of a locally n-tilting quasi-coherent sheaf is Zariski local for all n ≥ 0.
1. Introduction
By a classic result of Bass, non-right perfect rings are characterized by the existence of countably presented flat (right R-) modules that are not projective [1, 28.4]. While projective modules can always be decomposed into direct sums of countably generated submodules [1, 26.2], only a weak decomposition theorem is available for the flat modules: if κ = card R + ℵ_0, then each flat module M can be deconstructed into a transfinite extension of ≤κ-presented flat modules [18, 6.17]. That is, M possesses a continuous increasing chain of submodules, (M_α | α ≤ σ), such that M_0 = 0, M_σ = M, and for each α < σ, M_{α+1}/M_α is a ≤κ-presented flat module. Motivated by Grothendieck's question on Zariski locality of the notion of a vector bundle, Raynaud and Gruson introduced the intermediate class of (absolute) flat Mittag-Leffler modules, FM, in [27]. Recall that a module M is Mittag-Leffler, if for each family N = (N_i | i ∈ I) of left R-modules, the canonical group homomorphism φ_N : M ⊗_R ∏_{i∈I} N_i → ∏_{i∈I} M ⊗_R N_i is monic (see below for unexplained terminology). If R is not right perfect, then P_0 ⊊ FM ⊊ F_0, where P_0 and F_0 denote the classes of all projective and flat modules, respectively. All these classes are closed under transfinite extensions, but unlike P_0 and F_0, the class FM is not deconstructible [18, 10.13]. However, there is a very rich supply of deconstructible classes closed under transfinite extensions between P_0 and F_0: as κ ranges over all infinite cardinals, the classes FM^κ of κ-restricted flat Mittag-Leffler modules (= transfinite extensions of ≤κ-presented flat Mittag-Leffler modules) form a strictly increasing chain (FM^κ | ℵ_0 ≤ κ) between P_0 and FM:

P_0 = FM^{ℵ_0} ⊊ FM^{ℵ_1} ⊊ ··· ⊊ FM^κ ⊊ FM^{κ⁺} ⊊ ··· ⊊ ⋃_{ℵ_0 ≤ κ} FM^κ = FM.
For each κ ≥ ℵ_0, the class FM^κ is obviously deconstructible, and hence precovering [18, 7.21], but the class FM fails these properties [33, 3.3]. When R is not right perfect, there is also a rich intermediate structure between the classes FM and F_0 provided by the classes of flat relative Mittag-Leffler modules. These are obtained by restricting the choice of the families N in the definition above: if Q is any class of left R-modules, then a module M is Q-Mittag-Leffler, if the canonical group homomorphism φ_N is monic for each family N = (N_i | i ∈ I) which consists of modules from Q. Following [21], we will denote the class of all flat Q-Mittag-Leffler modules by D_Q. So if Q′ ⊆ Q, we have the inclusions

FM = D_{R-Mod} ⊆ D_Q ⊆ D_{Q′} ⊆ F_0.

Though there is a proper class of classes Q ⊆ R-Mod, there is only a set of different classes D_Q. As proved by Rothmaler [28, 2.2], D_Q = D_{Def(Q)}, where Def(Q) is the definable closure of Q, that is, the least class of left R-modules containing Q and closed under direct products, direct limits, and pure submodules. Moreover, if R ∈ Q, then the structure of the class D_Q is completely determined by the countably presented modules in D_Q. So if also R ∈ Q′, then D_Q = D_{Q′}, iff D_Q and D_{Q′} contain the same countably presented modules, [7, 2.5]. As for the approximation properties of flat relative Mittag-Leffler modules, the situation is similar to the absolute case: the class D_Q is precovering only if it coincides with the class of all flat modules [7, 2.6].
Originally, absolute flat Mittag-Leffler modules served as a tool for proving Zariski locality of the notion of a vector bundle over any scheme, cf. [25] and [27]. Relative Mittag-Leffler modules turned out to play an important role in (infinite dimensional) tilting theory, [2]. This has led to an investigation of quasi-coherent sheaves associated with tilting. Their Zariski locality was proved in [24]. The Zariski locality for quasi-coherent sheaves induced by restricted flat Mittag-Leffler modules goes back to [16], while the corresponding result for quasi-coherent sheaves induced by (some) flat relative Mittag-Leffler modules is quite recent, [8].
The goal of this survey is to provide a unified presentation of these recent results: in Section 3, we will deal with the approximation properties, while Section 4 concerns the Zariski locality of the various induced notions of quasi-coherent sheaves on schemes.
2. Basic notions
Mittag-Leffler modules are closely related to Mittag-Leffler inverse systems of modules. Thus we start with recalling basic notions, and fixing our notation, concerning direct and inverse limits of direct and inverse systems of modules over an arbitrary ring R. We will also take the opportunity to present examples showing the variety of properties of these systems.

For a class C ⊆ Mod-R, we will denote by Filt(C) the class of all modules M that are transfinite extensions of the modules from C (or C-filtered modules), that is, the modules M possessing an increasing chain of submodules, ℳ = (M_α | α ≤ σ), such that M_0 = 0, M_λ = ⋃_{α<λ} M_α for each limit ordinal λ ≤ σ, M_σ = M, and for each α < σ, M_{α+1}/M_α ≅ C_α for some C_α ∈ C. The ordinal σ is the length of the C-filtration ℳ. For example, if κ ≥ ℵ_0 and C_κ denotes the class of all ≤κ-presented flat Mittag-Leffler modules, then Filt(C_κ) = FM^κ is the class of all κ-restricted flat Mittag-Leffler modules.
2.1. Direct limits. Let (I, ≤) be an (upper) directed poset. A covariant functor ℭ from the category (I, ≤) to Mod-R is called an I-direct system of modules. Equivalently, ℭ may be viewed as a diagram, C, in the category Mod-R, whose objects are modules M_i (i ∈ I), and morphisms are f_{ji} ∈ Hom_R(M_i, M_j) for i ≤ j ∈ I which satisfy the identities f_{ii} = id_{M_i} and f_{ki} = f_{kj} f_{ji} for all i ≤ j ≤ k ∈ I. The colimit of the diagram C in the category Mod-R is called the direct limit of C and denoted by lim→ C. In more detail, the colimit is a cocone (M, f_i (i ∈ I)) (= a module M ∈ Mod-R together with morphisms f_i ∈ Hom_R(M_i, M) such that f_j f_{ji} = f_i for all i ≤ j ∈ I) possessing the following universal property: for each cocone (M′, f′_i (i ∈ I)) there is a unique homomorphism h : M → M′ such that h f_i = f′_i for each i ∈ I. We will also use the notation M = lim→_{i∈I} M_i. There is a useful presentation of the direct limit as a factor of the direct sum ⊕_{i∈I} M_i, via an epimorphism π:

0 → K ↪ ⊕_{i∈I} M_i → M = lim→_{i∈I} M_i → 0,

where K is the submodule of ⊕_{i∈I} M_i generated by {x_i − f_{ji}(x_i) | x_i ∈ M_i, i ≤ j ∈ I}. Moreover, π ↾ M_i = f_i for each i ∈ I, and the embedding K ↪ ⊕_{i∈I} M_i is a pure embedding. In fact it is even locally split, see e.g. [26, 4.3].
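As a toy illustration of this presentation (our addition, not part of the survey), consider the direct system Z → Z → Z → ⋯ with every transition map given by multiplication by 2, so f_{ji}(x) = 2^{j−i} x; its direct limit is isomorphic to Z[1/2]. The sketch below manipulates representatives (i, x) meaning "x sitting in M_i = Z" and, following condition (C2) below, identifies two representatives exactly when their images agree in some common M_k, i.e. iff 2^j x = 2^i y.

```python
# Direct limit of the chain  Z --(*2)--> Z --(*2)--> Z --> ... ,
# with transition maps f_{ji}(x) = 2**(j-i) * x; the colimit is Z[1/2].

def eq(a, b):
    """(i, x) and (j, y) represent the same element of the limit iff
    their images agree in some common M_k, i.e. iff 2**j * x == 2**i * y."""
    (i, x), (j, y) = a, b
    return (2 ** j) * x == (2 ** i) * y

def add(a, b):
    """Add two representatives by pushing both into M_k, k = max(i, j)."""
    (i, x), (j, y) = a, b
    k = max(i, j)
    return (k, 2 ** (k - i) * x + 2 ** (k - j) * y)

one = (0, 1)    # the element 1
half = (1, 1)   # the element 1/2 (has no representative at level 0)
```

Here 1/2 + 1/2 evaluates, up to the identification eq, to the element 1, mirroring the fact that the limit module is generated by the images of the M_i modulo the relations in K.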
There is a convenient way of checking that a cocone (M′, f′_i (i ∈ I)) is a direct limit of the diagram C. This is the case, iff the following two "internal" conditions are satisfied:

(C1) ⋃_{i∈I} Im f′_i = M′, and
(C2) Ker f′_i = ⋃_{i≤j∈I} Ker f_{ji} for each i ∈ I.

Indeed, both conditions hold for the direct limit (M, f_i (i ∈ I)), and condition (C1) is equivalent to the surjectivity of the homomorphism h, while (C2) to its injectivity (cf. [18, 2.3]). Notice that condition (C2) implies that if all the morphisms f_{ji} (i ≤ j ∈ I) in C are monic, then so are all the f_i (i ∈ I). Also, if I′ is a ≤-cofinal subset of (I, ≤), then conditions (C1) and (C2) hold for the "restricted" cocone (M, f_i (i ∈ I′)), whence also

M ≅ lim→_{i∈I′} M_i.

Of particular interest is the case when I = σ is a limit ordinal, ≤ is the ordinal ordering on σ, f_{α+1,α} is an inclusion for each α < σ, and M_λ = ⋃_{α<λ} M_α for each limit ordinal λ < σ. Then (M_α | α < σ) is a continuous chain of modules. In this case M = ⋃_{α<σ} M_α. One can proceed and define homomorphisms between I-direct systems of modules C and C′ as systems of morphisms (h_i | i ∈ I) such that h_i ∈ Hom_R(M_i, M′_i) and h_j f_{ji} = f′_{ji} h_i for all i ≤ j ∈ I. Then lim→ h : lim→ C → lim→ C′ is defined by (lim→ h)(x) = f′_i(h_i(x_i)) for each x ∈ M such that x = f_i(x_i) for some i ∈ I and x_i ∈ M_i. In this way, lim→ defines a functor from the category of I-direct systems of modules to Mod-R. This functor is well-known to be exact, see e.g. [30, 5.33].
2.2. Inverse limits. Let (I, ≤) be a directed poset. A contravariant functor 𝔇 from the category (I, ≤) to Mod-R is called an I-inverse system of modules. Equivalently, 𝔇 may be viewed as a diagram, D, in the category Mod-R, whose objects are modules M_i (i ∈ I), and morphisms are g_{ij} ∈ Hom_R(M_j, M_i) for i ≤ j ∈ I that satisfy the identities g_{ii} = id_{M_i} and g_{ik} = g_{ij} g_{jk} for all i ≤ j ≤ k ∈ I. The limit of the diagram D in the category Mod-R is called the inverse limit of D and denoted by lim← D. The limit is a cone (M, g_i (i ∈ I)) (= a module M ∈ Mod-R and morphisms g_i ∈ Hom_R(M, M_i) such that g_{ij} g_j = g_i for all i ≤ j ∈ I) with the following universal property: for each cone (M′, g′_i (i ∈ I)) there is a unique homomorphism h : M′ → M such that g_i h = g′_i for each i ∈ I. We will also use the notation M = lim←_{i∈I} M_i. The module M is (isomorphic to) a particular submodule of the direct product ∏_{i∈I} M_i:

(∗) M = {(x_i)_{i∈I} ∈ ∏_{i∈I} M_i | x_i = g_{ij}(x_j) for all i ≤ j ∈ I},

and for each i ∈ I, g_i = π_i ↾ M, where π_i is the canonical projection of ∏_{j∈I} M_j on to M_i. From this presentation of the inverse limit, it follows that if I′ is a ≤-cofinal subset of (I, ≤), then also M ≅ lim←_{i∈I′} M_i for the "restricted" cone (M, g_i (i ∈ I′)). In particular, if I is a countably infinite directed set, we can w.l.o.g. assume that I = ω. An I-inverse system of modules with I = ω is called a tower of modules.
An inverse system D is called a generalized tower if I = σ is a limit ordinal with the ordinal ordering ≤, and D = (M_α, g_{αβ} | α ≤ β < σ) is a continuous inverse system of modules, that is, M_α = lim←_{β<α} D for each limit ordinal α < σ. Further, D is called a generalized tower of epimorphisms in case all the maps g_{αβ} (α ≤ β < σ) are surjective, or equivalently, g_{α,α+1} is surjective for each α < σ. Next, we define homomorphisms between I-inverse systems of modules D and D′ as systems of morphisms (h_i | i ∈ I) such that h_i ∈ Hom_R(M_i, M′_i) and h_i g_{ij} = g′_{ij} h_j for all i ≤ j ∈ I. Then lim← h : lim← D → lim← D′ is defined by (lim← h)((x_i)_{i∈I}) = (h_i(x_i))_{i∈I} for each (x_i)_{i∈I} ∈ lim← D.
Thus lim← defines a functor from the category of I-inverse systems of modules to Mod-R. Since lim← has a left adjoint (provided by the 'constant' inverse system functor from Mod-R to the category of all I-inverse systems of modules), the functor lim← is left exact, see e.g. [30, 5.52]. It is not necessarily right exact in Mod-R in general. Here is a simple example demonstrating that right exactness may fail even for towers of short exact sequences:

Example 2.1. Let p be a prime integer. Consider the tower E_p of short exact sequences of abelian groups 0 → p^n Z ⊆ Z → Z_{p^n} → 0 (= free resolutions of the cyclic groups Z_{p^n}) for 0 < n < ω. The connecting triples of morphisms are (u_{n,n+1}, id_Z, v_{n,n+1}) (0 < n < ω), where u_{n,n+1} : p^{n+1} Z ⊆ p^n Z is the inclusion and v_{n,n+1} : Z_{p^{n+1}} → Z_{p^n} is the projection modulo the socle of Z_{p^{n+1}}. Then lim← E_p is the sequence 0 → 0 → Z → J_p → 0, where J_p = lim←_{n<ω} Z_{p^n} is the (uncountable) group of all p-adic integers. So lim← E_p is not right exact.
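To make Example 2.1 concrete, the projection of Z_{p^{n+1}} modulo its socle can be realised as reduction modulo p^n, so that elements of J_p = lim← Z_{p^n} are exactly the compatible sequences (x_n)_n. The sketch below (with p = 3 and a truncation depth chosen arbitrarily) checks compatibility for images of integers under the limit map Z → J_p.

```python
# The tower ... -> Z_{p^3} -> Z_{p^2} -> Z_{p^1} of Example 2.1, with the
# projection modulo the socle realised as reduction mod p**n.  Elements of
# J_p = lim Z_{p^n} are the compatible sequences; p = 3 and the depth L
# are arbitrary choices for this sketch.

p, L = 3, 10

def v(n, m, x):
    """Transition map v_{n,m} : Z_{p^m} -> Z_{p^n} for n <= m (reduction)."""
    assert n <= m
    return x % p ** n

def is_compatible(seq):
    """seq[k] is the component in Z_{p^{k+1}}; check x_n = v(x_{n+1})."""
    return all(seq[k] == v(k + 1, k + 2, seq[k + 1])
               for k in range(len(seq) - 1))

def image_of(z):
    """The limit map Z -> J_p sends z to its reduction at every level."""
    return [z % p ** (k + 1) for k in range(L)]

seq7 = image_of(7)          # a sequence in the image of Z
seq_minus1 = image_of(-1)   # (p**n - 1)_n, the p-adic expansion ...222
```

Non-surjectivity of Z → J_p cannot, of course, be witnessed at any finite truncation; the point of the sketch is only the compatibility condition defining (∗) and the functoriality v_{n,m} = v_{n,k} ∘ v_{k,m}.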
If a cone ( , ( ∈ )) is the inverse limit of D, then the following "internal" condition (D1) -dual to the condition (C1) above -holds (D1) ∈ Ker = 0. Indeed, in the notation above, for a cone ( ′ , ′ ( ∈ )), condition (D1) is equivalent to the injectivity of the homomorphism . However, no "internal" condition is known to be equivalent to the surjectivity of in general. Of course, we can formally dualize condition (C2) as (D2) Im = ≤ ∈ Im . Notice that (D2) implies that if all the morphisms ( ≤ ∈ ) in D are surjective, then so are all the ( ∈ ).
We will briefly discuss condition (D2) for countable inverse systems of modules. By the above, we can w.l.o.g. assume that I = ω, that is, D is a tower. For each n < ω, let O_n = ∩_{n≤m<ω} Im f_{nm} and h_n = f_{n,n+1} ↾ O_{n+1}. Clearly, (D2) is equivalent to the surjectivity of all the h_n (n < ω).
Let us restrict further to the particular case of a tower formed by an iteration of a single endomorphism. That is, we consider M ∈ Mod-R and f ∈ End_R(M), and let M_n = M and f_{n,n+1} = f for all n < ω. Then (D2) holds, iff f(O) = O, where O = ∩_{n<ω} f^n(M). The latter clearly holds when f is surjective, and it is easy to see that it also holds when f is injective. However, it may fail for a general endomorphism f. To demonstrate this fact, we recall the classic construction of totally projective modules due to Walker (cf. ). Thus condition (D2) fails for the tower formed by an iteration of the endomorphism f.
In fact, for each ≤ , = , / , and +1 = 0, whence lim
← − − D = 0.
Notice that in the setting of Example 2.1, condition (D2) holds for all the three towers of modules forming the tower of short exact sequences E , but E is not right exact.
Clearly, (D2) holds for all towers of epimorphisms, and more generally, for all generalized towers of epimorphisms: then also all the morphisms ν_α (α < σ) are surjective. However, the latter (and hence (D2)) may fail for uncountable (non-continuous) well-ordered inverse systems of epimorphisms.
Our example exhibiting the failure is based on the construction of an Aronszajn tree, that is, of a tree T of height ℵ1 with no branches of length ℵ1, such that for each t ∈ T, the set of all successors of t in T has cardinality ℵ1, and all levels T_α (α < ℵ1) of T are countable. We refer to [17, Appendix on Set Theory] for a construction of such a tree. Example 2.3 (Aronszajn's well-ordered inverse systems). Let T be an Aronszajn tree. For each α < ℵ1, let B_α be the set of all branches in T of length α. For α ≤ β < ℵ1, we define a map g_{αβ} : B_β → B_α as the restriction map. That is, g_{αβ} restricts each branch b ∈ B_β to its initial segment of length α. Since all levels of T are countable, and for each t ∈ T, the set of all successors of t in T has cardinality ℵ1, the maps g_{αβ} are surjective for all α ≤ β < ℵ1. Let M be any module. Let I = ℵ1 with the ordinal ordering ≤. For each α ∈ I, let M_α = M^{(B_α)}. For α ≤ β ∈ I, we define an epimorphism f_{αβ} : M_β → M_α by f_{αβ}((x_b)_{b∈B_β}) = (y_c)_{c∈B_α}, where for each c ∈ B_α, y_c = Σ_{b∈B_β, g_{αβ}(b)=c} x_b. As T has no branch of length ℵ1, (∗) yields that L = lim← D = 0. Moreover, all the f_{αβ} (α ≤ β ∈ I) are epimorphisms, while all the ν_α : L → M_α (α ∈ I) are zero morphisms.
There is, however, an important class of inverse limits of modules where condition (D2) does hold, namely the class of dual inverse systems: Example 2.4 (Dual inverse systems). Let R be a ring. Denote by (−)* = Hom_Z(−, Q/Z) the character module duality from Mod-R to R-Mod. Notice that for each homomorphism f in Mod-R, there is a canonical isomorphism of left R-modules (Im f)* ≅ Im f*. For a directed set (I, ≤) and a covariant functor ℭ : I → Mod-R, we define a contravariant functor 𝔇 = (−)* ∘ ℭ : I → R-Mod. In other words, if C = (M_i, f_{ij} | i ≤ j ∈ I) is the I-direct system in Mod-R corresponding to ℭ, then D = (M_i*, f_{ij}* | i ≤ j ∈ I) is the I-inverse system of left R-modules corresponding to 𝔇. D is called the dual inverse system of C.
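In the finite case the canonical isomorphism (Im f)* ≅ Im f* can be checked by hand. The sketch below (an illustration under the usual identification of the character group of Z/n with Z/n itself; the names are mine, not from the text) compares the two sides for the endomorphism f(x) = 2x of Z/4.

```python
# Characters of Z/n with values in Q/Z correspond to elements k of Z/n via
# x |-> kx/n.  Under this identification, dualizing f: Z/4 -> Z/4, f(x) = 2x,
# gives the precomposition map f*(k) = 2k, and |Im f| = |Im f*|.
n = 4
f = lambda x: (2 * x) % n
f_star = lambda k: (2 * k) % n

im_f = {f(x) for x in range(n)}
im_f_star = {f_star(k) for k in range(n)}
```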
Let (M, (u_i | i ∈ I)) be the direct limit of C in Mod-R, so in the notation of 2.1, we have the short exact sequence 0 → K ↪ ⊕_{i∈I} M_i →π M → 0 where π ↾ M_i = u_i (i ∈ I), and K is generated by the elements of the form x − f_{ij}(x) where x ∈ M_i and i ≤ j ∈ I.
Then 0 → M* →π* (⊕_{i∈I} M_i)* → K* → 0 is exact in R-Mod, and φ ∈ π*(M*), iff φ = (φ_i)_{i∈I} where φ_i ∈ M_i* and φ_i − φ_j f_{ij} = 0 for all i ≤ j ∈ I. The latter equality just says that φ_i = f_{ij}*(φ_j). Denoting by ν_i the restriction to π*(M*) of the i-th canonical projection of Π_{i∈I} M_i* onto M_i*, we infer that (π*(M*), (ν_i | i ∈ I)) is the inverse limit of D in R-Mod. As π ↾ M_i = u_i and π*(φ) = (u_i*(φ))_{i∈I} for each φ ∈ M*, the inverse limit is isomorphic to the cone (M*, (u_i* | i ∈ I)).
Let i ∈ I. Consider the direct system E_i of short exact sequences 0 → Ker f_{ij} ↪ M_i → Im f_{ij} → 0 (i ≤ j ∈ I) with the connecting homomorphisms (ι_{jk}, id_{M_i}, g_{jk}) where ι_{jk} : Ker f_{ij} ⊆ Ker f_{ik} is the inclusion and g_{jk} : Im f_{ij} → Im f_{ik} the canonical epimorphism, for all i ≤ j ≤ k ∈ I. By condition (C2) for the direct system C, Ker u_i is the directed union of its submodules Ker f_{ij} (i ≤ j ∈ I). It follows that lim→ E_i is the short exact sequence 0 → Ker u_i ⊆ M_i → Im u_i → 0. Applying the duality (−)* to E_i and the isomorphism above, we obtain the inverse system E_i* of short exact sequences 0 → Im f_{ij}* ↪ M_i* → (Ker f_{ij})* → 0 (i ≤ j ∈ I) with the connecting homomorphisms (ι′_{jk}, id_{M_i*}, ι_{jk}*) where ι′_{jk} : Im f_{ik}* ⊆ Im f_{ij}* is the inclusion. Applying (−)* to lim→ E_i, we infer that lim← E_i* ≅ (lim→ E_i)* is the short exact sequence 0 → Im u_i* ↪ M_i* → (Ker u_i)* → 0.
Thus Im u_i* = ∩_{i≤j∈I} Im f_{ij}*, and condition (D2) holds for the cone (M*, (u_i* | i ∈ I)).
Remark 1. (1) The tower of abelian groups D : · · · → Z_{p^{n+1}} → Z_{p^n} → · · · → Z_p → 0 is a dual inverse system. It is obtained by applying the character module duality to the direct system C : 0 → Z_p ⊆ · · · ⊆ Z_{p^n} ⊆ Z_{p^{n+1}} ⊆ . . . . Here, p is a prime integer, Z_{p^∞} = lim→ Z_{p^n} is the Prüfer p-group, while J_p ≅ (Z_{p^∞})* = lim← Z_{p^n} is the group of all p-adic integers. More generally, if C is any continuous chain of modules, then the dual inverse system D is a generalized tower of modules.
(2) Condition (D2) holds also for other types of dualities: for example, if (−)* = Hom_R(−, W) where W is a pure-injective module, then for each covariant functor ℭ : I → Mod-R, the functor 𝔇 = (−)* ∘ ℭ defines an I-inverse system of abelian groups that satisfies condition (D2), see [20, 1.7].
Mittag-Leffler conditions.
Mittag-Leffler conditions are stabilization conditions for the decreasing chains of images of the inverse system maps:
Definition 2.5. Let D = (M_i, f_{ij} | i ≤ j ∈ I) be an inverse system of modules and let lim← D = (L, (ν_i | i ∈ I)) be its inverse limit.
(1) D is Mittag-Leffler, provided that for each i ∈ I there exists i ≤ j ∈ I, such that Im f_{ij} = Im f_{ik} for each j ≤ k ∈ I. (2) D is strict Mittag-Leffler, provided that for each i ∈ I there exists i ≤ j ∈ I, such that Im f_{ij} = Im ν_i.
Since Im ν_i ⊆ Im f_{ij} for each i ≤ j ∈ I, each strict Mittag-Leffler inverse system is Mittag-Leffler. For example, if all the f_{ij} (i ≤ j ∈ I) are epimorphisms, then D is Mittag-Leffler.
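For the tower formed by iterating a single endomorphism (the situation discussed in 2.1), condition 2.5(1) just says that the descending chain of images Im f^k stabilizes. A finite toy case, assuming nothing beyond the definition:

```python
# The tower M <- M <- M <- ... with every bonding map the endomorphism
# f(x) = 2x of M = Z/8.  The images Im f^k form a descending chain that
# stabilizes (here at {0} from k = 3 on), so the tower is Mittag-Leffler.
M = range(8)
f = lambda x: (2 * x) % 8

def image_of_iterate(k):
    im = set(M)
    for _ in range(k):
        im = {f(x) for x in im}
    return im

images = [image_of_iterate(k) for k in range(6)]
```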
Remark 2. It is easy to see that the two notions coincide for towers of modules: if D is a Mittag-Leffler tower, we can w.l.o.g. assume that in 2.5(1), j = i + 1 for each i < ω, and then for each x_i ∈ Im f_{i,i+1}, by induction on k > i, find an x_k ∈ Im f_{k,k+1} such that f_{k−1,k}(x_k) = x_{k−1}. Thus x_i ∈ Im ν_i, and D is strict Mittag-Leffler.
However, Example 2.3 presents a well-ordered inverse system D of epimorphisms, hence a Mittag-Leffler inverse system, whose inverse limit is 0. So D is not strict Mittag-Leffler.
Let us record another case of coincidence of the two notions from [20]:
Lemma 2.6. Assume that the inverse system D satisfies condition (D2). Then D is Mittag-Leffler, iff it is strict Mittag-Leffler.
In particular, the equivalence holds for all dual inverse systems of modules.
Proof. This is a simple consequence of the set I being (upper) directed: the equalities Im f_{ij} = Im f_{ik} for each j ≤ k ∈ I imply that Im f_{ij} = ∩_{i≤k∈I} Im f_{ik}. By condition (D2), the latter intersection equals Im ν_i.
The final claim follows from Example 2.4 (see also Remark 1 (2)).
The Mittag-Leffler conditions are sufficient to guarantee exactness of the functor lim← at towers of modules. More precisely, let (†) 0 → A → B → C → 0 be a short exact sequence of generalized towers of modules indexed by a limit ordinal σ, and let
(‡) 0 → lim← A → lim← B → lim← C → 0 be the left exact sequence obtained by applying the functor lim← to (†).
Lemma 2.7. (i) Assume that σ has cofinality ω (e.g., (†) is a short exact sequence of towers of modules). Then (‡) is exact provided that A is Mittag-Leffler. (ii) Assume that A is a generalized tower of epimorphisms. Then (‡) is exact.
Proof. (i) W.l.o.g., we can assume that σ = ω. Then for a tower D, lim← D = Ker Φ_D where Φ_D : Π_{n<ω} M_n → Π_{n<ω} M_n is defined by Φ_D((x_n)_{n<ω}) = (x_n − f_{n,n+1}(x_{n+1}))_{n<ω}. By the Snake Lemma, (‡) is exact if coker Φ_A = 0. But the latter is known to hold when A is Mittag-Leffler (see e.g. [18, 3.6]).
(ii) Let A = (A_α, f_{αβ} | α ≤ β < σ), B = (B_α, g_{αβ} | α ≤ β < σ), and C = (C_α, h_{αβ} | α ≤ β < σ) be the generalized towers, and (μ_α, π_α) (α < σ) the maps such that the short exact sequences 0 → A_α →μ_α B_α →π_α C_α → 0 form a generalized tower of short exact sequences. We have to prove that π = lim← π : lim← B → lim← C is surjective. Let c = (c_α | α < σ) ∈ lim← C. By induction on α < σ, we will define a sequence b = (b_α | α < σ) ∈ Π_{α<σ} B_α such that π_α(b_α) = c_α for all α < σ and g_{αβ}(b_β) = b_α for all α < β < σ. Then b ∈ lim← B and π(b) = c.
First, since π_0 is surjective, there exists b_0 ∈ B_0 such that π_0(b_0) = c_0. If b_α is defined up to an α < σ, then we define b_{α+1} as follows: we take any b′_{α+1} ∈ B_{α+1} such that π_{α+1}(b′_{α+1}) = c_{α+1}. Since π_α g_{α,α+1}(b′_{α+1}) = h_{α,α+1} π_{α+1}(b′_{α+1}) = h_{α,α+1}(c_{α+1}) = c_α = π_α(b_α), we have b_α − g_{α,α+1}(b′_{α+1}) = μ_α(a_α) for some a_α ∈ A_α. Since A is a generalized tower of epimorphisms, f_{α,α+1} is surjective. So there exists a_{α+1} ∈ A_{α+1} such that μ_α(a_α) = μ_α f_{α,α+1}(a_{α+1}) = g_{α,α+1} μ_{α+1}(a_{α+1}). Let b_{α+1} = b′_{α+1} + μ_{α+1}(a_{α+1}). Then π_{α+1}(b_{α+1}) = c_{α+1}, and g_{α,α+1}(b_{α+1}) = g_{α,α+1}(b′_{α+1}) + μ_α(a_α) = b_α. If α < σ is a limit ordinal, then since the generalized towers B and C are continuous, letting b_α = (b_β | β < α) ∈ B_α = lim←_{β<α} B_β, we conclude that π_α(b_α) = (c_β | β < α) = c_α, q.e.d.
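The correction step of this induction can be simulated in a toy setting. In the sketch below (constant towers chosen purely for illustration, with names of my own) B_n = Z/8, C_n = Z/4 with pi the reduction mod 4, and A_n = 4Z/8 with identity bonding maps; arbitrarily chosen lifts of a coherent sequence in lim← C are repaired into a coherent lift using elements of A.

```python
import random

N = 6
pi = lambda b: b % 4                      # pi_n : B_n = Z/8 -> C_n = Z/4
c = [3] * N                               # a coherent sequence in lim<- C

random.seed(0)

def some_preimage(cn):
    # pick an arbitrary lift of cn along pi (it need not be coherent)
    return random.choice([b for b in range(8) if pi(b) == cn])

b = [some_preimage(c[0])]
for n in range(N - 1):
    b_prime = some_preimage(c[n + 1])     # any lift of c_{n+1}
    a = (b[n] - b_prime) % 8              # lies in A_n = {0, 4} = Ker pi
    b.append((b_prime + a) % 8)           # corrected lift; now coherent

# b is a coherent (here: constant) sequence with pi(b_n) = c_n for all n
```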
Relative Mittag-Leffler and tilting modules.
Mittag-Leffler conditions are closely related to Mittag-Leffler modules and their relative versions. In order to make this clear, we require further notions and results from [2] and [20] that generalize the classic (absolute) case studied in [27].
Definition 2.8. Let M be a module.
(1) Let B be a bimodule. Then M is B-stationary (strict B-stationary), provided that M = lim→_{i∈I} F_i for some direct system (F_i, u_{ij} | i ≤ j ∈ I) consisting of finitely presented modules, such that the inverse system (Hom_R(F_i, B), Hom_R(u_{ij}, B) | i ≤ j ∈ I) is Mittag-Leffler (strict Mittag-Leffler) in Mod-Z. (2) Let B be a class of modules. Then M is B-stationary (strict B-stationary), provided M is B-stationary (strict B-stationary) for each B ∈ B.
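The simplest instance of the failure of stationarity is the classic Bass module Z[1/2] = lim→ (Z →2 Z →2 · · ·): applying Hom(−, Z) yields the inverse system Z ←2 Z ←2 · · ·, whose images 2^k Z strictly decrease and never stabilize. A windowed sketch (sampling Z in a finite interval, purely for illustration):

```python
# Images of the composites in the inverse system Z <--2-- Z <--2-- ...,
# sampled in the window [-64, 64): the chain 2^0 Z > 2^1 Z > 2^2 Z > ...
# is strictly decreasing, so the system is not Mittag-Leffler and the flat
# module Z[1/2] is not Z-stationary (hence not Mittag-Leffler).
BOUND = 64

def image_after(k):
    return {x for x in range(-BOUND, BOUND) if x % (2 ** k) == 0}

chain = [image_after(k) for k in range(6)]
```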
Remark 3. (1) The notions from 2.8 can equivalently be defined by replacing the existential quantifier with the universal one, that is, by replacing 'for some direct system' with 'for each direct system', see [2]. (2) If B is a pure-injective module, then each B-stationary module is strict B-stationary, cf. Remark 1(2) and Lemma 2.6. Definition 2.9. A class of modules C is definable provided that it is closed under direct limits, direct products and pure submodules. For a class of modules Q, we denote by Def(Q) the definable closure of Q, which is the least definable class of modules containing Q.
Given a definable class C of left (right) -modules, we define its dual definable class of right (left) -modules, denoted by C ∨ , as the least definable class of right (left) -modules containing the character modules * = Hom Z ( , Q/Z) of all modules ∈ C. Then C = (C ∨ ) ∨ for any definable class of left (right) -modules C, see e.g. [29, 2.5].
Example 2.10. Let S be a class of FP_2-modules (i.e., modules possessing a presentation P/K where P is finitely generated projective, and K is a finitely presented submodule of P). Then the class
S⊥ = {M ∈ Mod-R | Ext¹_R(S, M) = 0 for all S ∈ S} is definable in Mod-R, its dual definable class in R-Mod being S⊺ = {N ∈ R-Mod | Tor₁ᴿ(S, N) = 0 for all S ∈ S}.
The definable classes of this form are called of finite type, see e.g. [8, 3.2].
For a concrete example, assume that R is a right coherent ring. Then FP_2-modules coincide with the finitely presented modules. If S denotes the class of all finitely presented modules, then S⊥ is the definable class of all fp-injective modules, and S⊺ the dual definable class of all flat left R-modules.
A key relation between relative Mittag-Leffler properties and stationarity was proved in [20, 2.11]:
Theorem 2.11. Let Q be a definable class of left R-modules and B = Q∨ be its dual definable class (of right R-modules). Let M be a module. Then the following conditions are equivalent:
(i) M is Q-Mittag-Leffler. (ii) M is {Q}-Mittag-Leffler for each Q ∈ Q. (iii) M is (strict) Q*-stationary for each Q ∈ Q. (iv) M is B-stationary.
We will also need the following description of flat Q-Mittag-Leffler modules from [21, 2.6]. Recall that a system S of submodules of a module M is called ℵ1-dense provided that each countable subset of M is contained in an element of S, and S is closed under unions of countable chains. Theorem 2.12. Let M be a module. Then M ∈ D_Q, iff M has an ℵ1-dense system consisting of countably generated flat Q-Mittag-Leffler modules.
Countably presented absolute Mittag-Leffler modules are pure-projective, that is, they are direct summands in direct sums of finitely presented modules. Hence countably presented modules in F M are projective. There are stronger versions of Theorems 2.11 and 2.12 available for the absolute case (we refer to [18, §3]): Theorem 2.13. (1) The following conditions are equivalent for a module M:
(i) M is Mittag-Leffler.
(ii) M has an ℵ1-dense system consisting of countably generated pure-projective modules.
(iii) Each finite (or countable) subset of M is contained in a countably generated pure-projective submodule which is pure in M.
(iv) M is Mod-R-stationary.
(2) The following conditions are equivalent for a module M:
(i) M is flat Mittag-Leffler.
(ii) M has an ℵ1-dense system consisting of countably generated projective modules.
(iii) Each finite (or countable) subset of M is contained in a countably generated projective submodule which is pure in M.
(iv) M is flat and R-stationary.
(3) Let κ be an infinite cardinal and M ∈ F M be ≤κ-generated. Then M is ≤κ-presented.
Theorem 2.11 concerns only definable classes of modules. But this is not a serious restriction, because of the following general fact from [28, 2.2], see also [20, 2.]. If M is a countably presented flat module, then M ∈ D_Q, iff M ∈ ⊥(S⊥). W.l.o.g., we can assume that R ∈ S; then the latter condition is equivalent to M being a direct summand in a module N such that N has an S-filtration of length ω, see e.g. [18, 6.14 and 7.10].
Restricting further the setting of Example 2.16, we arrive at the notions of (infinitely generated) tilting modules, and tilting classes: Definition 2.17. A module T is tilting, in case it satisfies the following three properties:
(T1) T has finite projective dimension, (T2) Ext^i_R(T, T^{(I)}) = 0 for each i ≥ 1 and all sets I, (T3) there exists a finite exact sequence 0 → R → T_0 → T_1 → · · · → T_r → 0 such that T_i ∈ Add T for each i ≤ r. Here, Add T denotes the class of all modules isomorphic to direct summands of (possibly infinite) direct sums of copies of the module T.
Let T be a tilting module. Then the class B_T = T^{⊥∞} is called the (right) tilting class induced by T, and A_T = ⊥B_T the left tilting class induced by T. Moreover, Add T = A_T ∩ B_T.
If T has projective dimension ≤ n, then T is called n-tilting, and similarly for the tilting classes A_T and B_T.
Two tilting modules T and T′ are said to be equivalent in case Add T = Add T′.
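A classical concrete instance over Z may help fix ideas (this example is standard and not taken from the text above): T = Q ⊕ Q/Z is a 1-tilting abelian group.

```latex
% Over R = \mathbb{Z}, the module T = \mathbb{Q} \oplus \mathbb{Q}/\mathbb{Z}
% satisfies (T1)-(T3):
% (T1) \operatorname{proj.dim} T \le 1 (true for every abelian group);
% (T2) \operatorname{Ext}^1_{\mathbb{Z}}(T, T^{(I)}) = 0 for all sets I,
%      since T^{(I)} is divisible;
% (T3) the exact sequence
0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q}
  \longrightarrow \mathbb{Q}/\mathbb{Z} \longrightarrow 0
% has both its middle and right terms in \operatorname{Add} T.
```

The induced tilting class B_T here consists exactly of the divisible abelian groups.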
Tilting classes fit in the setting of classes of finite type due to the following (see [2, 9.8] and [18, 13.]).
By [1, 12.6], the pure-projective abelian p-groups are isomorphic to direct sums of copies of the cyclic groups Z_{p^n} (0 < n < ω). In particular, each countable Mittag-Leffler abelian p-group contains no non-zero element of infinite p-height.
For each n < ω, let z_n = 1 + p^{n+1}Z ∈ Z_{p^{n+1}}. Consider the short exact sequence
0 → Z_p →ν (⊕_{0<n<ω} Z_{p^{n+1}})/K →π ⊕_{0<n<ω} Z_{p^n} → 0
where K = Σ_{0<n<ω} Z(p^n z_n − p^{n+1} z_{n+1}), ν(z_0) = p z_1 + K, and for each 0 < n < ω, π(z_n + K) = y_n, the generator of the summand Z_{p^n}.
Then the outer terms of this short exact sequence, Z_p and ⊕_{0<n<ω} Z_{p^n}, are countable pure-projective abelian groups. However, the middle term
G = (⊕_{0<n<ω} Z_{p^{n+1}})/K is not Mittag-Leffler. Indeed, 0 ≠ ν(z_0) = p(z_1 + K) = p²(z_2 + K) = · · · = p^n(z_n + K) = . . ., so ν(z_0) is a non-zero element of infinite p-height in the countable group G.
Approximations.
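The infinite-height computation above can be verified in a finite truncation. The sketch below (an illustration only: p = 2, four summands, with K generated by the relators p^n z_n − p^{n+1} z_{n+1}) checks that the class of p z_1 is non-zero, has order p, and is divisible by p^n for every n in range.

```python
from itertools import product

mods = [4, 8, 16, 32]        # Z/4 + Z/8 + Z/16 + Z/32, generators z_1..z_4

def norm(v):
    return tuple(x % m for x, m in zip(v, mods))

def z(n):
    return norm(tuple(1 if i == n - 1 else 0 for i in range(4)))

def scal(k, v):
    return norm(tuple(k * x for x in v))

def sub(u, v):
    return norm(tuple(x - y for x, y in zip(u, v)))

# relators r_n = 2^n z_n - 2^(n+1) z_(n+1) for n = 1..3; each has order 2
relators = [sub(scal(2 ** n, z(n)), scal(2 ** (n + 1), z(n + 1)))
            for n in (1, 2, 3)]

# K = subgroup generated by the relators
K = {norm(tuple(sum(c * r[i] for c, r in zip(cs, relators)) for i in range(4)))
     for cs in product(range(2), repeat=3)}

a = scal(2, z(1))            # the class of p*z_1 modulo K
```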
Precovering classes C of modules are important for extending classical homological algebra to more refined settings. Classically, one uses projective resolutions of modules. In the refined setting, one deals with C-resolutions obtained by iterations of C-precovers. This is one of the themes of relative homological algebra [14]. Definition 3.1. Let C be a class of modules and M ∈ Mod-R. A homomorphism f : C → M with C ∈ C is called a C-precover of M (or a right C-approximation of M) provided that for each homomorphism f′ : C′ → M with C′ ∈ C, there exists a homomorphism g : C′ → C such that f′ = f g. The C-precover f is a C-cover, provided that f is right minimal, i.e., if each g ∈ End_R(C) such that f = f g is an automorphism of C. C is called a precovering (covering) class, if each module M ∈ Mod-R has a C-precover (C-cover).
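A minimal sanity check of the precover property, assuming only the definition itself (the setting and names are mine): over R = Z/4 the canonical projection onto Z/2 is a precover with respect to rank-one free modules, since every homomorphism from the free module R factors through it.

```python
# Over R = Z/4, check that f: R -> R/2R, f(x) = x mod 2, satisfies the
# precover property of Definition 3.1 with respect to the class of free
# modules of rank 1: every g: R -> R/2R factors as g = f o h, h: R -> R.
R = range(4)
f = lambda x: x % 2

homs_R_R = [lambda x, a=a: (a * x) % 4 for a in range(4)]    # Hom(R, R)
homs_R_R2 = [lambda x, b=b: (b * x) % 2 for b in range(2)]   # Hom(R, R/2R)

def factors_through_f(g):
    return any(all(g(x) == f(h(x)) for x in R) for h in homs_R_R)
```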
It is well-known that the class P_0 is precovering for any ring R, and P_0 is covering, iff R is a right perfect ring. However, the class F_0 is covering for every ring R, [9]. Our goal in this section is to investigate precovering properties of the intermediate classes of the restricted, and the relative, flat Mittag-Leffler modules. Our presentation follows [7], [33], and [32].
Approximations by restricted Mittag-Leffler modules.
We start with recalling a remarkable general result from [31, 2.15] (see also [18, 7.]). The class F M_{ℵ0} = P_0 is not covering when R is not right perfect, [1, 28.4]. It is conjectured that the same holds for the classes F M_κ when κ > ℵ0. In fact, there is a much more general conjecture due to Enochs, namely that every covering class of modules is closed under direct limits. Enochs' Conjecture is still open in general, but has been proved in a number of particular cases (e.g., for all left tilting classes in [4, 5.2]; note that all left tilting classes are precovering by Theorems 2.18 and 3.2). Recently, an important case of the conjecture was proved to be consistent with ZFC in [5]: it holds in any extension of ZFC satisfying Gödel's Axiom of Constructibility (V = L):
Theorem 3.5. Assume V = L. Let S be any set of modules. Then the class Filt(S) is covering, iff it is closed under direct limits.
Since F M_κ is not closed under direct limits for any κ ≥ ℵ0 in case R is not right perfect, we have Corollary 3.6. Assume that V = L. Let R be a non-right perfect ring. Then the class F M_κ is not covering for any infinite cardinal κ.
Approximations by relative Mittag-Leffler modules.
For the rest of this section, we will again assume that R is a non-right perfect ring. Then the setting of relative Mittag-Leffler modules is quite different from the restricted ones: we will see that for any class of left modules Q, the class of relative flat Mittag-Leffler modules D_Q is precovering only in the boundary case of D_Q = F_0. This will follow from the next two lemmas that were originally proved in more general forms in [32] and [33]. First, we need further notation. Definition 3.7. Let N be a countably presented flat non-projective module. (Such modules exist, because R is not right perfect; they are called Bass modules.) Since F M_{ℵ0} = P_0, necessarily N ∉ F M. As N is a countable direct limit of finitely generated free modules, there is a chain
(∗∗) F_0 →^{h_0} F_1 → . . . →^{h_{n−1}} F_n →^{h_n} F_{n+1} → . . .
where F_n is a finitely generated free module for each n < ω, such that N ≅ lim→_{n<ω} F_n. Let κ be an infinite cardinal and T_κ be the tree on κ consisting of all finite sequences of ordinals < κ. That is, each τ ∈ T_κ is a map τ : n → κ for some n < ω. The partial order on T_κ is by restriction, so if τ, σ ∈ T_κ, then τ ≤ σ, iff σ ↾ ℓ(τ) = τ. For each τ ∈ T_κ, ℓ(τ) will denote the length of τ.
Let Br(T_κ) denote the set of all branches of T_κ. Notice that card T_κ = κ, and card Br(T_κ) = κ^ω (because branches in T_κ correspond to ω-sequences of ordinals < κ).
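A miniature model of the tree T_κ, with κ = 3 and bounded depth, purely for illustration:

```python
from itertools import product

kappa, depth = 3, 3
# all finite sequences of "ordinals" < kappa of length < depth,
# ordered by restriction (initial segment)
T = [seq for n in range(depth) for seq in product(range(kappa), repeat=n)]

def leq(t, s):
    return s[:len(t)] == t

# immediate successors of each node: extensions by one extra entry
succ = {t: [s for s in T if len(s) == len(t) + 1 and leq(t, s)] for t in T}
```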
The following lemma is a special instance of Lemma [32, 5.6]: Further, for each ∈ , ( + )/ , as for each < , we can define : → ( + )/ by ( ) = + , and (( + )/ , | ∈ ) is the direct limit of the direct system ( * * ).
Moreover, each element of is a sequence in whose th component is zero for all ∉ { ↾ | < }, so the modules (( + )/ | ∈ ) are independent. It follows that / = ∈ ( + )/ ( ) . For each countable subset = { | < } of , the module = ∈ is isomorphic to a countable direct sum of the s. Indeed, = < , where = ≤ is a direct summand in +1 , with the complementing direct summand isomorphic to a countable direct sum of the s. It follows that the set S of all , where runs over all countable subsets of , is an ℵ 1 -dense system of submodules of consisting of countably generated free modules. By Theorem 2.13(2), ∈ F M. Proof. Let : → be a D Q -precover of . Since P 0 ⊆ D Q , is surjective, and we have a short exact sequence 0 → → → → 0. Let be an infinite cardinal such that = 2 , card ≤ , and card ≤ 2 (e.g., let 0 = card + card + ℵ 0 , +1 = 2 and = sup < ). By Lemma 3.8, there is a short exact sequence involving the tree module as follows: 0 → ( ) ↩→ → (2 ) → 0. Consider the group homomorphism Ext 1 ( , ). It takes a short exact sequence 0 → → → → 0 to its pushout with . The pushout yields a commutative diagram with exact rows and columns as follows: 0 0 Since Ext 1 ( , ) is monic, Ker Ext 1 ( , ) 2 ⊆ Ker = Im ℎ. If Ker Ext 1 ( , ) ≠ 0, then Ker Ext 1 ( , ) 2 has cardinality ≥ 2 2 , while Im ℎ has cardinality less than or equal to card Hom ( ( ) , ) = card ≤ 2 . Thus, also Ext 1 ( , ) is monic, which implies that Hom ( , ) is surjective. In particular, splits, so ∈ D Q , a contradiction. Now we can proceed as in [7, Theorem 2.6]: Theorem 3.10. Let Q be any class of left -modules. Then the class D Q is precovering,
0 − −−−−− → − −−−−− → − −−−−− → − −−−−− → 0 0 − −−−−− → ⊆ − −−−−− → − −−−−− → − −−−−− → 0 0 − −−−−− → iff D Q is deconstructible, iff D Q = F 0 .
Proof. The class F 0 is deconstructible over any ring, and each deconstructible class is precovering by Theorem 3.2.
Assume there exists N ∈ F_0 \ D_Q. As N ∈ F_0, N is a direct limit of a direct system of finitely generated free modules (F_i | i ∈ I) for a directed set (I, ≤). By [18, 3.11], there exists a countable chain i_0 < i_1 < . . . < i_n < i_{n+1} < . . . of elements of I and a countable family (Q_n | n < ω) of elements of Q such that N′ = lim→_{n<ω} F_{i_n} ∉ D_{Q′} where Q′ = {Q_n | n < ω}.
As D_Q ⊆ D_{Q′}, N′ is a Bass module which is not contained in D_Q. By Lemma 3.9, N′ has no D_Q-precover. Example 3.11. If Q = {R}, then the modules in D_Q are called f-projective, [19]. Recall that a module M is coherent in case all finitely generated submodules of M are finitely presented (so a ring R is right coherent, if the regular module R is coherent).
By [7, 3.5], if R is right coherent, then f-projective modules are exactly the flat coherent modules, whence D_{{R}} = F_0, iff all flat modules are coherent. By [7, 3.6], the latter holds for each coherent domain R.
We can view such C as a coordinate system on X that enables us to represent the geometric object of interest (a quasi-coherent sheaf on X) by an algebraic one (a quasi-coherent R-module).
For any property of modules P over a ring, one can use the representation above to extend P to a property of quasi-coherent sheaves on schemes as follows: a quasi-coherent R-module M is locally P-quasi-coherent in case for each open affine subset v of X, the O_X(v)-module M(v) has the property P. We will call this the property of quasi-coherent sheaves induced by the property of rings P. For example, if P is the property of being a projective module, then the locally projective quasi-coherent sheaves on a scheme X are exactly the vector bundles on X, [11].
Of course, we are interested in those properties that are independent of the choice of a coordinate system, so that they can be checked using any open affine covering C of X. Then an R-module M is locally P-quasi-coherent, whenever the O_X(v)-module M(v) has the property P for each v ∈ C. Such properties of quasi-coherent sheaves on X are called Zariski local. We will also say that the notion of a locally P-quasi-coherent sheaf is Zariski local, or affine local.
Our goal here is to prove Zariski locality for the various notions of locally P-quasi-coherent sheaves induced by classes of restricted and flat relative Mittag-Leffler modules, and by tilting modules. The definition of a locally P-quasi-coherent sheaf given above will be sufficient to achieve this goal in the case of vector bundles (in section 4.1), their generalizations to κ-restricted Drinfeld vector bundles (4.2), and in the case of locally n-tilting quasi-coherent sheaves (4.4). However, for relative Mittag-Leffler modules (4.3), we will have to impose extra compatibility conditions on the relations among the properties for various rings, and possibly also restrict the type of schemes considered.
Our main tool for proving the Zariski locality is the following classic lemma [34, 5.3.2] (see also [16, 3.5]). Notice that for each f ∈ R, the localization map R → R[f^{-1}] is a flat ring homomorphism. Definition 4.2. Let φ : R → S be a flat ring homomorphism, and P be a property of modules.
(i) P is said to ascend along φ if for each R-module M with the property P, the S-module M ⊗_R S has the property P.
(ii) Assume φ is a faithfully flat ring homomorphism. Then P is said to descend along φ if for each R-module M, M has the property P whenever the S-module M ⊗_R S has the property P.
If P ascends along all flat ring homomorphisms, and descends along all faithfully flat ring homomorphisms, then P is called an ad-property.
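The covering trick behind Lemma 4.1 can be seen in miniature. In the toy sketch below (a finite ring chosen for illustration; the helper and names are mine), f0 = 2 and f1 = 3 generate the unit ideal of R = Z/6, so Spec R = D(f0) ∪ D(f1); the localizations R[1/2] and R[1/3] are computed as eR for suitable idempotents e, and the resulting map R → R[1/2] × R[1/3] is faithfully flat (here even an isomorphism, by the Chinese remainder theorem).

```python
# For a finite commutative ring, some power of f is idempotent, and
# R[f^{-1}] is isomorphic to eR for that idempotent e.
def stable_idempotent(f, mod):
    e = f % mod
    while (e * e) % mod != e:
        e = (e * f) % mod
    return e

MOD = 6
e0 = stable_idempotent(2, MOD)   # 4:  4*R = {0, 2, 4} ~ Z/3 ~ R[1/2]
e1 = stable_idempotent(3, MOD)   # 3:  3*R = {0, 3}    ~ Z/2 ~ R[1/3]

# the canonical map R -> R[1/2] x R[1/3], r |-> (e0*r, e1*r)
phi = {r: ((e0 * r) % MOD, (e1 * r) % MOD) for r in range(MOD)}
```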
In view of Lemma 4.1, in order to prove Zariski locality of a property P of quasi-coherent sheaves on a scheme X, it suffices to verify that P is an ad-property. This is the way we will proceed below for the properties arising from Mittag-Leffler conditions. 4.1. Vector bundles. Let us start with the model case of vector bundles going back to [27, Seconde partie]. Let P be the property of being a projective module. As mentioned above, a quasi-coherent sheaf on a scheme X is locally P-quasi-coherent, iff it is a vector bundle on X.
While the ascent of P is trivial, the descent is a nontrivial fact: First, notice that a module M is projective, iff M is flat Mittag-Leffler and M decomposes into a direct sum of countably presented modules (the if part follows from the fact that countably presented flat Mittag-Leffler modules are projective, cf. Theorem 2.13(2)).
The tools we have presented so far make it possible to prove the descent of the property of being a flat Mittag-Leffler module (cf. [18, 7.33] or [25, 9.2]): Proof. As the ascent of projectivity is trivial, the ascent of the property of being a flat Mittag-Leffler module follows by Theorem 2.13(2).
To prove the descent, let φ : R → S be a faithfully flat ring homomorphism, and M ∈ Mod-R be such that M ⊗_R S ∈ Mod-S is flat and Mittag-Leffler. Viewed as an R-module, M ⊗_R S is also flat, because it is isomorphic to (M ⊗_R S) ⊗_S S and S is a flat R-module (via φ). So for each short exact sequence E of R-modules, E ⊗_R (M ⊗_R S) is a short exact sequence. Hence, by faithful flatness of S, E ⊗_R M is exact, whence M is a flat R-module. Thus M is isomorphic to the direct limit of a direct system C = (F_i, u_{ij} | i ≤ j ∈ I) of finitely generated free modules, M = lim→_{i∈I} F_i. Applying the functor Hom_R(−, R), we obtain the inverse system D = (Hom_R(F_i, R), Hom_R(u_{ij}, R) | i ≤ j ∈ I). By Theorem 2.13(2), we have to prove that M is R-stationary, i.e., the inverse system D is Mittag-Leffler.
Notice that M ⊗_R S = lim→_{i∈I} F_i ⊗_R S, and M ⊗_R S is a Mittag-Leffler S-module by assumption. By Theorem 2.13(1), for each i ∈ I there exists j ≥ i, such that Im Hom_S(u_{ij} ⊗_R S, S) = Im Hom_S(u_{ik} ⊗_R S, S) for all j ≤ k ∈ I. Since R is commutative, there is a natural homomorphism Hom_R(F, R) ⊗_R S → Hom_S(F ⊗_R S, S); if F is finitely generated and free, then this is even an isomorphism. The faithful flatness of S thus yields that Im Hom_R(u_{ij}, R) ⊗_R S = Im Hom_R(u_{ik}, R) ⊗_R S for all j ≤ k ∈ I. Again by faithful flatness, we conclude that D is a Mittag-Leffler inverse system.
The fact that projectivity (= the property of being flat Mittag-Leffler, and a direct sum of countably presented modules) descends along faithfully flat ring homomorphisms of commutative rings can now be proved by a technique called dévissage, [27, Seconde partie] (see also [25, 9.6]): As a first step, we deduce from Lemma 4.3 that if M ⊗_R S is a countably generated projective S-module, then M is a countably generated projective R-module. Then we fix a decomposition of the module M ⊗_R S = ⊕_{j∈J} P_j into a direct sum of countably presented projective S-modules, and use it to construct by induction on α a continuous chain (M_α | α < λ) of submodules of M such that M = ∪_{α<λ} M_α, and M_α ⊗_R S = ⊕_{j∈J_α} P_j for a subset J_α of J, so that card(J_{α+1} \ J_α) ≤ ℵ0 for each α < λ. As (M_{α+1}/M_α) ⊗_R S ≅ ⊕_{j∈J_{α+1}\J_α} P_j, the first step yields that M_{α+1}/M_α is a projective module for every α < λ, whence M is projective. Thus, we obtain Theorem 4.4. Notice that for κ = ℵ0, P_0 = F M_{ℵ0}, because countably presented flat Mittag-Leffler modules are just the countably presented projective modules. So by Theorem 4.4, the question of whether the property of being a locally P-quasi-coherent sheaf is Zariski local has a positive answer in the particular case of κ = ℵ0.
In [16, 1.1], Theorem 4.4 was generalized to provide a positive answer for each infinite cardinal κ. We will now present this result with a simplified proof: Proof. The proof goes along the lines of the proof of Theorem 4.4 above except for the final part concerning the descent along faithfully flat ring homomorphisms. Here, dévissage is replaced by a more general technique dealing with filtrations of modules rather than their direct sum decompositions.
First, we fix a filtration F of the module M ⊗_R S by ≤κ-presented flat Mittag-Leffler S-modules witnessing that M ⊗_R S is a κ-restricted flat Mittag-Leffler S-module. The filtration F is then enlarged into a family, H, of S-submodules of M ⊗_R S with the following properties:
(P1) H is a complete distributive sublattice of the complete modular lattice of all S-submodules of M ⊗_R S,
(P2) if N, N′ ∈ H satisfy N ⊆ N′, then N′/N is a κ-restricted flat Mittag-Leffler S-module,
(P3) for each N ∈ H and each subset X of M ⊗_R S of cardinality ≤ κ, there exists N′ ∈ H such that N ∪ X ⊆ N′ and N′/N is ≤κ-presented.
Such an enlargement of F is possible by a general construction known as the Hill lemma, see [18, 7.10]. Let C_κ be the class of all ≤κ-presented flat Mittag-Leffler modules. We will employ properties (P1)-(P3) to construct by induction on α a C_κ-filtration (M_α | α ≤ λ) of M. This will show that M ∈ F M_κ. First, M_0 = 0. Assume that M_α ∈ F M_κ is defined so that M_α ⊗_R S ∈ H. Then either M_α = M and we let λ = α, or else there is a ≤κ-generated submodule X_0 of M such that X_0 ⊈ M_α. By property (P3), there exists H_1 ∈ H such that H_0 = M_α ⊗_R S ⊆ (M_α + X_0) ⊗_R S ⊆ H_1 and H_1/H_0 is ≤κ-presented. So there exists a ≤κ-generated submodule X_1 of M such that X_0 ⊆ X_1 and H_1 ⊆ (M_α + X_1) ⊗_R S. Proceeding similarly, we obtain a chain of ≤κ-generated submodules X_0 ⊆ X_1 ⊆ . . . of M and a chain H_0 ⊆ H_1 ⊆ . . . of elements of H such that
H_0 = M_α ⊗_R S ⊆ (M_α + X_0) ⊗_R S ⊆ H_1 ⊆ (M_α + X_1) ⊗_R S ⊆ H_2 ⊆ . . . Let X = ∪_{n<ω} X_n, H = ∪_{n<ω} H_n, and M_{α+1} = M_α + X.
Then X is a ≤κ-generated submodule of M, so M_{α+1}/M_α is ≤κ-generated, too. Moreover, H ∈ H by property (P1), whence M_{α+1} ⊗_R S = ∪_{n<ω} (M_α + X_n) ⊗_R S = H ∈ H. Since κ is infinite and for each n < ω, H_{n+1}/H_n is a ≤κ-presented flat Mittag-Leffler S-module by property (P2), so is H/H_0 = (M_{α+1} ⊗_R S)/(M_α ⊗_R S).
In general, in order for the ad-property to hold, there has to be compatibility among the classes of left modules Q_R defining the meaning of 'relative' for various rings R. The following results from [8, §3] will help us see what is needed:
The following is an application of Theorem 4.7 to the particular case of locally fprojective quasi-coherent sheaves on coherent schemes (i.e., those schemes whose structure sheaf consists of coherent rings): Example 4.8. We recall the setting of Example 3.11. So Q = { }, and the modules in D Q are called f-projective.
Assume that R is a coherent ring. Then Def(Q_R) is the class of all flat modules, so Def(Q_R) = (S_R)⊺ where S_R denotes the class of all finitely presented modules. Then condition (C1) clearly holds for each flat ring homomorphism of coherent rings.
As for condition (C2) in the setting of coherent rings, it suffices to prove that (S_R ⊗_R S)⊺ ⊆ (S_S)⊺ for each faithfully flat ring homomorphism φ : R → S. However, if M ∈ (S_R ⊗_R S)⊺, then also Tor₁ᴿ(S_R, M) = 0 by [12, VI.4.1.1], whence M is a flat R-module, and M ⊗_R S a flat S-module. Moreover, defining ν ∈ Hom_S(M, M ⊗_R S) and π ∈ Hom_S(M ⊗_R S, M) by ν(m) = m ⊗ 1 and π(m ⊗ 1) = m, we see that πν = 1_M, whence M is isomorphic to a direct summand in M ⊗_R S. So M is a flat S-module, and M ∈ (S_S)⊺. Thus Theorem 4.7 implies that the notion of a locally f-projective quasi-coherent sheaf is Zariski local for all coherent schemes.
4.4. Locally tilting quasi-coherent sheaves. Finally, we turn to Zariski locality in the settings induced by tilting modules. The results of this section come from [24], and rely on the structure theory of tilting classes over commutative rings developed in [3], [22], and [23].
Let n ≥ 0. Consider the property of being an n-tilting module (see Definition 2.17). Thus, a locally n-tilting quasi-coherent sheaf on a scheme X is a quasi-coherent sheaf M such that M(v) is an n-tilting O_X(v)-module for each open affine subset v of X.
A key tool for proving the Zariski locality in the tilting case is the following lemma from [24, §2], which relies substantially on Theorem 2.18, that is, on tilting classes being of finite type.

Proof. (1) Let u : R → S be a flat ring homomorphism and T an n-tilting R-module. Then T ⊗_R S is an n-tilting S-module by Lemma 4.9.

(2) Let u : R → S be a faithfully flat ring homomorphism and T̄ a module such that T′ = T̄ ⊗_R S is an n-tilting S-module. By Lemma 4.10, there is an n-tilting R-module T such that T′ is equivalent to the n-tilting S-module T ⊗_R S. Let A′ and B′ be the S-tilting classes induced by T′ (equivalently, by T ⊗_R S) in Mod-S. Then T̄ ⊗_R S ∈ Add T′ = A′ ∩ B′. By Lemma 4.9, T̄ ∈ A ∩ B = Add T. Since conditions (T1) and (T2) hold true for T, they also hold for T̄.
It is an open problem whether the property of being an n-tilting module descends along all faithfully flat ring homomorphisms. In [24], a positive answer was given for n ≤ 1, and for the case of faithfully flat ring homomorphisms of commutative noetherian rings.
However, Zariski locality does hold in general, as one only needs to prove the descent along the particular faithfully flat ring homomorphisms of the form u : R → ∏_{i<k} R[f_i^{-1}] where R = Σ_{i<k} f_i R (see 4.1). Moreover, in the presence of conditions (T1) and (T2), condition (T3) can be replaced by a homological condition involving the unbounded derived category D(R) of R-modules. Namely, (T3) is then equivalent to the condition that D(R) is the smallest localizing subcategory of itself containing T. The latter condition can be verified in the setting of Theorem 4.11(2) (see [24, Lemma 4.1] for more details). Thus we conclude:

Remark 5. Unlike the previous sections, the case of n = 0 in Theorem 4.12 differs from the case of vector bundles. Namely, 0-tilting modules are exactly the (possibly infinitely generated) projective generators. However, Lemmas 4.9 and 4.10 for n = 0 do imply that the property of being a projective module is an ad-property. The point is that the assumption that any n-tilting S-module T′ is equivalent to an n-tilting S-module of the form T ⊗_R S for a module T ∈ Mod-R is satisfied for n = 0.
However, this assumption fails for n ≥ 1: by [23, 6.2], n-tilting classes in Mod-R correspond 1-1 to certain n-tuples of subsets of Spec(R) called characteristic sequences. The existence of a faithfully flat ring homomorphism u : R → S only gives a monomorphism from the characteristic sequences in Spec(R) to those in Spec(S), not a bijection (see [24, 3.8] for more details).
Example 2.2 (Walker's towers). Let R be a discrete valuation domain with a prime element p ∈ R, and let τ be any infinite ordinal. The module H_τ is defined by generators and relations as follows: the generators x_{α_1…α_n} are labeled by finite sequences α_1…α_n of ordinals such that τ > α_1 > · · · > α_n. The relations are p·x_{α_1…α_{n+1}} = x_{α_1…α_n} and p·x_{α_1} = 0. The endomorphism f is the multiplication by p on H_τ. For each ordinal β, we define a submodule of H_τ by induction: H^0 = H_τ, H^{β+1} = f(H^β), and H^β = ⋂_{γ<β} H^γ for a limit ordinal β. For each ordinal β ≤ τ, let G_β be the submodule of H_τ generated by (the cosets of) the generators labeled by the sequences α_1…α_n with α_1 < β.
Theorem 2.12. Let Q be any class of left R-modules and M ∈ Mod-R.
3.14 and 3.19] and [15, 2.7(1)] for details):

Theorem 2.13.
Proposition 2.14. Let Q be any class of left R-modules and M ∈ Mod-R. Then M is Q-Mittag-Leffler, iff M is Def(Q)-Mittag-Leffler.

If Q is a definable class of modules, then there is a useful criterion in [20, §1] for countably presented flat modules to be Q-Mittag-Leffler, expressed in terms of vanishing of the Ext-functor:

Lemma 2.15. Let M be a countably presented flat module, Q a definable class of left R-modules, and B = Q^∨ the dual definable class in Mod-R. Then M is Q-Mittag-Leffler, if and only if Ext¹_R(M, B) = 0 for all B ∈ B.

Example 2.16. Let S be a class of FP₂-modules. By Example 2.10, we can take Q = S^⊺ in Lemma 2.15, whence B = S^⊥. Thus, if
(cf. [18, 13.35 and 13.46]):

Theorem 2.18. Let T be a tilting module. Let S be the representative set of all FP₂-modules in A. Then B = S^⊥, hence B is a definable class of finite type. Let Q = S^⊺ be the dual definable class in R-Mod, and C be the class of all countably presented modules in A. Then A = Filt(C). Moreover, C coincides with the class of all countably presented Q-Mittag-Leffler modules M such that M ∈ lim→ S.

As all extensions of the modules in D_Q are pure, the classes D_Q are closed under extensions (and transfinite extensions) for each class Q ⊆ R-Mod. This is also true of the classes A from Theorem 2.18, because A = Filt(C). However, the class of all (not necessarily flat) Mittag-Leffler modules need not be closed under extensions in general:

Example 2.19. Let R = Z and p a prime integer. By the Krull-Remak-Schmidt-Azumaya theorem, there is a non-zero element of infinite p-height. This shows that the class of all (countable) Mittag-Leffler abelian groups is not closed under extensions.
Remark 4. In the setting of Theorem 2.18, we also have A ⊆ lim→ S = ^⊺(S^⊺) = lim→ A (cf. [18, 8.40]), so the class lim→ S is always closed under transfinite extensions. However, the characterization of the countably presented modules from A as the Q-Mittag-Leffler modules in lim→ S from Example 2.16 does not extend to arbitrary modules in A = Filt(C). For example, if R is any non-right perfect ring and T = R, then A = P_0, Q = R-Mod and lim→ S = F_0, but the class of all Q-Mittag-Leffler modules in lim→ S is just the class FM (⊋ P_0). Notice that this example also shows that the criterion for countably presented flat modules to be Q-Mittag-Leffler from Lemma 2.15 does not extend to all flat modules (here, D_Q = FM, while ^⊥B = P_0).
Theorem 3.2. Let S be any set of modules. Then the class Filt(S) is precovering.

Theorem 3.2 of course includes the case of restricted flat Mittag-Leffler modules:

Corollary 3.3. The class FM_κ is precovering for each infinite cardinal κ.

Conjecture 3.4 (Enochs' Conjecture). Let C be a precovering class of modules. Then C is covering, iff C = lim→ C.
Lemma 3.8. There exists a module G ∈ FM_κ (called the tree module for T), such that G contains a free submodule of rank λ = card(T).

Proof. Let D = ⊕_{ν∈T} F_{ℓ(ν)} and P = ∏_{ν∈T} F_{ℓ(ν)}. Since card(T) = λ, D is a free module of rank λ. For each ν ∈ T, i < ω, and x ∈ F_i, we define g_{νix} ∈ P by g_{νix} ↾ (ν↾i) = x, g_{νix} ↾ (ν↾j) = h_{j−1} · · · h_i(x) for all i < j < ω, and g_{νix}(μ) = 0 otherwise. Here π_μ ∈ Hom_R(P, F_{ℓ(μ)}) denotes the μth projection for each μ ∈ T. The inclusion G_i ⊆ G_{i+1} splits, as the short exact sequence 0 → G_i ↪ G_i ⊕ F_{i+1} → G_{i+1} → 0 splits.

The middle vertical sequence splits, so there exists a map ρ with ρι = 1. Since the map in question is a D_Q-precover and the relevant module lies in FM ⊆ D_Q, there is a map h factoring through the precover; then (ρ − h)ι = ρι − hι = 1 − 0 = 1, and hence also the left vertical sequence splits. This proves that the induced group homomorphism on Ext¹ is monic. One then considers a commutative diagram with three exact rows of the form Hom_R(R^{(λ)}, B) → Ext¹_R(X, B) → Ext¹_R(Y, B).
All rings in this section are commutative. By a classic theorem of Grothendieck, if R is a ring, then the category Qcoh(X) of all quasi-coherent sheaves on the affine scheme X = Spec(R) is equivalent to the module category Mod-R. For general schemes X, Qcoh(X) can be represented as a category consisting of quasi-coherent R-modules M = (M(U), f_{VU} | U ⊆ V ⊆ X, U, V open affine) over the structure sheaf of rings R = (R(U) | U open affine subset of X) as follows: for every open affine subset U ⊆ X, M(U) is an R(U)-module, and for each pair of open affine subsets U ⊆ V ⊆ X, f_{VU} : M(V) → M(U) is an R(V)-homomorphism such that
• the induced map R(U) ⊗_{R(V)} M(V) → M(U) is an R(U)-isomorphism, and
• if W is an open affine subset of U, then f_{VW} = f_{UW} f_{VU}.
One does not need all the open affine subsets of X to represent Qcoh(X) in this way: an open affine covering C of X is sufficient for this purpose, cf. [13, §2]. Here, a set C of open affine subsets of X is an open affine covering of X in case C covers both X, and all the sets U ∩ V where U and V are open affine subsets of X.
Lemma 4.1. [The Affine Communication Lemma] Let R be a ring, M ∈ Mod-R, and P be a property of modules such that
(i) if M satisfies property P over R, then M[f^{-1}] = M ⊗_R R[f^{-1}] satisfies property P over R[f^{-1}] for each f ∈ R, and
(ii) if R = Σ_{i<k} f_i R, and the R[f_i^{-1}]-modules M[f_i^{-1}] = M ⊗_R R[f_i^{-1}] satisfy property P over R[f_i^{-1}] for all i < k, then M satisfies property P over R.
Then the induced property of quasi-coherent sheaves on X is Zariski local for every scheme X.
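For intuition about the covering condition R = Σ f_i R in clause (ii), here is a toy numerical check in Python (our own illustration, with hypothetical helper names, not part of the survey): for R = Z and M = Z/n, the localization M[f^{-1}] is cyclic of order n with every prime factor shared with f removed, so M vanishes exactly when all its localizations at a family f_0, …, f_{k−1} generating the unit ideal vanish.

```python
from math import gcd

# Toy model (not from the survey): the localization (Z/n)[f^{-1}] is cyclic
# of order n with every prime factor shared with f removed.  Since 2 and 3
# generate the unit ideal of Z (1 = 3 - 2), a cyclic group vanishes iff it
# vanishes after inverting 2 and after inverting 3, while inverting a
# single non-unit such as 6 can kill a nonzero module.

def localized_order(n, f):
    """Order of (Z/n)[f^{-1}]; order 1 means the zero module."""
    g = gcd(n, f)
    while g > 1:
        n //= g
        g = gcd(n, f)
    return n

M = 6                                  # M = Z/6
assert localized_order(M, 2) == 3      # M[1/2] = Z/3 is nonzero
assert localized_order(M, 3) == 2      # M[1/3] = Z/2 is nonzero
assert localized_order(M, 6) == 1      # M[1/6] = 0 although M is nonzero
```

This is exactly the phenomenon that makes faithful flatness of the covering homomorphism, rather than flatness of a single localization, the right hypothesis for descent.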
The localization map R → R[f^{-1}] is a flat ring homomorphism (that is, it makes R[f^{-1}] into a flat R-module). Moreover, the ring homomorphism u_{f_0,…,f_{k−1}} : R → ∏_{i<k} R[f_i^{-1}] is faithfully flat when R = Σ_{i<k} f_i R (that is, u_{f_0,…,f_{k−1}} makes ∏_{i<k} R[f_i^{-1}] into a faithfully flat R-module). So the assumptions of the Affine Communication Lemma are satisfied in case P ascends along flat ring homomorphisms, and descends along faithfully flat ring homomorphisms in the sense of the following definition:

Definition 4.2. Let u : R → S be a ring homomorphism. We say that a property of modules P ascends along u if M ⊗_R S satisfies P over S whenever M satisfies P over R, and descends along u if M satisfies P over R whenever M ⊗_R S satisfies P over S. A property that ascends along all flat ring homomorphisms and descends along all faithfully flat ring homomorphisms is called an ad-property.
Lemma 4.3. The property of being a flat Mittag-Leffler module is an ad-property.
Theorem 4.4. The property of being a projective module is an ad-property. Hence the notion of a vector bundle is Zariski local for all schemes.

4.2. Quasi-coherent sheaves arising from restricted flat Mittag-Leffler modules. Let κ ≥ ℵ₀ and let P be the property of being a κ-restricted flat Mittag-Leffler module, and consider the induced property of quasi-coherent sheaves. The locally P quasi-coherent sheaves are called κ-restricted Drinfeld vector bundles, cf. [15, p.1423] and [16, 2.1.3].
Theorem 4.5. Let κ be an infinite cardinal and P be the property of being a κ-restricted flat Mittag-Leffler module. Then P is an ad-property. Hence the notion of a κ-restricted Drinfeld vector bundle is Zariski local for all schemes.
4.3. Finite type and the ad-property for flat relative Mittag-Leffler modules. Now we turn to the setting of quasi-coherent sheaves arising from flat Q-Mittag-Leffler modules. First, let us consider the boundary cases of Q = {0} and Q = R-Mod. As D_{R-Mod} = FM, and being a flat Mittag-Leffler module is an ad-property by Lemma 4.3, the induced notion of a quasi-coherent sheaf is Zariski local for any scheme. The same holds for D_{{0}} = F_0.
Lemma 4.6. (i) Let u : R → S be a flat ring homomorphism, Q ⊆ R-Mod, and let M be a flat Q-Mittag-Leffler module. Then M ⊗_R S is a flat (Q ⊗_R S)-Mittag-Leffler S-module.
(ii) Let u : R → S be a faithfully flat ring homomorphism, and let Q ⊆ R-Mod. Assume that the implication
(★) M ⊗_R S is a (Q ⊗_R S)-Mittag-Leffler S-module =⇒ M ∈ D_Q
holds for each countably presented flat module M. Then (★) holds for each flat module M.
(iii) Let S_R be a class of FP₂-modules and Q = (S_R)^⊺. Then the implication (★) holds for every flat module M and each faithfully flat ring homomorphism u : R → S. Moreover, for each flat ring homomorphism u : R → S, we have Def(Q ⊗_R S) = (S_R ⊗_R S)^⊺.
Lemma 4.6(iii) suggests that one should concentrate on the case when the Q_R are the classes of left R-modules of finite type from Example 2.10. The compatibility conditions sufficient for the ad-property are then as follows [8, 4.4]:

Theorem 4.7. For each ring R, let S_R be a class of FP₂-modules. Assume that the following compatibility conditions hold:
(C1) S_R ⊗_R S ⊆ S_S for each flat ring homomorphism u : R → S.
(C2) (S_R ⊗_R S)^⊺ = (S_S)^⊺ for each faithfully flat ring homomorphism u : R → S.
Let P be the property of being a flat Q_R-Mittag-Leffler module, where Q_R = (S_R)^⊥. Then P is an ad-property. In particular, the notion of a locally P quasi-coherent sheaf is Zariski local.

Proof. First, we prove the ascent along flat ring homomorphisms u : R → S. If M ∈ Mod-R is flat and Q_R-Mittag-Leffler, then M ⊗_R S is a flat (Q_R ⊗_R S)-Mittag-Leffler S-module by Lemma 4.6(i). By Proposition 2.14 and Lemma 4.6(iii), M ⊗_R S is a flat (S_R ⊗_R S)^⊺-Mittag-Leffler S-module. By Condition (C1), (S_S)^⊺ ⊆ (S_R ⊗_R S)^⊺, whence M ⊗_R S is a flat Q_S-Mittag-Leffler S-module.

Let u : R → S be a faithfully flat ring homomorphism and M ∈ Mod-R be such that M ⊗_R S is a flat Q_S-Mittag-Leffler S-module. By Condition (C2) and Lemma 4.6(iii), M ⊗_R S is a flat Def(Q_R ⊗_R S)-Mittag-Leffler S-module. Then M is flat by (the proof of) Lemma 4.3, and M is a Q_R-Mittag-Leffler module by Lemma 4.6(iii). The final assertion follows by Lemma 4.1.
Lemma 4.9. Let u : R → S be a flat ring homomorphism, T an n-tilting R-module, A the induced left n-tilting class, B the induced n-tilting class, and S_R the representative set of all FP₂-modules in A (so that B = (S_R)^⊥). Let T′ = T ⊗_R S. Then T′ is an n-tilting S-module, and B′ = (S_R ⊗_R S)^⊥ is the n-tilting class induced by T′. Let A′ = ^⊥B′ be the left n-tilting class induced by T′.
(1) A ⊗_R S ⊆ A′. Moreover, if u is faithfully flat, then for each module M ∈ Mod-R, M ∈ A, iff M ⊗_R S ∈ A′.
(2) B ⊗_R S ⊆ B′. Moreover, if u is faithfully flat, then for each module M ∈ Mod-R, M ∈ B, iff M ⊗_R S ∈ B′.

The following lemma was proved in [24, 3.16]:

Lemma 4.10. Let u : R → S be a faithfully flat ring homomorphism. Let T′ be any n-tilting S-module of the form T ⊗_R S for a module T ∈ Mod-R. Then there is an n-tilting module U ∈ Mod-R such that U ⊗_R S is equivalent to T′.

Now, we can prove our first claim concerning Zariski locality:

Theorem 4.11. Let n ≥ 0.
(1) The property of being an n-tilting module ascends along flat ring homomorphisms.
(2) If u : R → S is a faithfully flat ring homomorphism, and T̄ is a module such that T′ = T̄ ⊗_R S is an n-tilting S-module, then T̄ satisfies conditions (T1) and (T2).
Theorem 4.12. Let n ≥ 0. Then the notion of a locally n-tilting quasi-coherent sheaf is Zariski local for all schemes.
By Lemma 4.3, M_{α+1}/M_α is a ≤κ-generated flat Mittag-Leffler module, whence M_{α+1}/M_α ∈ C_κ by Theorem 2.13(3). Thus M_{α+1} ∈ FM_κ and M_{α+1} ⊗_R S ∈ H. If α is a limit ordinal, we let M_α = ⋃_{β<α} M_β. Since the chain (M_β | β < α) is continuous, M_α ∈ FM_κ. Moreover, M_α ⊗_R S = ⋃_{β<α} (M_β ⊗_R S) ∈ H by property (P2).
[33] J. Šaroch, Approximations and Mittag-Leffler conditions - the tools, Israel J. Math. 226(2018), 737-756.
[34] R. Vakil, Math 216: Foundations of Algebraic Geometry, available at http://math.stanford.edu/~vakil/216blog/FOAGjun1113public.pdf.

Charles University, Faculty of Mathematics and Physics, Department of Algebra, Sokolovská 83, 186 75 Praha 8, Czech Republic
Email address: [email protected]
[1] F.W. Anderson, K.R. Fuller, Rings and categories of modules, 2nd ed., GTM 13, Springer-Verlag, New York 1992.
[2] L. Angeleri Hügel, D. Herbera, Mittag-Leffler conditions on modules, Indiana Math. J. 57(2008), 2459-2517.
[3] L. Angeleri Hügel, D. Pospíšil, J. Šťovíček, J. Trlifaj, Tilting, cotilting, and spectra of commutative noetherian rings, Trans. Amer. Math. Soc. 366(2014), 3487-3517.
[4] L. Angeleri Hügel, J. Šaroch, J. Trlifaj, Approximations and Mittag-Leffler conditions - the applications, Israel J. Math. 226(2018), 757-780.
[5] S. Bazzoni, J. Šaroch, Enochs' conjecture for cotorsion pairs and more, preprint, arXiv:2303.08471v1.
[6] S. Bazzoni, J. Šťovíček, On the abelianization of derived categories and a negative solution to Rosický's problem, Compositio Math. 149(2013), 125-147.
[7] A. Ben Yassine, J. Trlifaj, Flat relative Mittag-Leffler modules and approximations, preprint, arXiv:2110.06792v2.
[8] A. Ben Yassine, J. Trlifaj, Flat relative Mittag-Leffler modules and Zariski locality, preprint, arXiv:2208.00869v1.
[9] L. Bican, R. El Bashir, E. Enochs, All modules have flat covers, Bull. London Math. Soc. 33(2001), 385-390.
[10] S. Breaz, M. Hrbek, G.C. Modoi, Silting, cosilting, and extensions of commutative rings, preprint, arXiv:2204.01374v1.
[11] V. Drinfeld, Infinite-dimensional vector bundles in algebraic geometry: an introduction, in The Unity of Mathematics, Birkhäuser, Boston 2006, 263-304.
[12] H. Cartan, S. Eilenberg, Homological Algebra, Princeton Univ. Press, Princeton 1956.
[13] E. Enochs, S. Estrada, Relative homological algebra in the category of quasi-coherent sheaves, Adv. Math. 194(2005), 284-295.
[14] E.E. Enochs, O.M.G. Jenda, Relative homological algebra, vol. 1, GEM 30, W. de Gruyter, Berlin 2011.
[15] S. Estrada, P. Guil Asensio, M. Prest, J. Trlifaj, Model category structures arising from Drinfeld vector bundles, Advances in Math. 231(2012), 1417-1438.
[16] S. Estrada, P. Guil Asensio, J. Trlifaj, Descent of restricted flat Mittag-Leffler modules and generalized vector bundles, Proc. Amer. Math. Soc. 142(2014), 2973-2981.
[17] L. Fuchs, L. Salce, Modules over Non-Noetherian Domains, MSM 84, AMS, Providence 2001.
[18] R. Göbel, J. Trlifaj, Approximations and Endomorphism Algebras of Modules, 2nd ed., GEM 41, W. de Gruyter, Berlin 2012.
[19] K.R. Goodearl, Distributing tensor product over direct product, Pacific J. Math. 43(1972), 107-110.
[20] D. Herbera, Definable classes and Mittag-Leffler conditions, in Ring Theory and Its Applications, Contemp. Math. 609(2014), 137-166.
[21] D. Herbera, J. Trlifaj, Almost free modules and Mittag-Leffler conditions, Advances in Math. 229(2012), 3436-3467.
[22] M. Hrbek, One-tilting classes and modules over commutative rings, J. Algebra 462(2016), 1-22.
[23] M. Hrbek, J. Šťovíček, Tilting classes over commutative rings, Forum Math. 32(2020), 235-267.
[24] M. Hrbek, J. Šťovíček, J. Trlifaj, Zariski locality of quasi-coherent sheaves associated with tilting, Indiana Univ. Math. J. 69(2020), 1733-1762.
[25] A. Perry, Faithfully flat descent for projectivity of modules, preprint, arXiv:1011.0038v1.
[26] L. Positselski, P. Příhoda, J. Trlifaj, Closure properties of lim→ C, J. Algebra 606(2022), 30-103.
[27] M. Raynaud, L. Gruson, Critères de platitude et de projectivité, Invent. Math. 13(1971), 1-89.
[28] P. Rothmaler, Mittag-Leffler modules and positive atomicity, Habilitationsschrift, Kiel 1994.
[29] P. Rothmaler, Mittag-Leffler modules and definable subcategories, in Model Theory of Modules, Algebras and Categories, Contemp. Math. 730(2019), 171-196.
[30] J.J. Rotman, An introduction to homological algebra, 2nd ed., Universitext, Springer, New York 2008.
[31] M. Saorín, J. Šťovíček, On exact categories and applications to triangulated adjoints and model structures, Advances in Math. 228(2011), 968-1007.
[32] A. Slávik, J. Trlifaj, Approximations and locally free modules, Bull. London Math. Soc. 46(2014), 76-90.
COMPUTING AUTOMORPHISM GROUPS OF RATIONAL FUNCTIONS
24 Feb 2012
Xander Faber
Michelle Manes
Bianca Viray
Let φ be an endomorphism of the projective line of degree at least 2, defined over a noetherian commutative ring R with unity. We show that the automorphism group of φ is a finite group scheme, and we construct algorithms to compute it when R is a finite field or a number field. We also give an algorithm for determining when two such endomorphisms are conjugate. We have implemented these algorithms in Sage when R is a finite field or the field of rational numbers.
Introduction
Let F be a field, and let φ = f /g ∈ F (z) be a rational function such that gcd(f, g) = 1 and d = deg(φ) := max{deg(f ), deg(g)} > 1. When viewed as an endomorphism of the projective line φ : P 1 F → P 1 F , a dynamical theory of φ arises from iteration. That is, for x ∈ P 1 F , we may consider its orbit x → φ(x) → φ²(x) → φ³(x) → · · · (Here we write φ¹ = φ and φⁿ = φ • φⁿ⁻¹ for each n > 1.) The case F = C - dynamics of self-maps of the Riemann sphere - has a fascinating history dating back as far as Newton; e.g., see [1, 11]. When F is a finite field, these dynamical systems behave (conjecturally) like random maps, which has applications to factoring integers [13, 2]. If F is a number field, then we have the younger theory of arithmetic dynamics [15]. The case where F is a non-Archimedean field is younger still and draws much inspiration from the complex case [3, 10].
In the present paper, we study algorithmic aspects of a fundamental question: When do two rational functions φ, ψ ∈ F (z) exhibit the same dynamical behavior? For topological reasons, they must have the same degree. Given any fractional linear transformation f (z) = αz+β γz+δ , viewed as an element of PGL 2 (F ), the dynamical behavior of φ is equivalent to that of φ f := f • φ • f −1 ; indeed, f maps the φ-orbit of a point x ∈ P 1 F to the φ f -orbit of f (x). If there exists f ∈ PGL 2 (F ) such that ψ = f • φ • f −1 , then we say that φ and ψ are conjugate (over F ). Conjugation defines an action of PGL 2 (F ) on the parameter space of rational functions of fixed degree d > 1, denoted Rat d (F ). The stabilizer of a rational function φ for this action is called the automorphism group of φ, and is denoted by Aut φ (F ). This group is always finite, and it is trivial for most rational functions.
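As a quick sanity check on this equivalence of dynamics (a self-contained Python illustration of ours, not the paper's Sage code), one can verify that f carries the φ-orbit of a point to the φ^f-orbit of its image, here with φ(z) = z² and f(z) = z + 1, so that φ^f(z) = (z − 1)² + 1:

```python
from fractions import Fraction

# Illustration: conjugation preserves dynamics.  Here phi(z) = z^2 and
# f(z) = z + 1, so phi^f = f ∘ phi ∘ f^{-1} is (z - 1)^2 + 1.

def phi(z):
    return z * z

def f(z):
    return z + 1

def phi_f(z):                  # f(phi(f^{-1}(z))) with f^{-1}(z) = z - 1
    return (z - 1) ** 2 + 1

def orbit(g, x, n):
    """The list x, g(x), g^2(x), ..., g^n(x)."""
    out = [x]
    for _ in range(n):
        out.append(g(out[-1]))
    return out

x = Fraction(2)
# f maps the phi-orbit of x onto the phi^f-orbit of f(x)
assert [f(t) for t in orbit(phi, x, 4)] == orbit(phi_f, f(x), 4)
```

The same check works for any starting point and any f ∈ PGL₂(Q), since f ∘ φⁿ = (φ^f)ⁿ ∘ f for all n.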
Date: February 28, 2012. The first and second authors were partially supported by NSF grants DMS-0902532 and DMS-1102858, respectively. The third author was partially supported by NSF grant DMS-1002933 and by ICERM.
We thus have two problems which happen to be computationally quite similar: (1) determine whether two given rational functions are conjugate, and (2) determine whether a given rational function has nontrivial automorphism group. We focus mainly on algorithms designed to compute automorphism groups, and in the final section we sketch the modifications needed to address the first problem.
We design three algorithms for computing automorphism groups of rational functions, each applying to a slightly different setting. Let F be a field and φ : P 1 F → P 1 F be a morphism of degree at least 2, as above. Our first algorithm computes the absolute automorphism group Aut φ (F̄) and a field of definition E/F ; that is, Aut φ (E) = Aut φ (F̄). This algorithm requires constructing the splitting field of a polynomial with degree O(d), so this is not very practical over number fields unless d is small. Over finite fields it is much more efficient because the average size of the splitting field of a polynomial of degree O(d) is significantly smaller. For s ∈ PGL 2 (F̄), write Fix(s) for the set of fixed points of s in P 1 (F̄). If s is an automorphism of φ, the action of φ on Fix(s) is highly restricted, both geometrically and arithmetically. Our second algorithm takes advantage of this fact to compute Aut φ (F ) when F is any field of characteristic 0 or when F is a finite field. If F has characteristic p > 0 and is not finite, then the algorithm only detects the elements of Aut φ (F ) whose order is prime to the characteristic. This algorithm requires finding linear and quadratic factors of a polynomial of degree d² + 1. With the present implementations of root finding and polynomial factoring over number fields available in Sage and Magma, this is infeasible when d is large. However, the algorithm is quite efficient when F is a finite field and for number fields when d is reasonable, say d < 15.
Our third algorithm computes Aut φ (F ) when F is a number field. For rational maps of degree at least 15 it is significantly faster than the second method. The main idea here is to first reduce the coefficients modulo v for several finite places v of good reduction and compute the automorphism group over the residue field (using our second algorithm). We then use the Chinese remainder theorem to piece together the various automorphisms modulo v to arrive at the set of automorphisms over the original number field F .
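The Chinese remainder step just described can be sketched as follows (a simplified sketch of ours, assuming the automorphism's matrix entries are small rational integers; over a general number field one would work with coordinates relative to an integral basis):

```python
from math import prod

# Simplified sketch of the lifting step: given the reductions of one matrix
# entry modulo several good primes, recover a small integer entry via the
# Chinese remainder theorem and a centered residue.  (Assumes the true entry
# is an integer of absolute value less than half the product of the moduli.)

def crt(residues, moduli):
    # classic CRT for pairwise coprime moduli
    m = prod(moduli)
    x = 0
    for r, mi in zip(residues, moduli):
        ni = m // mi
        x += r * ni * pow(ni, -1, mi)  # pow(·, -1, mi): modular inverse
    return x % m

def centered(x, m):
    # representative of x mod m in (-m/2, m/2]
    return x - m if x > m // 2 else x

moduli = [5, 7, 11]
entry = -3                               # hypothetical matrix entry
residues = [entry % m for m in moduli]   # what the mod-v computations see
assert centered(crt(residues, moduli), prod(moduli)) == entry
```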
We first prove a general result about the algebro-geometric structure of Aut φ . As we will work with automorphism groups over global fields and the reductions at several primes, this type of result is necessary to ensure we are on solid footing. For notation, let R be a noetherian commutative ring with unity, and let R-Alg and Grp denote the categories of commutative R-algebras and groups, respectively. For any R-algebra S, we identify PGL 2 (S) with Aut(P 1 S ), the group of automorphisms of P 1 defined over S. We make the following definition:
Definition. Let φ : P 1 R → P 1 R be a nonconstant morphism.
The automorphism group of φ is the R-group scheme Aut φ represented by the functor Aut φ : R-Alg → Grp defined by
Aut φ (S) = {f ∈ Aut(P 1 S ) : φ = φ f := f • φ • f −1 }.

Theorem 1.1.
Let R be a commutative ring and let φ : P 1 R → P 1 R be a nonconstant endomorphism. Then the functor Aut φ is represented by a closed R-subgroup scheme Aut φ ⊂ PGL 2 . If moreover deg(φ) ≥ 2, then Aut φ is finite over Spec R.
Remark 1.2. The group scheme Aut φ need not be flat over Spec R. For example, if φ(z) = z² as an endomorphism of P 1 Z , then one can check that Aut φ (Q) = {z, 1/z} = Aut φ (F p ) for p > 2. But Aut φ (F 2 ) ≅ PGL 2 (F 2 ), which has order 6.
Since the order of a finite flat group scheme is locally constant, we conclude that Aut φ is not flat over Spec Z.
Remark 1.3. If deg(φ) = 1, then Aut φ is not a finite group scheme in general.
Consider the examples φ(z) = z and ψ(z) = z + 1, for which Aut φ = PGL 2 and Aut ψ ≅ G a , respectively.
The next result will have two uses in this paper. First, it will allow us to deduce that Aut φ is proper when φ has degree at least 2. Second, it will provide the main tool for relating the automorphism group of an endomorphism over a number field to the automorphism group over a finite field. For the statement, if k is a non-Archimedean field (not necessarily complete) with valuation ring o, we say that an endomorphism φ : P 1 k → P 1 k has good reduction if there exists a morphism Φ : P 1 o → P 1 o with generic fiber φ. Reduction Lemma. Let k be a non-Archimedean field with valuation ring o and residue field F, and let φ ∈ k(z) be a rational function of degree at least 2 (which is equivalent to a morphism P 1 k → P 1 k ). Suppose that φ has good reduction. Then every element of Aut φ (k) has good reduction, and the canonical reduction o → F induces a homomorphism red : Aut φ (k) → Aut φ (F). If F has characteristic p > 0 (resp. characteristic zero), then the kernel of reduction is a p-group (resp. trivial ).
If K is a number field and v is a finite place of K, we write K v , Z v , and F v for the completion of K at v, the valuation ring of K v , and the residue field of K v , respectively. If φ ∈ K(z) is a rational function, we say that it has good reduction at v if all of its coefficients are integral at v, and it extends to a morphism over Spec Z v . (Equivalently, φ has good reduction if one can reduce its coefficients modulo v, and the resulting morphism of P 1
Fv has the same degree as φ.)

Proposition 1.4. Let K be a number field and let φ ∈ K(z) be a rational function of degree d ≥ 2. Define S 0 to be the set of rational primes given by

S 0 = {2} ∪ { p odd : (p − 1)/2 | [K : Q] and p | d(d² − 1) },

and let S be the (finite) set of places of K of bad reduction for φ along with the places that divide a prime in S 0 . Then red v : Aut φ (K) → Aut φ (F v ) is a well-defined injective homomorphism for all places v outside S.
Remark 1.5. In practice, Proposition 1.4 allows one to determine the group structure of Aut φ (K) very quickly by computing Aut φ (F v ) for a few places v ∈ S. This is analogous to the way one typically computes the torsion subgroup of an elliptic curve over a number field; see [16,VII.3]. If one wishes to compute the elements of Aut φ (K) rather than just the group structure, then more work is required.
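As a toy illustration of this remark (a naive brute force in plain Python, ours rather than the paper's algorithms), one can compute |Aut φ (F p )| for φ(z) = z² by enumerating PGL 2 (F p ). Since φ ∘ f and f ∘ φ both have degree 2, they coincide as soon as they agree at more than 4 points, so for p ≥ 5 it suffices to compare them pointwise on the p + 1 points of P¹(F p ):

```python
# Toy brute force over PGL_2(F_p) for phi([x : y]) = [x^2 : y^2].
# (Our own illustration; the paper's algorithms are far more efficient.)

def normalize(x, y, p):
    # canonical representative of [x : y] in P^1(F_p)
    if y % p != 0:
        return ((x * pow(y, p - 2, p)) % p, 1)
    return (1, 0)

def proj_line(p):
    return [(x, 1) for x in range(p)] + [(1, 0)]

def phi(pt, p):
    x, y = pt
    return normalize(x * x % p, y * y % p, p)

def mobius(m, pt, p):
    a, b, c, d = m
    x, y = pt
    return normalize((a * x + b * y) % p, (c * x + d * y) % p, p)

def aut_order(p):
    pts = proj_line(p)
    count = 0
    for a in range(p):
        for b in range(p):
            for c in range(p):
                for d in range(p):
                    if (a * d - b * c) % p == 0:
                        continue  # matrix not invertible
                    m = (a, b, c, d)
                    if all(mobius(m, phi(q, p), p) == phi(mobius(m, q, p), p)
                           for q in pts):
                        count += 1
    # each element of PGL_2(F_p) has p - 1 matrix representatives
    return count // (p - 1)

print(aut_order(5))  # Aut_phi(F_5) = {z, 1/z} has order 2
```

Running aut_order(p) for a few good primes p then bounds the order of Aut φ (K), in the spirit of Proposition 1.4.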
Outline. Section 2 is occupied by the proof of the Reduction Lemma and its corollary. The argument uses techniques from the ramification theory of endomorphisms of the Berkovich projective line; the reader may safely skip the proof and use the Reduction Lemma as a black box if necessary. The proof of Theorem 1.1 is given in Section 3. Section 4 is occupied by the algorithms for computing automorphism groups. Section 5 contains a few examples of our implementation of the algorithms. All of the computations were carried out using Sage [17]. Our code is included with the arXiv distribution of this article. Finally, Section 6 contains a brief discussion of the scheme Conj φ,ψ of elements of PGL 2 that conjugate φ to ψ, and we also sketch the main ideas of an algorithm for determining if two endomorphisms φ and ψ are conjugate, i.e., if the scheme Conj φ,ψ has any points.
Acknowledgements
This project began at the University of Georgia, during an NSF-sponsored summer school on Arithmetic Dynamics. We thank the organizer, Robert Rumely, for the experience. The authors are grateful for the opportunity to complete the project at the Institute for Computational and Experimental Research in Mathematics. Finally, we would like to thank Joseph H. Silverman for helpful comments on our number field algorithm.
Proof of the Reduction Lemma
For background on the Berkovich projective line and dynamics, see [3]. For a more concise summary of the necessary ideas, we direct the reader to [6].
Let C_k be the completion of an algebraic closure of the completion of k, and let P¹ be the Berkovich analytification of the projective line P¹_{C_k}. The morphism φ extends functorially to P¹. We use two key facts due to Rivera-Letelier [14, Thm. 4]: (1) φ has good reduction if and only if the Gauss point ζ ∈ P¹ is totally invariant, and (2) a rational function has at most one totally invariant point in
P¹ ∖ P¹(C_k). For f ∈ Aut_φ(k), we have

f^{-1}(ζ) = f^{-1}(φ^{-1}(ζ)) = (φ • f)^{-1}(ζ) = (f • φ)^{-1}(ζ) = φ^{-1}(f^{-1}(ζ)).
Hence f^{-1}(ζ) is a totally invariant type II point for φ, so that f(ζ) = ζ. Equivalently, f has good reduction. Thus the reduction map red : Aut_φ(k) → Aut_φ(F) is well-defined, and it is evidently a homomorphism.

Now we compute the kernel of reduction. Suppose red(f) is trivial. Without loss of generality, we may replace k with a finite extension in order to assume that f has a k-rational fixed point. Moreover, we may conjugate f by an element of PGL_2(o) in order to assume that f(∞) = ∞. Now f(z) = αz + β. If f has order m > 1, then the equation f^m(z) = z shows that α is an m-th root of unity. But red(f) is trivial, so the reduction of α equals 1. If k has residue characteristic zero, then we conclude that α = 1 and β = 0. Otherwise, we find that α is a p-power root of unity in k, and hence f has p-power order in Aut_φ(k). The proof of the Reduction Lemma is complete.
Remark 2.1. A different proof of the first part of the Reduction Lemma can be given using the maximum modulus principle in non-Archimedean analysis [12,Lem. 6].
Proposition 2.2. Let F be a field, and let n ≥ 2 be an integer. Suppose that φ : P¹_F → P¹_F is a morphism of degree d ≥ 2 such that Aut_φ(F) contains an element of order n. Then n divides d(d² − 1).
Proof. We may assume without loss of generality that F is algebraically closed. Let s ∈ Aut_φ(F) have order n. We conjugate one of the fixed points of s to ∞, so that s(z) = αz + β. (Note that replacing s with usu^{-1} has the effect of replacing φ with u • φ • u^{-1}.) The proof divides into two cases, depending on whether s has one or two fixed points.
If s has only one fixed point, then necessarily n = char(F ) is prime and α = 1.
(See, for example, [5, Lem. 3.1].) Replacing s with its conjugate z ↦ β^{-1} s(βz), we may assume that β = 1. It follows that φ(z + 1) − 1 = φ(z), or equivalently, that the function φ(z) − z is invariant under the map z ↦ z + 1. Hence there exists a rational function ψ(z) ∈ F(z) such that φ(z) − z = ψ(z^n − z). We conclude that deg(φ) = n · deg(ψ) or n · deg(ψ) + 1.
Now suppose that s has two distinct fixed points: ∞ and β/(1 − α). We may conjugate the second fixed point to 0 in order to assume that β = 0. Note that this implies that α ∈ F^× has multiplicative order n. To say that s is an automorphism of φ is equivalent to saying that φ(z)/z is invariant under the map z ↦ αz. Hence there is a rational function ψ ∈ F(z) such that φ(z)/z = ψ(z^n). So deg(φ) = n · deg(ψ) or n · deg(ψ) ± 1.
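Both constructions in the proof can be sanity-checked numerically. The sketch below, with our own choices p = 7, n = 3 and the indicated ψ, verifies the two commutation identities coefficient-by-coefficient:

```python
from math import comb

p, n, alpha = 7, 3, 2            # alpha = 2 has multiplicative order 3 mod 7
assert pow(alpha, n, p) == 1 and alpha != 1

# Two-fixed-point case: with psi(w) = w + 1, phi(z) = z * psi(z^n) = z^4 + z
# commutes with s(z) = alpha * z.  Compare the coefficient dictionaries
# (degree -> coefficient) of phi(alpha*z) and alpha*phi(z) modulo p:
phi = {4: 1, 1: 1}
assert {k: c * pow(alpha, k, p) % p for k, c in phi.items()} == \
       {k: c * alpha % p for k, c in phi.items()}

# One-fixed-point case in characteristic p: with psi(w) = w,
# phi(z) = z + (z^p - z) = z^p commutes with s(z) = z + 1, since
# (z + 1)^p = z^p + 1 coefficient-by-coefficient (Frobenius):
assert [comb(p, k) % p for k in range(p + 1)] == [1] + [0] * (p - 1) + [1]
print("both constructions commute with s")
```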
Proof of Proposition 1.4. By the Reduction Lemma, it suffices to prove that if v ∉ S, then Aut_φ(K) has no element of order p, where v | p. Suppose otherwise.
The group PGL_2(K) contains an element of order p if and only if ζ_p + ζ_p^{-1} ∈ K for some primitive p-th root of unity ζ_p [4]. Note that [Q(ζ_p + ζ_p^{-1}) : Q] = (p − 1)/2 for p > 2, so that (p − 1)/2 divides [K : Q]. If Aut_φ(K) contains an element of order p, then p divides d(d² − 1) by Proposition 2.2. Hence p ∈ S_0, and so v ∈ S.
Proof of Theorem 1.1
Fix a commutative ring R. Over R, PGL_2 may be embedded as an affine subvariety of P³_R = Proj R[a, b, c, d]; indeed, it is the complement of the quadric ad − bc = 0. Let φ : P¹_R → P¹_R be a nonconstant endomorphism. We may define Aut_φ as a subgroup scheme of PGL_2 as follows. After fixing coordinates of P¹_R, the morphism φ can be given by a pair of homogeneous polynomials Φ = (Φ_0(X, Y), Φ_1(X, Y)) of degree D = deg(φ) with coefficients in R such that the homogeneous resultant Res(Φ_0, Φ_1) is a unit in R. The pair Φ_0, Φ_1 is unique up to multiplication by a common unit in R. Similarly, for any R-algebra S, an element f ∈ PGL_2(S) may be given by a pair F = (aX + bY, cX + dY) with a, b, c, d ∈ S and ad − bc ∈ S^×. Note that f^{-1} is represented by the pair
F^{-1} := (dX − bY, −cX + aY). Then f • φ • f^{-1} = φ is equivalent to saying that F • Φ • F^{-1} and Φ define the same morphism P¹_S → P¹_S. If F • Φ • F^{-1} = (Ψ_0(X, Y), Ψ_1(X, Y)), then this means

Φ_0(X, Y) Ψ_1(X, Y) − Φ_1(X, Y) Ψ_0(X, Y) = 0.    (3.1)
The expression on the left is a homogeneous polynomial of degree 2D in X and Y whose coefficients are homogeneous polynomials in R[a, b, c, d]. So (3.1) gives 2D + 1 equations that cut out a closed subscheme of PGL 2 defined over R. One checks readily that Aut φ (S) is a subgroup of PGL 2 (S) for every S. Next we argue that Aut φ is a finite group scheme over R when φ has degree at least 2. The map Aut φ → Spec R is quasi-finite. Indeed, it suffices to check this statement on geometric fibers, and Silverman has shown that Aut φ (L) is a finite group for any algebraically closed field L [15, Prop. 4.65]. 1 Moreover, Aut φ is proper over Spec R. Indeed, since Aut φ and Spec R are noetherian, this can be checked using the valuative criterion for properness using only discrete valuation rings [8,Ex. II.4.11]. Let o be a discrete valuation ring with field of fractions k, and consider a commutative diagram
Spec k ───→ Aut_φ
   │            │
   ↓            ↓
Spec o ───→ Spec R
The left vertical map is the canonical open immersion, and the right vertical map is the structure morphism. We must show there is a unique morphism Spec o → Aut φ that makes the entire diagram commute. Without loss of generality, we may assume that R = o and that the lower horizontal arrow is the identity map.
If v : k → Z ∪ {+∞} is the canonical extension of the valuation on o, then we may endow k with the structure of a non-Archimedean field by setting |x| = e^{−v(x)} for every x ∈ k. (Note that we interpret e^{−∞} as zero.) Since φ is defined over o, it has good reduction. The Reduction Lemma asserts that every k-automorphism of φ also has good reduction. Equivalently, every k-valued point may be extended to an o-valued point, which is what we wanted to show.
We now know that Aut φ → Spec R is a quasi-finite proper morphism. Zariski's main theorem tells us that it factors as an open immersion of R-schemes Aut φ → X followed by a finite morphism X → Spec R. But Aut φ is proper, so any open immersion is actually an isomorphism. Hence Aut φ is finite over Spec R. This completes the proof of Theorem 1.1.
Algorithms
4.1. Absolute Automorphism Groups: Method of Invariant Sets. Let F be an arbitrary field, and suppose φ : P¹_F → P¹_F is a morphism of degree at least 2. In this section we describe an algorithm to compute Aut_φ(F̄), where F̄ is an algebraic closure of F. In fact, we will see that it also gives a field of definition E/F for the absolute automorphism group, although E is typically not the smallest such field. The idea is to use a finite Aut_φ(F̄)-invariant subset of P¹(F̄) to produce a set of candidate automorphisms, and then to sort through the candidates by brute force. We begin by explaining Algorithm 1, which determines the automorphism group if we already have such an invariant set T. Then we give a description of how one constructs T.
Suppose that we are given a finite set T = {τ_1, . . . , τ_n} ⊂ P¹(F̄) with n ≥ 3 on which Aut_φ(F̄) acts. Let E = F(τ_1, . . . , τ_n) be the field of definition for T. Let s ∈ Aut_φ(E). Since we assume T is invariant, there must be a triple of distinct indices i, j, k ∈ {1, . . . , n} such that s(τ_1) = τ_i, s(τ_2) = τ_j, and s(τ_3) = τ_k.
Conversely, given a triple of distinct indices i, j, k ∈ {1, . . . , n}, there exists a unique element s ∈ PGL 2 (E) such that s(τ 1 ) = τ i , s(τ 2 ) = τ j , and s(τ 3 ) = τ k . One now determines if this candidate element s actually satisfies the functional equation s • φ = φ • s; if that is the case, then s ∈ Aut φ (E). See Algorithm 1.
A natural candidate for an Aut_φ(F̄)-invariant set is the set of fixed points of φ. For let x ∈ P¹(F̄) be a fixed point, and let s ∈ Aut_φ(F̄). Then

s(x) = s(φ(x)) = φ(s(x)),    (4.1)
which shows that s(x) is also a fixed point. A general rational function φ of degree d ≥ 2 has d + 1 ≥ 3 distinct fixed points. Thus the choice T = Fix(φ) will suffice in most circumstances.
If instead Fix(φ) has cardinality 2, define T = φ^{-1}(Fix(φ)). This set is Aut_φ(F̄)-invariant: if x ∈ Fix(φ), y ∈ φ^{-1}(x), and s ∈ Aut_φ(F̄), then φ(s(y)) = s(φ(y)) = s(x).
Since s(x) is a fixed point by equation (4.1), we find s(y) ∈ φ^{-1}(Fix(φ)). Recall that we require #T ≥ 3, and by construction we have #T ≥ 2 since Fix(φ) ⊆ T. If #T = 2, then T = Fix(φ), and each of the fixed points is totally ramified for φ. This implies that the derivative at each of the fixed points vanishes,² which in turn means each element of Fix(φ) has fixed point multiplicity 1. But the total number of fixed points of a map of degree d is d + 1 ≥ 3, counting multiplicities, so we have a contradiction. (See, for example, [7, Appx. A].)
Finally, suppose that Fix(φ) = {x}. We claim that #φ^{-1}(x) ≥ 2. For otherwise x is ramified for φ, which implies that the derivative φ′(x) vanishes there. But the fact that x is the unique fixed point of φ means that in local coordinates centered at x our map is of the form z ↦ z + a_{d+1} z^{d+1} + · · · with a_{d+1} ≠ 0. So the derivative cannot vanish at x, and we must have #φ^{-1}(x) ≥ 2 as desired.
If #φ −1 (x) ≥ 3, then set T = φ −1 (x). Otherwise, φ −1 (x) = {x, y}; set T = φ −2 (x) = {x, y} ∪ φ −1 (y), which satisfies 3 ≤ #T ≤ d + 2.
The argument in the preceding paragraph shows that T is Aut φ (F )-invariant in all cases.
Algorithm 2 gives the complete description of this method for computing the automorphism group of a rational function defined over an arbitrary field F .
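As a sanity check on this subsection, the following sketch (a naive baseline of our own, not the paper's algorithm) enumerates PGL_2(F_p) directly and tests the commutation relation s • φ = φ • s as an exact polynomial identity by cross-multiplying numerators and denominators. It is only practical for small p and small degree:

```python
def aut_brute_force(num, den, p):
    """All s in PGL_2(F_p) commuting with phi = num/den.  Coefficient
    lists are constant-term first; the check is an exact polynomial
    identity, not a pointwise test."""
    def pmul(f, g):
        h = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                h[i + j] = (h[i + j] + a * b) % p
        return h

    def padd(f, g):
        m = max(len(f), len(g))
        return [((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % p
                for i in range(m)]

    def scal(c, f):
        return [(c * x) % p for x in f]

    def ppow(f, k):
        r = [1]
        for _ in range(k):
            r = pmul(r, f)
        return r

    D = max(len(num), len(den)) - 1          # degree of the homogenization

    def compose(coeffs, top, bot):
        # coefficients of sum_i coeffs[i] * top(z)^i * bot(z)^(D - i)
        out = [0]
        for i, ci in enumerate(coeffs):
            out = padd(out, scal(ci, pmul(ppow(top, i), ppow(bot, D - i))))
        return out

    def same_map(f, g):                      # f = (n1, d1), g = (n2, d2)
        lhs, rhs = pmul(f[0], g[1]), pmul(g[0], f[1])
        diff = padd(lhs, scal(p - 1, rhs))   # lhs - rhs mod p
        return diff.count(0) == len(diff)

    auts = []
    for quad in ((a, b, c, d) for a in range(p) for b in range(p)
                 for c in range(p) for d in range(p)):
        a, b, c, d = quad
        if (a * d - b * c) % p == 0:
            continue
        if next(x for x in quad if x) != 1:  # one representative per class
            continue
        s_phi = (padd(scal(a, num), scal(b, den)),
                 padd(scal(c, num), scal(d, den)))
        phi_s = (compose(num, [b, a], [d, c]), compose(den, [b, a], [d, c]))
        if same_map(s_phi, phi_s):
            auts.append(quad)
    return auts

# phi(z) = z^2 over F_5: the absolute automorphism group is {z, 1/z},
# and both elements are already rational over F_5.
print(aut_brute_force([0, 0, 1], [1], 5))    # [(0, 1, 1, 0), (1, 0, 0, 1)]
```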
4.2. Method of Fixed Points. Let F be a finite field or a field of characteristic zero, and let φ : P¹_F → P¹_F be a nonconstant morphism. For any φ-periodic point x ∈ P¹(F̄), write per(x) for its exact period, i.e., the minimum positive integer i such that φ^i(x) = x. If x is not periodic, write per(x) = +∞. For each pair of integers i, j ∈ {1, 2}, define the following set:

Z_{i,j} = {x ∈ P¹(F̄) : per(x) = i, [F(x) : F] = j}.    (4.2)

We also define the following set of ordered pairs:

W = {(x, y) : x ∈ Z_{1,1}, y ∈ φ^{-1}(x), [F(y) : F] = 1}.    (4.3)
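The F-rational pieces of these sets (the j = 1 cases, together with W) can be tabulated by direct orbit computation over a small finite field; the quadratic sets Z_{1,2} and Z_{2,2} would additionally require arithmetic in F_{p²}. A minimal sketch, with φ(z) = z² over F_5 and a string standing in for ∞, both our own choices for illustration:

```python
p = 5
INF = 'inf'                       # the point at infinity in P^1(F_5)

def phi(x):                       # phi(z) = z^2
    return INF if x == INF else (x * x) % p

points = list(range(p)) + [INF]

def exact_period(x, bound=2):
    y = x
    for i in range(1, bound + 1):
        y = phi(y)
        if y == x:
            return i
    return None                   # exact period > bound (or not periodic)

Z11 = {x for x in points if exact_period(x) == 1}
Z21 = {x for x in points if exact_period(x) == 2}
W = {(x, y) for x in Z11 for y in points if phi(y) == x}

print(sorted(Z11, key=str), sorted(Z21, key=str), sorted(W, key=str))
```

Here Z_{1,1} = {0, 1, ∞}, Z_{2,1} is empty (the cube map is a bijection on F_5^×, so z³ = 1 has only the root 1), and W = {(0,0), (1,1), (1,4), (∞,∞)}.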
(More concretely, W is the set of pairs of F-rational points such that x is fixed by φ and φ(y) = x.) These sets may be constructed by factoring the polynomials that define the fixed points of φ, the points of period 2, and the preimages of F-rational points. We write Z^{(2)} for the set of unordered pairs of elements of a set Z.

Suppose that s = ( α β ; γ δ ) ∈ Aut_φ(F) is nontrivial (we write matrices row by row, separated by a semicolon). The homogeneous polynomial defining the fixed points of s is γX² + (δ − α)XY − βY². If s has a unique fixed point x, then F is a field of characteristic p > 0. (To see this, move the unique fixed point to infinity. Then s is a translation with finite order.) So we conclude that F = F_q, that x is F-rational (because F is perfect), and that s has order p [5, Lem. 3.1]. Now
s(φ(x)) = φ(s(x)) = φ(x), so that φ(x) = x. Hence x ∈ Z_{1,1}. Choose u ∈ PGL_2(F) such that u(x) = ∞; then usu^{-1} = ( 1 λ ; 0 1 ) for some λ ∈ F ∖ {0}. That is, s ∈ u^{-1} { ( 1 λ ; 0 1 ) : λ ∈ F ∖ {0} } u. In order to find all elements of Aut_φ(F) of order p, it suffices to apply this technique to every x in the set Z_{1,1}. See the first for-loop of Algorithm 3.

Now suppose that s ∈ Aut_φ(F) has precisely two distinct fixed points x_1 and x_2. Then s(φ(x_1)) = φ(s(x_1)) = φ(x_1), so that φ(x_1) ∈ {x_1, x_2}. There are three possible cases: (1) φ fixes both x_1 and x_2; (2) φ swaps x_1 and x_2; or (3) φ(x_1) = x_2 and φ fixes x_2 (perhaps after interchanging x_1 and x_2). Since φ is defined over F, all Galois conjugates of a fixed point must also be fixed points. Thus in cases (1) and (2), either x_1 and x_2 are both F-rational, or they are quadratic conjugates over F. In case (3), both x_1 and x_2 must be F-rational.
If x_1 and x_2 are both F-rational in case (1), so that (x_1, x_2) ∈ Z^{(2)}_{1,1}, then we may select u ∈ PGL_2(F) such that u(x_1) = ∞ and u(x_2) = 0. Then usu^{-1} = ( ζ 0 ; 0 1 ) for some root of unity ζ ∈ F. If ζ has order n, then n | d(d² − 1) by Proposition 2.2. Let T be the set of roots of unity in F that have order dividing d(d² − 1). We loop over all distinct unordered pairs of elements (x_1, x_2) ∈ Z^{(2)}_{1,1}, and check which elements of u^{-1} { ( ζ 0 ; 0 1 ) : ζ ∈ T } u lie in Aut_φ(F). See the second for-loop of Algorithm 3. In fact, this strategy works in case (2) when x_1, x_2 are both F-rational, and in case (3). These correspond to looping over pairs (x_1, x_2) in Z^{(2)}_{2,1} and in W, respectively.

Now suppose x_1 and x_2 are quadratic Galois conjugates over F in case (1), so that (x_1, x_2) ∈ Z^{(2)}_{1,2}. Then E = F(x_1, x_2) is a quadratic extension. In this case, we may choose u ∈ PGL_2(E) satisfying u(x_1) = ∞ and u(x_2) = 0, so that usu^{-1} = ( ξ 0 ; 0 1 ) for some root of unity ξ ∈ E. In particular, if the fixed points of s are a ± b√d, we may take
u = ( 1  −a − b√d ; 1  −a + b√d ),  so  s = u^{-1} ( ξ 0 ; 0 1 ) u = ( a + (1+ξ)b√d/(1−ξ)   −a² + b²d ; 1   −a + (1+ξ)b√d/(1−ξ) ).
Let σ be the nontrivial element of Gal(E/F). Since s ∈ PGL_2(F), we have

σ( (1+ξ)b√d/(1−ξ) ) = −(1+ξ^σ)b√d/(1−ξ^σ) = (1+ξ)b√d/(1−ξ),
which implies that ξ^σ ξ = 1. If σ acts trivially on ξ, we must have ξ = −1 (since s is nontrivial). Otherwise, E = F(ξ). In particular, F(x_1, x_2) is always a cyclotomic extension of F. If F = F_q is a finite field, then ξ ∈ F_{q²}. Since the automorphism σ acts by Frobenius on F_{q²}, it follows that ξ^σ ξ = ξ^{q+1} = 1. That is, a nontrivial element of PGL_2(F_q) with two quadratic conjugate fixed points necessarily has order dividing q + 1. In this case, let Λ ⊂ F_{q²}^× be the unique subgroup of order q + 1; note that Λ always contains −1.
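For instance, over F_7 one can realize Λ concretely by modeling F_{49} as F_7(t) with t² equal to a non-residue and collecting the solutions of ξ^{q+1} = 1. A sketch, with our own choice of non-residue:

```python
q, t2 = 7, 3      # model F_49 as F_7(t) with t^2 = 3; 3 is a non-square mod 7

def mul(x, y):
    (a, b), (c, d) = x, y         # (a + b t)(c + d t) = (ac + 3bd) + (ad + bc) t
    return ((a * c + t2 * b * d) % q, (a * d + b * c) % q)

def power(x, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

units = [(a, b) for a in range(q) for b in range(q) if (a, b) != (0, 0)]
Lam = [x for x in units if power(x, q + 1) == (1, 0)]
print(len(Lam))                   # 8 = q + 1, the order of Lambda
```

Since F_{49}^× is cyclic of order 48, the solutions of ξ⁸ = 1 form the unique subgroup of order 8, and −1 = (6, 0) lies in it.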
If F is a field of characteristic zero, let C(X) = X^{d(d²−1)} − 1. In this case, let Λ be the set of roots of the quadratic factors of C(X) over F together with −1.
To detect s, loop over all Galois conjugate pairs in Z^{(2)}_{1,2} and check which elements of u^{-1} { ( ξ 0 ; 0 1 ) : ξ ∈ Λ } u lie in Aut_φ(F). See the third for-loop of Algorithm 3. The same strategy also applies if we are in case (2) and x_1 and x_2 are quadratic conjugates.
4.3. Chinese Remainder Theorem Method for Number Fields.
In this section, we assume that our field K is a number field. As discussed in the previous section, Algorithm 3 computes Aut_φ(K). This algorithm requires computing the irreducible factors of degree at most 2 of a polynomial of degree d² + 1, namely the polynomial obtained by clearing denominators in the equation φ • φ(z) = z. As the degree becomes large, this quickly becomes impractical on computer algebra systems, even over K = Q. Thus, we have a need for an alternative algorithm over number fields.
We use an approach that is ubiquitous in number theory: first compute the automorphism group over a residue field F v for some finite place(s) v, and then use the local information to obtain a global answer. More precisely, our method is as follows.
Let v = v_0 be a finite prime such that the reduction map Aut_φ(K) → Aut_φ(F_v) is injective, e.g. v ∉ S as defined in Proposition 1.4. Now compute Aut_φ(F_v) using Algorithm 3. For each element in Aut_φ(F_v), let f ∈ Aut(P¹_K) be a lift of minimal height and check if f ∈ Aut_φ(K). If every element of Aut_φ(F_v) lifts to an element of Aut_φ(K), then we are done. Otherwise, we will repeat this process, with some minor modifications; we explain these modifications now.

Algorithm 3 - Computation of Aut_φ(F) via the method of fixed points
Input: a field F, finite or of characteristic zero, and φ ∈ F(z) of degree d ≥ 2
Output: the set Aut_φ(F)
if F = F_q is a finite field:
    create a list T = F_q^×
    create a list Λ of ξ ∈ F_{q²}^× ∖ {1} with ξ^{q+1} = 1
else:
    let C(X) = X^{d(d²−1)} − 1
    create a list T of F-rational roots of C(X)
    create a list Λ of roots of F-quadratic factors of C(X) and −1
create a list L = [z]
create the sets Z_{i,j}, W defined in equations (4.2) and (4.3)
for x ∈ Z_{1,1}:
    choose u ∈ PGL_2(F) such that u(x) = ∞
    for λ ∈ T:
        set s(z) = u^{-1}(u(z) + λ)
        if s • φ = φ • s: append s to L
for each pair (x, y) with x ≠ y in Z^{(2)}_{1,1} ∪ Z^{(2)}_{2,1} ∪ W:
    choose u ∈ PGL_2(F) such that u(x) = ∞ and u(y) = 0
    for ζ ∈ T ∖ {1}:
        set s(z) = u^{-1}(ζ u(z))
        if s • φ = φ • s: append s to L
for each pair of Galois conjugates (x, y) in Z^{(2)}_{1,2} ∪ Z^{(2)}_{2,2}:
    choose u ∈ PGL_2(F(x, y)) such that u(x) = ∞ and u(y) = 0
    for ξ ∈ Λ:
        set s(z) = u^{-1}(ξ u(z))
        if s • φ = φ • s: append s to L
return L
Let v_1 be a finite prime outside of S ∪ {v_0} and compute Aut_φ(F_{v_1}). Let G ⊆ Aut(P¹_{O_K/∏ v_i}) be a subset that surjects onto Aut_φ(F_{v_i}) for each i; this step is called the CRT (Chinese Remainder Theorem) step. For each element of G, choose a lift f ∈ Aut(P¹_K) of minimal height. If f • φ = φ • f, then add f to the list Auts. If Auts surjects onto Aut_φ(F_{v_i}) for any i, then we are done. If not, then choose another prime v_{i+1} ∉ S ∪ {v_0, . . . , v_i} and repeat.
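For K = Q the CRT step and the minimal-height lift admit a very simple sketch: entry-wise Chinese remaindering of the matrix entries followed by a centered representative. (The names crt_pair and centered are ours, and the actual implementation lifts projective tuples over O_K; this is only the one-dimensional idea.)

```python
def crt_pair(r1, m1, r2, m2):
    # x = r1 (mod m1) and x = r2 (mod m2), assuming gcd(m1, m2) = 1
    inv = pow(m1, -1, m2)
    return (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)

def centered(x, m):               # representative of x mod m in (-m/2, m/2]
    x %= m
    return x - m if x > m // 2 else x

s5 = (0, 1, 1, 0)                 # z -> 1/z in PGL_2(F_5)
s7 = (0, 1, 1, 0)                 # z -> 1/z in PGL_2(F_7)
lift = tuple(centered(crt_pair(a, 5, b, 7), 35) for a, b in zip(s5, s7))
print(lift)                       # (0, 1, 1, 0): the global candidate z -> 1/z
```

One would then test whether the candidate commutes with φ over K, exactly as described above.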
In order to make this method into an algorithm, we need to provide a terminating condition. Write N(v) for the norm of a finite prime v. We claim that if ∏_i N(v_i) ≥ (2M)^{2[K:Q]} for some explicitly computable constant M, then Auts = Aut_φ(K), even if Auts does not surject onto Aut_φ(F_{v_i}) for any i. We will spend the rest of the section proving this claim via the theory of heights.
Let H_K : P¹(K̄) → R_{≥1} denote the relative multiplicative height for K and let L_2(f) denote the L²-norm of a polynomial f. See, for example, [9, B.2, B.7] for definitions.

Proposition 4.3. Let T, T′ ⊂ P¹(K̄) be Galois invariant sets of order at least 3, and let f_T, f_{T′} ∈ K[w, z] be square-free polynomials such that V(f_T) = T and V(f_{T′}) = T′. Then for any s ∈ Aut(P¹_K) ⊂ P³(K) such that s(T) = T′, we have

H_K(s) ≤ 6^{[K:Q]} L_2(f_T)³ L_2(f_{T′})³.
Proof. Let s be as in the statement of the Proposition. Let τ_1, τ_2, τ_3 be 3 distinct elements of T, and let η_i := s(τ_i) ∈ T′. In coordinates, we write τ_i = (τ_{i,0} : τ_{i,1}) and η_i = (η_{i,0} : η_{i,1}). Since an automorphism of P¹ is determined by its action on 3 elements, we have an expression for s = ( α β ; γ δ ) in terms of τ_{i,j}, η_{i,j}, i.e.

α = Σ_{σ∈S_3} (sgn σ) B_{σ(1)} C_{σ(2)} D_{σ(3)},   β = Σ_{σ∈S_3} (sgn σ) A_{σ(1)} C_{σ(2)} D_{σ(3)},
γ = Σ_{σ∈S_3} (sgn σ) A_{σ(1)} B_{σ(2)} D_{σ(3)},   δ = Σ_{σ∈S_3} (sgn σ) A_{σ(1)} B_{σ(2)} C_{σ(3)},

where A_i = τ_{i,0} η_{i,1}, B_i = −τ_{i,1} η_{i,1}, C_i = −τ_{i,0} η_{i,0}, and D_i = τ_{i,1} η_{i,0}.
This expression allows us to obtain a bound on the local height of s. Let v be any place of K and let ε_v = 6 if v | ∞ and ε_v = 1 if v ∤ ∞. Then, by the triangle inequality,

|α|_v ≤ ε_v · max_{σ∈S_3} |B_{σ(1)} C_{σ(2)} D_{σ(3)}|_v ≤ ε_v ∏_{1≤i≤3} max{|τ_{i0}|_v, |τ_{i1}|_v} · max{|η_{i0}|_v, |η_{i1}|_v}.
One can easily check that the same bound holds for |β|_v, |γ|_v, |δ|_v. It follows that

H_K(s) = ∏_v max{|α|_v, |β|_v, |γ|_v, |δ|_v}^{[K_v:Q_v]}
       ≤ ∏_v ε_v^{[K_v:Q_v]} · ∏_{1≤i≤3} max{|τ_{i0}|_v, |τ_{i1}|_v}^{[K_v:Q_v]} · max{|η_{i0}|_v, |η_{i1}|_v}^{[K_v:Q_v]}
       = 6^{[K:Q]} ∏_{1≤i≤3} H_K(τ_i) H_K(η_i).

Since H_K(τ_i) ≤ L_2(f_T) and H_K(η_i) ≤ L_2(f_{T′}) [9, Lemma B.7.3.1], this completes the proof.
Corollary 4.4. Let φ ∈ K(z) be a rational function of degree > 1, let T ⊂ P¹(K̄) be the Galois invariant set constructed in Algorithm 2 (with F = K), and let f_T be a square-free polynomial such that V(f_T) = T. Then every element of Aut_φ(K) ⊂ P³(K) has relative multiplicative height bounded by 6^{[K:Q]} L_2(f_T)^6.

We will take this height bound 6^{[K:Q]} L_2(f_T)^6 to be our explicit constant M. Now we need to show that if ∏_i N(v_i) ≥ (2M)^{2[K:Q]}, then every element of Aut_φ(K) is a lift of an element of ∏_i Aut_φ(F_{v_i}) of minimal height. We will need the following two lemmas.

Lemma 4.5. Let b be a nonzero fractional ideal of O_K, and write it as a quotient b = b⁺/b⁻ of relatively prime integral ideals. Then H_K(β) ≥ N(b⁺) for all nonzero β ∈ b.

Proof. Since β ∈ b, we have |β|_v ≤ 1 for any finite place v with v(b) ≥ 0. Therefore
H_K(β) = ∏_{v|∞} max{1, |β|_v}^{[K_v:Q_v]} · ∏_{v∤∞, v(b)<0} max{1, |β|_v}^{[K_v:Q_v]}
       ≥ ∏_{v|∞} |β|_v^{[K_v:Q_v]} · ∏_{v∤∞, v(b)<0} |β|_v^{[K_v:Q_v]}
       = ∏_{v∤∞, v(b)≥0} |β|_v^{−[K_v:Q_v]},

where the last equality follows from the product formula. Let e_p be such that b = ∏_p p^{e_p}. Since β ∈ b, we have v(β) ≥ e_{p_v}, so |β|_v^{−[K_v:Q_v]} ≥ N(p_v)^{e_{p_v}}. Taking the product over the finite places v with v(b) ≥ 0 gives H_K(β) ≥ N(b⁺), as desired.
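Returning to the constant M of Corollary 4.4: its computation is elementary arithmetic. A hypothetical illustration over K = Q, where the choice of f_T is ours and not tied to a specific map:

```python
from math import sqrt

def l2_norm(coeffs):
    return sqrt(sum(c * c for c in coeffs))

# K = Q, so [K : Q] = 1.  If the invariant set T were cut out by
# f_T(z) = z^3 - z, the bound of Corollary 4.4 would be:
fT = [0, -1, 0, 1]                # coefficients of z^3 - z, constant term first
M = 6 ** 1 * l2_norm(fT) ** 6     # 6^{[K:Q]} * L_2(f_T)^6
print(round(M, 6))                # 48.0, since L_2(f_T) = sqrt(2)
```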
Lemma 4.6. Let a ⊂ O_K be an integral ideal, and let ρ_a : P^n(O_K) → P^n(O_K/a) denote the canonical projection. For each β = (β_0 : β_1 : · · · : β_n) ∈ P^n(O_K/a), there is at most one element α = (α_0 : α_1 : · · · : α_n) ∈ ρ_a^{-1}(β) with H_K(α) < 2^{−[K:Q]} N(a)^{1/2}.

Proof. Let α, α′ ∈ P^n(O_K) be such that H_K(α), H_K(α′) < 2^{−[K:Q]} N(a)^{1/2} and such that ρ_a(α) = ρ_a(α′). Since α ∈ P^n(O_K), there exists a coordinate i_0 such that α_{i_0} ∉ a. It follows that α′_{i_0} ∉ a too. Then for each i and each place v, an argument as in Proposition 4.3 shows that
max{ 1, |α_i/α_{i_0} − α′_i/α′_{i_0}|_v } ≤ 2 max_ℓ |α_ℓ/α_{i_0}|_v · max_ℓ |α′_ℓ/α′_{i_0}|_v.
Taking the product over all v gives H_K( α_i/α_{i_0} − α′_i/α′_{i_0} ) ≤ 2^{[K:Q]} H_K(α) H_K(α′). The latter is less than N(a) by hypothesis, and α_i/α_{i_0} − α′_i/α′_{i_0} lies in the fractional ideal (α_{i_0} α′_{i_0})^{-1} a, so the preceding lemma implies that α_i/α_{i_0} = α′_i/α′_{i_0}. That is, α = α′.
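The content of Lemma 4.6 in the simplest case K = Q can be verified by brute force: two distinct integers of absolute value below sqrt(N)/2 can never lie in the same residue class mod N, so each residue class has at most one lift of small height.

```python
N, B = 101, 5       # every x with |x| <= 5 has height < sqrt(101)/2 = 5.02...
for r in range(N):
    lifts = [x for x in range(-B, B + 1) if x % N == r]
    assert len(lifts) <= 1         # at most one small lift per residue class
print("uniqueness of the minimal-height lift verified for N =", N)
```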
Proposition 4.7. Let v_0, . . . , v_n be finite places of K such that
(1) the reduction map Aut_φ(K) → Aut_φ(F_{v_i}) is injective for all i, and
(2) ∏_i N(v_i) ≥ 2^{[K:Q]} M², where M = 6^{[K:Q]} L_2(f_T)^6 is as in Corollary 4.4.
For any tuple (g_i) ∈ ∏_i Aut_φ(F_{v_i}), let g_K ∈ Aut(P¹_K) be a simultaneous lift of each g_i of minimal height. If g_K ∉ Aut_φ(K), then (g_i) ∉ im( Aut_φ(K) → ∏_i Aut_φ(F_{v_i}) ).

Proof. Assume that (g_i) ∈ im( Aut_φ(K) → ∏_i Aut_φ(F_{v_i}) ) and let g′ ∈ Aut_φ(K) denote its pre-image. (The automorphism g′ is unique by assumption (1).) By Corollary 4.4, H_K(g′) ≤ M ≤ 2^{−[K:Q]} ∏_i N(v_i)^{1/2}. By Lemma 4.6, g′ must have
There are a few technical details that we have left out in our description of Algorithm 4, mostly in the step where we decide whether to terminate and in the Chinese Remainder Theorem step. These details allow us to avoid extraneous computation. We give an example here and the curious reader can find the rest in the source code.
It is possible for the reduction of Aut_φ(K) to be a proper subgroup of Aut_φ(F_v) for all places v of good reduction. Consider the rational map φ = 2z⁵. One can use the method of invariant sets to check that

Aut_φ(Q̄) = { z, iz, −z, −iz, (√2 z)^{-1}, i(√2 z)^{-1}, −(√2 z)^{-1}, −i(√2 z)^{-1} },
Algorithm 4 - Computation of Aut_φ(K) via the Chinese Remainder Theorem
Input: a number field K and a rational function φ ∈ K(z) of degree d > 1
Output: the set Aut_φ(K)
choose T as in Algorithm 2 and set M = 6^{[K:Q]} L_2(f_T)^6
create an empty list L, and set a = 1
for v a prime of good reduction such that Aut_φ(K) → Aut_φ(F_v) is injective:
    compute Aut_φ(F_v) using Algorithm 3
    if Aut_φ(F_v) = {z}: return {z}
    else: append Aut_φ(F_v) to L, and set a = a p_v
    set L′ = CRT(L) and initialize an empty list Auts
    for s in L′:
        set s′ ∈ PGL_2(O_K) to be a lift of s of minimal height
        if H_K(s′) ≤ M and s′ • φ = φ • s′: append s′ to Auts
    if N(a) ≥ 2^{[K:Q]} M² or if #Auts = #Aut_φ(F_v) for any v | a:
        return Auts
which is a dihedral group of order 8. For all primes p > 2, at least one of −1, 2, −2 is a square in F_p. Therefore, Aut_φ(F_p) always contains Z/2 × Z/2 or Z/4 as a subgroup. As the algorithm is stated, we would compute a lift of every element in ∏_{p=5}^{19} Aut_φ(F_p). However, by p = 7 one can already recognize that Aut_φ(Q) ⊆ Z/2 since Aut_φ(F_5) ≅ Z/4 and Aut_φ(F_7) ≅ Z/2 × Z/2. Our code checks for group-theoretic properties like this when deciding whether to terminate.
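The local counts quoted above can be reproduced directly. For φ(z) = 2z⁵, an automorphism z ↦ ωz forces ω⁴ = 1 and an automorphism z ↦ c/z forces 4c⁴ = 1, and for a monomial map these two shapes exhaust Aut, since the totally ramified points 0 and ∞ must be preserved or swapped:

```python
def local_aut_count(p):
    """Counts of the two shapes of automorphisms of phi(z) = 2 z^5 over F_p
    (p > 2): rotations z -> w z with w^4 = 1, and inversions z -> c/z
    with 4 c^4 = 1."""
    rotations = [w for w in range(1, p) if pow(w, 4, p) == 1]
    inversions = [c for c in range(1, p) if (4 * pow(c, 4, p)) % p == 1]
    return len(rotations), len(inversions)

print(local_aut_count(5))          # (4, 0): cyclic of order 4
print(local_aut_count(7))          # (2, 2): Klein four-group Z/2 x Z/2
```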
It is important to build in as many early termination conditions as possible, since typically the elements of Aut_φ(K) have significantly smaller height than the theoretical bound. This is of course true when Aut_φ(K) is trivial, but it remains true even in the non-trivial case. For example, consider the functions in the last 3 lines of Table 1. The height bound for φ = 345025251z⁶ is over 50 digits, and the height bounds for the other two are over 100 digits. In contrast, the heights of the automorphisms are (from last to first) 2601, 101 and 11.
Examples
In this section, we compute some examples to give an idea of the running time. It is hard to produce "random" examples that have non-trivial automorphism group. Therefore, we present some hand-selected examples with non-trivial automorphism group which demonstrate the correctness of the algorithm (see Table 1). Then we present median running times for randomly generated rational maps of varying degrees and varying heights (see Table 2). All of the randomly generated functions had trivial automorphism group.
Our computations indicate that the fixed-point method is faster for rational functions of small degree, but that the CRT method is a better choice once the degree is larger than 15. In our implementation, the main bottleneck in the fixed point algorithm is in computing Z_{1,2} and Z_{2,2}; this requires computing the quadratic factors of a polynomial of degree d² + 1. It is possible that the fixed-point method

Table 2. Median running times for the fixed point and CRT algorithms on 100 random rational functions with given degree and height bound.
Theorem 6.1. Let R be a commutative ring, let d ≥ 0 be an integer, and let φ, ψ : P 1 R → P 1 R be endomorphisms of degree d. Then the functor Conj φ,ψ is represented by a closed R-subscheme Conj φ,ψ ⊂ PGL 2 . If moreover d ≥ 2, then Conj φ,ψ is finite over Spec R.
The theorem does not preclude the possibility that Conj φ,ψ is the empty scheme; in fact, it is typically empty when d ≥ 2. The group scheme PGL 2 has relative dimension 3 over R, and the space Rat d of endomorphisms of P 1 of degree d has relative dimension 2d + 1 > 3 over R. So for a fixed φ ∈ Rat d (R), a general choice of ψ will yield Conj φ,ψ = ∅.
Remark 6.2. When Conj φ,ψ is not the empty scheme, it is a principal homogeneous space for Aut φ (or Aut ψ ).
In the case d ≥ 2 of the theorem, in order to establish that Conj_{φ,ψ} is finite over Spec R, one must argue that it is proper and quasi-finite. Properness follows from a direct generalization of the Reduction Lemma. Quasi-finiteness may be proved by taking R = F to be an algebraically closed field and using a variation of our method of invariant sets in §4.1. The basic idea is to replace the invariant set T with two sets T_φ, T_ψ ⊂ P¹(F) such that
• #T_φ ≥ 3, and
• s(T_φ) ⊂ T_ψ for every s ∈ Conj_{φ,ψ}(F).
The sets T_φ, T_ψ may be taken as the fixed points of φ and ψ, respectively; if there are not enough fixed points, then one may use pre-images exactly as in Algorithm 1. We leave the details to the reader.
Finally, we note that the algorithms in §4.1 and §4.3 may be adapted (as in the previous paragraph) to give algorithms for computing Conj_{φ,ψ}(F̄) and Conj_{φ,ψ}(F) when F is a finite field or number field. Again, we leave the details to the interested reader; note that the main technical tool, Proposition 4.3, applies in this more general situation. We close with the remark that, in particular, this strategy gives an algorithmic procedure for determining if Conj_{φ,ψ} is the empty scheme or not. More concretely, this means that one can determine effectively if two rational functions φ and ψ are conjugate.
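Here is a toy illustration (our own example) of the invariant-set strategy for conjugacy: ψ below is a deliberate conjugate of φ, and matching fixed-point triples recovers the conjugating map.

```python
# phi(z) = z^2 and psi(z) = z^2 - 2z + 2.  Fix(phi) = {0, 1, inf} and
# Fix(psi) = {1, 2, inf} (roots of z^2 - 3z + 2, plus infinity), so any
# conjugating map carries the first set onto the second.  We try the
# affine candidates f(z) = a z + b that fix infinity:
def phi(z):
    return z * z

def psi(z):
    return z * z - 2 * z + 2

candidates = []
for u, v in [(1, 2), (2, 1)]:          # candidate images (f(0), f(1))
    a, b = v - u, u
    candidates.append((a, b))

# Two quadratic polynomials agree iff they agree at 3 points; we test 5.
good = [(a, b) for a, b in candidates
        if all(a * phi(z) + b == psi(a * z + b) for z in range(5))]
print(good)                            # [(1, 1)], i.e. f(z) = z + 1 works
```

Exactly one of the two candidate alignments of fixed points yields an element of Conj_{φ,ψ}(Q), namely f(z) = z + 1.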
Algorithm 1 - Compute Aut_φ(E) given an Aut_φ(E)-invariant subset of P¹(E) consisting of n ≥ 3 points
Input:
• a nonconstant rational function φ ∈ E(z)
• an Aut_φ(E)-invariant subset T = {τ_1, . . . , τ_n} ⊂ P¹(E), n ≥ 3
Output: the set Aut_φ(E)
create an empty list L
for each triple of distinct integers i, j, k ∈ {1, . . . , n}:
    compute s ∈ PGL_2(E) by solving the linear system s(τ_1) = τ_i, s(τ_2) = τ_j, s(τ_3) = τ_k
    if s • φ = φ • s: append s to L
return L
Algorithm 2 - Computation of Aut_φ(F̄) via the method of invariant sets
Input: a field F and a rational function φ ∈ F(z) of degree at least 2
Output: a field extension E/F and the set Aut_φ(E) = Aut_φ(F̄)
if # Fix(φ) ≥ 3:
    compute Aut_φ(E) using Algorithm 1 with T = Fix(φ) and E = F(T)
else if # Fix(φ) = 2:
    compute Aut_φ(E) using Algorithm 1 with T = φ^{-1}(Fix(φ)) and E = F(T)
else:
    compute Aut_φ(E) using Algorithm 1 with T = φ^{-2}(Fix(φ)) and E = F(T)
return E and Aut_φ(E)
Remark 4.1. The first for-loop in Algorithm 3 will not terminate if F is an infinite field of characteristic p. But the remainder of the algorithm proceeds without modification and computes {s ∈ Aut_φ(F) : gcd(order of s, char(F)) = 1}.
Remark 4.2. When F is a number field, the technique in the proof of Proposition 1.4 gives further restrictions on the set of roots of unity one must include in the set Λ. It suffices to consider F-rational and F-quadratic roots of X^m − 1, where m is the product of all integers n such that ϕ(n) | 2[F : Q]. Here ϕ is Euler's function.
Alternatively, §4.1 gives a slightly simpler proof of Silverman's result.
More precisely, the induced map Tφ on the tangent space T_x P¹ is zero.
These examples were computed on a Macbook Air (Apple, Inc.) running Mac OS X 10.7.2 with a 2.13 GHz Intel Core 2 Duo processor and 2GB of RAM. They were run with Sage 4.7.2, which was released on October 29, 2011. All running times are listed in seconds.

Conjugate Rational Functions

Let F be a number field or finite field, and let φ, ψ : P¹_F → P¹_F be a pair of endomorphisms of the same degree d. We return now to the question presented in the introduction: Does there exist a change of coordinate f ∈ PGL_2(F) such that

f • φ • f^{-1} = ψ,

i.e., are φ and ψ conjugate over F? This question can be dealt with both theoretically and computationally in a manner similar to that of automorphism groups. We briefly sketch the theoretical side; the proofs are straightforward modifications of arguments presented earlier.

Definition. Fix a nonnegative integer d, and let φ, ψ : P¹_R → P¹_R be two endomorphisms of degree d. The conjugation scheme of the pair (φ, ψ) is the R-scheme Conj_{φ,ψ} represented by the functor Conj_{φ,ψ} : R-Alg → Set defined by

Conj_{φ,ψ}(S) = {f ∈ Aut(P¹_S) : f • φ • f^{-1} = ψ}.
| [] |
[
"Entanglement Evolution in Lifshitz-type Scalar Theories",
"Entanglement Evolution in Lifshitz-type Scalar Theories"
] | [
"M Reza ",
"Mohammadi Mozaffar s:[email protected] \nDepartment of Physics\nUniversity of Guilan\nP.O. Box41335-1914RashtIran\n\nSchool of Physics\nInstitute for Research in Fundamental Sciences (IPM)\n19538-33511TehranIran\n",
"Ali Mollabashi [email protected] \nMax-Planck-Institut für Physik\nWerner-Heisenberg-Institut\n80805MünchenGermany\n"
] | [
"Department of Physics\nUniversity of Guilan\nP.O. Box41335-1914RashtIran",
"School of Physics\nInstitute for Research in Fundamental Sciences (IPM)\n19538-33511TehranIran",
"Max-Planck-Institut für Physik\nWerner-Heisenberg-Institut\n80805MünchenGermany"
] | [] | We study propagation of entanglement after a mass quench in free scalar Lifshitz theories. We show that entanglement entropy goes across three distinct growth regimes before relaxing to a generalized Gibbs ensemble, namely, initial rapid growth, main linear growth and tortoise saturation. We show that although a wide spectrum of quasi-particles are responsible for entanglement propagation, as long as the occupation number of the zero mode is not divergent, the linear main growth regime is dominated by the fastest quasi-particle propagating on the edges of a widen light-cone. We present strong evidences in support of effective causality and therefore define an effective notion of saturation time in these theories. The larger the dynamical exponent is, the shorter the linear main growth regime becomes. Due to a pile of tortoise modes which become dominant after saturation of fast modes, exact saturation time is postponed to infinity. | 10.1007/jhep01(2019)137 | [
"https://arxiv.org/pdf/1811.11470v2.pdf"
] | 119,421,097 | 1811.11470 | 28f42bab6b10f6d8bb840b46a19af8943f3e11c3 |
Entanglement Evolution in Lifshitz-type Scalar Theories

14 Jan 2019

M. Reza Mohammadi Mozaffar ([email protected])
Department of Physics, University of Guilan, P.O. Box 41335-1914, Rasht, Iran
School of Physics, Institute for Research in Fundamental Sciences (IPM), 19538-33511, Tehran, Iran

Ali Mollabashi ([email protected])
Max-Planck-Institut für Physik, Werner-Heisenberg-Institut, 80805 München, Germany

We study propagation of entanglement after a mass quench in free scalar Lifshitz theories. We show that entanglement entropy goes across three distinct growth regimes before relaxing to a generalized Gibbs ensemble, namely, initial rapid growth, main linear growth and tortoise saturation. We show that although a wide spectrum of quasi-particles are responsible for entanglement propagation, as long as the occupation number of the zero mode is not divergent, the linear main growth regime is dominated by the fastest quasi-particle propagating on the edges of a widened light-cone. We present strong evidence in support of effective causality and therefore define an effective notion of saturation time in these theories. The larger the dynamical exponent is, the shorter the linear main growth regime becomes. Due to a pile of tortoise modes which become dominant after saturation of fast modes, exact saturation time is postponed to infinity.
Introduction
Propagation of entanglement in systems with a large number of degrees of freedom is of great importance for understanding non-equilibrium physics in such systems [1,2]. This has been widely studied by considering a global quench and focusing on the evolution of a given initial state. The evolution of the system after the quench is generally a thermalization process. A typical measure of this thermalization is how close the reduced density matrix corresponding to a typical spatial subregion is to a thermal density matrix.

The post-quench evolution is also an equilibration process. Since it is unitary, the system reaches only local equilibrium, never a global one. Here too the relevant quantity is the density matrix of a subsystem, which is expected to reach local equilibrium. At the end of the thermalization process, when the system has reached local equilibrium, the local observables are given by the thermal (Gibbs) ensemble.

The story is different for integrable models (including free systems). In these systems the observables are much more constrained by an infinite number of conserved charges, and the system does not thermalize to a Gibbs ensemble but instead relaxes to a generalized Gibbs ensemble [3,4].
A typical measure to quantify the evolution of the system in a pure state after a global quench is the entanglement entropy of a subsystem in the post-quench state. The entanglement entropy is defined as

S_A = −Tr[ρ_A log ρ_A],

where ρ_A := Tr_Ā |ψ⟩⟨ψ| and |ψ⟩ = e^{−iHt}|ψ_0⟩; here |ψ_0⟩ is the pure pre-quench state and H is the post-quench Hamiltonian. By a quench we mean a global quench, defined by a sudden change of a parameter in the Hamiltonian of the system at t = 0. The dynamics of the system, and specifically the spread of entanglement in such systems, has been widely studied in a large number of papers (see [1] and [5] and references therein). The quasi-particle picture is at the core of understanding how entanglement spreads over the system after a global quench. More recently this picture has been improved in [5], which makes it valid in a wider scope.
From a field theoretic point of view, the quasi-particle picture successfully describes the propagation of entanglement in relativistic field theories. This is strongly based on the notion of causality in these theories. The question of entanglement propagation, as a probe of how the system relaxes to a generalized Gibbs ensemble, is of course also very important in non-relativistic theories. The propagation of entanglement is specifically intertwined with the causality structure and with the equilibration process of the theory under consideration.
The goal of this paper is to investigate how entanglement propagates in a field theory with Lifshitz scaling symmetry [6,7], given by

t → λ^z t,   x → λ x,  (1.1)

where z is the dynamical exponent and determines the anisotropy between space and time. This kind of scaling symmetry is typical at critical points where a continuous phase transition takes place (various properties of entanglement entropy in such theories in static cases have been previously studied in [8-14]). We study the evolution of entanglement in the vacuum state of a translationally invariant system and try to understand the corresponding physics basically by means of a zero-mode analysis. The rest of the paper is organized as follows: in the remainder of this introduction we present a discrete version of Lifshitz scalar field theory which we will utilize in our numerical analysis. In section 2 we review the propagation of entanglement in a relativistic scalar theory after a global quench. In section 3 we give analytical arguments for the quasi-particle picture in Lifshitz theories, followed by a numerical study of the evolution of entanglement entropy. We analyse our numerical results focusing on the spectrum of these theories, the key role of tortoise modes and the quasi-particle picture.

Lifshitz-type QFTs on Square Lattice

A quantum field theory that is invariant under the Lifshitz scaling transformation introduced in Eq.(1.1) is what we call a Lifshitz-type QFT. As the simplest example we consider a free scalar field in d+1 dimensions with the following action [15]:
S = (1/2) ∫ dt d^d x [ φ̇^2 − Σ_{i=1}^{d} (∂_i^z φ)^2 − m^{2z} φ^2 ].  (1.2)
The above action has Lifshitz scaling symmetry in the massless limit (m = 0), although in this manuscript we will study both the m = 0 and m > 0 theories. It is clearly a generalization of the Klein-Gordon theory (z = 1) to generic z with non-relativistic scaling symmetry. The realization of the Klein-Gordon theory on a lattice is the well-known "harmonic lattice". There also exists a family of models which generalize the harmonic model on a (hyper)square lattice in generic spatial dimensions. These models are known as Lifshitz harmonic models [11,12] and realize the action given in Eq.(1.2) on a square lattice.

To have an intuitive picture of the model, let us focus on 1+1 dimensions. In this case the model is a chain of coupled harmonic oscillators, where each oscillator is coupled to z other oscillators on each side, z being the dynamical exponent. Obviously at this level z should be considered a positive integer.
The Hamiltonian of these models is given as follows:

H = Σ_{n=1}^{N} [ p_n^2/(2M) + (M m^{2z}/2) q_n^2 + (K/2) ( Σ_{k=0}^{z} (−1)^{z+k} C(z,k) q_{n−1+k} )^2 ],  (1.3)

where C(z,k) denotes the binomial coefficient.
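Eq.(1.3) can be realized directly as a matrix. The sketch below (an illustration, setting M = K = 1 with periodic boundary conditions) builds the potential matrix of the chain and checks its normal-mode frequencies against the dispersion relation quoted below in Eq.(1.4):

```python
import numpy as np
from math import comb

def potential_matrix(N, z, m):
    """Potential part of Eq.(1.3) with M = K = 1 on a periodic chain of N sites,
    i.e. H = sum_n p_n^2/2 + (1/2) q^T V q."""
    # stencil coefficients c_k = (-1)^(z+k) * binom(z, k), k = 0..z
    c = [(-1) ** (z + k) * comb(z, k) for k in range(z + 1)]
    V = m ** (2 * z) * np.eye(N)
    for n in range(N):
        for i, ci in enumerate(c):
            for j, cj in enumerate(c):
                V[(n - 1 + i) % N, (n - 1 + j) % N] += ci * cj
    return V

N, m = 64, 0.1
# z = 1 reproduces the familiar harmonic chain: m^2 + 2 on the diagonal, -1 off it
V1 = potential_matrix(N, 1, m)
assert np.isclose(V1[0, 0], m ** 2 + 2) and np.isclose(V1[0, 1], -1.0)

# normal-mode frequencies agree with Eq.(1.4) for several dynamical exponents
for z in (1, 2, 3):
    w_num = np.sort(np.sqrt(np.linalg.eigvalsh(potential_matrix(N, z, m))))
    k = np.arange(N)
    w_disp = np.sort(np.sqrt(m ** (2 * z) + (2 * np.sin(np.pi * k / N)) ** (2 * z)))
    assert np.allclose(w_num, w_disp)
```

The agreement reflects the fact that the circulant symbol of the finite-difference stencil is |(e^{iθ} − 1)^z|^2 = (2 sin(θ/2))^{2z}.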
One can easily check that this Hamiltonian reduces to the well-known harmonic model for z = 1. It is straightforward to generalize this Hamiltonian to higher dimensions, which we are not interested in here (see [11] for the 3d generalization). One can also show that, after the canonical transformation (q_n, p_n) → (√(MK) q_n, p_n/√(MK)), this is a discretized version of a free Lifshitz theory on a square lattice with mass m and lattice spacing √(M/K), introduced in Eq.(1.2).
The Hamiltonian of this model in any number of dimensions on a (hyper)square lattice (more precisely, on a d-dimensional torus) can be diagonalized in a standard way, which we are not going to review here, leading to the following dispersion relation [11,12]:

ω_k^2 = m^{2z} + Σ_{i=1}^{d} ( 2 sin(πk_i/N_{x_i}) )^{2z},  (1.4)

where k = {k_1, k_2, ..., k_d} is the momentum vector, the k_i refer to the momentum components in the spatial directions, and N_{x_i} is the number of sites in the i-th spatial direction. Different aspects of quantum entanglement in the vacuum state of these models have been studied previously [11-13]. We will utilize the covariant correlator method to study entanglement propagation in these models. The method is briefly reviewed in the appendix.
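The correlator method for Gaussian states can be sketched in a few lines. For brevity, the snippet below computes the vacuum (static) EE of a block of sites; the quench case proceeds identically with the time-evolved two-point functions X(t), P(t) in place of the vacuum ones. The matrix construction and the parameter values are illustrative choices, not those used in the paper:

```python
import numpy as np

def vacuum_entropy(V, sites):
    """EE of the block `sites` in the ground state of H = (1/2)(p.p + q^T V q),
    from the two-point functions of the Gaussian state."""
    w2, U = np.linalg.eigh(V)
    w = np.sqrt(w2)
    X = U @ np.diag(0.5 / w) @ U.T          # X_ab = <q_a q_b>
    P = U @ np.diag(0.5 * w) @ U.T          # P_ab = <p_a p_b>
    XA = X[np.ix_(sites, sites)]
    PA = P[np.ix_(sites, sites)]
    nu = np.sqrt(np.linalg.eigvals(XA @ PA).real)   # symplectic spectrum, nu >= 1/2
    nu = np.clip(nu, 0.5 + 1e-12, None)
    return float(np.sum((nu + 0.5) * np.log(nu + 0.5)
                        - (nu - 0.5) * np.log(nu - 0.5)))

# illustrative example: a 10-site block in a z = 2, m = 0.5 chain of N = 60 sites
N, z, m = 60, 2, 0.5
kk = np.arange(N)
w2 = m ** (2 * z) + (2 * np.sin(np.pi * kk / N)) ** (2 * z)   # Eq.(1.4)
F = np.exp(2j * np.pi * np.outer(kk, kk) / N) / np.sqrt(N)
V = (F @ np.diag(w2) @ F.conj().T).real      # circulant V with spectrum w2
S = vacuum_entropy(V, list(range(10)))
assert S > 0
# purity of the global state: S(A) equals S(complement of A)
assert abs(S - vacuum_entropy(V, list(range(10, N)))) < 1e-6
```

The symplectic eigenvalues ν are obtained from the spectrum of X_A P_A, and the final line checks the purity of the global ground state.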
Entanglement Propagation in Relativistic Theories
Before getting into the question of how the relaxation of these systems is affected by a non-Lorentzian dynamical exponent, in this section we briefly review the corresponding physics in relativistic field theories. The issue has been studied for 2d CFTs in a series of papers starting with [1]. Let us first focus on the simplest case, a global quantum quench in the context of 2d CFTs. The post-quench Hamiltonian is scale invariant while the pre-quench one is not. Thanks to the simplification due to the conformal symmetry of the post-quench Hamiltonian, EE can be worked out analytically as a function of time. It is well known that EE exhibits a linear growth up to a saturation time, when a sudden transition to a saturation regime happens. The saturation time is proportional to the length of the entangling region (with a proportionality factor of 1/2) and the saturation value of EE depends on the details of the initial state. These universal features have been verified in various scale invariant models, including the transverse Ising spin chain in the same reference.

In figure 1 we have summarized mainly the results of [1], where the numerical results are obtained using a harmonic lattice model. Note that although in the case of a CFT the saturation happens instantaneously, the lattice simulation shows a mild transition. In order to clarify the different scaling regimes of the process, in the right panel we have also numerically plotted the time derivative of EE. At the very beginning there is a quadratic growth regime, after which there is the well-known linear growth. The linear growth is followed by an extremely slower growth, which is argued in [18] to depend logarithmically on time. We will denote this regime by tortoise saturation in what follows, and will also derive an analytic time dependence for this regime in a certain limit of mass quenches. Due to the mild transition, i.e. the existence of the tortoise saturation regime, the saturation time is postponed [1,18]. We will elaborate on this point both in Lorentzian and in Lifshitz theories.
There are technical issues in comparing the harmonic lattice with the CFT results. Using the harmonic lattice model, the straightforward choice is to impose periodic boundary conditions, which leads to a translationally invariant, although discrete, model. But this model in general suffers from an IR divergence due to the existence of a zero mode. This zero mode, and a pile of other slow modes which come into the game as soon as the model needs an IR cut-off, are actually responsible for the mild transition. Alternatively, one can impose Dirichlet boundary conditions on both ends of the lattice and stick the entangling region to one of them in order to get around the problem. Although this makes the transition sharper, since the zero mode is killed, it does not solve the problem, and the price is losing translational invariance.
The qualitative behavior of S_A(t) during this relaxation process is well understood through the quasi-particle picture [1]. In this picture the entanglement between a subregion A and its complement Ā is carried by a uniform density of free-streaming quasi-particles. A pair of entangled quasi-particles is created at each spatial point, and the entropy between A and Ā is measured by the number of pairs with one member in A and the other in Ā. As a consequence of causality, the propagation of quasi-particles is constrained to lie inside the light-cone, such that the massless quasi-particles move along the null rays. Using this simple scenario it has been shown that the saturation time is given by t_s = ℓ/(2v_g), where ℓ is the length of the entangling region and v_g denotes the group velocity of the entanglement-propagating quasi-particles, which in a critical theory is v_g = 1.
Recently an analytic description of the time evolution of EE after a quantum quench, based on the quasi-particle picture, was proposed in [2,5]. It was shown that, considering a quasi-particle picture for the spread of entanglement and knowing the late-time stationary state provided by the Bethe ansatz, one can find an analytic description of the time evolution of EE. This construction works for several integrable models, including non-interacting bosonic and fermionic systems. Since we will utilize this picture to understand our results, we briefly review its main features in the following.
Figure 2: Alba-Calabrese quasi-particle picture for a relativistic free boson: three distinct modes with velocities v_3 < v_2 < v_1 = 1, together with the zero mode, emanating from the region A. The saturation time of each mode is shown as {t*_1, t*_2, t*_3}.

According to this description, the general prediction for the time dependence of EE in a 2d theory for a subregion of length ℓ is given by [5]

S(t) = 2t ∫_{2|v_g|t<ℓ} dk |v_g| s(k) + ℓ ∫_{2|v_g|t>ℓ} dk s(k),  (2.1)
where s(k) is the entropy density and v_g ≡ dω_k/dk is the group velocity of the corresponding quasi-particles with momentum k.
To have a better intuitive understanding of what Eq.(2.1) means, we have illustrated the physics in figure 2. In general there is an infinite number of modes in the model, and in this figure we have shown four of them to describe the physics in a relativistic model. The fastest mode is shown in green, two slower ones in orange and blue, together with the zero mode in red. When a global quench happens, at every spatial point a pair of quasi-particles corresponding to each mode starts to move back to back. Different quasi-particles have different group velocities, bounded by |v_g| ≤ 1. Each mode has a saturation time set by its group velocity, given by ℓ/(2v_g). The saturation time of each mode (for a single interval) is by definition the point where the corresponding rays from the two ends of the region intersect. These are shown by {t*_1, t*_2, t*_3}. Obviously, since v_{zero mode}(= 0) < v_3 < v_2 < v_1, one expects t*_{zero mode} > t*_3 > t*_2 > t*_1. The zero-mode saturation time is infinite. With this intuitive picture one can understand the physics of Eq.(2.1). At any time t, there are a number of modes which are fast enough to have saturated some time before t; these modes no longer contribute to the time evolution of EE and form the second term in Eq.(2.1). At the same time, the slower modes, which satisfy ℓ/(2v_g) > t, still contribute to the time evolution and form the first term in Eq.(2.1). The smaller t is, the larger the number of modes contributing to the first term.
Indeed in [2] it was shown that for certain integrable models, s(k) can be fixed employing the equivalence between the entanglement and the thermodynamic entropy in the stationary state. Note that the thermodynamic entropy can be calculated from the generalized Gibbs ensemble describing the stationary state in terms of the expectation value of mode occupation number as follows
s(k) = (1/2π) [ (n_k + 1) log(n_k + 1) − n_k log n_k ],  (2.2)
where n_k = ⟨n̂_k⟩_GGE. Also note that because n̂_k is an integral of motion, n_k can be found from the initial state, i.e.,

⟨n̂_k⟩_GGE = ⟨a†_k a_k⟩_GGE = ⟨ψ_0| a†_k a_k |ψ_0⟩.  (2.3)
In the case of a free scalar theory one finds [28]

n_k = (1/4) ( ω_k/ω_{0,k} + ω_{0,k}/ω_k ) − 1/2,  (2.4)
where ω_{0,k} (ω_k) is the frequency before (after) the quantum quench. Now we are equipped with all we need to calculate the time dependence of EE using Eq.(2.1). In figure 3 we demonstrate the evolution of EE predicted by Eq.(2.1) and compare it with the numerical results for different values of ℓ, once with m_0 = 1, m = 2 and once with m_0 = 1, m = 0. Note that as we increase the length of the entangling region, we find better agreement. For the first family, where m ≠ 0, the results for a subregion of 200 sites are almost the same as in the thermodynamic limit, but for the massless post-quench Hamiltonian they do not coincide with the quasi-particle picture, which we believe is due to the zero-mode effect. We will try to analyse this point in detail in what follows.
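Putting Eqs.(2.1), (2.2) and (2.4) together gives a few-line numerical evaluation of the quasi-particle prediction. The following is a sketch; the momentum grid and the quench parameters are illustrative:

```python
import numpy as np

def alba_calabrese(t, ell, m0, m, z=1, nk=4001):
    """Quasi-particle prediction Eq.(2.1) for a single interval of length ell,
    with s(k) from Eqs.(2.2), (2.4) and v_g = dω/dk of the lattice dispersion."""
    k = np.linspace(-np.pi, np.pi, nk)
    u = 2 * np.sin(np.abs(k) / 2)
    w0 = np.sqrt(m0 ** (2 * z) + u ** (2 * z))
    w = np.sqrt(m ** (2 * z) + u ** (2 * z))
    vg = np.gradient(w, k)                                   # group velocity
    n = 0.25 * (w / w0 + w0 / w) - 0.5                       # Eq.(2.4); n > 0 for m != m0
    s = ((n + 1) * np.log(n + 1) - n * np.log(n)) / (2 * np.pi)   # Eq.(2.2)
    dk = k[1] - k[0]
    growing = 2 * np.abs(vg) * t < ell                       # modes not yet saturated
    return (2 * t * np.sum(np.abs(vg) * s * growing) + ell * np.sum(s * ~growing)) * dk

# pure linear growth while every mode is still spreading (2 v_g t < ell) ...
assert abs(alba_calabrese(10, 100, 1.0, 2.0) / alba_calabrese(5, 100, 1.0, 2.0) - 2) < 1e-9
# ... followed by growth toward the GGE saturation value
assert alba_calabrese(10, 100, 1.0, 2.0) < alba_calabrese(500, 100, 1.0, 2.0)
```

For a massless post-quench state the k → 0 occupation diverges, and the quadrature above needs an IR regulator, mirroring the zero-mode subtleties discussed in the text.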
Remarkably, there exists an interesting relation between the spectrum of the quasi-particles and the saturation time of the EE. As a concluding note for this section, we explain this relation in the simplest case, i.e., the harmonic lattice. In a 2d QFT with relativistic scaling, the lattice dispersion relation for the corresponding quasi-particles is given by Eq.(1.4) (setting z = 1) as follows⁸:

ω_k = √( m^2 + ( 2 sin(k/2) )^2 ).  (2.5)
The group velocity of the quasi-particles from the above equation is

v_g(k) ≡ dω_k/dk = sin k / √( m^2 + ( 2 sin(k/2) )^2 ).  (2.6)
Now one can find the mode with the maximum group velocity as follows:

dv_g/dk |_{k=k_max} = 0  ⇒  κ^2 + m^2 κ − m^2 = 0,  (2.7)

where κ = 2 sin^2(k_max/2). Regarding Eq.(2.6) and Eq.(2.7), the following comments are in order:
• To trace the effect of the tortoise modes (those with small momenta) more precisely, we look at the k → 0 limit of the group velocity. The behaviour is

v_g(k → 0) ≈ k/m + O(k^2).

One should note that since we are considering a translationally invariant lattice, whose dispersion relation is given in Eq.(2.5), there is an IR regulator in the model. Thus, to track the effect of the zero mode in a scale invariant post-quench state one should always be careful about the order of the m → 0 and k → 0 limits. Because of the existence of the IR regulator, one should take the k → 0 limit first. Note that k = 0 is a permissible mode in this model; thus, whatever the IR regulator (the mass of the post-quench state) is, the zero mode has vanishing group velocity, and modes with small momenta (k ≪ m) also have very small group velocities. These modes are responsible for the tortoise saturation after all other fast modes are saturated.
• For a massive quasi-particle we have κ = −m^2/2 + m √(1 + m^2/4), and the maximum group velocity is given by

v_g^max = √( 1 + m^2/2 − m √(1 + m^2/4) ),  (2.8)

[⁸ Note that in the following, without loss of generality, we consider the continuum limit of the dispersion relation.]
which shows that the maximum velocity keeps decreasing as a function of the mass parameter.
• From the above picture it is in principle straightforward to work out the analytical behaviour of EE after the saturation time. Here there are tortoise modes which do not saturate at finite time, but one can still define an effective saturation time with respect to the saturation of the fastest mode, t_s^max. At any given time there is a maximum momentum, which we denote by k_max(t), among the slow modes that have not yet saturated. In principle, the time-dependent part of the entropy,

S(t) = ∫_{|k|<k_max(t)} dk s(k),  (2.9)

gives the time dependence of the entropy after t_s^max. Even for the free bosonic case this is not analytically computable, but in a certain limit, in which one considers a quench from a very heavy state (m_0 ≫ 1) to a gapless state (m ≪ 1), the above integral can be performed analytically, leading to (for t > ℓ/2)
S(t) = S_0 + ( 1/√(t^2 − ℓ^2/4) ) ( c_1 + c_2 log √(t^2 − ℓ^2/4) ) m + O(m^2),  (2.10)

where S_0 is the value of EE at t = ℓ/2, and c_1 and c_2 are constants depending on m_0, m and ℓ.
The key point in working out this behaviour is that the above intuitive picture allows us to think of the time dependence of EE as the time dependence of k_max.

It is worth mentioning that although the above behaviour is found for a quench from a highly gapped to a gapless Hamiltonian, the numerical data show that it works well even in the regime of our study, where m_0 ∼ O(1) and m = 10^{−6}.
• Thinking for a while about the continuum scalar field theory, the qualitative features of the previous results do not change. In this case the group velocity can be obtained from the continuous dispersion relation ω = √(m^2 + k^2) as follows:

v_g(k) = k/√(m^2 + k^2).  (2.11)
Once again the existence of massive particles with vanishingly small momentum (tortoise modes) leads to a tortoise saturation regime for EE at late times.
• To get around these tortoise modes, one approach is to consider a mode-dependent mass quench, i.e., m(k) [18]. In this case the group velocity becomes

v_g(k) = ( k + m(k) m′(k) ) / √( m(k)^2 + k^2 ),  (2.12)
where we should impose v_g(k ∼ 0) → v_0 (> 0) to prevent the excitation of tortoise modes. Any mass function with the specific behavior m(k ∼ 0) = m_0 + v_0 k + ··· satisfies this condition and removes these modes from the spectrum of quasi-particles⁹. Considering this family of quenches, one finds a finite saturation time with no tortoise saturation regime. It is important to note that according to Eq.(2.11) the massless quasi-particles with linear dispersion relation (ω ∼ k) move along the null rays with the maximum, momentum-independent velocity v_g(k) = 1.
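Two quick numerical checks of the bullet points above: the closed form (2.8) for the fastest z = 1 mode, and the lifting of the k → 0 group velocity by a mode-dependent mass obeying m(k ∼ 0) = m_0 + v_0 k. The profile m(k) below is a hypothetical example satisfying both asymptotic conditions, not one used in the paper:

```python
import numpy as np

# (i) brute-force maximum of Eq.(2.6) against the closed form (2.8)
k = np.linspace(1e-6, np.pi - 1e-6, 200001)
for m in (0.1, 0.5, 1.0, 2.0):
    vg = np.sin(k) / np.sqrt(m ** 2 + (2 * np.sin(k / 2)) ** 2)
    v_closed = np.sqrt(1 + m ** 2 / 2 - m * np.sqrt(1 + m ** 2 / 4))
    assert abs(vg.max() - v_closed) < 1e-8
    kappa = 2 * np.sin(k[vg.argmax()] / 2) ** 2          # root of Eq.(2.7)
    assert abs(kappa - (-m ** 2 / 2 + m * np.sqrt(1 + m ** 2 / 4))) < 1e-4

# (ii) a hypothetical mode-dependent quench m(k) = (m0 + v0*k) exp(-k^2):
# Eq.(2.12) then gives v_g(k -> 0) -> v0, so no tortoise modes are excited
m0, v0 = 1.0, 0.3
mk = lambda q: (m0 + v0 * q) * np.exp(-q ** 2)
def vg_modified(q, dq=1e-6):
    mp = (mk(q + dq) - mk(q - dq)) / (2 * dq)            # m'(k)
    return (q + mk(q) * mp) / np.sqrt(mk(q) ** 2 + q ** 2)
assert abs(vg_modified(0.0) - v0) < 1e-3
```

Note that m(k → ∞) → 0 for this profile, so the injected energy stays finite, while the linear term m_0 + v_0 k at small k lifts the zero-mode velocity to v_0.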
Entanglement Propagation in Lifshitz Theories
In this section we study how EE propagates in theories with Lifshitz scaling. To be more precise, we study post-quench states both with m = 0 and with m > 0. We first study analytically the quasi-particle picture for Lifshitz theories, modelled by the Lifshitz harmonic lattice, after which we present numerical studies of EE and discuss the physics of entanglement propagation in the different scaling regimes.
Analytic Description
In principle, since our Lifshitz field theory of interest is a generalization of the Klein-Gordon theory in which the spatial correlations are stretched out by the dynamical exponent, one would naturally expect an analysis similar to the one reviewed in the previous section from [2,5] to be valid in this theory. The intuitive description we discussed there simply shows that, as in the relativistic case, the general time dependence of EE predicted in a 2d free scalar theory with Lifshitz scaling is given by Eq.(2.1). So what we need is to work out the exact dependence of n(k), s(k) and v_g on the dynamical exponent. Using the dispersion relation Eq.(1.4), a straightforward computation gives

v_g(z) = z ( 2 sin(k/2) )^{2z−2} sin k / √( m^{2z} + ( 2 sin(k/2) )^{2z} ).  (3.1)
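A finite-difference sanity check of Eq.(3.1) against the dispersion relation; the mass value and sample momenta below are illustrative:

```python
import numpy as np

def omega(k, z, m):
    # lattice dispersion in the thermodynamic limit, cf. Eq.(1.4)
    return np.sqrt(m ** (2 * z) + (2 * np.sin(k / 2)) ** (2 * z))

def vg_lifshitz(k, z, m):
    # closed form Eq.(3.1)
    u = 2 * np.sin(k / 2)
    return z * u ** (2 * z - 2) * np.sin(k) / omega(k, z, m)

m, dk = 0.7, 1e-6
for z in (1, 2, 3):
    for k in (0.3, 1.0, 2.5):
        vg_fd = (omega(k + dk, z, m) - omega(k - dk, z, m)) / (2 * dk)
        assert abs(vg_fd - vg_lifshitz(k, z, m)) < 1e-8
```

For z = 1 the prefactor u^{2z−2} is 1 and the expression reduces to Eq.(2.6), as it should.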
Also, similar to the relativistic case, the entropy density in terms of the expectation value of the mode occupation numbers in the initial state is given by Eq.(2.2) together with Eq.(2.4), where we should take into account the dependence of ω_k and ω_{0,k} on z as follows:

ω_k = √( m^{2z} + ( 2 sin(k/2) )^{2z} ),   ω_{0,k} = √( m_0^{2z} + ( 2 sin(k/2) )^{2z} ),  (3.2)

where m_0 and m are the mass parameters before and after the quantum quench, respectively. In general, the values of n(k) and s(k) at a given momentum increase as we increase the dynamical exponent. In other words, the contribution of individual quasi-particles to the thermodynamic entropy of the generalized Gibbs ensemble describing the steady state increases for z > 1.

In the following we will present some results in which the role of the tortoise modes becomes extremely important. As an example, we have plotted the occupation number and the entropy density in figure 4 for quenches from a fixed pre-quench state (m_0 = 1) to different smaller post-quench mass parameters. We have considered these parameters to figure out what happens toward quenching to gapless models (m → 0). One can easily check from the expression for n(k = 0), which is

n(k = 0) = (1/4) [ (m/m_0)^z + (m_0/m)^z ] − 1/2,

that the occupation number (and also the entropy density) diverges in the following three cases: m_0/m ≫ 1, m/m_0 ≫ 1, and z ≫ 1. In the case of a scale invariant system, in principle the post-quench mass vanishes; here, due to the IR cut-off, this case is a special case of m/m_0 ≪ 1. We will show in what follows that the numerical results deviate from the quasi-particle picture in these three regimes, although the picture works perfectly in other regimes.

⁹ Note that in order to have a finite injected energy during the quench scenario we should impose m(k ∼ ∞) → 0.
In the following sections, after describing the quasi-particle picture, we will present numerical results whose main focus is comparing vanishing and non-vanishing mass parameters in the post-quench state, in order to test our understanding of the quasi-particle picture.
Quasi-particle Picture
As we discussed in section 2, the spectrum of quasi-particles together with the notion of causality interestingly describes the linear growth and tortoise saturation regimes. Here we explore the role of z in this scenario, focusing on 2d theories in order to avoid the complications arising when there is more than one velocity component. In general we expect the qualitative picture to generalize straightforwardly to higher dimensions. Although the notion of causality is subtle in these theories, we show that (except in certain cases in which the tortoise mode contributions become dominant) the different scaling regimes during the relaxation of the system can be perfectly described by the Alba-Calabrese quasi-particle picture in the presence of z > 1 dynamical exponents. Compared to the z = 1 case, there are interesting features for z > 1 which we describe in the following.
The group velocity for quasi-particles in a Lifshitz harmonic lattice is given by Eq.(3.1); the maximum group velocity can be derived by first solving dv_g/dk = 0 for the maximizing momentum. This can be done analytically for a small post-quench mass parameter (for z > 1) as

k_max = cos^{−1}( (2−z)/z ) + (z/4)^z (z−1)^{−z−1/2} m^{2z} + O(m^{4z}),  (3.3)

which gives

v_g^max(z) = 2^{z−1} √z ( (z−1)/z )^{(z−1)/2} − 2^{−z−2} √z ( (z−1)/z )^{−(z+1)/2} m^{2z} + O(m^{4z}).  (3.4)
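The small-mass expansions of k_max and v_g^max quoted above can be cross-checked by brute-force maximization of the group velocity of Eq.(3.1); a sketch (the grid resolution and the value of m are illustrative):

```python
import numpy as np

def vg(k, z, m):
    # group velocity Eq.(3.1)
    u = 2 * np.sin(k / 2)
    return z * u ** (2 * z - 2) * np.sin(k) / np.sqrt(m ** (2 * z) + u ** (2 * z))

m = 0.05
k = np.linspace(1e-6, np.pi - 1e-6, 400001)
for z in (2, 3, 4):
    v = vg(k, z, m)
    # small-m expansions of k_max and v_g^max quoted in the text
    k_pred = np.arccos((2 - z) / z) + (z / 4) ** z * (z - 1) ** (-z - 0.5) * m ** (2 * z)
    r = (z - 1) / z
    v_pred = (2 ** (z - 1) * np.sqrt(z) * r ** ((z - 1) / 2)
              - 2 ** (-z - 2) * np.sqrt(z) * r ** (-(z + 1) / 2) * m ** (2 * z))
    assert abs(k[v.argmax()] - k_pred) < 1e-4
    assert abs(v.max() - v_pred) < 1e-6
```

For z = 2 and m → 0 the leading term gives v_g^max = 2, illustrating that the fastest quasi-particle is superluminal once z > 1.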
According to the above result a few comments are in order:
• For z = 1 in the massless limit we have v_g^max = 1, which is consistent with the group velocity of the quasi-particles reviewed in section 2. Also note that the mass correction is negative independently of z, which shows that, as intuitively expected, the velocity is a decreasing function of the mass parameter.
• Similar to what we discussed around Eq.(2.9), here it is also possible to extract the analytical temporal behaviour of EE for z > 1. The most straightforward case is z = 2, for which the same analysis, again in the limit in which a Hamiltonian with a large mass is quenched to a tiny-mass Hamiltonian, gives

S(t) = S_0 + (1/t)(c_1 + c_2 log t) m + O(m^2),  (3.5)

where S_0 is the value of EE at t = ℓ/(2v_g^max), with v_g^max given by Eq.(3.4) for z = 2, and c_1 and c_2 are constants depending on m_0, m and ℓ. The analysis is harder to extend to higher values of z because of a technical obstacle: one cannot easily solve for the generic time dependence of k_max(t).
• For z → ∞ where the corresponding scalar theory given by Eq.(1.2) becomes strongly nonlocal the maximum group velocity diverges, i.e., • For z < 1, v max g becomes pure imaginary. Based on this we conclude that there may not be a self-consistent analytical continuation for this model for z < 1, although our results show that there should be such a continuation for non-integer z > 1. Based on this we have not considered this range of parameter space of these theories. 12 It is also worth to mention that due to null energy condition, the same constraint on the dynamical exponent arises in the holographic theories with Lifshitz scaling symmetry [37].
• In figure 5 we have plotted v max g both as a function of the dynamical exponent (z > 1) and 10 The existence of Lieb-Robinson bounds in discrete systems but with infinite dimensional Hilbert space at each site, such as Harmonic lattice model, has been proved in [30,31]. The same proof works for Lifshitz harmonic model replacing the corresponding dispersion relation. We would like to emphasize that these bounds are not strong enough to lead to a physically reasonable Lieb-Robinson velocity. 11 For a related study in non-relativistic theories see [33]. 12 Entanglement dynamics in some long-range models which resembles our mode for 0 < z < 1 has been studied in [35,36]. In the left panel there is perfect agreement between Alba-Calabrese quasi-particle picture and numerics. In the middle panel one can see that due to the zero mode effect even for = 400 numerics deviate from Alba-Calabrese picture around the offset of tortoise saturation regime. The same is correct for higher values of z although one must wait much longer to see this.
the mass parameter. Obviously there is no divergence in the m → 0 limit but, as expected, it diverges as z → ∞.
Numerical Results: A Zero Mode Analysis
In this section we present numerical results for the propagation of entanglement after a mass quench in Lifshitz theories. We consider a mass quench while the dynamical exponent, the other parameter in the dispersion relation, is held fixed. We consider two families of quenches which differ in the post-quench state. In both families the state prior to the quench is the vacuum state of an infinite Lifshitz harmonic lattice with parameters (m_0, z), where m_0 ≠ 0. The post-quench state is again the vacuum state of an infinite Lifshitz harmonic lattice, with parameters changed to (m, z). By two families we mean that the post-quench state is either massless, in which case the system has Lifshitz scaling symmetry, or massive, in which case it does not. In figure 6 we present numerical results for both families and compare them with the quasi-particle picture. In the left panel, where the post-quench mass is finite, one can see perfect agreement between the quasi-particle picture and our numerical results. In the middle and right panels we show results for the case where the post-quench mass vanishes. One can see that in these cases, since the zero mode effect becomes more important once the short linear growth regime has finished, there is a deviation from the quasi-particle picture, although the deviation is suppressed as one gets closer to the thermodynamic limit. The latter becomes much harder to reach as the dynamical exponent is increased, because of what we explained previously about the occupation number of tortoise modes (see figure 4).
In these curves one can see that the larger the value of the dynamical exponent is, the faster EE grows in time. Thus the saturation value of EE is an increasing function of z, as expected.

Figure 7: Numerical versus analytic results for the time evolution of EE for z = 1, 2, 3. Here we set m_0 = 1 and m = 2. There is a very good match between the quasi-particle analytic results and the numerics. The small difference at early times is due to numerical instability, and the mild transition in the numerical results is due to the zero mode, which is not captured by the quasi-particle picture. In these plots we have set the subregion size to be 100 in units of the lattice spacing.
This behaviour is expected due to the enhancement of spatial correlations for larger values of the dynamical exponent (for a detailed discussion of the relation between the strength of spatial correlations and z see [11]). We have checked this for higher values of z up to z = 5, but we do not present the results here since the curves have the same qualitative behaviour at different scales. We believe that the physics is perfectly captured by these values of z. It is also notable that we have not studied z > 5: in order to interpret the results in the thermodynamic limit and avoid lattice effects for these higher values of z, one needs to consider much larger subregions, which requires a much higher numerical cost.
In figure 7 we focus on numerical results for the first family, namely a mass quench from (m_0 = 1, z) to (m = 2, z) for z = 1, 2, 3. In figure 8 we do the same analysis for the second family, that is, a mass quench from (m_0 = 1, z) to (m = 0, z) for the same values of the dynamical exponent. In both figures 7 and 8 we have presented the time derivative of EE. In figure 6 it is not easy to distinguish between different growth regimes, specifically in the second family, while this can be clearly seen in figure 7. We would say that figures 7 and 8 carry the most important results of this paper, since all essential features of the propagation of entanglement in these theories can be extracted from them.
There is an initial rapid growth regime which disappears quickly and is very hard to follow in these curves. During the rapid growth regime the system has not even reached local equilibrium, and the scaling of the entropy is expected to be understood from the only well-defined physical quantity during this regime, which is the energy density of the system. We will not present a careful study of this regime here, but will make some comments on the z-dependent scaling of EE in this regime in the discussion section.
The system rapidly enters its main growth regime, which in general lasts up to a certain point (see figure 7 and figure 8). After this point a tortoise saturation regime starts, which in principle carries on up to infinite time. Both families share this property.
In general, in our first family, which is the generic case of these theories, the prediction of the quasi-particle picture applies perfectly (figure 7), while in the second family, where the contribution of the tortoise modes is enhanced due to the increase in their occupation number, there is a serious disagreement with the quasi-particle picture (figure 8). In the following subsections we will interpret these results and argue that they can be understood if there is an effective notion of causality in these theories, which is totally non-trivial, especially from the field-theoretic point of view.

Figure 8: Numerical versus analytic results for the time evolution of EE for z = 1, 2, 3. Here we set m_0 = 1 and m = 10^{−6}. The match between the quasi-particle analytic results and the numerics is not as good as in the m_0 = 1, m = 2 case. In this case, in which the tortoise modes are highly occupied, it is much harder to numerically reach the thermodynamic limit. In these plots we have set the subregion size to be 200 in units of the lattice spacing. The difference between the numerics and the analytical results at early times is due both to the high occupation of tortoise modes and to numerical instability.
Physical Interpretation
The quasi-particle picture, which works perfectly in many cases including 2d CFT and other massive 2d theories, also offers a nice understanding of entanglement propagation in Lifshitz theories. According to our analysis in section 3, there is a large spectrum of quasi-particles with different group velocities which are responsible for the propagation of entanglement.
The simplest case understood by this picture is that of a 2d CFT, in which there is a monochromatic quasi-particle with v_g = 1. In this case the sharp transition from the linear growth to the saturation regime is completely clear. At the instant the quench happens, pairs of monochromatic quasi-particles start to propagate back-to-back at all spatial points, and they can pass through the entangling region. At early times, those pairs initiated from spatial points near the boundary of the entangling region have one mate inside the region and the other outside. These pairs are responsible for the linear growth. After a time proportional to the length of the region, the flow of quasi-particles into and out of the entangling region equilibrates, and thus EE saturates.
In the case of massive theories, where there exists a spectrum of quasi-particles with different group velocities, the situation is a bit more complex. One should consider the behaviour of the different types of quasi-particles to understand the propagation of entanglement. The group velocity in this case is constrained to 0 ≤ v_g ≤ 1. All quasi-particles propagate inside the light-cone. Those with maximum group velocity propagate along the null directions, and the zero modes, depending on the value of the mass, propagate extremely slowly. In the presence of these extremely slow modes, the above scenario still works for all types of quasi-particles. After the effect of the fast quasi-particles, specifically the ones with maximum group velocity, has equilibrated, the slow modes take over. Due to the existence of these modes there always exist pairs with one mate inside the entangling region and the other outside. Although the number of these pairs decreases in time, the decrease rate is infinitely slow, and thus the tortoise saturation regime extends over infinite time.
The very interesting thing about Lifshitz theories is that even in the massless case there exists a spectrum of quasi-particles with 0 ≤ v_g(z) ≤ v_g^max(z), where v_g(z) is given in Eq.(3.1), v_g^max(z) is given in Eq.(3.4), and v_g^max(1) = 1. The quasi-particles propagate inside a widened light-cone whose maximum-velocity rays are given by v_g^max. In figure 9 we have shown this behaviour for an entangling region for z = {1, 2, 3}. The slopes of the blue, orange and green rays are respectively given by {±v_g^max(1), ±v_g^max(2), ±v_g^max(3)}. The three coloured points are the saturation times found from the fastest monochromatic quasi-particle. For a single-interval entangling region, the saturation time for a monochromatic quasi-particle is defined by the intersection of the world-lines of two such quasi-particles starting from the end points of the region. This is simply found to be t_s^max(z) = ℓ/(2v_g^max(z)). The interesting point here is that in Lifshitz theories there still exists an effective notion of saturation time, which we denote by t_s^*(z). The saturation time is defined to be the instant at which the evolution of EE undergoes a sudden transition from the main growth regime to the tortoise saturation regime. This notion is also well-defined for the z = 1 case, which is important in particular when the system is massive and a spectrum of quasi-particles is propagating around. Our numerical data hint at interesting physics in Lifshitz theories. The picture is that before EE enters the tortoise saturation regime, the monochromatic quasi-particle with v_g^max(z) dominates the entanglement propagation. This is the physical reason why figure 9 is a very good approximation to the physics of entanglement propagation in the main growth regime. In other words, it is clear that in principle t_s^max(z) need not be the same as the effective saturation time t_s^*(z) defined above.
But what we find here is that these two are actually the same. This picture clearly shows that in the presence of tortoise modes the precise saturation time goes to infinity, while the effective saturation time t_s^*(z) goes to zero (following the green, orange and blue points in figure 9 toward the t = 0 axis). In other words, as the dynamical exponent increases, the tortoise saturation regime becomes the dominant regime in time.
The other interesting behaviour, which is also understandable within the quasi-particle picture, is the comparison of EE for different values of z in the tortoise saturation regime. As we have argued, the tortoise modes are mainly responsible for this behaviour. We have shown in section 3.1 that their occupation number increases as z is increased. This is the reason why the tortoise saturation regime becomes slower for higher values of z. EE saturates only at infinite time for z > 1, and the approach to saturation is slower for higher values of z.
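The quasi-particle account above can be made quantitative with the Alba-Calabrese formula. A minimal sketch for a single interval of length ℓ follows; the lattice dispersion ω(k) = √(m^{2z} + (2 sin(k/2))^{2z}) and the mass-quench occupation n(k) = (ω_0/ω + ω/ω_0)/4 − 1/2 used below are our assumptions (chosen to reproduce the qualitative features of figure 4), not formulas quoted verbatim from Eq.(3.1) or [11].

```python
import numpy as np

def alba_calabrese_entropy(t, ell, m0, m, z, nk=4096):
    """Quasi-particle (Alba-Calabrese) prediction for S(t) of a single interval.

    The lattice dispersion w(k) and the mass-quench occupation n(k) below are
    assumptions, chosen to reproduce the qualitative features of figure 4."""
    k = np.linspace(1e-6, np.pi, nk)                           # half Brillouin zone (even integrand)
    w0 = np.sqrt(m0**(2 * z) + (2 * np.sin(k / 2))**(2 * z))   # pre-quench frequencies
    w = np.sqrt(m**(2 * z) + (2 * np.sin(k / 2))**(2 * z))     # post-quench frequencies
    n = 0.25 * (w0 / w + w / w0) - 0.5                         # occupation of post-quench modes
    s = (n + 1) * np.log(n + 1) - n * np.log(np.clip(n, 1e-300, None))  # GGE entropy density
    vg = z * (2 * np.sin(k / 2))**(2 * z - 1) * np.cos(k / 2) / w       # group velocity dw/dk
    # a pair contributes fully once its mates straddle the interval: min(2|v|t, ell)
    weight = np.minimum(2 * np.abs(vg) * t, ell)
    return (weight * s).sum() * (k[1] - k[0]) / np.pi

# EE vanishes at t = 0, grows monotonically, and is bounded by its GGE value
S = [alba_calabrese_entropy(t, ell=100, m0=1.0, m=2.0, z=2) for t in (0.0, 5.0, 25.0)]
```

The tortoise saturation regime emerges naturally in this sketch: the modes with v_g ≈ 0 keep the time-dependent term growing arbitrarily late, so exact saturation is reached only at infinite time.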
Lattice versus Continuum
An interesting question, which was one of the main motivations for this study, is how the propagation of entanglement is related to the causality structure of the theory. It is well known that the propagation of entanglement is governed by the dispersion relation (the spectrum of the quasi-particles) and by the causality structure of the theory of interest. This was clearly understood at least in the case of 2d CFTs and the massive scalar theory. The picture is also consistent with the lattice version of the massive scalar theory, i.e., the harmonic lattice.
Let us compare more precisely what is going on in our lattice model and how these results are supported by a simple analysis in the continuum limit. The dispersion relation and group velocity of the propagating modes in this theory are given by
ω(k) = √(m^{2z} + k^{2z}) ,    v_g(k) = z k^{2z−1} / √(m^{2z} + k^{2z}) .   (3.7)
Clearly this group velocity does not have an extremum, at least for z > 1. This makes it highly counter-intuitive for these theories to have a causal structure. On the other hand, our results, which are strongly supported by the quasi-particle picture (modulo the states in which the zero modes are highly occupied), show that there is at least an effective notion of causal structure governing the propagation of entanglement in these theories. It is worth noting that we are comparing the discrete model in the thermodynamic limit with the continuum theory. This is actually what is well known in the context of many-body systems as the Lieb-Robinson bound [29]. There are several well-known models which in the continuum correspond to local theories that do not have a Lorentzian causal structure, but due to locality the correlations between points decay exponentially with their distance, and there is a Lieb-Robinson velocity, defined by the bound, faster than which information cannot spread.
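On the lattice, by contrast, the group velocity is bounded for every finite z, and the maximum is easy to check numerically. The nearest-neighbour dispersion ω(k) = √(m^{2z} + (2 sin(k/2))^{2z}) below is our assumption for the Lifshitz harmonic lattice of [11], not a formula quoted from the paper.

```python
import numpy as np

def lattice_vmax(m, z, nk=200001):
    """Maximum group velocity over the Brillouin zone, assuming the lattice
    dispersion w(k) = sqrt(m^{2z} + (2 sin(k/2))^{2z})."""
    k = np.linspace(1e-9, np.pi, nk)
    w = np.sqrt(m**(2 * z) + (2 * np.sin(k / 2))**(2 * z))
    vg = z * (2 * np.sin(k / 2))**(2 * z - 1) * np.cos(k / 2) / w  # analytic dw/dk
    return vg.max()
```

With this dispersion, the massless case gives v_g^max = 1 for z = 1 (the relativistic value) and a maximum that grows with z, while a finite mass lowers it, matching the trends of figure 5; as z → ∞ the maximum is unbounded, consistent with the breakdown of a Lieb-Robinson velocity in the non-local limit.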
To be more concrete, in the field theory language there should be a light-cone-like structure in the theory such that measurements inside the cone are not affected from outside the cone and vice versa. In the language of our theory, in the simple case of d = 1, the simplest thing to do is to look at the commutator of fields
[φ(0, 0), φ(y, t)] = ∫ dk/(2π) · 1/(2ω(k)) · ( e^{iω(k)t − iky} − e^{−iω(k)t + iky} ) ,   (3.8)
evaluated at different space-time points. In the Lorentzian case this vanishes for any spacelike-separated points. In the case of the Lifshitz scalar theory we present a numerical study of this quantity for different values of the dynamical exponent and mass parameter in figure 10. One can see from the plots that there exists an effective widened cone outside of which the commutator vanishes. This is in agreement with what we expect from the Lieb-Robinson bound in the lattice version, and also with what we have found from the behaviour of entanglement entropy. We postpone a more concrete study of causality in these theories to [32]. 13
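A minimal numerical evaluation of Eq.(3.8) illustrates the behaviour shown in figure 10. Carrying out the odd part of the k-integral analytically, the commutator reduces to (i/π) ∫_0^∞ dk sin(ω(k)t) cos(ky)/ω(k); the Gaussian damping factor below is our regulator choice, inserted to tame the oscillatory tail (the figure instead uses a hard cut-off Λ = 10^3).

```python
import numpy as np

def commutator(y, t, m, z, lam=50.0, kmax=250.0, nk=500001):
    """<0|[phi(0,0), phi(y,t)]|0> / i for the d = 1 Lifshitz scalar.

    A Gaussian damping exp(-(k/lam)^2) replaces the hard cut-off of figure 10
    (an assumption; it smooths the oscillatory tail of the integral)."""
    k = np.linspace(1e-9, kmax, nk)
    w = np.sqrt(m**(2 * z) + k**(2 * z))
    integrand = np.sin(w * t) * np.cos(k * y) / w * np.exp(-(k / lam)**2)
    return integrand.sum() * (k[1] - k[0]) / np.pi

# z = 1: the commutator vanishes outside the relativistic light-cone |y| > t
inside = commutator(0.5, 1.0, m=1.0, z=1)
outside = commutator(3.0, 1.0, m=1.0, z=1)
```

For z = 1 the support is sharply confined to |y| ≤ t, while for z ≥ 2 a nonzero (though suppressed) tail leaks far beyond |y| = t, which is the effective widened cone discussed above.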
Conclusions and Discussions
We have studied the relaxation process of quenched states in free Lifshitz scalar theories towards the generalized Gibbs ensemble. Our study was mainly in 2d, which is the simplest case in which to utilize the quasi-particle picture to understand the process physically. A wide range of dynamical exponents was studied, although we have presented results for only a few of them, which was enough to focus on the physical picture. An important feature of these theories is the momentum-dependent group velocity of their propagating modes. We have utilized a discrete version of these theories, i.e., the Lifshitz harmonic lattice models introduced in [11]. Different regimes of the growth of entanglement entropy in these theories after a sudden quench are studied. Our results are mainly captured by an improved version of the quasi-particle picture introduced in [2]. In the following we summarize our main results:
• Comparing two specific values of the dynamical exponent, say z_1 and z_2 with z_2 > z_1, as expected from the enhancement of equal-time correlation functions we have shown that the value of EE is larger for z_2. The growth rate of EE is also greater for z_2, except during a short period of time after the z_2 curve has entered the tortoise saturation regime while the z_1 curve is still increasing linearly. This short period is expected due to two different phenomena: on one hand, as the dynamical exponent increases the linear regime is shortened, and on the other hand the occupation number of the slow modes is increased. Thus one expects the growth rate for z_1 to be larger than that for z_2 towards the end of the linear regime of z_1.
• We have shown that the larger the dynamical exponent is, the more slowly EE saturates. In fact, exact saturation for these theories (and for any theory admitting a pile of slow modes) is postponed to infinity. For larger values of the dynamical exponent, this process becomes slower and slower. This effect is due to the increase of the occupation number of the slow modes with the dynamical exponent. Consequently, even in the scale-invariant case, i.e., when the mass parameter of the post-quench Hamiltonian vanishes, no sudden saturation happens in these theories, in contrast with scale-invariant 2d relativistic theories.
• We have shown that except in the cases in which the contribution of the tortoise modes becomes dominant (in other words, when the occupation number of these modes is much larger than that of the fast modes), the Alba-Calabrese quasi-particle picture works perfectly in these theories. We have shown this for EE and its time derivative for mass quenches. Thanks to Alba-Calabrese, we thus have an analytic expression for the propagation of entanglement in these theories.
As a byproduct, we have worked out the analytic time dependence of EE during the tortoise saturation regime for z = 1 and z = 2.
• Our results, explained in sections 3.4 and 3.5, show that the propagation of entanglement is well understood in terms of an effective notion of a widened light-cone in these theories. The larger the dynamical exponent, the wider the light-cone is. Our study is strong, although still indirect, evidence for the existence of at least an effective causal structure in Lifshitz theories. Of course, from the dispersion relation of these theories this is not obvious at all. We postpone a rigorous study of the causal structure of these theories to a later work [32].
• We would like to compare our results with holographic studies of the evolution of EE in Lifshitz spacetimes [40,41]. This is mainly studied by considering a Vaidya geometry in an asymptotically Lifshitz spacetime, leading to a finite saturation time. We have shown, however, that due to the existence of tortoise modes the tortoise saturation regime is prolonged and the saturation time goes to infinity. This is another sign of the non-robustness of considering asymptotically Lifshitz geometries in a relativistic theory as duals to states in a Lifshitz-type theory. 14 There are some other results and comments regarding our study which we would like to mention in the following:
We think that an interesting question, which becomes more significant after this work, is how to generalize the quasi-particle picture so that it can capture the strange effects of the tortoise modes on the dynamics.
In this work we have considered a sudden quench, since it is the simplest framework in which to study the propagation of entanglement, turning off other interesting features of the theory and focusing only on the propagation of entanglement. The more general question is what the critical exponents of these theories are under a Kibble-Zurek phase transition, of which our study is the fast-quench limit (for similar studies in Lorentzian theories see [43][44][45][46]). We postpone reporting results on this question to future work [47].
The numerical analysis of the early-time rapid growth in these theories shows that the growth of EE in this regime is very well fitted by S ∼ t^{1+1/z}. This behaviour was previously found in the holographic context [40,41]. This kind of scaling with t seems a bit confusing. The reason is that one would expect EE in the very early period after a sudden quench, in which not even a local equilibrium state has been reached, to scale as E · A · t^α, where E is the energy density of the system, A is the area of the entangling region, and α is fixed by dimensional analysis, which in a Lifshitz theory turns out to be 1 + z. This behaviour is clearly expected to be the case for any number of spatial dimensions. It would be interesting to figure out what the correct scaling of entanglement in this regime is.
Another interesting future direction of this work would be to consider a new family of quenches involving the other parameter in the Hamiltonian. The dynamical exponent is a parameter in the dispersion relation very similar to the mass parameter. It would be interesting to study the pattern of EE after a dynamical-exponent quench, specifically from a scale-invariant theory (m_0 = 0 and z = 1) to another scale-invariant theory (m_0 = 0 and z > 1).
A Time Dependent Correlator Method
We are now interested in the case where the frequency of the Hamiltonian is suddenly changed from its initial value ω_{0,k} to ω_k. After this sudden change, the vacuum state of the initial Hamiltonian evolves unitarily with the new Hamiltonian. If we denote the vacuum state of the initial Hamiltonian by |0⟩, we need to compute the following correlators in order to study the time evolution of entanglement and Renyi entropies:
X_ij(t) = ⟨0|φ_i(t) φ_j(t)|0⟩ ,   P_ij(t) = ⟨0|π_i(t) π_j(t)|0⟩ ,   R_ij(t) = ⟨0|φ_i(t) π_j(t)|0⟩ ,
(A.1)
where φ_i(t) = e^{iH(ω_k)t} φ_i(0) e^{−iH(ω_k)t} and π_i(t) = e^{iH(ω_k)t} π_i(0) e^{−iH(ω_k)t}.
The explicit form of these correlators is given by

X_ij(t) = ∏_r (1/N_{x_r}) Σ_{k_r} X_k cos[ 2π(i_r − j_r)k_r / N_{x_r} ] ,   (A.2)

and analogously for P_ij(t) and R_ij(t) with X_k replaced by P_k and R_k, where

X_k = 1/(2ω_k) [ (ω_k/ω_{0,k}) cos² ω_k t + (ω_{0,k}/ω_k) sin² ω_k t ] ,
P_k = (ω_k/2) [ (ω_{0,k}/ω_k) cos² ω_k t + (ω_k/ω_{0,k}) sin² ω_k t ] ,
R_k = (1/2) [ (ω_k/ω_{0,k}) − (ω_{0,k}/ω_k) ] sin ω_k t cos ω_k t .   (A.3)

(The overall factors are fixed by the t = 0 vacuum values X_k = 1/(2ω_{0,k}), P_k = ω_{0,k}/2, R_k = 0, so that each mode of the full system has ν_k = 1/2 at all times.)
It is worth noting that in the numerical calculations of this paper we have used a continuous version of these correlators, that is, the integral version of Eq.(A.2) obtained as N_{x_1} → ∞. With these, we are almost equipped to compute the entropy via the eigenvalues of iJ · Γ, which we denote by {ν_k(t)}, where
Γ = ( X    R
      R^T  P ) ,    J = ( 0   1
                          −1  0 ) .   (A.4)
The entropies are given by
S_A = Σ_{k=1}^{n_A} [ (ν_k(t) + 1/2) log(ν_k(t) + 1/2) − (ν_k(t) − 1/2) log(ν_k(t) − 1/2) ] ,   (A.5)

S_A^{(n)} = 1/(n − 1) Σ_{k=1}^{n_A} log[ (ν_k(t) + 1/2)^n − (ν_k(t) − 1/2)^n ] ,   (A.6)
where n_A is the number of sites in region A.
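The whole pipeline of Eqs.(A.1)-(A.6) fits in a short script. The sketch below is ours, not the paper's code: the periodic-chain dispersion ω_k = √(m^{2z} + (2 sin(πk/N))^{2z}) and the overall factors in X_k, P_k, R_k are assumptions (normalized so that the vacuum gives ν_k = 1/2). A useful sanity check is that the full chain stays pure under the post-quench unitary evolution (S ≈ 0), while a subregion and its complement carry equal, nonzero entropy.

```python
import numpy as np

def quench_correlators(N, m0, m, z, t):
    """Real-space X, P, R of Eqs.(A.2)-(A.3) for a periodic chain of N sites
    (the dispersion w_k = sqrt(m^{2z} + (2 sin(pi k / N))^{2z}) is an assumption)."""
    k = np.arange(N)
    w0 = np.sqrt(m0**(2 * z) + (2 * np.sin(np.pi * k / N))**(2 * z))
    w = np.sqrt(m**(2 * z) + (2 * np.sin(np.pi * k / N))**(2 * z))
    c, s = np.cos(w * t), np.sin(w * t)
    Xk = ((w / w0) * c**2 + (w0 / w) * s**2) / (2 * w)
    Pk = ((w0 / w) * c**2 + (w / w0) * s**2) * (w / 2)
    Rk = 0.5 * (w / w0 - w0 / w) * s * c
    d = np.subtract.outer(np.arange(N), np.arange(N))   # i - j
    ph = np.cos(2 * np.pi * d[:, :, None] * k / N)      # plane-wave phases of Eq.(A.2)
    return tuple((ph * f).sum(-1) / N for f in (Xk, Pk, Rk))

def entanglement_entropy(X, P, R, sites):
    """Eq.(A.5) from the symplectic eigenvalues of i J . Gamma restricted to `sites`."""
    idx = np.ix_(sites, sites)
    n = len(sites)
    Gamma = np.block([[X[idx], R[idx]], [R[idx].T, P[idx]]])
    J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
    nu = np.clip(np.abs(np.linalg.eigvals(J @ Gamma)), 0.5 + 1e-14, None)  # pairs +/- i nu_k
    f = (nu + 0.5) * np.log(nu + 0.5) - (nu - 0.5) * np.log(nu - 0.5)
    return f.sum() / 2                                   # each nu_k appears twice

X, P, R = quench_correlators(N=8, m0=1.0, m=2.0, z=2, t=0.7)
S_full = entanglement_entropy(X, P, R, list(range(8)))   # pure state: essentially zero
S_half = entanglement_entropy(X, P, R, [0, 1, 2, 3])     # subregion: nonzero
```

Replacing the k-sum by an integral over the Brillouin zone gives the thermodynamic limit used in the figures.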
Figure 1: Numerical data showing entanglement entropy as a function of time in a theory with relativistic scaling. Left: CFT prediction vs. harmonic lattice simulation with periodic BC. The numerical results correspond to ℓ = 100 and m_0 = 1, m = 10^{−6}. Right: Time derivative of EE during the thermalization process.
Figure 3: Analytic versus numerical results for the evolution of EE in the harmonic lattice. In the left panel we have set m_0 = 1 and m = 2, and in the right panel m_0 = 1 and m = 0. In the right panel the effect of the zero mode, which is not captured by the analytical picture, causes a small deviation for t > ℓ/2.
Figure 4: Mode occupation number and entropy density as a function of k for different values of the dynamical exponent. Here we set m_0 = 1. We have shown two values of the post-quench mass to see how fast n(k) and s(k) diverge for small values of the mass. Comparing these two values of m gives a sense of how fast the occupation number (and entropy density) diverges in the massless limit. The same behaviour holds for fixed post-quench mass and small pre-quench mass.
Figure 5: Left: v_g^max as a function of z for different mass parameters. Right: v_g^max as a function of m for different values of the dynamical exponent.
The existence of a maximum group velocity in this non-relativistic model, Eq.(3.4), is consistent with the general expectation of the existence of a Lieb-Robinson bound in local (latticized) QFTs [29-32] 10 11 . For any finite z there is an upper bound on the propagation velocity of the quasi-particles. The above relation shows that in the non-local limit, where the Lieb-Robinson bound is expected to break down, there is no upper bound on the velocity (see section 3.5 and [32]).
Figure 6: Analytic versus numerical results for the evolution of EE in the Lifshitz harmonic lattice. In the left panel we have set m_0 = 1 and m = 2, and in the middle (z = 2) and right (z = 3) panels we have set m_0 = 1 and m = 0.
Figure 9: Here A is the entangling region and Ā is its complement. The widened light-cones for different values of the dynamical exponent are shown. The green lines are the standard (Lorentzian) light rays with v_g^max(1) = 1, and the orange and blue ones correspond to higher dynamical exponents, for which the cone gets wider. The points marked in the middle of the region along the time direction are {t_s^max(z)}. As z gets larger, t_s^max(z) → 0 and the boundaries of the widened light-cone tend to lie on the t = 0 axis.
Figure 10: Density plot of the vacuum expectation value of [φ(0, 0), φ(x, t)] in the 2d continuum theory. The plots refer to z = 1, 2, 3 from left to right respectively. The horizontal axis is x and the vertical axis is t. In all plots we have set m = 2 and the UV cut-off to Λ = 10^3.
In what follows for simplicity we choose K = M = 1 without loss of generality.
Also [14] studied the entanglement entropy in Lifshitz theory using a holographic setup.
Although this behaviour was first found in the holographic context [16], it can also be captured by 2d CFT techniques similar to [1]; see for instance section 2.3.1 of [17]. 5 The quadratic growth regime is before local equilibrium and is not captured by the quasi-particle picture. 6 An analytical argument in support of the log t behaviour was recently given in [19]. 7 It has also been extended to multipartite subregions in [20], to more general initial states in [21][22][23][24], and to the study of Renyi entropies in [25][26][27].
For a related study of correlation functions in Lifshitz theories with d = z see[34].
For a similar analogy see [48], where the authors have argued that for a correct entanglement wedge reconstruction of Lifshitz spacetime one needs to go beyond relativistic gravity theories, for instance Hořava-Lifshitz gravity.
Acknowledgements

We would like to thank Alex Belin, Diptarka Das, Michal Heller, and Christoph Herzog for fruitful discussions. We would also like to thank Bruno Nachtergaele for correspondence. We are grateful to Pasquale Calabrese and Tadashi Takayanagi for their comments on a draft of this manuscript. AM thanks the organizers of the "Workshop on AQFT, Modular Techniques, and Renyi Entropy" held at AEI in Potsdam, where these results were first presented. AM is supported by the Alexander von Humboldt Foundation through a postdoctoral fellowship.
Evolution of entanglement entropy in one-dimensional systems. P Calabrese, J L Cardy, 10.1088/1742-5468/2005/04/P04010cond- mat/0503393J. Stat. Mech. 05044010P. Calabrese and J. L. Cardy, "Evolution of entanglement entropy in one-dimensional sys- tems," J. Stat. Mech. 0504, P04010 (2005) doi:10.1088/1742-5468/2005/04/P04010 [cond- mat/0503393].
Entanglement and thermodynamics after a quantum quench in integrable systems. V Alba, P Calabrese, PNAS. 1147947V. Alba and P. Calabrese, "Entanglement and thermodynamics after a quantum quench in integrable systems," PNAS 114, 7947 (2017).
M. Rigol, V. Dunjko, V. Yurovsky and M. Olshanii, "Relaxation in a Completely Integrable Many-Body Quantum System: An Ab Initio Study of the Dynamics of the Highly Excited States of 1D Lattice Hard-Core Bosons," Phys. Rev. Lett. 98, 050405 (2007) doi:10.1103/PhysRevLett.98.050405 [arXiv:cond-mat/0604476].
Validity of the GGE for quantum quenches from interacting to noninteracting models. S Sotiriadis, P Calabrese, 10.1088/1742-5468/2014/07/P07024arXiv:1403.7431J. Stat. Mech. 14077024cond-mat.stat-mechS. Sotiriadis and P. Calabrese, "Validity of the GGE for quantum quenches from inter- acting to noninteracting models," J. Stat. Mech. 1407, P07024 (2014) doi:10.1088/1742- 5468/2014/07/P07024 [arXiv:1403.7431 [cond-mat.stat-mech]].
Entanglement dynamics after quantum quenches in generic integrable systems. V Alba, P Calabrese, 10.21468/SciPostPhys.4.3.017arXiv:1712.07529SciPost Phys. 4317cond-mat.stat-mechV. Alba and P. Calabrese, "Entanglement dynamics after quantum quenches in generic integrable systems," SciPost Phys. 4, no. 3, 017 (2018) doi:10.21468/SciPostPhys.4.3.017 [arXiv:1712.07529 [cond-mat.stat-mech]].
On the Theory of Second-Order Phase Transitions I & II. E M Lifshitz, Zh. Eksp. Teor. Fiz. 11E. M. Lifshitz, "On the Theory of Second-Order Phase Transitions I & II," Zh. Eksp. Teor. Fiz 11 (1941) 255 & 269.
Quantum critical phenomena. J A Hertz, 10.1103/PhysRevB.14.1165Phys. Rev. B. 141165J. A. Hertz, "Quantum critical phenomena," Phys. Rev. B 14, 1165 (1976). doi:10.1103/PhysRevB.14.1165
Entanglement Entropy in Non-Relativistic Field Theories. S N Solodukhin, 10.1007/JHEP04(2010)101arXiv:0909.0277JHEP. 1004101hep-thS. N. Solodukhin, "Entanglement Entropy in Non-Relativistic Field Theories," JHEP 1004, 101 (2010) doi:10.1007/JHEP04(2010)101 [arXiv:0909.0277 [hep-th]].
Entanglement entropy and mutual information of circular entangling surfaces in the 2+1-dimensional quantum Lifshitz model. T Zhou, X Chen, T Faulkner, E Fradkin, 10.1088/1742-5468/2016/09/093101arXiv:1607.01771J. Stat. Mech. 1609993101condmat.stat-mechT. Zhou, X. Chen, T. Faulkner and E. Fradkin, "Entanglement entropy and mutual information of circular entangling surfaces in the 2+1-dimensional quantum Lifshitz model," J. Stat. Mech. 1609, no. 9, 093101 (2016) doi:10.1088/1742-5468/2016/09/093101 [arXiv:1607.01771 [cond- mat.stat-mech]].
Holographic Entanglement Entropy on Generic Time Slices. Y Kusuki, T Takayanagi, K Umemoto, 10.1007/JHEP06(2017)021arXiv:1703.00915JHEP. 170621hepthY. Kusuki, T. Takayanagi and K. Umemoto, "Holographic Entanglement Entropy on Generic Time Slices," JHEP 1706, 021 (2017) doi:10.1007/JHEP06(2017)021 [arXiv:1703.00915 [hep- th]].
Entanglement in Lifshitz-type Quantum Field Theories. M R Mohammadi Mozaffar, A Mollabashi, 10.1007/JHEP07(2017)120arXiv:1705.00483JHEP. 1707120hepthM. R. Mohammadi Mozaffar and A. Mollabashi, "Entanglement in Lifshitz-type Quantum Field Theories," JHEP 1707, 120 (2017) doi:10.1007/JHEP07(2017)120 [arXiv:1705.00483 [hep- th]].
Entanglement Entropy in Lifshitz Theories. T He, J M Magan, S Vandoren, 10.21468/SciPostPhys.3.5.034arXiv:1705.01147SciPost Phys. 3534hep-thT. He, J. M. Magan and S. Vandoren, "Entanglement Entropy in Lifshitz Theories," SciPost Phys. 3, no. 5, 034 (2017) doi:10.21468/SciPostPhys.3.5.034 [arXiv:1705.01147 [hep-th]].
Logarithmic Negativity in Lifshitz Harmonic Models. M R Mohammadi Mozaffar, A Mollabashi, 10.1088/1742-5468/aac135arXiv:1712.03731J. Stat. Mech. 1805553113hep-thM. R. Mohammadi Mozaffar and A. Mollabashi, "Logarithmic Negativity in Lifshitz Har- monic Models," J. Stat. Mech. 1805, no. 5, 053113 (2018) doi:10.1088/1742-5468/aac135 [arXiv:1712.03731 [hep-th]].
Lifshitz entanglement entropy from holographic cMERA. S A Gentle, S Vandoren, arXiv:1711.11509hep-thS. A. Gentle and S. Vandoren, "Lifshitz entanglement entropy from holographic cMERA," arXiv:1711.11509 [hep-th].
J. Alexandre, "Lifshitz-type Quantum Field Theories in Particle Physics," Int. J. Mod. Phys. A 26, 4523 (2011) doi:10.1142/S0217751X11054656 [arXiv:1109.5629 [hep-ph]].
Entanglement Tsunami: Universal Scaling in Holographic Thermalization. H Liu, S J Suh, 10.1103/PhysRevLett.112.011601arXiv:1305.7244Phys. Rev. Lett. 11211601hep-thH. Liu and S. J. Suh, "Entanglement Tsunami: Universal Scaling in Holographic Thermalization," Phys. Rev. Lett. 112, 011601 (2014) doi:10.1103/PhysRevLett.112.011601 [arXiv:1305.7244 [hep-th]].
Various Aspects of Holographic Entanglement Entropy and Mutual Information. J Jarvela, 10.3929/010735840Master's ThesisJ. Jarvela, "Various Aspects of Holographic Entanglement Entropy and Mutual Information," Master's Thesis, doi:10.3929/010735840, http://inspirehep.net/record/1598220/
Entanglement Growth after a Global Quench in Free Scalar Field Theory. J S Cotler, M P Hertzberg, M Mezei, M T Mueller, 10.1007/JHEP11(2016)166arXiv:1609.00872JHEP. 1611166hep-thJ. S. Cotler, M. P. Hertzberg, M. Mezei and M. T. Mueller, "Entanglement Growth after a Global Quench in Free Scalar Field Theory," JHEP 1611, 166 (2016) doi:10.1007/JHEP11(2016)166 [arXiv:1609.00872 [hep-th]].
Complexity and entanglement for thermofield double states. S Chapman, J Eisert, L Hackl, M P Heller, R Jefferson, H Marrochio, R C Myers, arXiv:1810.05151hep-thS. Chapman, J. Eisert, L. Hackl, M. P. Heller, R. Jefferson, H. Marrochio and R. C. Myers, "Complexity and entanglement for thermofield double states," arXiv:1810.05151 [hep-th].
V Alba, P Calabrese, arXiv:1809.09119Quantum information dynamics in multipartite integrable systems. cond-mat.stat-mechV. Alba and P. Calabrese, "Quantum information dynamics in multipartite integrable sys- tems," arXiv:1809.09119 [cond-mat.stat-mech].
Entanglement and quantum transport in integrable systems. V Alba, 10.1103/PhysRevB.97.245135arXiv:1706.00020Phys. Rev. B. 9724245135cond-mat.stat-mechV. Alba, "Entanglement and quantum transport in integrable systems," Phys. Rev. B 97, no. 24, 245135 (2018) doi:10.1103/PhysRevB.97.245135 [arXiv:1706.00020 [cond-mat.stat-mech]].
Spreading of entanglement and correlations after a quench with intertwined quasiparticles. A Bastianello, P Calabrese, 10.21468/SciPostPhys.5.4.033arXiv:1807.10176SciPost Phys. 533cond-mat.stat-mechA. Bastianello and P. Calabrese, "Spreading of entanglement and correlations after a quench with intertwined quasiparticles," SciPost Phys. 5, 033 (2018) doi:10.21468/SciPostPhys.5.4.033 [arXiv:1807.10176 [cond-mat.stat-mech]].
Entanglement evolution and generalised hydrodynamics: noninteracting systems. B Bertini, M Fagotti, L Piroli, P Calabrese, 10.1088/1751-8121/aad82earXiv:1805.01884J. Phys. A. 5139cond-mat.stat-mechB. Bertini, M. Fagotti, L. Piroli and P. Calabrese, "Entanglement evolution and generalised hy- drodynamics: noninteracting systems," J. Phys. A 51, no. 39, 39LT01 (2018) doi:10.1088/1751- 8121/aad82e [arXiv:1805.01884 [cond-mat.stat-mech]].
Entanglement and diagonal entropies after a quench with no pair structure. B Bertini, E Tartaglia, P Calabrese, 10.1088/1742-5468/aac73farXiv:1802.10589J. Stat. Mech. 1806663104cond-mat.stat-mechB. Bertini, E. Tartaglia and P. Calabrese, "Entanglement and diagonal entropies after a quench with no pair structure," J. Stat. Mech. 1806, no. 6, 063104 (2018) doi:10.1088/1742-5468/aac73f [arXiv:1802.10589 [cond-mat.stat-mech]].
Rényi entropies of generic thermodynamic macrostates in integrable systems. M Mestyán, V Alba, P Calabrese, 10.1088/1742-5468/aad6b9arXiv:1806.00624J. Stat. Mech. 1808883104cond-mat.stat-mechM. Mestyán, V. Alba and P. Calabrese, "Rényi entropies of generic thermodynamic macrostates in integrable systems," J. Stat. Mech. 1808, no. 8, 083104 (2018) doi:10.1088/1742-5468/aad6b9 [arXiv:1806.00624 [cond-mat.stat-mech]].
Rényi entropies after releasing the Néel state in the XXZ spinchain. V Alba, P Calabrese, 10.1088/1742-5468/aa934carXiv:1709.02193J. Stat. Mech. 113105condmat.stat-mechV. Alba and P. Calabrese, "Rényi entropies after releasing the Néel state in the XXZ spin- chain," J. Stat. Mech. (2017) 113105 doi:10.1088/1742-5468/aa934c [arXiv:1709.02193 [cond- mat.stat-mech]].
Quench action and Renyi entropies in integrable systems. V Alba, P Calabrese, 10.1103/PhysRevB.96.115421arXiv:1705.10765Phys. Rev. B. 9611115421condmat.stat-mechV. Alba and P. Calabrese, "Quench action and Renyi entropies in integrable systems," Phys. Rev. B 96, no. 11, 115421 (2017) doi:10.1103/PhysRevB.96.115421 [arXiv:1705.10765 [cond- mat.stat-mech]].
Quantum Quenches in Extended Systems. P Calabrese, J Cardy, 10.1088/1742-5468/2007/06/P06008arXiv:0704.1880J. Stat. Mech. 07066008cond-mat.stat-mechP. Calabrese and J. Cardy, "Quantum Quenches in Extended Systems," J. Stat. Mech. 0706, P06008 (2007) doi:10.1088/1742-5468/2007/06/P06008 [arXiv:0704.1880 [cond-mat.stat-mech]].
The finite group velocity of quantum spin systems. E H Lieb, D W Robinson, 10.1007/BF01645779Communications in Mathematical Physics. 28E. H. Lieb and D. W. Robinson, "The finite group velocity of quantum spin systems," In: Communications in Mathematical Physics 28.3 (1972), 251-257, doi: 10.1007/BF01645779.
Dynamics of Correlations and Quantum Phase Transitions in Bosonic Lattice Systems. O Buerschaper, MunichLudwig-Maximilians UniversityDiploma ThesisO. Buerschaper "Dynamics of Correlations and Quantum Phase Transitions in Bosonic Lattice Systems," Diploma Thesis, Ludwig-Maximilians University, Munich, 2007.
Lieb-Robinson bounds in quantum many-body physics. B Nachtergaele, R Sims, arXiv:1004.2086Entropy and the Quantum. Contemp. Math (2010). math-phB. Nachtergaele and R. Sims, "Lieb-Robinson bounds in quantum many-body physics," in: "Entropy and the Quantum," Contemp. Math. (2010), pp. 141-176 [arXiv:1004.2086 [math-ph]].
. M R M Mozaffar, A Mollabashi, in preparationM.R.M. Mozaffar, A.Mollabashi, in preparation.
Lieb-Robinson Bound and the Butterfly Effect in Quantum Field Theories. D A Roberts, B Swingle, 10.1103/PhysRevLett.117.091602arXiv:1603.09298Phys. Rev. Lett. 117991602hep-thD. A. Roberts and B. Swingle, "Lieb-Robinson Bound and the Butterfly Ef- fect in Quantum Field Theories," Phys. Rev. Lett. 117, no. 9, 091602 (2016) doi:10.1103/PhysRevLett.117.091602 [arXiv:1603.09298 [hep-th]].
Correlation functions in theories with Lifshitz scaling. V Keranen, W Sybesma, P Szepietowski, L Thorlacius, 10.1007/JHEP05(2017)033arXiv:1611.09371JHEP. 170533hep-thV. Keranen, W. Sybesma, P. Szepietowski and L. Thorlacius, "Correlation functions in theories with Lifshitz scaling," JHEP 1705, 033 (2017) doi:10.1007/JHEP05(2017)033 [arXiv:1611.09371 [hep-th]].
Entanglement dynamics in short and long-range harmonic oscillators. M Ghasemi Nezhadhaghighi, M A Rajabpour, 10.1103/PhysRevB.90.205438arXiv:1408.3744Phys. Rev. B. 9020205438cond-mat.stat-mechM. Ghasemi Nezhadhaghighi and M. A. Rajabpour, "Entanglement dynamics in short and long-range harmonic oscillators," Phys. Rev. B 90, no. 20, 205438 (2014) doi:10.1103/PhysRevB.90.205438 [arXiv:1408.3744 [cond-mat.stat-mech]].
Quantum quench in long-range field theories. M A Rajabpour, S Sotiriadis, 10.1103/PhysRevB.91.045131arXiv:1409.6558Phys. Rev. B. 91445131cond-mat.str-elM. A. Rajabpour and S. Sotiriadis, "Quantum quench in long-range field theories," Phys. Rev. B 91, no. 4, 045131 (2015) doi:10.1103/PhysRevB.91.045131 [arXiv:1409.6558 [cond-mat.str-el]].
Gravity duals of Lifshitz-like fixed points. S Kachru, X Liu, M Mulligan, 10.1103/PhysRevD.78.106005arXiv:0808.1725Phys. Rev. D. 78106005hep-thS. Kachru, X. Liu and M. Mulligan, "Gravity duals of Lifshitz-like fixed points," Phys. Rev. D 78, 106005 (2008) doi:10.1103/PhysRevD.78.106005 [arXiv:0808.1725 [hep-th]].
Zero Modes and Entanglement Entropy. Y K Yazdi, 10.1007/JHEP04(2017)140arXiv:1608.04744JHEP. 1704140hep-thY. K. Yazdi, "Zero Modes and Entanglement Entropy," JHEP 1704, 140 (2017) doi:10.1007/JHEP04(2017)140 [arXiv:1608.04744 [hep-th]].
Time Evolution of Entanglement Entropy from Black Hole Interiors. T Hartman, J Maldacena, 10.1007/JHEP05(2013)014arXiv:1303.1080JHEP. 130514hep-thT. Hartman and J. Maldacena, "Time Evolution of Entanglement Entropy from Black Hole Interiors," JHEP 1305, 014 (2013) doi:10.1007/JHEP05(2013)014 [arXiv:1303.1080 [hep-th]].
Thermalization in backgrounds with hyperscaling violating factor. M Alishahiha, A Astaneh, M R Mohammadi Mozaffar, 10.1103/PhysRevD.90.046004arXiv:1401.2807Phys. Rev. D. 90446004hep-thM. Alishahiha, A. Faraji Astaneh and M. R. Mohammadi Mozaffar, "Thermalization in backgrounds with hyperscaling violating factor," Phys. Rev. D 90, no. 4, 046004 (2014) doi:10.1103/PhysRevD.90.046004 [arXiv:1401.2807 [hep-th]].
Holographic thermalization with Lifshitz scaling and hyperscaling violation. P Fonda, L Franti, V Kernen, E Keski-Vakkuri, L Thorlacius, E Tonni, 10.1007/JHEP08(2014)051arXiv:1401.6088JHEP. 140851hep-thP. Fonda, L. Franti, V. Kernen, E. Keski-Vakkuri, L. Thorlacius and E. Tonni, "Holo- graphic thermalization with Lifshitz scaling and hyperscaling violation," JHEP 1408, 051 (2014) doi:10.1007/JHEP08(2014)051 [arXiv:1401.6088 [hep-th]].
S F Lokhande, arXiv:1808.09979Spread of Entanglement in Non-Relativistic Theories. hep-thS. F. Lokhande, "Spread of Entanglement in Non-Relativistic Theories," arXiv:1808.09979 [hep-th].
Universality of Abrupt Holographic Quenches. A Buchel, R C Myers, A Van Niekerk, 10.1103/PhysRevLett.111.201602arXiv:1307.4740Phys. Rev. Lett. 111201602hep-thA. Buchel, R. C. Myers and A. van Niekerk, "Universality of Abrupt Holographic Quenches," Phys. Rev. Lett. 111, 201602 (2013) doi:10.1103/PhysRevLett.111.201602 [arXiv:1307.4740 [hep-th]].
Quantum Quenches in Free Field Theory: Universal Scaling at Any Rate. S R Das, D A Galante, R C Myers, 10.1007/JHEP05(2016)164arXiv:1602.08547JHEP. 1605164hep-thS. R. Das, D. A. Galante and R. C. Myers, "Quantum Quenches in Free Field The- ory: Universal Scaling at Any Rate," JHEP 1605, 164 (2016) doi:10.1007/JHEP05(2016)164 [arXiv:1602.08547 [hep-th]].
Quantum Quench and Scaling of Entanglement Entropy. P Caputa, S R Das, M Nozaki, A Tomiya, 10.1016/j.physletb.2017.06.017arXiv:1702.04359Phys. Lett. B. 77253hep-thP. Caputa, S. R. Das, M. Nozaki and A. Tomiya, "Quantum Quench and Scaling of Entanglement Entropy," Phys. Lett. B 772, 53 (2017) doi:10.1016/j.physletb.2017.06.017 [arXiv:1702.04359 [hep-th]].
An exactly solvable quench protocol for integrable spin models. D Das, S R Das, D A Galante, R C Myers, K Sengupta, 10.1007/JHEP11(2017)157arXiv:1706.02322JHEP. 1711157hep-thD. Das, S. R. Das, D. A. Galante, R. C. Myers and K. Sengupta, "An exactly solvable quench protocol for integrable spin models," JHEP 1711, 157 (2017) doi:10.1007/JHEP11(2017)157 [arXiv:1706.02322 [hep-th]].
. M R M Mozaffar, A Mollabashi, work in progressM.R.M. Mozaffar, A.Mollabashi, work in progress.
Constructing entanglement wedges for Lifshitz spacetimes with Lifshitz gravity. J Cheyne, D Mattingly, 10.1103/PhysRevD.97.066024arXiv:1707.05913Phys. Rev. D. 97666024gr-qcJ. Cheyne and D. Mattingly, "Constructing entanglement wedges for Lifshitz spacetimes with Lifshitz gravity," Phys. Rev. D 97, no. 6, 066024 (2018) doi:10.1103/PhysRevD.97.066024 [arXiv:1707.05913 [gr-qc]].
| [] |
[] | [
"Eds C M Urry \nDemographics of Blazars\nCenter for Astrophysics and Space Sciences\nUniversity of California at San Diego\n9500 Gilman Drive, La Jolla92093-0424CAUSA\n",
"P Padovani \nDemographics of Blazars\nCenter for Astrophysics and Space Sciences\nUniversity of California at San Diego\n9500 Gilman Drive, La Jolla92093-0424CAUSA\n",
"Giovanni Fossati \nDemographics of Blazars\nCenter for Astrophysics and Space Sciences\nUniversity of California at San Diego\n9500 Gilman Drive, La Jolla92093-0424CAUSA\n"
] | [
"Demographics of Blazars\nCenter for Astrophysics and Space Sciences\nUniversity of California at San Diego\n9500 Gilman Drive, La Jolla92093-0424CAUSA",
"Demographics of Blazars\nCenter for Astrophysics and Space Sciences\nUniversity of California at San Diego\n9500 Gilman Drive, La Jolla92093-0424CAUSA",
"Demographics of Blazars\nCenter for Astrophysics and Space Sciences\nUniversity of California at San Diego\n9500 Gilman Drive, La Jolla92093-0424CAUSA"
] | [
"Blazar Demographics and Physics ASP Conference Series"
] | We discuss the preliminary results of an extensive effort to address the fundamental, and yet un-answered, question that can be trivialized as: "are there more blue or red blazars?". This problematic is tightly connected with the much debated issue of the unified picture(s) of radio-loud AGNs, which in turn revolves around the existence, and the properties of relativistic jets. We address this question by comparing -simultaneously-the properties of the collection of heterogeneously selected samples that are available now, with the predictions of a set of plausible unifications scenarios. We show that it is already possible to make significant progress even by using only the present samples. The important role of selection effects is discussed. For instance we show that the multiple flux selections typical of available surveys could induce some of the correlations found in color-color diagrams. These latter results should apply to any study of flux limited samples. | null | [
"https://arxiv.org/pdf/astro-ph/0012467v1.pdf"
] | 119,352,786 | astro-ph/0012467 | 200533b68b7636a82ca64d0cbc086ebdf342935a |
200
Eds C M Urry
Demographics of Blazars
Center for Astrophysics and Space Sciences
University of California at San Diego
9500 Gilman Drive, La Jolla92093-0424CAUSA
P Padovani
Demographics of Blazars
Center for Astrophysics and Space Sciences
University of California at San Diego
9500 Gilman Drive, La Jolla92093-0424CAUSA
Giovanni Fossati
Demographics of Blazars
Center for Astrophysics and Space Sciences
University of California at San Diego
9500 Gilman Drive, La Jolla92093-0424CAUSA
Blazar Demographics and Physics ASP Conference Series
200
We discuss the preliminary results of an extensive effort to address the fundamental, and yet un-answered, question that can be trivialized as: "are there more blue or red blazars?". This problem is tightly connected with the much debated issue of the unified picture(s) of radio-loud AGNs, which in turn revolves around the existence, and the properties, of relativistic jets. We address this question by comparing -simultaneously-the properties of the collection of heterogeneously selected samples that are available now, with the predictions of a set of plausible unification scenarios. We show that it is already possible to make significant progress even by using only the present samples. The important role of selection effects is discussed. For instance we show that the multiple flux selections typical of available surveys could induce some of the correlations found in color-color diagrams. These latter results should apply to any study of flux limited samples.
The factor of 100 problem
More than 95% of all catalogued blazars have been found in either shallow radio or shallow X-ray surveys (e.g. see Padovani, these proceedings). Because of the range of blazar spectral energy distributions (SED) the two selection methods yield different types, the "red" objects (with the peak of the synchrotron emission at IR-optical wavelengths, LBL) in radio samples, and the "blue" (whose synchrotron emission peaks at UV-X-ray wavelengths, HBL) in X-ray samples. The differences in the SEDs do reflect different physical states but only as the extrema of an underlying continuous population.
The relative space densities of the different types, not to mention their absolute space densities or their evolution in cosmic time, still remain indeterminate. Different scenarios predict a difference of two orders of magnitude (!) in the ratio of the "red" and "blue" types; nevertheless, the presently available samples are unable to distinguish between them. The blazar demographics are this uncertain essentially because the flux limits of current complete samples are high, so only the tip of the population is sampled. The interpretation of the observed phenomenology depends on the complicated sensitivity of diverse surveys to a range of spectral types. Ultimately, this means we do not know which kind of jets nature preferentially makes: those with high B and γ e ("blue" blazars) or low B and γ e ("red" blazars). We also do not know whether they evolve differently and/or if "red" blazars dominate at high redshift and evolve into "blue" blazars at low redshift, and what is the relationship between the "non-thermal" and "thermal" power/components. The implications for understanding jet formation are obvious.
Here we present a concise account of the preliminary results of numerical simulations of a set of unification models, including an actual fit of the model parameters to reproduce the general characteristics of a few reference samples ( §2). We also introduce a "concept" experiment, devised to address the role of selection effects ( §3), and discuss a couple of issues that are connected to this problem. In §4 we comment on future developments.
Testing unification scenarios
We compared the existing surveys with a set of three alternative unified schemes, following the discussion developed in recent years after Padovani & Giommi (1995) and Fossati et al. (1997, 1998). They are: i) the "radio-leading", where the primary 1 luminosity is the radio one and N LBL > N HBL . ii) The "X-ray-leading", where the primary band is the X-ray one, and N LBL < N HBL . iii) The "bolometric", where the SED properties (and in turn the distribution of L X /L R , i.e. the balance between LBL and HBL) are determined by the total power of the source, with HBLs being the less powerful objects. In Fossati et al. (1997) the input parameters of each model were pre-set to values based on those of the observed samples. The most interesting result was the success of the new model, the bolometric one.
The fit method, and results
In this work our approach is different. First we normalize/optimize each unifying scheme by performing an actual fit to three reference samples (EMSS, Slew, 1 Jy). We leave 7-8 variables free to vary, such as the normalization and slope of the primary luminosity function, and the distribution of the L X /L R ratio 2 . For those parameters for which there is a measured value (e.g. the luminosity function) we allowed their values to move within their 2σ interval. The observational quantities to reproduce were the number, and the average radio and X-ray luminosities, of HBLs and LBLs.

Figure 2. Radio log(N)-log(S) predicted by the (a) bolometric and the (b) radio-leading models for the DXRBS sample. (c) Bolometric and radio-leading predictions for the "sedentary" sample; the grey-filled square represents the observed density.
The technique used for the fit is "simulated annealing" (e.g. Kirkpatrick et al. 1983), which is based on statistical mechanics and implemented via Monte Carlo. It is a very robust technique, very well suited for many-parameter fits. Moreover the "global" nature of the technique is very effective for cases where there might be multiple secondary local minima in the parameter space.
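The annealing step itself is generic. As an illustration only (this is not the authors' code, and the χ² below is a toy quadratic stand-in for the real one built from sample counts and luminosities), a minimal simulated-annealing minimizer with a Metropolis acceptance rule and geometric cooling might look like:

```python
import math
import random

def chi2(params, targets):
    # Toy chi-squared: quadratic distance from the "observed" quantities.
    return sum((p - t) ** 2 for p, t in zip(params, targets))

def anneal(cost, x0, steps=20000, t0=1.0, cooling=0.9995, step=0.1, seed=0):
    """Minimal simulated annealing: Gaussian proposals plus Metropolis
    acceptance, with a geometric cooling schedule."""
    rng = random.Random(seed)
    x, e = list(x0), cost(x0)
    best, e_best = list(x0), e
    t = t0
    for _ in range(steps):
        trial = [xi + rng.gauss(0.0, step) for xi in x]
        e_trial = cost(trial)
        # Downhill moves are always accepted; uphill moves with
        # probability exp(-dE/T), which lets the walker escape local minima.
        if e_trial < e or rng.random() < math.exp(-(e_trial - e) / t):
            x, e = trial, e_trial
            if e < e_best:
                best, e_best = list(x), e
        t *= cooling
    return best, e_best

targets = [9.7, 6.9, 6.6]  # hypothetical "observed" numbers to reproduce
best, e_best = anneal(lambda p: chi2(p, targets), [0.0, 0.0, 0.0])
```

The acceptance of occasional uphill moves is what makes the method "global": at high temperature the walker explores freely, and the slow cooling gradually freezes it into the best region found.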
In Fig. 1 we show examples of the evolution of the fit. Here we just point out interesting results concerning two of the "core" issues: i) the best fit of the bolometric model requires a finite width for the L-ν peak relationship (see Fig. 1b). The best fit value is σ ≃ 0.6, i.e. at any given L the synchrotron peak frequency will be distributed as a Gaussian of width σ centered at the ν peak value determined by the relationship. ii) In both the radio- and X-ray-leading cases (the latter not shown) the best fit L X /L R distribution is basically a single, broad Gaussian (see Fig. 1c). For the radio-leading case the LBL Gaussian comprises 98% of the total area, and it is centered at ≃ −6.3 with σ ≃ 0.6.
Comparison with real samples
The next step is to use the results of the fits to predict the properties of samples that have not been used to optimize the parameters of the models. We present here only the integral log(N)-log(S) curves, and we only sub-divide the samples in HBL/LBL (according to the values of F X /F R ). It is worth noting that the absolute normalizations may not be completely reliable, because of uncertainties on the sky coverage. The uncertainty on the (details of) sky coverage is indeed probably the main one involved in the simulations. The relative fraction of HBL and LBL may be a more robust parameter, and it is the one more easily amenable to a quick comparison.
DXRBS

The DXRBS sample (Perlman et al. 1998) is still in progress, but an "off record" comparison of the predicted log(N)-log(S) (shown in Fig. 2a,b) with the observed one seems to show that the models are (still) in good agreement with the data. The predictions of the bolometric and radio-leading models become radically different below about 100 mJy, a domain now reachable. The LBL/HBL density ratios at a few radio flux limits are the following:
Flux @5 GHz    Bolometric    X-ray leading    Radio-leading
@300 mJy           9.7            6.9              6.6
@150 mJy           6.2            5.7              6.4
@50 mJy            3.2            5.2              6.1
Sedentary survey
The "sedentary" sample (Giommi, Menna & Padovani 1999) comprises only HBLs because of the built-in cut in α RX . The radio log(N)log(S) is shown in Fig. 2c, where the grey square represent a density @10 mJy between the actual "sedentary" and the EMSS, showing that there is a quite good agreement. Here, as for the DXRBS, we do not plot the predictions of the X-ray leading model because they are not satisfactory. In fact this scenario does not seem to be able to explain the properties of these recent samples, at least with its parameters set at the best fit values.
Going deeper
In Fig. 3a,b we show the predictions of the 3 scenarios for the number densities of HBLs and LBLs in radio surveys with a secondary cut in optical magnitude at m V =20, and m V =22. We see that the X-ray leading model gives a substantially different answer from the two other competing models, which seem to agree over most of the accessible radio flux range. The bolometric and radio-leading models actually start to give different predictions only at very faint radio fluxes, as seen in Fig. 3b. In the radio-leading model the radio counts of HBLs and LBLs keep a fixed ratio by definition, while in the bolometric picture HBLs are deemed to eventually outnumber the LBLs, but this seems to happen at radio fluxes lower than expected. However, we are not far from the range of radio fluxes that will be the most sensitive to discriminate among the different pictures. Actually there are already a few samples going deep enough.
The "cube" (caveat #1)
To try to assess the problem of selection effects we introduced the "cube" (Fossati & Urry, in preparation), a toy model stripped of every a priori assumption as to the presumed intrinsic properties of the SED. We assume that the radio/optical/X-ray luminosities are completely uncorrelated, and we take simple power law luminosity functions. We then simulate samples of sources that would be selected by a generic flux limited radio or X-ray survey (including the flux dependent sky coverage), with a possible additional cut in another spectral band. An example of the results of this exercise is shown in Fig. 4a. It seems to be relatively "natural" to obtain patterns in a color-color diagram which look like those that are actually observed, and promptly interpreted as tracing intrinsic properties of the sources. Of course the "cube" is not able to reproduce the large variety of patterns and correlations observed in luminosity-luminosity and color-color diagrams; nevertheless, we regard it as a very instructive example of how careful we need to be when dealing with selection effects.
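A stripped-down version of the "cube" idea can be sketched in a few lines (our own toy illustration, with made-up units, luminosity ranges and flux cuts, not the actual simulation): draw uncorrelated log-luminosities in two bands, place the sources uniformly in Euclidean volume, and apply a flux limit in one band.

```python
import math
import random

def simulate(n=20000, seed=1):
    """Toy 'cube': radio and X-ray log-luminosities drawn *independently*
    (flat in log over four decades), sources uniform in Euclidean volume."""
    rng = random.Random(seed)
    sources = []
    for _ in range(n):
        log_lr = rng.uniform(0.0, 4.0)
        log_lx = rng.uniform(0.0, 4.0)
        d = (1.0 - rng.random()) ** (1.0 / 3.0)   # d^3 uniform in (0, 1]
        log_fr = log_lr - 2.0 * math.log10(d)     # log flux, arbitrary units
        log_fx = log_lx - 2.0 * math.log10(d)
        sources.append((log_fr, log_fx))
    return sources

def mean_color(sample):
    # "Color" = log(f_X / f_R); the distance factor cancels in the color,
    # so any bias comes purely from which sources pass the flux cut.
    return sum(fx - fr for fr, fx in sample) / len(sample)

sources = simulate()
radio_sel = [s for s in sources if s[0] > 3.5]    # radio flux limit
xray_sel = [s for s in sources if s[1] > 3.5]     # X-ray flux limit
```

Even though log(f_X/f_R) averages to zero over the full population, the radio-limited subsample comes out strongly "red" and the X-ray-limited one strongly "blue", a color pattern created purely by the selection.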
2.5. Caveat #2: on cutting in F X /F R color

Figure 4b shows the log(N)-log(S) of observed extreme HBLs (lower dashed line) and of intrinsic extreme HBLs (upper solid line), defined as such according to the observed or intrinsic X-ray/radio ratio. Because of the K-correction and their SED shape, blazars systematically shift towards the LBL side when seen at higher redshift, when "classified" on the basis of the observed X-ray/radio ratio. The effect can be appreciable when comparing the relative populations of HBLs and LBLs.
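The size of this redshift drift is easy to estimate for pure power-law spectra f_ν ∝ ν^−α: each monochromatic flux picks up a K-correction factor (1+z)^(1−α), so the observed log(f_X/f_R) shifts by (α_R − α_X) log(1+z). The spectral indices below are generic illustrative values, not the ones used in the actual samples:

```python
import math

def k_shift(z, alpha_r=0.0, alpha_x=1.5):
    """Shift of the observed log(f_X/f_R) relative to its rest-frame value
    for power-law spectra f_nu ~ nu^-alpha: each monochromatic flux is
    K-corrected by a factor (1+z)**(1-alpha)."""
    return (alpha_r - alpha_x) * math.log10(1.0 + z)

shifts = {z: k_shift(z) for z in (0.0, 0.5, 1.0, 2.0)}
```

With α_X steeper than α_R, a source at z = 1 already looks about 0.45 dex "redder" in the observed X-ray/radio ratio, which is enough to move borderline objects across an HBL/LBL dividing line.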
Conclusions: blazar demographics and not-so-perfect samples
On the basis of the analysis presented here, we think that there might already be enough information available to proceed to constrain meaningfully the main features of unified scenarios. The comparison of observed samples with simulations performed in a systematic fashion (e.g. by means of a simultaneous fit) may provide an extremely powerful and effective tool to address the problem of the intrinsic properties of blazars.
In fact, although there is not a single sample comprising all the desirable characteristics to provide the least biased possible picture of the intrinsic properties of blazars, the F X /F R plane is now well covered (see Fig. 1, 2 in Padovani's contribution). Moreover, the quality of the most recent samples will allow us to compare the predictions and the data directly by using the distribution of α RX , an important step forward, past some of the confusion created by selection effects combined with the "two bins" approach (e.g. §2.5).
If the selection biases of each of the surveys can be regarded as being under control (and therefore reliably implemented in the simulations) we may soon be able not only to test a given unified scheme, but even to derive directly from the data what the general properties of a successful unified scheme should be.
Finally, we think that more than ever it is necessary to shift the focus away from the BL Lac sub-class, because this could still be the source of significant confusion. The best progress could be made by considering the BL Lac-FSRQ relationship as a whole, also from the observational point of view. The bolometric scenario was meant from the beginning to unify BL Lacs and FSRQs, and it tries to connect some basic physical ideas to the observed phenomenology. On the other hand we need to figure out how to explain the HBL/LBL ratios assumed by the radio and X-ray leading scenarios, and in turn how to extend these models to include the FSRQs smoothly. There should be a way to tell from "first principles" on which side of the 1/10-10/1 range the real value of the N HBL /N LBL ratio is more likely to lie.
Figure 1. Evolution during the fit of the values of (a) χ 2 and (b) of the width σ of the L-ν peak relationship for the bolometric model. In (c) is plotted the area of the "LBL Gaussian" for the radio-leading model.
Figure 3. Radio log(N)-log(S) predicted by the bolometric/radio-/X-ray-leading models: HBLs and LBLs for a sample with an additional cut at (a) m V <20, or (b) m V <22 .
Figure 4. (a) α RO vs. α OX diagram with the density contours predicted by the "cube" for an EMSS-like (dotted), a DXRBS-like (dashed), and a radio (solid) selected sample. (b) "Observed" (empty symbols) and "intrinsic" (filled) log(N)-log(S) of extreme HBLs.
Defined as the band where a flux limited selection would be objective with respect to the range of intrinsic properties.
Note on L X /L R : for the bolometric scenario we allow for a spread in the relationship between peak frequency and luminosity. For the radio and X-ray leading scenarios we use the combination of two Gaussians, for which we fit the mean, σ and area.
Acknowledgments. I'd like to thank the organizers for a great workshop, and for bearing with my request of delaying my talk by one day, and Ilaria Cagnoni for very kindly accepting to swap our talks in the schedule. I also thank pippol for the neverending support.
Fossati, G., et al. 1997, MNRAS, 289, 136
Fossati, G., et al. 1998, MNRAS, 299, 433
Giommi, P., Menna, M.T., & Padovani, P. 1999, MNRAS, 310, 465
Kirkpatrick, S., et al. 1983, Science, 220, 671
Padovani, P., & Giommi, P. 1995, ApJ, 444, 567
Perlman, E., et al. 1998, AJ, 115, 1253
| [] |
[
"HIGGS BOSONS IN THE STANDARD MODEL AND THE MINIMAL SUPERSYMMETRIC STANDARD MODEL a",
"HIGGS BOSONS IN THE STANDARD MODEL AND THE MINIMAL SUPERSYMMETRIC STANDARD MODEL a"
] | [
"M Quiros \nInstituto de Estructura de la Materia\nSerrano 12328006MadridSpain\n"
] | [
"Instituto de Estructura de la Materia\nSerrano 12328006MadridSpain"
] | [] | In these lectures we present a brief review of the Higgs boson sector in the Standard Model, and its Minimal Supersymmetric Extension, with particular emphasis on the main mechanisms for Higgs production and decay at LEP2 and LHC, and theoretical bounds on the Higgs boson masses. In the Standard Model the effective potential can develop a non-standard minimum for values of the field much larger than the weak scale. Comparison of the decay rate to the non-standard minimum at finite (and zero) temperature with the corresponding expansion rate of the Universe allows one to identify the region in the (M H , Mt)-plane which can be accommodated by the theory. In the Minimal Supersymmetric Standard Model, approximate analytical expressions for the Higgs mass spectrum and couplings are worked out. An appropriate treatment of squark decoupling allows one to consider large values of the stop mixing parameters and thus fix a reliable upper bound on the mass of the lightest CP-even Higgs boson. The discovery of the Higgs boson at LEP2 might put an upper bound (below the Planck scale) on the scale of new physics Λ and eventually disentangle between the Standard Model and the Minimal Supersymmetric Standard Model. | null | [
"https://export.arxiv.org/pdf/hep-ph/9609392v1.pdf"
] | 14,622,568 | hep-ph/9609392 | dafd731967f6aa3c5a6590b69f537497806b906e |
HIGGS BOSONS IN THE STANDARD MODEL AND THE MINIMAL SUPERSYMMETRIC STANDARD MODEL a
arXiv:hep-ph/9609392v1 17 Sep 1996 September 1995
M Quiros
Instituto de Estructura de la Materia
Serrano 12328006MadridSpain
HIGGS BOSONS IN THE STANDARD MODEL AND THE MINIMAL SUPERSYMMETRIC STANDARD MODEL a
arXiv:hep-ph/9609392v1 17 Sep 1996. a Based on lectures given at the XXIV INTERNATIONAL MEETING ON FUNDAMENTAL PHYSICS: From Tevatron to LHC, 22-26 April, 1996, Gandía (Valencia) Spain.
In these lectures we present a brief review of the Higgs boson sector in the Standard Model, and its Minimal Supersymmetric Extension, with particular emphasis on the main mechanisms for Higgs production and decay at LEP2 and LHC, and theoretical bounds on the Higgs boson masses. In the Standard Model the effective potential can develop a non-standard minimum for values of the field much larger than the weak scale. Comparison of the decay rate to the non-standard minimum at finite (and zero) temperature with the corresponding expansion rate of the Universe allows one to identify the region in the (M H , Mt)-plane which can be accommodated by the theory. In the Minimal Supersymmetric Standard Model, approximate analytical expressions for the Higgs mass spectrum and couplings are worked out. An appropriate treatment of squark decoupling allows one to consider large values of the stop mixing parameters and thus fix a reliable upper bound on the mass of the lightest CP-even Higgs boson. The discovery of the Higgs boson at LEP2 might put an upper bound (below the Planck scale) on the scale of new physics Λ and eventually disentangle between the Standard Model and the Minimal Supersymmetric Standard Model.
IEM-FT-141/95
September 1995
Higgs bosons in the Standard Model
In this lecture we will review some elementary and/or well established features of the Higgs sector in the Standard Model (SM) 1 . Most of it should be viewed as an introduction for beginners and/or students in the field, though we have also presented some recent results on Higgs mass bounds obtained by this author in various collaborations. The methods used to obtain the latter results are sometimes technical. Therefore, we have simplified the analysis and presented only the relevant results.
Why a Higgs boson?
The Higgs mechanism 2 is the simplest mechanism to induce spontaneous symmetry breaking of a gauge theory. In particular, in the Standard Model of electroweak interactions it achieves the breaking
SU(2)_L \times U(1)_Y \longrightarrow U(1)_{\rm em} \qquad (1)
in a renormalizable quantum field theory, and gives masses to the gauge bosons W ± , Z, the Higgs boson and the fermions. The SM fermions are given by 3
q_L = \begin{pmatrix} u_L \\ d_L \end{pmatrix}_{1/6}, \quad (u_R)_{2/3}, \quad (d_R)_{-1/3}; \qquad \ell_L = \begin{pmatrix} \nu_L \\ \ell_L \end{pmatrix}_{-1/2}, \quad (\ell_R)_{-1} \qquad (2)
where the hypercharge Y is related to the electric charge Q by Q = T_3 + Y, and we are using the notation f = f_L + f_R, with
f_L = (1/2)(1 − γ_5) f ,  f_R = (1/2)(1 + γ_5) f   (3)
The Higgs boson is an SU (2) L doublet, as given by
H = ( χ^+ , (Φ + iχ^0)/√2 )^T_{1/2}   (4)
The physical Higgs φ is related to Φ by Φ = φ + v, where v = (√2 G_F)^{−1/2} = 246.22 GeV is the vacuum expectation value (VEV) of the Higgs. The (massless) fields χ^±, χ^0 are the Goldstone bosons.
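The relation v = (√2 G_F)^{−1/2} can be checked numerically; the value of G_F used below is the standard PDG-style input, which is an assumption (the text quotes only the resulting v):

```python
import math

# Fermi constant in GeV^-2 (assumed input; the text quotes only v)
G_F = 1.16637e-5

# v = (sqrt(2) * G_F)^(-1/2)
v = (math.sqrt(2.0) * G_F) ** -0.5
print(round(v, 2))  # ~246.22 GeV
```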
A mass term for gauge bosons V_µ, as (1/2) M_V^2 V_µ V^µ, is not gauge invariant and would spoil the renormalizability properties of the theory. A mass term for fermions, m_u q̄_L u_R + m_d q̄_L d_R + m_ℓ ℓ̄_L ℓ_R, does not even exist (it is not SU(2)_L × U(1)_Y invariant). Both goals can be achieved through the Higgs mechanism 2 .
One can write the part of the SM Lagrangian giving rise to mass terms as

L = (D_µ H)^† (D^µ H) − ( h_d q̄_L H d_R + h_u q̄_L H^c u_R + h_ℓ ℓ̄_L H ℓ_R + h.c. ) − V(H)   (5)

where H^c ≡ iσ_2 H^*, the covariant derivative D_µ of the Higgs field is defined by

D_µ H ≡ ( ∂_µ + i g (σ/2)·W_µ + i g′ (1/2) B_µ ) H   (6)

and the Higgs potential by

V(H) = −µ^2 H^†H + (λ/2) (H^†H)^2   (7)
Minimization of (7) yields,
⟨0|H|0⟩ ≡ (v/√2) ( 0 , 1 )^T ;  v = √(2µ^2/λ)   (8)
Replacing now Φ = φ + v into (5) yields:
L = −(1/4) g^2 v^2 W^+_µ W^{µ−} − (1/8) v^2 (Z_µ , A_µ) diag(g^2 + g′^2 , 0) (Z^µ , A^µ)^T − (v h_u/√2) ūu − (v h_d/√2) d̄d − (v h_ℓ/√2) ℓ̄ℓ   (9)
where
W^±_µ = (1/√2) (W_µ1 ± i W_µ2) ,  Z_µ = cos θ_W W_µ3 − sin θ_W B_µ ,  A_µ = sin θ_W W_µ3 + cos θ_W B_µ   (10)

and the electroweak angle θ_W is defined by tan θ_W = g′/g. In this way the goal of giving masses to the gauge bosons and the fermions has then been achieved, as^b
M_W^2 = (1/4) g^2 v^2 ,  M_Z^2 = (1/4) (g^2 + g′^2) v^2 ,  m_f = (1/√2) h_f v ,  m_H^2 = λ v^2   (11)

^b In the following we will use the notation m_t, m_H for the top-quark and Higgs-boson running (MS-bar) masses, defined at a scale equal to the corresponding mass, and M_t, M_H for the corresponding pole (physical) masses. They are related by a contribution from self-energies; thus, for the Higgs boson, the running and pole masses are related by^4 M_H^2 = m_H^2(M_H) + Re Π_φφ(M_H) − Re Π_φφ(0).
What we know about the Higgs: its couplings
The couplings (g, g′, v) are experimentally traded for a set of three observables, e.g. (M_W, M_Z, G_F) or (α_em, M_Z, G_F), while the Yukawa couplings h_f are fixed by the fermion masses m_f. Only the quartic coupling λ in Eq. (5), which should be measured by the Higgs mass, is at present unknown. All Higgs interactions (cross-sections, branching ratios, ...) are determined once the corresponding Feynman rules are known 5 . In Table 1 we summarize the main vertices involving the physical Higgs boson in the SM along with the rest of the particles in the SM.
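This trade can be sketched numerically by inverting Eq. (11); the input values below are illustrative round numbers, not the precision-fit ones:

```python
import math

# Observable set (M_W, M_Z, G_F) mentioned in the text (illustrative values)
M_W, M_Z = 80.4, 91.19       # GeV
G_F = 1.16637e-5             # GeV^-2

v = (math.sqrt(2.0) * G_F) ** -0.5           # VEV from the Fermi constant
g = 2.0 * M_W / v                            # from M_W^2 = (1/4) g^2 v^2
gp = math.sqrt((2.0 * M_Z / v) ** 2 - g**2)  # from M_Z^2 = (1/4)(g^2 + g'^2) v^2
sin2_thW = gp**2 / (g**2 + gp**2)            # tan(theta_W) = g'/g

m_t = 175.0
h_t = math.sqrt(2.0) * m_t / v               # Yukawa from m_f = h_f v / sqrt(2)

print(g, gp, sin2_thW, h_t)
```

Note that h_t ≈ 1 for m_t ≈ 175 GeV, which is why the top quark dominates the radiative corrections discussed later.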
Vertex                Coupling
φ f f̄                −i (g/2M_W) m_f
φ W^+_µ W^−_ν         i g M_W g_µν
φ Z_µ Z_ν             i (g M_Z/cos θ_W) g_µν
φ φ φ                 −i (3g/2M_W) M_H^2
φ φ W^+_µ W^−_ν       i (1/2) g^2 g_µν
φ φ Z_µ Z_ν           i (1/2) (g^2/cos^2 θ_W) g_µν
φ φ φ φ               −i 3g^2 M_H^2/(4M_W^2)

Table 1

Higgs production at LEP2
The main mechanisms for production of Higgs particles at e + e − colliders, at the LEP2 energies, are 6 :
• HIGGS-STRAHLUNG: e^+e^− → Zφ, where the Higgs boson is radiated off the virtual Z-boson line exchanged in the s-channel. [Fig. 1, where the solid (fermion) lines are electrons, the wavy line is a Z boson and the dashed line a Higgs φ.]

A detailed analysis of these processes for LEP2 can be found in Ref. 6 . There it is found that the Higgs-strahlung process dominates the cross-section for low values of the Higgs mass (M_H < 105 GeV), while the WW-fusion process dominates for large values of the Higgs mass (M_H > 105 GeV).
Higgs production at LHC
The main mechanisms for production of Higgs bosons at pp colliders, at the LHC energies, are 7 :
• GLUON-GLUON FUSION: gg → φ, where two gluons in the sea of the protons collide through a loop of top-quarks, which subsequently emits a Higgs boson. [Fig. 3, where the curly lines are gluons, the internal fermion line a top and the dashed line a Higgs.]

A complete analysis of the different production channels can be found, e.g., in Ref. 8 . It is found that for a top mass in the experimental range 9 the gluon-gluon fusion mechanism dominates the production cross-section for any value of the Higgs mass. The subdominant process, WW(ZZ)-fusion, is comparable in magnitude to the gluon-gluon process only for very large values of the Higgs mass, M_H ∼ 1 TeV. For low values of the Higgs mass, M_H ∼ 100 GeV, the gluon-gluon fusion process is still dominant over all other channels by around one order of magnitude, while all the others are similar in magnitude for these values of the Higgs mass.
Higgs decays
For values of the Higgs mass relevant at LEP2 energies, the main decay modes of the Higgs boson are:
• φ → b b̄, c c̄, τ^−τ^+, which is dominated by the b b̄ channel.

A complete analysis of the different Higgs decay channels reveals 6 that, for the LEP2 range of Higgs masses, M_H < 110 GeV, the b b̄ channel dominates the Higgs branching ratio by ∼ one order of magnitude.
For M H > 110 GeV, the main decay modes relevant for LHC energies and pp colliders are 8 :
• φ → γγ, where the photons are produced by a top-quark loop emitted by the Higgs. [The inverse diagram of Fig. 3, with gluons replaced by photons.]
• φ → W^±W^∓, which requires M_H > 2M_W.
• φ → ZZ, which requires M_H > 2M_Z.
• φ → t t̄, which requires M_H > 2M_t.
For a heavy Higgs (M H > 150 GeV) the W W (ZZ) decay channels completely dominate the Higgs branching ratio, while the radiative decay γγ dominates for low values of the Higgs mass and is expected to close the LHC window for a light Higgs. The reader is referred to Ref. 8 for more details.
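The on-shell thresholds quoted above can be wrapped in a trivial helper (a sketch: the default masses are illustrative, and below-threshold off-shell modes such as W*W* are deliberately ignored):

```python
def open_channels(M_H, M_W=80.4, M_Z=91.19, M_t=175.0):
    """On-shell two-body decay channels that open as M_H grows."""
    thresholds = {"WW": 2 * M_W, "ZZ": 2 * M_Z, "tt": 2 * M_t}
    return [ch for ch, thr in thresholds.items() if M_H > thr]

print(open_channels(100.0))  # -> []
print(open_channels(200.0))  # -> ['WW', 'ZZ']
print(open_channels(400.0))  # -> ['WW', 'ZZ', 'tt']
```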
What we do not know about the Higgs: its mass
Since the Higgs boson is the missing ingredient of the SM, the quartic coupling λ, and hence the Higgs mass, is unknown. However, we can gather information on M_H from experimental and theoretical inputs.
From experimental inputs we have direct and indirect information on the Higgs mass. Since direct experimental searches at LEP have been negative up to now, they translate into a lower bound on the Higgs mass 10 ,
M_H > 67 GeV   (12)
Experimental searches also yield indirect information, through the influence the Higgs mass has on radiative corrections and on precision measurements at LEP 10 . However, unlike the top-quark mass, on which radiative corrections depend quadratically, and hence are very sensitive, one-loop radiative corrections depend on the Higgs mass only logarithmically (the so-called Veltman screening theorem). This means that radiative corrections in the SM have very little sensitivity to the Higgs mass, providing only very loose bounds from precision measurements. From the theoretical input, however, the situation is rather different. In fact the theory carries a lot of information on M_H, which can be used to put bounds on the Higgs mass. If these bounds were evaded when the Higgs mass is eventually measured, the measurement might lead to the requirement of new physics, simply because the SM could not accommodate such a value of the Higgs (and top-quark) mass.
For particular values of the Higgs boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model (SM) develops a deep non-standard minimum for values of the field φ ≫ G_F^{−1/2} 11 . In that case the standard electroweak (EW) minimum becomes metastable and might decay into the non-standard one. This means that the SM might run into trouble in certain regions of the (M_H, M_t) plane, a fact which can be intrinsically interesting as evidence for new physics. Of course, the mere existence of the non-standard minimum, and also the decay rate of the standard one into it, depend on the scale Λ up to which we trust the SM results. In fact, one can identify Λ with the scale of new physics.
Stability bounds
The preliminary question one should ask is: When the standard EW minimum becomes metastable, due to the appearance of a deep non-standard minimum? This question was addressed in past years 11 taking into account leading-log (LL) and part of next-to-leading-log (NTLL) corrections. More recently, calculations have incorporated all NTLL corrections 12,13 resummed to all-loop by the renormalization group equations (RGE), and considered pole masses for the top-quark and the Higgs-boson. From the requirement of a stable (not metastable) standard EW minimum we obtain a lower bound on the Higgs mass, as a function of the top mass, labelled by the values of the SM cutoff (stability bounds). Our result 13 is lower than previous estimates by O (10) GeV. The problem to attack is easily stated as follows:
The effective potential in the SM can be written as (7)
V = −(1/2) m^2 φ^2 + (1/8) λ φ^4 + · · ·   (13)
where the ellipsis refers to radiative corrections and all parameters and fields in (13) are running with the renormalization group scale µ(t) = M Z exp(t).
The condition for having an extremal is V ′ (φ(t)) = 0, which has as solution
φ^2 = 2m^2 / { λ − (12/32π^2) h_t^4 [ log( h_t^2 φ^2 / 2µ^2 ) − 1 ] }   (14)
where h t refers to the top Yukawa coupling, and only the leading radiative corrections have been kept for simplicity. The curvature of the potential (13) at the extreme is given by
V″(φ) = 2m^2 + (1/2) β_λ φ^2   (15)
The condition V′ = 0 is obviously satisfied at the EW minimum, where φ = v ∼ 246 GeV, λ ∼ (m_H/v)^2 > 1/16, m^2 ∼ m_H^2/2 and V″(φ) > 0 (a minimum). However, the condition V′ = 0 can also be satisfied for values of the field φ ≫ v and, since m = O(100) GeV, for those values λ ∼ (m/φ)^2 ≪ 1.
Therefore, for the non-standard extremals we have
β_λ < 0 ⟹ V″ < 0 (maximum) ;  β_λ > 0 ⟹ V″ > 0 (minimum)   (16)
The one-loop effective potential of the SM improved by two-loop RGE has been shown to be highly scale independent 4 and, therefore, very reliable for the present study. In Fig. 5 we show (thick solid line) the shape of the effective potential for M t = 175 GeV and M H = 121.7 GeV. We see the appearance of the non-standard maximum, φ M , while the global non-standard minimum has been cutoff at M P ℓ . We can see from Fig. 5 the steep descent from the non-standard maximum. Hence, even if the non-standard minimum is beyond the SM cutoff, the standard minimum becomes metastable and might be destabilized. So for fixed values of M H and M t the condition for the standard minimum not to become metastable is
φ_M ≳ Λ   (17)
Condition (17) makes the stability condition Λ-dependent. In fact we have plotted in Fig. 6 the stability condition on M H versus M t for Λ = 10 19 GeV and 10 TeV. The stability region corresponds to the region above the dashed curves.
Metastability bounds
In the last subsection we have seen that in the region of Fig. 6 below the dashed line the standard EW minimum is metastable. However we should not draw physical consequences from this fact since we still do not know at which minimum does the Higgs field sit. Thus, the real physical constraint we have to impose is avoiding the Higgs field sitting at its non-standard minimum. In fact the Higgs field can be sitting at its zero temperature non-standard minimum because:
1. The Higgs field was driven from the origin to the non-standard minimum at finite temperature by thermal fluctuations in a non-standard EW phase transition at high temperature. This minimum evolves naturally to the non-standard minimum at zero temperature. In this case the standard EW phase transition, at T ∼ 10 2 GeV, will not take place.
2. The Higgs field was driven from the origin to the standard minimum at T ∼ 10 2 GeV, but decays, at zero temperature, to the non-standard minimum by a quantum fluctuation. In Fig. 5 we have depicted the effective potential at T = 2.5 × 10 15 GeV (thin solid line) which is the corresponding transition temperature. Our finite temperature potential 14 incorporates plasma effects 15 by one-loop resummation of Debye masses 16 . The tunnelling probability per unit time per unit volume was computed long ago for thermal 17 and quantum 18 fluctuations. At finite temperature it is given by Γ/ν ∼ T 4 exp(−S 3 /T ), where S 3 is the euclidean action evaluated at the bounce solution φ B (0). The semiclassical picture is that unstable bubbles are nucleated behind the barrier at φ B (0) with a probability given by Γ/ν. Whether or not they fill the Universe depends on the relation between the probability rate and the expansion rate of the Universe. By normalizing the former with respect to the latter we obtain a normalized probability P , and the condition for decay corresponds to P ∼ 1. Of course our results are trustable, and the decay actually happens, only if φ B (0) < Λ, so that the similar condition to (17) is
Λ < φ_B(0)   (18)
The condition of no-decay (metastability condition) has been plotted in Fig. 6 (solid lines) for Λ = 10^19 GeV and 10 TeV. The region between the dashed and the solid line corresponds to a situation where the non-standard minimum exists but there is no decay to it at finite temperature. In the region below the solid lines the Higgs field is already sitting at the non-standard minimum at T ∼ 10^2 GeV, and the standard EW phase transition does not happen. We have also evaluated the tunnelling probability at zero temperature from the standard EW minimum to the non-standard one. The result of the calculation should translate, as in the previous case, into lower bounds on the Higgs mass for different values of Λ. The corresponding bounds are shown in Fig. 6 as dotted lines. Since the dotted lines always lie below the solid ones, the possibility of quantum tunnelling at zero temperature does not impose any extra constraint.
As a consequence of all the improvements in the calculation, our bounds are lower than previous estimates 19 . To fix ideas, for M_t = 175 GeV, the bound is reduced by ∼ 10 GeV for Λ = 10^4 GeV, and by ∼ 30 GeV for Λ = 10^19 GeV.
Perturbativity bounds
Up to here we have described lower bounds on the Higgs mass based on stability arguments. Another kind of bound, which has been used in the literature, is the upper bound based on the requirement of perturbativity of the SM up to the high scale (the scale of new physics) Λ.
Since the quartic coupling grows with the scale^c, it will blow up to infinity at a given scale: the scale where λ has a Landau pole. The position of the Landau pole Λ is, by definition, the maximum scale up to which the SM is perturbatively valid. In this way, assuming the SM remains valid up to a given scale Λ amounts to requiring an upper bound on the Higgs mass from the perturbativity condition^6

λ(Λ)/4π ≤ 1   (19)

^c In fact the value of the renormalization scale where the quartic coupling starts growing depends on the value of the top-quark mass.
This upper bound depends on the scale Λ and very mildly on the top-quark mass M t through its influence on the renormalization group equations of λ.
We have plotted in Fig. 7 this upper bound for different values of the high scale Λ, along with the corresponding stability bounds. Taking, for instance, M_H ≲ 95 GeV and 170 GeV < M_t < 180 GeV, then Λ ≲ 10^7 GeV, while for 180 GeV < M_t < 190 GeV, Λ ≲ 10^4 GeV, as can be deduced from Fig. 8. Finally, using as upper bound for the top-quark mass M_t < 180 GeV [Ref. 9 ], we obtain from (20) that the SM can be a consistent theory up to the Planck scale, where gravitational effects can no longer be neglected, only if the condition

M_H > 128 GeV   (21)

is fulfilled.
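Both behaviours behind these bounds (λ driven negative by the top Yukawa for a light Higgs, giving the stability bound; λ blowing up for a heavy one, giving the perturbativity bound) can be illustrated with a deliberately crude one-loop integration. Only the top-Yukawa and quartic terms of the beta functions are kept, in the normalization m_H^2 = λv^2 of Eq. (11); gauge contributions and the two-loop NTLL effects used for the actual bounds in the text are dropped, so the scales returned are qualitative only:

```python
import math

def run_lambda(m_H, m_t, v=246.22, mu0=175.0, mu_max=1e19, steps=200000):
    """Crude one-loop running of the quartic coupling (top dominance only)."""
    lam = m_H**2 / v**2                 # m_H^2 = lambda v^2
    ht = math.sqrt(2.0) * m_t / v       # m_t = h_t v / sqrt(2)
    t, t_end = 0.0, math.log(mu_max / mu0)
    dt = t_end / steps
    for _ in range(steps):
        beta_lam = (12*lam**2 + 12*lam*ht**2 - 12*ht**4) / (16 * math.pi**2)
        beta_ht = (4.5 * ht**3) / (16 * math.pi**2)
        lam += beta_lam * dt
        ht += beta_ht * dt
        t += dt
        if lam < 0.0:                   # potential destabilized: instability scale
            return ("unstable", mu0 * math.exp(t))
        if lam / (4 * math.pi) > 1.0:   # perturbativity condition (19) violated
            return ("landau_pole", mu0 * math.exp(t))
    return ("ok_up_to_mu_max", mu_max)

print(run_lambda(m_H=70.0, m_t=175.0))   # light Higgs: quartic driven negative
print(run_lambda(m_H=700.0, m_t=175.0))  # heavy Higgs: Landau pole
```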
Higgs bosons in the Minimal Supersymmetric Standard Model
The Minimal Supersymmetric Standard Model (MSSM) 20 is the best motivated extension of the SM, in which some of its theoretical problems (e.g. the hierarchy problem, inherent in the fact that the SM cannot be considered a fundamental theory for energies beyond the Planck scale) find at least a technical solution 21 . In this lecture we will concentrate on the Higgs sector of the MSSM, which is the object of experimental searches at present accelerators (LEP) and will equally be one of the main goals at future colliders (LHC).
The Higgs sector in the Minimal Supersymmetric Standard Model
The Higgs sector of the MSSM 22 requires two Higgs doublets, with opposite hypercharges, as
H_1 = ( H_1^0 , H_1^− )^T_{−1/2} ,  H_2 = ( H_2^+ , H_2^0 )^T_{1/2}   (22)
The reason for this duplicity is twofold. On the one hand it is necessary to cancel the triangular anomalies generated by the higgsinos. On the other hand it is required by the structure of the supersymmetric theory to give masses to all fermions. The most general gauge invariant scalar potential is given, for a general two-Higgs doublet model, by:
V = m_1^2 |H_1|^2 + m_2^2 |H_2|^2 + (m_3^2 H_1 H_2 + h.c.) + (1/2) λ_1 (H_1^†H_1)^2 + (1/2) λ_2 (H_2^†H_2)^2 + λ_3 (H_1^†H_1)(H_2^†H_2) + λ_4 (H_1 H_2)(H_1^† H_2^†) + { (1/2) λ_5 (H_1 H_2)^2 + [ λ_6 (H_1^†H_1) + λ_7 (H_2^†H_2) ] (H_1 H_2) + h.c. }   (23)
However, supersymmetry provides the following tree-level relations between the previous couplings. The non-vanishing ones are:
λ_1 = λ_2 = (1/4)(g^2 + g′^2) ,  λ_3 = (1/4)(g^2 − g′^2) ,  λ_4 = −(1/2) g^2   (24)
Replacing (24) into (23) one obtains the tree-level potential of the MSSM, as:
V_MSSM = m_1^2 H_1^†H_1 + m_2^2 H_2^†H_2 + m_3^2 (H_1 H_2 + h.c.) + (g^2/8) ( H_1^† σ H_1 + H_2^† σ H_2 )^2 + (g′^2/8) ( H_2^†H_2 − H_1^†H_1 )^2   (25)
This potential, along with the gauge and Yukawa couplings in the superpotential,
W = h_u Q·H_2 U^c + h_d Q·H_1 D^c + h_ℓ L·H_1 E^c + µ H_1·H_2   (26)
determine all couplings and masses (at the tree-level) of the Higgs sector in the MSSM.
After gauge symmetry breaking,
v_1 = ⟨Re H_1^0⟩ ,  v_2 = ⟨Re H_2^0⟩   (27)
the Higgs spectrum contains one neutral CP-odd Higgs A (with mass m_A, which will be taken as a free parameter),

A = cos β Im H_2^0 + sin β Im H_1^0   (28)

and one neutral Goldstone χ^0,

χ^0 = −sin β Im H_2^0 + cos β Im H_1^0   (29)

with tan β = v_2/v_1. It also contains one complex charged Higgs H^±,

H^+ = cos β H_2^+ + sin β (H_1^−)^*   (30)

with a (tree-level) mass

m_{H^±}^2 = M_W^2 + m_A^2   (31)
and one charged Goldstone χ ± ,
χ^+ = −sin β H_2^+ + cos β (H_1^−)^*   (32)
Finally, the Higgs spectrum contains two CP-even neutral Higgs bosons, H (the light) and 𝓗 (the heavy mass eigenstate), which are linear combinations of Re H_1^0 and Re H_2^0, with a mixing angle α given by

cos 2α = −cos 2β (m_A^2 − M_Z^2)/(m_𝓗^2 − m_H^2)   (33)
and masses
m_{H,𝓗}^2 = (1/2) [ m_A^2 + M_Z^2 ∓ √( (m_A^2 + M_Z^2)^2 − 4 m_A^2 M_Z^2 cos^2 2β ) ]   (34)
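Eqs. (31) and (34) are straightforward to evaluate; the inputs below (m_A = 200 GeV, tan β = 15) are illustrative. The output also lets one check the tree-level relations (35) quoted further on:

```python
import math

def tree_level_mssm_higgs(m_A, tan_beta, M_Z=91.19, M_W=80.4):
    """Tree-level Higgs masses from Eqs. (31) and (34), all in GeV."""
    c2b = math.cos(2.0 * math.atan(tan_beta))
    root = math.sqrt((m_A**2 + M_Z**2)**2 - 4.0 * m_A**2 * M_Z**2 * c2b**2)
    m_light = math.sqrt(0.5 * (m_A**2 + M_Z**2 - root))   # light CP-even
    m_heavy = math.sqrt(0.5 * (m_A**2 + M_Z**2 + root))   # heavy CP-even
    m_charged = math.sqrt(M_W**2 + m_A**2)                # charged Higgs
    return m_light, m_heavy, m_charged

m_h, m_HH, m_Hpm = tree_level_mssm_higgs(m_A=200.0, tan_beta=15.0)
print(m_h, m_HH, m_Hpm)
print(m_h <= 91.19)   # light mass below M_Z |cos 2beta| <= M_Z at tree level
print(m_Hpm > 80.4)   # charged Higgs heavier than M_W at tree level
```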
The Higgs couplings
All couplings in the Higgs sector are functions of the gauge (G F , g, g ′ ) and
Yukawa couplings, as in the SM, and of the previously defined mixing angles β, α. Some relevant couplings are contained in Table 2 where all particle momenta, in squared brackets, are incoming.
Vertex                         Couplings
(H, 𝓗) W W                     (φWW)_SM × [sin(β − α), cos(β − α)]
(H, 𝓗) Z Z                     (φZZ)_SM × [sin(β − α), cos(β − α)]
(H, 𝓗, A)[p] W^± H^∓[k]        ∓i (g/2)(p + k)_µ × [cos(β − α), −sin(β − α), ±i]
(H, 𝓗, A) u ū                  (φuū)_SM × [cos α/sin β, sin α/sin β, −iγ_5 cot β]
(H, 𝓗, A) d d̄                  (φdd̄)_SM × [−sin α/cos β, cos α/cos β, −iγ_5 tan β]
H^− u d̄                        i (g/(2√2 M_W)) [(m_d tan β + m_u cot β) − (m_d tan β − m_u cot β)γ_5]
H^+ ū d                        i (g/(2√2 M_W)) [(m_d tan β + m_u cot β) + (m_d tan β − m_u cot β)γ_5]
(γ, Z) H^+[p] H^−[k]           −i (p + k)_µ × [e, g cos 2θ_W/(2 cos θ_W)]
H[p] A[k] Z                    −(e/(2 cos θ_W sin θ_W)) (p + k)_µ cos(β − α)

Table 2

Higgs production at LEP2
The main mechanisms for production of neutral Higgs particles at e + e − colliders, at the LEP2 energies, are 6 :
• HIGGS-STRAHLUNG: e + e − → ZH, where the Higgs boson is radiated off the virtual Z-boson line. This process is identical to the SM Higgsstrahlung. [See Fig. 1.] • ASSOCIATED PAIR PRODUCTION: e + e − → HA, e + e − → H ± H ∓ . The production of HA is mediated by a Z-boson in the s-channel (it uses the coupling hAZ in Table 2). The production of H ± H ∓ can be mediated by either γ and Z, using the (γ, Z)H ± H ∓ vertex in Table 2.
A detailed analysis of these processes for LEP2 can be found in Ref. 6 .
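The factors sin(β − α) and cos(β − α) governing these production channels (Table 2) follow from Eq. (33) once a branch for α is fixed; the choice −π/2 ≤ α ≤ 0 below is the conventional one but is an assumption of this sketch. It makes the decoupling behaviour sin(β − α) → 1 at large m_A explicit:

```python
import math

def coupling_factors(m_A, tan_beta, M_Z=91.19):
    """Tree-level (sin(beta-alpha), cos(beta-alpha)) from Eq. (33)."""
    beta = math.atan(tan_beta)
    c2b = math.cos(2.0 * beta)
    # mass-squared splitting of the CP-even states, i.e. the root in Eq. (34)
    diff = math.sqrt((m_A**2 + M_Z**2)**2 - 4.0 * m_A**2 * M_Z**2 * c2b**2)
    cos2a = -c2b * (m_A**2 - M_Z**2) / diff
    alpha = -0.5 * math.acos(cos2a)   # branch choice: -pi/2 <= alpha <= 0
    return math.sin(beta - alpha), math.cos(beta - alpha)

for m_A in (100.0, 300.0, 1000.0):
    print(m_A, coupling_factors(m_A, tan_beta=5.0))
```

As m_A grows, sin(β − α) → 1 and the light-Higgs couplings of Table 2 approach the SM ones.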
Higgs production at LHC
The main mechanisms for production of neutral Higgs bosons at pp colliders, at the LHC energies, are 7 :
• The production of a charged Higgs boson proceeds through the process gg → t t̄, where the gluons exchange a top-quark in the t-channel, with the subsequent decay t → bH^+. This process is available only when M_t > m_{H^+} + M_b; otherwise the detection of the charged Higgs is much more difficult. [Fig. 10, where curly lines are gluons, the fermion exchanged between the gluons a t quark, the external fermions b quarks, and the external (dashed) bosons H^±.] A complete analysis of the different production channels can be found, e.g., in Ref. 23 .
Higgs decays
Assuming R-parity conservation, two-body decays should be either into SM particles or into two supersymmetric partners, if the supersymmetric spectrum is kinematically accessible. Assuming the supersymmetric spectrum to be heavy enough (a useful working hypothesis), the decays are always into SM particles. The main decay modes of the Higgs bosons are then:
• (H, 𝓗, A) → b b̄, c c̄, τ^−τ^+, t t̄, gg, γγ, W^*W^*, Z^*Z^*, Zγ, which are very similar to the corresponding SM modes.
• H → AA.
• 𝓗 → HH, AA, ZA.
• A → ZH.
• H^+ → c s̄, τ^+ν_τ, t b̄, W^+H.
A complete analysis of the decay modes in the MSSM can be found in Ref. 6 for LEP2, and in Ref. 23 for LHC.
Radiative corrections
All previous Higgs production and decay processes depend on the Higgs masses m_H, m_𝓗, m_A, m_{H^±}, and on the couplings g, g′, G_F, tan β, cos α, h_f, λ_1, . . . , λ_7. We have already given their tree-level values. In particular, the mass spectrum satisfies at tree level the following relations:

m_H < M_Z |cos 2β| ,  m_H < m_A ,  m_{H^±} > M_W   (35)
which could have a number of very important phenomenological implications, as is rather obvious. However, it was discovered that radiative corrections are important and can spoil the above tree-level relations, with great phenomenological relevance. A detailed knowledge of radiatively corrected couplings and masses is necessary for experimental searches in the MSSM.
The effective potential methods to compute the (radiatively corrected) Higgs mass spectrum in the MSSM are useful since they allow one to resum, using renormalization group (RG) techniques, the LL, NTLL, . . . corrections to all orders in perturbation theory. These methods 24,25 , as well as the diagrammatic methods 26 to compute the Higgs mass spectrum in the MSSM, were first developed in the early nineties.
Effective potential methods are based on the run-and-match procedure, by which all dimensionful and dimensionless couplings run with the RG scale for scales greater than the masses involved in the theory. When the RG scale equals a particular mass threshold, heavy fields decouple, eventually leaving threshold effects in order to match the effective theory below and above the mass threshold. For instance, assuming a common soft supersymmetry breaking mass for left-handed and right-handed stops and sbottoms, M_S ∼ m_Q ∼ m_U ∼ m_D, and assuming for the top-quark mass m_t and for the CP-odd Higgs mass m_A the range m_t ≤ m_A ≤ M_S, we have: for scales Q ≥ M_S, the MSSM; for m_A ≤ Q ≤ M_S, the two-Higgs doublet model (2HDM); and for m_t ≤ Q ≤ m_A, the SM. Of course there are threshold effects at Q = M_S to match the MSSM with the 2HDM, and at Q = m_A to match the 2HDM with the SM. As we have said, the neutral Higgs sector of the MSSM contains, on top of the CP-odd Higgs A, two CP-even Higgs mass eigenstates, 𝓗 (the heaviest one) and H (the lightest one). It turns out that the larger m_A, the heavier the lightest Higgs H. Therefore the case m_A ∼ M_S is not only a great simplification, since the effective theory below M_S is the SM, but also of great interest, since it provides the upper bound on the mass of the lightest Higgs (which is interesting for phenomenological purposes, e.g. at LEP2). In this case the threshold correction at M_S for the SM quartic coupling λ is:
Δ_th λ = (3/16π^2) h_t^4 (X_t^2/M_S^2) [ 2 − (1/6)(X_t^2/M_S^2) ]   (36)
where h t is the SM top Yukawa coupling and X t = (A t − µ/ tan β) is the mixing in the stop mass matrix, the parameters A t and µ being the trilinear soft-breaking coupling in the stop sector and the supersymmetric Higgs mixing mass, respectively. The maximum of (36) corresponds to X 2 t = 6M 2 S which provides the maximum value of the lightest Higgs mass: this case will be referred to as the case of maximal mixing.
We have plotted in Fig. 11 the lightest Higgs pole mass M_H, where all NTLL corrections are resummed to all loops by the RG, as a function of M_t 4 .
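That the threshold correction (36) is maximized at X_t^2 = 6M_S^2 can be verified by a direct scan (h_t = 1 is an illustrative value; the location of the maximum does not depend on it):

```python
import math

def delta_lambda(x, h_t=1.0):
    """Threshold correction of Eq. (36), with x = X_t^2 / M_S^2."""
    return 3.0 / (16.0 * math.pi**2) * h_t**4 * x * (2.0 - x / 6.0)

# scan x = X_t^2/M_S^2 over [0, 12]; the maximum sits at x = 6 ("maximal mixing")
best_val, best_x = max((delta_lambda(k / 100.0), k / 100.0) for k in range(1201))
print(best_x)  # -> 6.0
```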
An analytical approximation
We have seen 4 that, since radiative corrections are minimized for scales Q ∼ m t , when the LL RG improved Higgs mass expressions are evaluated at the top-quark mass scale, they reproduce the NTLL value with a high level of accuracy, for any value of tan β and the stop mixing parameters 27
m_H,LL(Q^2 ∼ m_t^2) ≃ m_H,NTLL   (37)
Based on the above observation, we can work out a very accurate analytical approximation to m_H,NTLL by just keeping two-loop LL corrections at Q^2 = m_t^2, i.e. corrections of order t^2, where t = log(M_S^2/m_t^2). Again the case m_A ∼ M_S is the simplest, and very illustrative, one. We have found 27,28 that, in the absence of mixing (the case X_t = 0), two-loop corrections resum into the one-loop result, shifting the energy scale from M_S (the tree-level scale) to √(M_S m_t). More explicitly,

m_H^2 = M_Z^2 cos^2 2β [ 1 − (3/8π^2) h_t^2 t ] + (3/2π^2) ( m_t^4(√(M_S m_t)) / v^2 ) t   (38)
where v = 246.22 GeV.
In the presence of mixing (X_t ≠ 0), the run-and-match procedure yields an extra piece in the SM effective potential, ΔV_th[φ(M_S)], whose second derivative gives an extra contribution to the Higgs mass, as
Δ_th m_H^2 = ∂^2 ΔV_th[φ(M_S)] / ∂φ^2(t) = (1/ξ^2(t)) ∂^2 ΔV_th[φ(M_S)] / ∂φ^2(M_S)   (39)
which, in our case, reduces to
Δ_th m_H^2 = (3/4π^2) ( m_t^4(M_S) / v^2(m_t) ) (X_t^2/M_S^2) [ 2 − (1/6)(X_t^2/M_S^2) ]   (40)
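Combining the no-mixing piece (38) with the threshold piece (40) gives a rough numerical estimate of the corrected light Higgs mass. All running masses are frozen at m_t here (no intermediate-scale evaluation of m_t, no two-loop resummation), so this sketch overshoots the NTLL numbers of the text by a few GeV and is meant only to show the size of the corrections:

```python
import math

def light_higgs_mass(tan_beta, M_S, Xt_over_MS, m_t=175.0, v=246.22, M_Z=91.19):
    """Rough light CP-even Higgs mass from Eqs. (38) + (40), couplings frozen at m_t."""
    t = math.log(M_S**2 / m_t**2)
    h_t2 = 2.0 * m_t**2 / v**2
    c2b2 = math.cos(2.0 * math.atan(tan_beta))**2
    x = Xt_over_MS**2
    m2 = (M_Z**2 * c2b2 * (1.0 - 3.0 / (8.0 * math.pi**2) * h_t2 * t)
          + 3.0 / (2.0 * math.pi**2) * m_t**4 / v**2 * t
          + 3.0 / (4.0 * math.pi**2) * m_t**4 / v**2 * x * (2.0 - x / 6.0))
    return math.sqrt(m2)

print(light_higgs_mass(tan_beta=15.0, M_S=1000.0, Xt_over_MS=0.0))
print(light_higgs_mass(tan_beta=15.0, M_S=1000.0, Xt_over_MS=math.sqrt(6.0)))
```

The jump between the two printed numbers is the maximal-mixing threshold effect of Eq. (40).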
We have compared our analytical approximation 27 with the numerical NTLL result 4 and found a difference ≲ 2 GeV for all values of the supersymmetric parameters. The case m_A < M_S is a bit more complicated, since the effective theory below the supersymmetric scale M_S is the 2HDM. However, since radiative corrections in the 2HDM are equally dominated by the top quark, we can compute analytical expressions based upon the LL approximation at the scale Q^2 ∼ m_t^2. Our approximation 27 differs from the LL all-loop numerical resummation by ≲ 3 GeV, which we consider the uncertainty inherent in the theoretical calculation, provided the mixing is moderate and, in particular, bounded by the condition,
as sin(β − α) → 1, or are indistinguishable from the SM ones,

h_u sin β ≡ h_u^SM ,  h_{d,ℓ} cos β ≡ h_{d,ℓ}^SM   (46)
In this way the tan β dependence of the couplings either disappears or is absorbed in the SM couplings. However, from the previous sections it should be clear that the Higgs and top mass measurements could serve to discriminate between the SM and its extensions, and to provide information about the scale of new physics Λ. In Fig. 13 we give the SM lower bounds on M_H for Λ ≳ 10^15 GeV (thick lines) and the upper bound on the mass of the lightest Higgs boson in the MSSM (thin lines) for M_S ∼ 1 TeV. Taking, for instance, M_t = 180 GeV, close to the central value recently reported by CDF+D0 9 , and M_H ≳ 130 GeV, the SM is allowed and the MSSM is excluded. On the other hand, if M_H ≲ 130 GeV, then the MSSM is allowed while the SM is excluded. Likewise there are regions where the SM is excluded, others where the MSSM is excluded, and others where both are permitted or both are excluded.
Conclusion
To conclude, we can say that the search for the Higgs boson at present and future colliders is not only an experimental challenge, the Higgs boson being the last missing ingredient of the Standard Model, but also a theoretically appealing question from the more fundamental point of view of physics beyond the Standard Model. In fact, if we are lucky enough and the Higgs boson is detected soon (preferably at LEP2) and light, its detection might give valuable information about the possible existence of new physics. In that case, the experimental search for the new physics would be urgent and compelling, since the existence of new phenomena might be necessary for our present understanding of the physics at energies within reach of the planned accelerators.
Figure 1: Higgs-strahlung process for Higgs production.

• WW-FUSION: e^+e^− → φ ν_e ν̄_e, where the Higgs boson is formed in the fusion of virtual W's exchanged in the t-channel. The virtual W's are radiated off the electron and positron of the beam. [Fig. 2, where the incoming lower (upper) fermion line is an electron (positron) and the corresponding outcoming fermion a ν_e (ν̄_e). Wavy lines are W and the dashed line a Higgs.]
Figure 2: Vector-vector fusion process for Higgs production.
Figure 3: Gluon-gluon fusion process for Higgs production.

• WW(ZZ)-FUSION: W^±W^∓ (ZZ) → φ, where the Higgs boson is formed in the fusion of WW (ZZ), the virtual W(Z)'s being exchanged in the t-channel and radiated off a quark in the proton beam. [Fig. 2, where wavy lines are W (Z), the incoming fermions quarks q and the outcoming fermions quarks q (q′). The dashed line is the Higgs.]
• HIGGS STRAHLUNG: q q̄^(′) → Z(W)φ, where the Higgs boson is radiated off the virtual Z(W)-boson line exchanged in the s-channel. [Fig. 1, where wavy lines are Z (W), the incoming fermion a quark q and the outcoming fermion a quark q (q′).]
• ASSOCIATED PRODUCTION WITH t t̄: gg → φ t t̄, where the gluons from the proton sea exchange a top quark in the t-channel, which emits a Higgs boson. [Fig. 4, where curly lines are gluons and the fermion line corresponds to a quark t. The dashed line is the Higgs boson.]
Figure 4: Associated production of the Higgs with f f̄.
• φ → gg, where the gluons are produced by a top-quark loop emitted by the Higgs. [The inverse diagram of Fig. 3.]
• φ → W W^* → W f f̄′, which is relevant for values of the Higgs mass M_H > M_W.
Figure 5: Plot of the effective potential for M_t = 175 GeV, M_H = 121.7 GeV at T = 0 (thick solid line) and T = T_t = 2.5 × 10^15 GeV (thin solid line).
Figure 6: Lower bounds on M_H as a function of M_t, for Λ = 10^19 GeV (upper set) and Λ = 10 TeV (lower set). The dashed curves correspond to the stability bounds and the solid (dotted) ones to the metastability bounds at finite (zero) temperature.
Figure 7: Perturbativity and stability bounds on the SM Higgs boson. Λ denotes the energy scale where the particles become strongly interacting.

A light Higgs can measure the scale of New Physics

From the bounds on M_H(Λ) previously obtained (see Fig. 8) one can easily deduce that a measurement of M_H might provide an upper bound (below the Planck scale) on the scale of new physics, provided that M_t > 153 GeV, the value implied, from (20), by the present experimental bound from LEP, M_H > 67 GeV; this condition is fulfilled by the experimental detection of the top 9 . Even non-observation of the Higgs at LEP2 (i.e. M_H ≳ 95 GeV) would leave an open window (M_t ≳ 165 GeV) for the possibility that a future Higgs detection at LHC could lead to an upper bound on Λ. Moreover, Higgs detection at LEP2 would put an upper bound on the scale of new physics.
Figure 8: SM lower bounds on M_H from metastability requirements as a function of Λ for different values of M_t.
• GLUON-GLUON FUSION: gg → (H, 𝓗, A), where two gluons in the sea of the protons collide through a loop of top-quarks, bottom-quarks, stops and sbottoms, which subsequently emit a Higgs boson. The contribution of a (s)bottom loop is only relevant for large values of tan β. [Figs. 3 and 9, where curly lines are gluons, internal fermion lines quarks t and b, internal boson (dashed) lines squarks t̃ and b̃, and the dashed external line a Higgs boson H, 𝓗 or A.]
Figure 9: Gluon-gluon fusion process for Higgs production with a squark loop.

• WW(ZZ)-FUSION: W^±W^∓ → (H, 𝓗, A), ZZ → (H, 𝓗, A), where the Higgs boson is formed in the fusion of WW (ZZ), the virtual W(Z)'s being radiated off a quark in the proton beam. [See Fig. 2, where the external dashed line corresponds to a Higgs boson H, 𝓗 or A.]
• HIGGS STRAHLUNG: q q̄ → Z(H, 𝓗, A), q q̄′ → W(H, 𝓗, A), where the corresponding Higgs boson is radiated off the virtual Z(W)-boson line. [See Fig. 1, where the dashed line is a Higgs boson H, 𝓗 or A.]
• ASSOCIATED PRODUCTION WITH t t̄, b b̄: gg → t t̄(H, 𝓗, A), gg → b b̄(H, 𝓗, A), where the gluons from the proton sea exchange a top (bottom) quark in the t-channel, the exchanged quark emitting a Higgs boson. [See Fig. 4, where the curly lines are gluons, the fermion line a t or b quark and the dashed line a Higgs boson H, 𝓗 or A.]
Figure 10: Charged Higgs production process.
Figure 11: Plot of M_H as a function of M_t for tan β ≫ 1 (solid lines), tan β = 1 (dashed lines), and X_t^2 = 6M_S^2 (upper set), X_t = 0 (lower set). The experimental band from the CDF/D0 detection is also indicated.
Figure 12: The neutral (H, H ≡ H_h in the figure) and charged (H^+) Higgs mass spectrum as a function of the CP-odd Higgs mass m_A for a physical top-quark mass M_t = 175 GeV and M_S = 1 TeV, as obtained from the one-loop improved RG evolution (solid lines) and the analytical formulae (dashed lines). All sets of curves correspond to tan β = 15 and large squark mixing, X_t^2 = 6M_S^2 (µ = 0).
For large m_A the heavy Higgs bosons H, A, H^± decouple, while the light Higgs H couplings go to the SM φ couplings, HXY → (φXY)_SM.
Figure 13: SM lower bounds on M_H (thick lines) as a function of M_t, for Λ = 10^19 GeV, from metastability requirements, and upper bound on the lightest Higgs boson mass in the MSSM (thin lines) for M_S = 1 TeV.
the lightest Higgs pole mass M_H, where all NTLL corrections are resummed to all loops by the RG, as a function of M_t 4 . From Fig. 11 we can see that the present experimental band from CDF/D0 for the top-quark mass requires M_H ≲ 140 GeV, while if we fix M_t = 170 GeV, the upper bound M_H ≲ 125 GeV follows. It goes without saying that these figures are extremely relevant for MSSM Higgs searches at LEP2.
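The upper bounds quoted here come from the full RG-resummed calculation of the lectures; as a rough, purely illustrative cross-check, the widely used one-loop leading-log approximation for the lightest Higgs mass can be evaluated numerically. The sketch below (textbook formula and inputs M_t = 175 GeV, M_S = 1 TeV, tan β = 15, not the resummed result of the text) also verifies that the stop-mixing correction is maximized at X_t^2 = 6M_S^2, the "maximal mixing" value used above.

```python
import math

def mh_one_loop(mt, MS, Xt2, tanb, MZ=91.19, v=246.0):
    """Approximate one-loop lightest MSSM Higgs mass in GeV.

    Textbook leading-log formula (not the RG-resummed result):
      mh^2 = MZ^2 cos^2(2b)
           + 3 mt^4/(4 pi^2 v^2) * [log(MS^2/mt^2)
             + Xt^2/MS^2 * (1 - Xt^2/(12 MS^2))]
    """
    c2b = math.cos(2.0 * math.atan(tanb)) ** 2
    stop = math.log(MS**2 / mt**2) + (Xt2 / MS**2) * (1.0 - Xt2 / (12.0 * MS**2))
    mh2 = MZ**2 * c2b + 3.0 * mt**4 / (4.0 * math.pi**2 * v**2) * stop
    return math.sqrt(mh2)

# The stop correction Xt^2/MS^2 (1 - Xt^2/(12 MS^2)) peaks at Xt^2 = 6 MS^2,
# the "maximal mixing" configuration quoted for the upper bounds.
mh_max = mh_one_loop(mt=175.0, MS=1000.0, Xt2=6.0e6, tanb=15.0)
mh_min = mh_one_loop(mt=175.0, MS=1000.0, Xt2=0.0, tanb=15.0)
print(f"mh (maximal mixing) ~ {mh_max:.0f} GeV, (zero mixing) ~ {mh_min:.0f} GeV")
```

The numbers land in the same ballpark as the bounds quoted above; the full resummation shifts them by a few GeV.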
Acknowledgments

Work supported in part by the European Union (contract CHRX-CT92-0004) and CICYT of Spain (contract AEN95-0195). I wish to thank my collaborators in the subjects whose results are reported in the present lectures: M. Carena, J.A. Casas, J.R. Espinosa, A. Riotto, C. Wagner and F. Zwirner. I also want to thank A. Riotto for his help in drawing some of the diagrams contained in this paper.

Threshold effects

There are two possible caveats in the analytical approximation we have just presented: i) our expansion parameter log(M_S^2/m_t^2) does not behave properly in the supersymmetric limit M_S → 0, where we should recover the tree-level result; ii) we have expanded the threshold function ∆V_th[φ(M_S)] to order X_t^4. In fact, keeping the whole threshold function ∆V_th[φ(M_S)] we would be able to go to larger values of X_t and to evaluate the accuracy of the approximations (36) and (40). Only then will we be able to check the reliability of the maximum value of the lightest Higgs mass (which corresponds to maximal mixing) as provided in the previous sections.

This procedure has been properly followed 27,29 for the most general case m_Q ≠ m_U ≠ m_D. We have proved that keeping the exact threshold function ∆V_th[φ(M_S)], and properly running its value from the high scale to m_t with the corresponding anomalous dimensions as in (39), produces two effects: i) it makes a resummation from M_S^2 to M_S^2 + m_t^2 and generates as (physical) expansion parameter log[(M_S^2 + m_t^2)/m_t^2]; ii) it generates a whole threshold function X_t^eff, in terms of which (40) generalizes. The numerical calculation shows 29 that X_t^eff has its maximum very close to X_t^2 = 6(M_S^2 + m_t^2), which justifies the reliability of the previous upper bounds on the lightest Higgs mass.

The case of obese supersymmetry

We will conclude this lecture with a very interesting case, where the Higgs sector of the MSSM plays a key role in the detection of supersymmetry.
It is the case where all supersymmetric particles are superheavy, M_S ∼ 1 TeV, and escape detection at LHC.
where t̃_1 and t̃_2 are the two stop mass eigenstates. In Fig. 12 the Higgs mass spectrum is shown as a function of the CP-odd Higgs mass m_A.
References

[1] S.L. Glashow, Nucl. Phys. 22 (1961) 579; S. Weinberg, Phys. Rev. Lett. 19 (1967) 1264; A. Salam, Proc. 8th Nobel Symposium, Stockholm, 1968, ed. N. Svartholm (Almqvist and Wiksells, Stockholm, 1968), p. 367.
[2] P.W. Higgs, Phys. Rev. Lett. 12 (1964) 132; and Phys. Rev. 13 (1964) 321; F. Englert and R. Brout, Phys. Rev. Lett. 13 (1964) 321; G.S. Guralnik, C.R. Hagen and T.W. Kibble, Phys. Rev. Lett. 13 (1964) 585.
[3] J. Bernabeu, in these proceedings.
[4] J.A. Casas, J.R. Espinosa, M. Quirós and A. Riotto, Nucl. Phys. B436 (1995) 3; (E) B439 (1995) 466.
[5] See for instance: K. Aoki, Z. Hioki, R. Kawabe, M. Konuma and T. Muta, Prog. Theor. Phys. (Suppl.) 73 (1982) 1.
[6] Higgs Physics Working Group (convs. M. Carena and P. Zerwas), in Vol. 1 of Physics at LEP2, G. Altarelli, T. Sjostrand and F. Zwirner, eds., Report CERN 96-01, Geneva (1996).
[7] A. Ferrando, in these proceedings.
[8] Higgs Physics Working Group, in Proceedings of the ECFA Large Hadron Collider Workshop, Vol. II, Aachen (Germany), 4-9 October 1990 (ed. G. Jarlskog and D. Rein), preprint CERN 90-10, ECFA 90-133.
[9] F. Abe et al., CDF Collaboration, Phys. Rev. D50 (1994) 2966; Phys. Rev. Lett. 73 (1994) 225; Phys. Rev. Lett. 74 (1995) 2626; S. Abachi et al., D0 Collaboration, Phys. Rev. Lett. 72 (1994) 2138; Phys. Rev. Lett. 74 (1995) 2422; Phys. Rev. Lett. 74 (1995) 2632.
[10] For a recent analysis, see: J. Ellis, G.L. Fogli and E. Lisi, preprint CERN-TH/96-216 and LBNL-39237 [hep-ph/9608329], and references therein.
[11] N. Cabibbo, L. Maiani, G. Parisi and R. Petronzio, Nucl. Phys. B158 (1979) 295; M. Lindner, Z. Phys. C31 (1986) 295; M. Sher, Phys. Rep. 179 (1989) 273; M. Lindner, M. Sher and H.W. Zaglauer, Phys. Lett. B228 (1989) 139; M. Sher, Phys. Lett. B317 (1993) 159; Addendum: Phys. Lett. B331 (1994) 448; C. Ford, D.R.T. Jones, P.W. Stephenson and M.B. Einhorn, Nucl. Phys. B395 (1993) 17.
[12] G. Altarelli and I. Isidori, Phys. Lett. B337 (1994) 141.
[13] J.A. Casas, J.R. Espinosa and M. Quirós, Phys. Lett. B342 (1995) 171; J.A. Casas, J.R. Espinosa and M. Quirós, to appear in Phys. Lett. B (1996) [hep-ph/9603227].
[14] J.R. Espinosa and M. Quirós, Phys. Lett. B355 (1995) 257.
[15] For a recent review, see, e.g.: M. Quirós, Helv. Phys. Acta 67 (1994) 451.
[16] L. Dolan and R. Jackiw, Phys. Rev. D9 (1974) 3320; S. Weinberg, Phys. Rev. D9 (1974) 3357; D.J. Gross, R.D. Pisarski and L.G. Yaffe, Rev. Mod. Phys. 53 (1981) 43; M.E. Carrington, Phys. Rev. D45 (1992) 2933; M.E. Shaposhnikov, Phys. Lett. B277 (1992) 324 and (E) Phys. Lett. B282 (1992) 483; M. Dine, R.G. Leigh, P. Huet, A. Linde and D. Linde, Phys. Lett. B283 (1992) 319 and Phys. Rev. D46 (1992) 550; J.R. Espinosa and M. Quirós, Phys. Lett. B305 (1993) 98; J.R. Espinosa, M. Quirós and F. Zwirner, Phys. Lett. B314 (1993) 206; C.G. Boyd, D.E. Brahm and S.D. Hsu, Phys. Rev. D48 (1993) 4963; P. Arnold and O. Espinosa, Phys. Rev. D47 (1993) 3546; W. Buchmüller, T. Helbig and D. Walliser, Nucl. Phys. B407 (1993) 387.
[17] A.D. Linde, Phys. Lett. B70 (1977) 306; Phys. Lett. B100 (1981) 37; Nucl. Phys. B216 (1983) 421.
[18] S. Coleman, Phys. Rev. D15 (1977) 2929.
[19] P. Arnold and S. Vokos, Phys. Rev. D44 (1991) 3620.
[20] H.P. Nilles, Phys. Rep. 110 (1984) 1; H.E. Haber and G.L. Kane, Phys. Rep. 117 (1985) 75; R. Barbieri, Riv. Nuovo Cim. 11 (1988) 1.
[21] C. Wagner, in these proceedings.
[22] See, e.g., J.F. Gunion, H.E. Haber, G.L. Kane and S. Dawson, The Higgs Hunter's Guide, Addison-Wesley, 1990.
[23] Z. Kunszt and F. Zwirner, Nucl. Phys. B385 (1992) 3.
[24] Y. Okada, M. Yamaguchi and T. Yanagida, Prog. Theor. Phys. 85 (1991) 1; Phys. Lett. B262 (1991) 54; J. Ellis, G. Ridolfi and F. Zwirner, Phys. Lett. B257 (1991) 83; Phys. Lett. B262 (1991) 477; R. Barbieri and M. Frigeni, Phys. Lett. B258 (1991) 395; R. Barbieri, M. Frigeni and F. Caravaglios, Phys. Lett. B258 (1991) 167.
[25] J.R. Espinosa and M. Quirós, Phys. Lett. B266 (1991) 389.
[26] H.E. Haber and R. Hempfling, Phys. Rev. Lett. 66 (1991) 1815; A. Yamada, Phys. Lett. B263 (1991) 233.
[27] M. Carena, J.R. Espinosa, M. Quirós and C.E.M. Wagner, Phys. Lett. B355 (1995) 209.
[28] H.E. Haber, R. Hempfling and A.H. Hoang, preprint CERN-TH/95-216 and TTP95-09 (1995) [hep-ph/9609331].
[29] M. Carena, M. Quirós and C.E.M. Wagner, Nucl. Phys. B461 (1996) 407.
| [] |
[
"A systematic survey for eruptive young stellar objects using mid-infrared photometry",
"A systematic survey for eruptive young stellar objects using mid-infrared photometry"
] | [
"Alexander Scholz \nSchool of Cosmic Physics\nDublin Institute for Advanced Studies\n31 Fitzwilliam PlaceDublin\n",
"Dirk Froebrich \nIreland\n\nCentre for Astrophysics and Planetary Science\nUniversity of Kent\nCT2 7NHCanterburyUnited Kingdom\n",
"Kenneth Wood \nSchool of Physics and Astronomy\nUniversity of St\nAndrews\n\nThe North Haugh\nKY16 9SSSt. Andrews, FifeUnited Kingdom\n"
] | [
"School of Cosmic Physics\nDublin Institute for Advanced Studies\n31 Fitzwilliam PlaceDublin",
"Ireland",
"Centre for Astrophysics and Planetary Science\nUniversity of Kent\nCT2 7NHCanterburyUnited Kingdom",
"School of Physics and Astronomy\nUniversity of St\nAndrews",
"The North Haugh\nKY16 9SSSt. Andrews, FifeUnited Kingdom"
] | [
"Mon. Not. R. Astron. Soc"
] | Accretion in young stellar objects (YSOs) is at least partially episodic, i.e. periods with high accretion rates ('bursts') are interspersed by quiescent phases. These bursts manifest themselves as eruptive variability. Here we present a systematic survey for eruptive YSOs aiming to constrain the frequency of accretion bursts. We compare mid-infrared photometry from Spitzer and WISE separated by ∼ 5 yr for two samples of YSOs, in nearby star forming regions and in the Galactic plane, each comprising about 4000 young sources. All objects for which the brightness at 3.6 and 4.5 µm is increased by at least 1 mag between the two epochs may be eruptive variables and burst candidates. For these objects, we carry out follow-up observations in the nearinfrared. We discover two new eruptive variables in the Galactic plane which could be FU Ori-type objects, with K-band amplitudes of more than 1.5 mag. One object known to undergo an accretion burst, V2492 Cyg, is recovered by our search as well. In addition, the young star ISO-Oph-50, previously suspected to be an eruptive object, is found to be better explained by a disk with varying circumstellar obscuration. In total, the number of burst events in a sample of 4000 YSOs is 1-4. Assuming that all YSOs undergo episodic accretion, this constraint can be used to show that phases of strong accretion (> 10 −6 M yr −1 ) occur in intervals of about 10 4 yr, most likely between 5000 and 50000 yr. This is consistent with the dynamical timescales for outflows, but not with the separations of emission knots in outflows, indicating that episodic accretion could either trigger or stop collimated large-scale outflows. | 10.1093/mnras/stt091 | [
"https://arxiv.org/pdf/1301.3152v1.pdf"
] | 28,800,877 | 1301.3152 | a9afaf2eecc412b25badc379432dd6318b6a6b8b |
A systematic survey for eruptive young stellar objects using mid-infrared photometry
2002
Alexander Scholz
School of Cosmic Physics
Dublin Institute for Advanced Studies
31 Fitzwilliam PlaceDublin
Dirk Froebrich
Ireland
Centre for Astrophysics and Planetary Science
University of Kent
CT2 7NHCanterburyUnited Kingdom
Kenneth Wood
School of Physics and Astronomy
University of St
Andrews
The North Haugh
KY16 9SSSt. Andrews, FifeUnited Kingdom
A systematic survey for eruptive young stellar objects using mid-infrared photometry
Mon. Not. R. Astron. Soc
000 (2002). Accepted; Received; Printed (MN LATEX style file v2.2). Key words: stars: low-mass, brown dwarfs; stars: activity; stars: pre-main-sequence; accretion, accretion discs
Accretion in young stellar objects (YSOs) is at least partially episodic, i.e. periods with high accretion rates ('bursts') are interspersed by quiescent phases. These bursts manifest themselves as eruptive variability. Here we present a systematic survey for eruptive YSOs aiming to constrain the frequency of accretion bursts. We compare mid-infrared photometry from Spitzer and WISE separated by ∼ 5 yr for two samples of YSOs, in nearby star forming regions and in the Galactic plane, each comprising about 4000 young sources. All objects for which the brightness at 3.6 and 4.5 µm is increased by at least 1 mag between the two epochs may be eruptive variables and burst candidates. For these objects, we carry out follow-up observations in the near-infrared. We discover two new eruptive variables in the Galactic plane which could be FU Ori-type objects, with K-band amplitudes of more than 1.5 mag. One object known to undergo an accretion burst, V2492 Cyg, is recovered by our search as well. In addition, the young star ISO-Oph-50, previously suspected to be an eruptive object, is found to be better explained by a disk with varying circumstellar obscuration. In total, the number of burst events in a sample of 4000 YSOs is 1-4. Assuming that all YSOs undergo episodic accretion, this constraint can be used to show that phases of strong accretion (> 10^−6 M⊙ yr^−1) occur in intervals of about 10^4 yr, most likely between 5000 and 50000 yr. This is consistent with the dynamical timescales for outflows, but not with the separations of emission knots in outflows, indicating that episodic accretion could either trigger or stop collimated large-scale outflows.
INTRODUCTION
Accretion flows from a circumstellar disk onto a young stellar object (YSO) play a key role in the early evolution of objects over a wide range of masses, from massive Herbig Ae/Be stars to brown dwarfs. Observations suggest that the accretion process is non-steady, with episodic bursts with high rates of mass accretion interspersed by significantly longer quiescent phases. The evidence for episodic accretion rests on three findings: 1) the fact that the luminosities of most protostars are dominated by internal radiation, not by heating due to accretion (e.g. Evans et al. 2009); 2) the discontinuities seen in protostellar outflows, which constitute a fossil record of the accretion history (e.g. Ioannidis & Froebrich 2012); 3) the discovery of a small number of eruptive variables which are currently experiencing strongly enhanced accretion rates with respect to the typical YSOs, with FU Ori as the prototype (e.g. Hartmann & Kenyon 1996; Reipurth & Aspin 2010).
While the general idea of episodic accretion is wellestablished, the driving force of the bursts is not known yet. In general, these events are explained in the framework of various disk instabilities, e.g. thermal instabilities (see Bell & Lin 1994, and references therein), gravitational instabilities (Vorobyov & Basu 2005;Dunham & Vorobyov 2012), or different types of magnetic instabilities (Armitage et al. 2001;Martin & Lubow 2011;Zhu et al. 2009). In addition, various types of trigger events are discussed in this context, e.g. star-star encounters (Forgan & Rice 2010), star-disk encounters (Pfalzner 2008), tidal effects from a companion star (Bonnell & Bastien 1992), or interactions between the disk and a massive planet (Lodato & Clarke 2004;Clarke et al. 2005). These various scenarios lead to specific predictions regarding the frequency and properties of bursts.
Strong accretion bursts may also be a relevant factor in the context of planet formation and could have an impact on the architecture and frequency of planetary systems. For example, the length of the 'lulls' between bursts may limit the efficiency of planet formation via disk fragmentation (Stamatellos et al. 2011). FU Ori-type bursts, caused by gravitational instabilities, have also been suggested as events that provide the transient shock heating needed to explain the formation of chondrules (e.g. Boley & Durisen 2008).
In this context, it may be useful to see accretion eruptions as a weather-like phenomenon in the disk ('disk weather'): a process that affects the physics of the disk, but is to some extent random and occurs on timescales that are extremely short compared with the disk lifetime. Observational studies on large samples are essential to constrain the characteristics of this process and to guide the theoretical work. So far, however, most FU Ori-type and other bursts have been found serendipitously, which does not allow to put rigorous constraints on their frequency. The advent of wide-area, infrared surveys of large numbers of star forming regions makes systematic surveys for accretion bursts feasible. In this paper we present the results from such a survey. The goal is to derive an estimate of the frequency of bursts using two epochs of mid-infrared photometry provided by the Spitzer and the WISE satellites. We aim to probe the largest sample of YSOs that is available for such a comparison, in total about 8000 objects covering a wide range of masses and ages.
THE APPROACH
The data
We aim to constrain the frequency of accretion bursts by comparing two epochs of mid-infrared photometry from Spitzer and WISE (Wright et al. 2010). Two of the channels used by these satellites can be compared with each other: IRAC1 and WISE1 with central wavelengths at 3.4-3.6 µm as well as IRAC2 and WISE2 at 4.5-4.6 µm. The differences in these two bands between the two telescopes are minor and can be neglected here as we are only interested in variability with large amplitudes. The epoch difference between the Spitzer and WISE observations depends on the area of the sky; the samples of YSOs used in this paper have been observed by Spitzer between 2003 and 2006, whereas most of the WISE data has been taken in 2010. Thus, the epoch differences in our samples are 4-7 yr, with a typical value of ∼ 5 yr.
A simplified model for episodic accretion
With two epochs we can only test for a specific type of episodic accretion. We will search for objects undergoing a burst event with a rise time t1 and a decline time t2, where t1 and t2 are assumed to be shorter and longer than our typical epoch difference of 5 yr, respectively. We also assume that any additional variability in YSOs is small compared with the events caused by the accretion bursts. These conditions are fulfilled for most, but not all, of the known FU Ori type objects. One exception is V1515 Cyg, one of the best-studied FU Ori objects, which exhibits a long rise time of about 20 yr (Clarke et al. 2005). In general, the known FU Oris show considerable diversity in their lightcurves which is not represented in this simple model. The quantity we are aiming to constrain is the typical interval between consecutive bursts. According to previous estimates, this interval is in the order of several thousands of years and thus much longer than the typical duration of a burst (Herbig 1977; Hartmann & Kenyon 1996).
When comparing two epochs of photometry, the burst interval can be crudely estimated as I = ∆t × N/nB. Here ∆t is the epoch difference between the two observations, N the sample size, nB the number of detected bursts in that sample, and I the desired quantity. This simple relation serves as a useful starting point for the analysis; for a more accurate statistical evaluation we will use Monte-Carlo simulations (Sect. 5). It is clear that maximum information can be gained by maximising the sample size and the epoch difference. For our study the epoch difference is fixed, i.e. the key is to make the sample as large as possible. For example, with ∆t = 5 yr, we need in the order of 1000 objects to have a substantial chance of detecting at least one event, if the interval between bursts is 5000 yr. Based on the expected intervals, we thus need to cover several thousands of young stars to be able to provide useful limits.
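The crude estimator and the sample sizes it implies can be sketched directly. The Bernoulli detection probability below is our own simplification, not the Monte-Carlo treatment deferred to Sect. 5:

```python
def expected_bursts(N, dt, interval):
    """Expected number of bursts caught when comparing two epochs
    separated by dt years for a sample of N YSOs, if each object
    bursts once every `interval` years on average (and bursts rise
    faster and decline slower than dt)."""
    return N * dt / interval

def p_at_least_one(N, dt, interval):
    """Probability of catching at least one burst, treating each
    object as an independent Bernoulli trial with success
    probability dt/interval."""
    return 1.0 - (1.0 - dt / interval) ** N

# With dt = 5 yr and a burst interval of 5000 yr, a sample of ~1000
# objects already gives a substantial chance of one detection:
print(expected_bursts(1000, 5.0, 5000.0))           # 1.0
print(round(p_at_least_one(1000, 5.0, 5000.0), 2))  # 0.63
```

For the combined sample of ~8000 objects the same functions give the expected yield for any trial value of the interval I.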
In the literature the quantity that is often used to describe episodic accretion is the 'duty cycle', i.e. the fraction of time a YSO spends in the FU Ori state. Measuring the duty cycles requires knowledge of the duration of accretion bursts. Since the slow decline is much more difficult to constrain from direct observations than the fast rise of an accretion burst, we focus here on the burst interval rather than the duty cycle.
We note that with our approach we do not make an attempt to distinguish between the various types of accretion bursts presented in the literature, with FU Oris as the most extreme examples and EXors as smaller events (see Reipurth & Aspin 2010). We are simply interested in any type of eruptive event in a YSO, which could be due to an increase in mass accretion rate.
Flux increase during accretion bursts
Objects undergoing an accretion burst manifest themselves as eruptive variables with strongly increased luminosities at all optical and infrared wavelengths. Assuming that all the gravitational energy from infalling material is converted to radiation, the additional luminosity from an accretion rate of 10^−6 M⊙ yr^−1 exceeds the solar luminosity by more than one order of magnitude (factor 15, assuming a star with M = 1 M⊙ and R = 2 R⊙). To evaluate how this additional energy is distributed across the spectrum, we used the Monte Carlo radiative transfer models discussed in detail in Scholz et al. (2006) (see also Robitaille et al. (2006) for more information). In short, the code is based on the following assumptions: 1) NextGen stellar atmospheres are used for the photospheric spectrum; 2) the grain size distribution in the disk follows a power law with an exponential decay for particles with sizes above 50 µm and a formal cutoff at 1 mm; 3) dust in regions close to the star is destroyed if the temperature is above the dust sublimation threshold; 4) the scaleheight of the disk increases with radius following h(r) = h0 (r/R∗)^β; 5) the accretion luminosity is split between disk and star, where the stellar part is distributed evenly over the stellar surface (i.e. no hot spots).
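The "factor 15" quoted above follows from the standard accretion-luminosity relation L_acc = G M Ṁ / R (all infall energy radiated, no boundary-layer corrections). A quick numerical check in CGS units:

```python
# CGS constants
G    = 6.674e-8   # cm^3 g^-1 s^-2
Msun = 1.989e33   # g
Rsun = 6.957e10   # cm
Lsun = 3.828e33   # erg s^-1
yr   = 3.156e7    # s

def l_acc(mdot_msun_yr, m_star=1.0, r_star=2.0):
    """Accretion luminosity L = G M Mdot / R in units of Lsun,
    assuming all infall energy is radiated. Defaults match the
    1 Msun, 2 Rsun star used in the text."""
    mdot = mdot_msun_yr * Msun / yr          # g s^-1
    return G * (m_star * Msun) * mdot / (r_star * Rsun) / Lsun

# An accretion rate of 1e-6 Msun/yr onto a 1 Msun, 2 Rsun star:
print(round(l_acc(1e-6)))   # 16, i.e. roughly the factor 15 quoted
```

The relation is linear in the accretion rate, so 10^−5 M⊙ yr^−1 gives another factor of ten.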
For the purposes of this paper, we do not aim to explore in detail the parameter space; instead we want to find a typical value for the flux increase in a given wavelength domain as a function of accretion rate. We also neglect the fact that a strong increase in the mass accretion rate will affect the structure of the disk. In Fig. 1 we show model SEDs for a prototypical Class I source (stellar mass 0.5 M⊙, disk mass 0.1 M⊙, envelope mass 2.0 M⊙) and for a prototypical Class II source (stellar mass 0.5 M⊙, disk mass 0.01 M⊙, no envelope), for a range of accretion rates from 0.0 to 10^−5 M⊙ yr^−1. These figures illustrate that the flux increase at 2-5 µm compared with the zero accretion case is around one order of magnitude for accretion rates of 10^−6 M⊙ yr^−1 or larger. Fig. 1 warrants two additional comments: 1) At all accretion rates, the amplitude is substantially larger for the Class II prototype. This is caused by the presence of an envelope in the Class I system, combined with its high photospheric luminosity, which is due to the inflated radius of the central source. The additional infrared flux from the envelope, heated by a brighter central source, 'drowns' the contribution from the accretion, i.e. the relative flux increase due to accretion is smaller than in the Class II stage. 2) The model with the lowest fluxes corresponds to a (theoretical) accretion rate of zero. In practice, this model is indistinguishable from models with 10^−9 M⊙ yr^−1 or lower, values which are frequently measured for T Tauri stars (Natta et al. 2006). This illustrates that for most T Tauri stars accretion does not contribute significantly to the mid-infrared flux.
This exercise suggests that Class I and II sources whose accretion rate increases to 10^−6 M⊙ yr^−1 or more are expected to increase in brightness by at least 2.5 mag at near/mid-infrared wavelengths. In contrast, the typical, short-term, near/mid-infrared variations in large samples of YSOs, due to rotation, hot spots, and inner disk inhomogeneities, are in the range of 0.1-0.6 mag (Morales-Calderón et al. 2011; Flaherty et al. 2012). In our Spitzer-WISE comparison we will therefore adopt a cutoff of 1.0 mag to select burst candidates. On one hand, this should avoid most of the other types of variability in these sources; on the other hand it should also select eruptions where the two epochs of photometry do not catch the maximum and minimum.
IDENTIFICATION OF BURST CANDIDATES
In the following section, we will discuss the selection of possible eruptive variables and thus burst candidates from archival Spitzer and WISE photometry, as well as the followup observations and their results.
The C2D catalogue
The 'Cores to Disks' (C2D) Spitzer legacy program has provided a catalogue of YSO candidates for nearby molecular clouds and small cores, identified using near-and mid-infrared colour criteria (Evans et al. 2009). In total, the catalogue comprises 1478 sources from the subsamples CLOUDS, OFF-CLOUD, CORES, and STARS. We obtained this list from IPAC and searched for matches in the WISE all-sky catalogue (Wright et al. 2010). For 1323 objects a match was found within 2", for the overwhelming majority of them the distance between Spitzer and WISE coordinates is well below 1". 1301 of these objects have a robust detection in the Spitzer and WISE channels at 3.6 and 4.5 µm (signal-to-noise ratio > 5 for WISE, error < 20% for Spitzer). 1296 of these are also robustly detected in the Spitzer channels at 5.8 and 8.0 µm. In Fig. 2 (left panel) we show the IRAC colour-colour plot for this sample. Two cumulations, around the origin and right of the origin of the diagram, are clearly seen and can be identified as the locus of the Class III (no disk) and II sources. The sample contains 115 objects with typical Class III colours (around the origin) and 324 with typical Class II colours (right of the origin). 249 objects are above the Class II box, which makes them good candidates for embedded Class I sources. The remaining sources are scattered around these areas. According to Evans et al. (2009), about one third of the CLOUDS subsample are in the early embedded stage (Class I or Flat).
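The positional matching used here (nearest WISE counterpart within 2 arcsec) can be sketched as follows. In practice one would use a library routine such as astropy's catalogue matching; the brute-force, small-angle version below is only for illustration, and the coordinates at the end are invented toy values:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec between two positions in degrees,
    using the small-angle approximation with the RA difference
    scaled by cos(dec)."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

def match(cat_a, cat_b, radius=2.0):
    """Nearest-neighbour match of two [(ra, dec), ...] lists (deg)
    within `radius` arcsec. O(N*M) brute force, fine for ~10^3 rows.
    Returns (index_a, index_b, separation) tuples."""
    pairs = []
    for i, (ra, dec) in enumerate(cat_a):
        seps = [ang_sep_arcsec(ra, dec, r2, d2) for r2, d2 in cat_b]
        j = min(range(len(cat_b)), key=seps.__getitem__)
        if seps[j] <= radius:
            pairs.append((i, j, seps[j]))
    return pairs

# Toy example: one source with a counterpart ~0.5" away, one without
c2d  = [(246.70000, -24.40000), (246.80000, -24.50000)]
wise = [(246.70015, -24.40005), (250.00000, -20.00000)]
print(match(c2d, wise))   # only the first source matches
```

Quality cuts (signal-to-noise ratio > 5 for WISE, photometric error < 20% for Spitzer) would then be applied to the matched rows before any variability analysis.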
In addition, Fig. 2 (right panel) shows the (J, J-K) colour-magnitude diagram for the 1228 sources with 2MASS near-infrared photometry in the sample, to assess the properties of the central sources. Overplotted are the BCAH 1 Myr isochrones (Baraffe et al. 1998) which range from 0.02 to 1.4 M⊙, for distances of 150 pc and 300 pc, bracketing the regions covered by C2D, and for AV = 0, 10, and 20 mag. The comparison with the models illustrates that the sources cover the low-mass regime down to the substellar limit, including brown dwarfs at low extinctions, but only relatively few objects with M > 1.4 M⊙. About two thirds to three quarters of the sample have extinctions below AV = 10 mag.
In Fig. 3 (left panel) we illustrate the selection of variables from this sample. The differences in the magnitudes at 3.6 and 4.5 µm between C2D and WISE show a clear cumulation around (0,0), as expected, because these bandpasses of IRAC and WISE are comparable. The objects of interest to us are located in the upper right corner. 23 sources are more than 1 mag brighter in WISE compared with C2D in the two bands, providing evidence for a substantial increase in the brightness. These objects are also overplotted in Fig. 2, as far as they have the required photometry (22 in the left panel, 20 in the right panel). They do not show an obvious bias in the (J,J-K) diagram, but most of them are above the Class II locus in the IRAC colour-colour plot, indicating that they may be embedded Class I sources.
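The candidate cut described above, a brightening of more than 1 mag in both bands between the two epochs, reduces to a simple filter on the matched photometry. A minimal sketch with invented toy records (field names and values are illustrative; smaller magnitudes mean brighter):

```python
# Each record: (id, irac1, irac2, wise1, wise2) magnitudes for one
# matched source. Values below are made up for illustration.
sources = [
    ("ysoA", 12.3, 11.9, 11.1, 10.6),   # brightened by 1.2/1.3 mag
    ("ysoB", 10.5, 10.1, 10.4, 10.0),   # essentially constant
    ("ysoC", 13.0, 12.5, 11.8, 12.2),   # brightened in one band only
]

def burst_candidates(rows, cut=1.0):
    """Keep sources that brightened by more than `cut` mag in BOTH
    the 3.6 and 4.5 um channels between the Spitzer (IRAC) and WISE
    epochs."""
    return [sid for sid, i1, i2, w1, w2 in rows
            if (i1 - w1) > cut and (i2 - w2) > cut]

print(burst_candidates(sources))   # ['ysoA']
```

Requiring the brightening in both bands suppresses single-band artefacts (blends, bad pixels) such as "ysoC" above, which is why the survivors are then still inspected by eye in the images.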
All 23 highly variable sources were checked individually in the available images from WISE, Spitzer, and 2MASS. 5 of them are galaxies in 2MASS images and can be ruled out. For the remaining we obtained the C2D and WISE images at 3.6 and 4.5 µm and compared them. In at least 4 cases the flux increase in the WISE catalogue can be attributed to close neighbours that were not resolved with WISE, due to its significantly broader PSF (6" vs. 2", see Wright et al. (2010)). For 8 others, the IRAC photometry is affected by saturation. 4 more are extended objects in the IRAC images and could be part of a protostellar outflow. For the remaining 2, no obvious reason for the flux difference in the C2D and WISE catalogues can be identified, but the images clearly show that the object did not become significantly brighter. Thus, none of the candidates from the C2D sample classifies as a burst candidate.
The Cluster catalogue
The second sample is derived from the catalogue of YSOs in clusters within 1 kpc published by Gutermuth et al. (2009). The list of 2548 objects has been selected based on Spitzer/IRAC and MIPS data using mid-infrared colour cuts. It covers 36 nearby clusters, star forming clouds, and young groups, including some overlap with the regions covered in the C2D sample. We obtained the catalogue from Vizier and cross-matched with the WISE database. 1796 objects have a WISE match within 2", 1672 of them within 1". 1745 have a robust detection (criteria as above) in the 3.6 and 4.5 µm channels of Spitzer and WISE, 1587 of them with additional data in the J- and K-bands from 2MASS, 1642 of them with data in the two IRAC channels at 5.8 and 8.0 µm. Note that 380 objects from the Cluster sample are also contained in the C2D sample.
As for the C2D sample, we show the IRAC colour-colour plot and the (J, J-K) near-infrared colour-magnitude diagram for this sample in Fig. 4. In contrast to the C2D sample, the Cluster objects do not contain a significant fraction of Class III sources; the majority are classified as Class II. Based on our diagram, we estimate that at least 1226 out of 1642 are Class II (75%); the classification provided by Gutermuth et al. (2009) yields an even higher fraction of 86%. About 15-20% of the objects in this sample are Class I.
In the near-infrared plot we show the 1 Myr BCAH isochrones for distances of 200 and 800 pc, bracketing most of the objects in the sample, for three different extinctions. The plot demonstrates that the sample is dominated by low-mass stars at extinctions of A_V < 20 mag. The sample includes substellar objects, but only for the regions with distances < 500 pc and low extinctions. In general, the characteristics of this sample make it comparable to the C2D sample.

Fig. 3 (right panel) shows the variability in the Cluster sample. 24 objects fulfill our variability criterion and have an increased brightness by > 1.0 mag in the WISE catalogue in the two bands. One of these objects has already been identified in the C2D sample. As before, these burst candidates were checked in the available images. Seven of them have close, usually brighter neighbours, which may have caused an apparent brightness increase. For 7 others the brightness in WISE is probably affected by the surrounding nebulosity. Four more are saturated in the Spitzer/IRAC images. We are left with 5 candidates which appear to be brighter in the WISE images and remain burst candidates. One of them, ISO-Oph-50 in the star forming region ρ-Ophiuchus, has been suspected to be an outbursting young star by Alves de Oliveira & Casali (2008), due to a brightening by more than 1 mag over about a year, although it could also be a different type of variable (Alves de Oliveira et al. 2012); see discussion in Sect. 4.1.
Complementary samples similar to C2D and Cluster
We carried out the same test as above in three smaller samples of YSOs, gathered from the literature. According to Gutermuth et al. (2009), the Cluster sample covers all clusters within 1 kpc from the Lada & Lada (2003) census, with the exception of NGC2264 and the Orion Nebula Cluster. For NGC2264 there is a comprehensive catalogue of the Spitzer photometry available (Sung et al. 2009), which allows us to include it in this study. Out of the 490 cluster members identified by Hα photometry by Dahm & Simon (2005), 485 have a Spitzer counterpart within 3". Out of these, 355 have a WISE counterpart within 3" with robust photometry (defined as in the other samples). From this list, 5 objects have increased their brightness in the two mid-infrared bands by at least 1 mag. Two of them have little Hα emission (< 5Å) and very low IRAC colours (I1-I2 < 0.1), which rules out that they harbour a disk. Two sit very close to bright stars (or multiple stars) which contaminate their WISE fluxes. One has a nearby equally bright neighbour which is not resolved in WISE. To sum up, none of the likely members of NGC2264 is a burst candidate.
For the nearby star forming region Taurus, Rebull et al. (2010) published a census of previously confirmed members and new candidate members based on Spitzer photometry. Combining their list of known and new objects and excluding a few without IRAC photometry yields 328 objects, of which 236 have previously been known or have been classified by Rebull et al. (2010) as 'most believable'. From this sample of 328, 320 have robust photometry in the first two WISE bands. Only one of them is more than 1 mag brighter in the WISE photometry compared with the Spitzer magnitudes; this object, however, exhibits a 'halo' and is probably a galaxy.
Another new sample of YSOs from Spitzer data has been published for the various clusters in the North American and Pelican Nebulae (Rebull et al. 2011). Their total sample comprises 1286 IRAC- and MIPS-selected candidate YSOs, about half of them Class II. 1099 have reliable fluxes in the first two IRAC and WISE channels. From these, 935 objects have membership flag 'A' or 'B', i.e. are most likely YSOs (Rebull et al. 2011). This 'A+B' sample may still be affected by significant contamination by AGB stars, estimated to be between 5 and 25% (Rebull et al. 2011). Conservatively subtracting about 20% reduces the total sample size to about 700.
Four objects fulfill our variability criterion (flux increase by more than 1 mag). One of them appears to be extended in the Spitzer images (and has membership flag 'C'); for another one the WISE photometry is contaminated by several neighbours. The remaining two are isolated and clearly brighter in the WISE images and remain candidates. One of them is the recently identified outbursting star V2492 Cyg (Covey et al. 2011), with magnitude differences close to 3 mag at 3.6 and 4.5 µm. This object became brighter between December 2009 and June 2010 (Kóspál et al. 2011) and was observed by WISE between June and September 2010, i.e. just after the burst.
Note that this region harbours two more known outbursting stars. The recently identified FU Ori candidate V2493 Cyg (Kóspál et al. 2011) increased its brightness between May and August 2010 and was observed at the end of May with WISE. We find a flux increase of 0.5 and 0.8 mag in the mid-infrared channels at 3.6 and 4.5 µm, i.e. the Spitzer-WISE comparison may have captured the onset of the burst.
The well-known FU Ori star V1057 Cyg with an outburst in 1969 (Welin 1971) is not contained in the Rebull et al. (2011) catalogue, presumably due to saturation: The star is listed in the WISE catalogue with 4.9 mag at 3.6 µm and -0.3 mag at 22 µm, which is brighter than the upper limits in the colour-magnitude plots shown by Rebull et al. (2011).
All three additional samples discussed here show similar characteristics to the C2D and Cluster sample (similar mass range, similar extinction range, mostly Class II sources). Therefore it is legitimate to add them to the C2D and Cluster samples. In total, the sum of C2D, Cluster, NGC2264, Taurus, and North American/Pelican Nebulae, minus the objects which appear twice, comprises about 4000 objects, hereafter called sample A. This sample yields 7 candidate bursts, out of which 2 have been independently discovered elsewhere.
The Robitaille catalogue
Furthermore, we use the list of intrinsically red sources from Robitaille et al. (2008). This sample contains 18949 objects selected from the Glimpse I and II survey data (Benjamin et al. 2003; Churchwell et al. 2009). Robitaille et al. (2008) estimate that 50-70% of the objects are YSOs and 30-50% are AGB stars. The YSOs in the Robitaille list are expected to be more distant than 1 kpc and thus on average more massive than the sources covered in sample A. The Robitaille catalogue is in the following called sample B.
From the full sample we select only objects which have a detection at 3.6 and 4.5 µm in Glimpse and WISE, whereby the positions in the two surveys do not differ by more than one arcsecond. This leaves 12961 targets. To make the sample as 'clean' as possible, we only consider sources which are brighter than the completeness limit in this sample (11.5 mag at 3.6 µm, 11.0 mag at 4.5 µm) in both surveys. Here the completeness limit was determined as the peak in the 3.6 and 4.5 µm magnitude distribution.
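Taking the completeness limit as the peak of the magnitude distribution can be made explicit with a simple histogram; the bin width here is an assumption:

```python
from collections import Counter

def completeness_limit(mags, binwidth=0.5):
    """Estimate the completeness limit as the centre of the most populated
    magnitude bin, i.e. the peak of the magnitude distribution."""
    bins = Counter(int(m / binwidth) for m in mags)
    peak = max(bins, key=bins.get)
    return (peak + 0.5) * binwidth
```

Counts rise towards faint magnitudes until the survey starts missing sources, so the histogram peak is a convenient proxy for the completeness limit.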
One potential issue of this sample is the high stellar density in the Galactic plane. Since the WISE survey has a larger point spread function than Spitzer, the presence of bright neighbour stars can cause an apparent increase in the brightness, when the two surveys are compared. To account for that, we exclude all objects that have a nearby Glimpse source (within 6" of the Robitaille object) that is bright enough to cause an increase of more than 0.1 mag in either the 3.6 or 4.5 µm filter. This final sample contains 7101 objects, which are, as mentioned above, a mix of YSOs and AGB stars.
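The neighbour-rejection step can be made quantitative: a neighbour of magnitude m_n blended into a target of magnitude m_t brightens it by Δm = 2.5 log₁₀(1 + 10^(−0.4(m_n − m_t))). A sketch, where the 6" radius and 0.1 mag limit are from the text and the data layout is an assumption:

```python
import math

def contamination_mag(m_target, m_neighbour):
    """Apparent brightening (in mag) of the target if the neighbour's flux
    is fully blended into the target's photometry."""
    flux_ratio = 10 ** (-0.4 * (m_neighbour - m_target))
    return 2.5 * math.log10(1.0 + flux_ratio)

def keep_source(m_target, neighbours, radius=6.0, limit=0.1):
    """Reject the target if any neighbour within `radius` arcsec could
    brighten it by more than `limit` mag. neighbours: (sep_arcsec, mag)."""
    return all(sep > radius or contamination_mag(m_target, m) <= limit
               for sep, m in neighbours)
```

For example, a neighbour 2.5 mag fainter than the target (10% of its flux) already produces a shift of about 0.10 mag and thus triggers rejection.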
In Fig. 5 we show the usual IRAC colour-colour plot and (J,J-K) colour-magnitude diagram for a subsample of the Robitaille catalogue that is most likely to be dominated by YSOs (see Appendix A on the selection of this subsample). As in the other samples, most of the sources can be considered Class II based on their mid-infrared colours. As expected, the typical J −K colours are larger than in sample A, indicating higher extinction.
Out of these 7101 sources, there are 77 objects which increase their brightness by more than one magnitude at 3.6 and 4.5 µm (see Fig. 6) and are possible candidates for outbursting YSOs. 72 of them have a > 5σ detection in the two WISE bands. As for the other samples, we checked the Spitzer and WISE images for all these candidates. For the clear majority of them (60/77) it turns out that they have a neighbour star within 10", which likely affects the WISE photometry. This indicates that the 6" radius chosen in the preparation of the catalogue (see above) to account for the broad WISE PSF was not conservative enough. In addition, there are 7 objects within a nebulosity. Again, this might cause problems in the WISE photometry. In all these cases the image comparison excludes that the objects are in fact significantly brighter in the WISE survey. The remaining 10 sources remain good candidates and require further evaluation.
Glimpse provides for a fraction of the total area multiple epochs of Spitzer photometry with baselines up to 1 yr, particularly in the additional Glimpse-II survey. Based on this information, the Glimpse catalogues exclude variable sources for their final merged photometry. The Robitaille list, on the other hand, uses the Glimpse-II first epoch photometry, and thus attempts to exclude as few variables as possible (Robitaille et al. 2008), making this sample suitable for our purposes.
In Appendix A we provide an estimate of the contamination by AGB stars in the Robitaille catalogue, both in the entire sample and in the subsample of variable sources. Among the variable candidates, the contamination is negligible. The total sample of 7101 objects should contain about 3700-3800 YSOs; the remaining sources are probably AGBs.
FOLLOW-UP OBSERVATIONS
We summarise the results from the previous section and the selection of burst candidates in Table 1. As outlined above, about 130 of the objects in the samples considered here show the signature of a brightness eruption when comparing Spitzer and WISE photometry (listed as 'highly variable' in Table 1), but most of them are clearly spurious based on an inspection of the images. To verify our candidates, we re-observed a subset of them in the near-infrared. This was particularly important for objects which are confirmed to be brighter in WISE after visual inspection ('burst candidates'). If any of these sources is indeed a burst (as defined in Sect. 2), we expect it to be several magnitudes brighter in the near-infrared compared with 2MASS. In total, we observed 20 of the highly variable objects, including 13 out of 17 burst candidates. By design, these observations also covered some of the spurious detections, to double-check our rejection based on visual examination. We used the 1.3 m telescope at the Cerro Tololo Inter-American Observatory with the instrument Andicam, a double-channel camera which allows us to take optical and near-infrared images simultaneously. The follow-up observations were taken as part of the SMARTS collaboration in programs DUBLIN-11B-001 and DUBLIN-12A-001 (PI: A. Scholz). For all objects we obtained optical images in the R- and I-bands (3×120 sec exposures) and near-infrared images in the J- and either K- or H-band (5×30 sec in a 5-position dither pattern), but only the near-infrared images are used here, since most objects are embedded and hence invisible in the optical.
We carried out a standard image reduction, including sky subtraction and flatfielding, and aperture photometry. The near-infrared photometry was calibrated in comparison with 2-5 other stars in the images, which are listed in the 2MASS point-source catalogue. For about half of the objects the new photometry is consistent with the 2MASS values, i.e. the variation in the mid-infrared cannot be caused by a long-lasting eruptive event. Most of the remaining objects show variations of < 1 mag, which is too little to qualify as an accretion burst according to our criterion (see Sect. 2). We list these excluded objects in Table 2. In particular, our follow-up observations confirmed that none of the highly variable objects seen as spurious in the visual inspection was misclassified.
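The calibration against 2MASS comparison stars amounts to applying a zero-point offset. A minimal sketch, assuming a median offset is used (the actual pipeline may weight or sigma-clip the comparison stars):

```python
import statistics

def zero_point(instr_mags, ref_mags):
    """Zero point from comparison stars: median offset between catalogued
    (e.g. 2MASS) and instrumental magnitudes of the same stars."""
    return statistics.median(r - i for i, r in zip(instr_mags, ref_mags))

def calibrate(m_instr, instr_mags, ref_mags):
    """Calibrated magnitude of the target."""
    return m_instr + zero_point(instr_mags, ref_mags)
```

The median is robust against a single variable or misidentified comparison star, which matters with only 2-5 reference stars per field.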
From the 17 burst candidates, we observed 13 and rejected 10 of them (contained in Table 2). The remaining 7 are listed in Table 3. Two objects not previously known are confirmed by our SMARTS photometry as eruptive variables and are good burst candidates: 2MASS J16443712-4604017 (hereafter 2M1644-4604) and 2MASS J15111357-5902366 (hereafter 2M1511-5902). These two objects, together with the two previously identified possible burst objects ISO-Oph-50 and V2492 Cyg, are discussed in more detail in Sect. 4.1. Three objects remain unconfirmed because they are too far north to be observed from Cerro Tololo. Given that most of our candidates so far have been ruled out by follow-up observations, the likelihood that one of these three turns out to be a burst is fairly low.

Table 2. Highly variable objects found in this study by comparing Spitzer and WISE photometry and ruled out by SMARTS photometry. The offsets between WISE and Spitzer photometry are listed in columns 4 and 5; 2MASS photometry in columns 6 and 7. Our SMARTS photometry with the observing dates and the most likely reason for the photometry offset in the mid-infrared data are contained in columns 8 and 9.

ISO-Oph-50's behaviour is not comparable to that of typical stars undergoing accretion-related eruptions. EX Lupi, probably the best-studied YSO with short-term and recurring accretion bursts of EXor type, had 4 bursts in 9 years between 1995 and 2004, but all four differed in amplitude. Taken together, the bursts lasted about 1 year in total, i.e. ∼ 10% of the entire time (Herbig 2007). ISO-Oph-50 is much more often found in the bright state. Also, as noted by Alves de Oliveira et al. (2012), the object becomes bluer when fainter, which is not typical for accretion-related bursts. It can safely be concluded that this source is not an accretion burst, in particular not a FU Ori object. Apart from the variability, the most remarkable feature of ISO-Oph-50 is its low luminosity.
At the age and distance of the ρ-Oph star forming region, an M3 star would be expected to have an H-band magnitude of 8-10, i.e. even with A_V = 10 mag it would be brighter than H = 12 mag, whereas the object is never observed to be brighter than H = 13 mag. The luminosity of this source, estimated from the J-band magnitude, is log(L/L⊙) ∼ −2.56 (Alves de Oliveira, priv. comm.), which is more than two orders of magnitude too low for this spectral type. Given that and the colour trend in the variability, the variability is likely related to the disk. We speculate that the most likely cause for the variations is a rotating, inhomogeneous edge-on disk. Alves de Oliveira et al. (2012) come to a similar conclusion, but also invoke the presence of a (hypothetical) companion to explain the variations. Monitoring with simultaneous measurements in multiple bands and detailed modeling is needed to constrain the nature of this source.
Comments on specific objects
V2492 Cyg: This object was already known in the literature as an outbursting protostar, although it does not fit into the FU Ori category (Kóspál et al. 2011; Covey et al. 2011). It was confirmed by our Spitzer-WISE comparison. Its magnitude differences in the mid-infrared are almost 3 mag and very large compared with most of our other candidates. In optical and near-infrared bands Kóspál et al. (2011) report amplitudes of more than 5 mag.

2M1644-4604: As pointed out above, this object was identified as a new eruptive variable and possible accretion burst. The available photometry for the object is summarised in Table 5, including near-infrared data from the first data release of the VISTA/VVV survey (Saito et al. 2012). In Fig. 7 we show the spectral energy distribution pre- and post-burst, including our new datapoints from 2012. In near-infrared data from 2010-12 the source is much brighter than in 2MASS: more than 4 mag in J, more than 3 mag in H, and more than 2 mag in K. The near-infrared photometry indicates significant evolution from 2010 to 2012. In addition, the object has become more than 1 mag brighter at 3.6 and 4.5 µm. The WISE flux at 22 µm is slightly brighter than the 24 µm flux from Spitzer as well. The difference in magnitudes increases towards shorter wavelengths, i.e. the object became bluer during the burst. The near-infrared photometry indicates a position below the reddening path, i.e. it is indeed a likely YSO (see Appendix A). Spectroscopic follow-up observations are in preparation, to confirm its youth and to look for evidence of enhanced accretion.
2M1511-5902: This is the second possible new accretion burst identified in our survey. We summarise the available photometry, including our own follow-up, in Table 6. The spectral energy distribution is plotted in Fig. 7. Comparing pre-2010 with 2010 datapoints, the object is 1.5 mag brighter in the K-band, 1.1 mag at 3.6 µm and 1.3 mag at 4.5 µm. This trend is also seen at 22-24 µm. Between 2010 and 2012 the changes are marginal, i.e. the brightening appears to be persistent. Similar to 2M1644-4604, the colours indicate that this is indeed a YSO, but more follow-up observations are needed to confirm the nature of the source and to make sure that the brightening is indeed due to enhanced accretion.
THE STATISTICS OF ACCRETION BURSTS
In this paper we have systematically searched for eruptive variables that may be accretion bursts fulfilling specific conditions outlined in Sect. 2.2. We find 1 known accretion burst and three more possible bursts in sample A and 2 probable bursts in sample B. In the following sections we will use this result to derive constraints on the typical interval between bursts and compare with other constraints from theory and observations. We will treat samples A and B separately, because they are significantly different in terms of the typical ranges of stellar masses: while sample A is dominated by low-mass stars with masses around or below 1 M⊙, objects in sample B are much further away and will therefore have on average masses higher than 1 M⊙.
Statistical estimate of the burst frequency
For one burst out of 4000 stars and an epoch difference of 5 yr, a crude estimate following the arguments given in Sect. 2.2 gives a burst interval of 20000 yr. To obtain confidence intervals for this number, we implemented simple Monte-Carlo simulations: for a given burst interval, we calculated the probability that a star experiences a burst over a given epoch difference. For each star we then obtain a random number between 0 and 1 and count the ones for which this number falls below the burst probability. This procedure was repeated over 10000 runs; we can then calculate the probability of finding a given number of bursts (in our case one). In Fig. 8 we show the results from this simulation when applied to the Spitzer-WISE comparison. For an epoch difference of 5 yr and a sample size of 4000 stars, the detection of one burst implies that we can rule out a burst interval below 20000 yr with 95% confidence. The upper limit is not well defined due to the poor statistics. For two bursts, the 95% lower limit drops to around 10000 yr, for 4 bursts to 3000 yr. As noted above, 4 bursts is the most conservative upper limit we derive from our survey. Thus, from the Spitzer-WISE comparison we can derive a robust lower limit for the burst interval in the range of 10⁴ yr.
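The Monte-Carlo procedure can be sketched as follows. The per-star burst probability baseline/interval is the simplest possible model (an assumption; the detection model in the paper may differ in detail), and the run count and seed are arbitrary:

```python
import random

def simulate_detections(n_stars, baseline_yr, interval_yr, n_runs=1000, seed=42):
    """For a hypothesised burst interval, draw the number of detected bursts
    per run: each star bursts within the epoch difference with probability
    baseline/interval (valid for baseline << interval)."""
    rng = random.Random(seed)
    p = baseline_yr / interval_yr
    return [sum(rng.random() < p for _ in range(n_stars)) for _ in range(n_runs)]

# 4000 stars, 5-yr baseline, tested interval 20000 yr => expected bursts = 1
counts = simulate_detections(4000, 5.0, 20000.0)
frac_le1 = sum(c <= 1 for c in counts) / len(counts)  # prob. of seeing <= 1 burst
```

With these parameters the expected number of detections is 4000 × 5/20000 = 1, and `frac_le1` estimates how often one burst or fewer would be observed; scanning over the interval yields the confidence limits.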
A similar type of simulation was used to derive an estimate of the burst frequency from the known FU Ori outbursts. Among the known FU Ori objects, 10 have an observed burst event, 9 of them between 1936 and 1999, the 10th probably before 1888 (Reipurth & Aspin 2010). Since most of these objects have been found serendipitously and outside systematic surveys, the choice of parameters (number of monitored stars N and time baseline t) for the simulation is not trivial. For a rough estimate we assume that optical surveys based on photographic observations had access to at most about 1000 young stars in the solar neighbourhood. We note that a few more possible FU Ori outbursts have been found over the past 3 years (Reipurth et al. 2012; Caratti o Garatti et al. 2011).
In Fig. 9 we show the probability to find 10 bursts as a function of interval. For N = 1000 and t = 100 yr the burst interval is in the range of 8000-12000 yr, with an upper limit at 22000 yr (95% confidence) and a lower limit around 5000 yr. Using t = 50 yr (maybe more plausible, given that only 2 events have been recorded prior to 1940) these numbers would be halved. On the other hand, doubling the sample size to 2000 stars would also double the estimated interval. Given the uncertainties in the choice of the parameters, we conclude that the known FU Ori events constrain the burst interval to 2000-50000 yr.
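Since each star's burst probability is small, the number of detections is well approximated by a Poisson distribution with mean λ = N·t/interval, which gives a quick analytic cross-check of the simulation (parameters as in the text):

```python
import math

def poisson_pmf(k, lam):
    """P(k events) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def p_n_bursts(n_bursts, n_stars, baseline_yr, interval_yr):
    """Probability of observing exactly n_bursts if detections are Poisson
    with mean n_stars * baseline / interval."""
    lam = n_stars * baseline_yr / interval_yr
    return poisson_pmf(n_bursts, lam)

# 10 observed bursts, ~1000 monitored stars, ~100 yr baseline:
p_at_10kyr = p_n_bursts(10, 1000, 100.0, 10000.0)   # lam = 10, near the peak
p_at_22kyr = p_n_bursts(10, 1000, 100.0, 22000.0)   # lam ~ 4.5, in the tail
```

The probability peaks where λ equals the observed number of bursts (interval ≈ N·t/10 = 10⁴ yr) and drops off towards longer intervals, matching the behaviour described for Fig. 9.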
Taking these numbers together, the interval between consecutive accretion bursts with a) a mass accretion rate increasing to 10⁻⁶ M⊙ yr⁻¹ or more, b) a rise time of < 5 yr and c) a decline time of > 5 yr is most likely in the range of 10⁴ yr, and with high confidence between 5000 and 50000 yr. We note that this is consistent with lower limits derived from near-infrared surveys of YSOs, which give intervals longer than 2000-3000 yr (Carpenter et al. 2001; Scholz 2012).
Comparison with constraints from outflows
FU Ori bursts are associated with strongly enhanced rates of mass accretion as well as enhanced mass outflow rates (e.g. Hartmann & Kenyon 1996). The properties of jets and outflows may therefore be connected with these events (Reipurth 1985). If this is the case, then the burst interval should be reflected in a corresponding time scale for jets and outflows. One possibility is that accretion bursts also power enhanced collimated ejection. This in turn would lead to the formation of new jet knots with separations of the order of the burst frequency. A second option is that accretion bursts either cause an enhanced mass outflow rate and thus trigger strong outflow activity, or destabilise the large-scale magnetic field and thus terminate an episode of collimated outflow activity, i.e. switch to a wide-angled wind.
Over the last decade the census of jets and outflows in nearby star forming regions has become more and more complete (Bally et al. 2007). Furthermore, there are now large-scale unbiased surveys to establish outflow properties along the Galactic Plane (e.g. Froebrich et al. 2011). Ioannidis & Froebrich (2012) have recently determined typical dynamical jet lifetimes and time gaps between emission knots for an unbiased sample of 130 jets and outflows from the Galactic Plane survey UWISH2. They find that the time gaps between emission knots are of the order of 10³ yr and the dynamical lifetimes are an order of magnitude larger, i.e. 10⁴ yr. Interestingly, this is in line with earlier estimates obtained for the molecular outflows of FU Ori stars (Evans et al. 1994).
As discussed in Sect. 5.1, our statistical limits from the Spitzer-WISE comparison as well as the constraint obtained from the known sample of FU Oris give ∼ 10⁴ yr for the burst interval. Values of the order of 10³ yr are highly unlikely. Thus, only the dynamical timescales of outflows can be identified with the timescale between consecutive accretion bursts, not the separation of outflow knots. This could mean that a strong burst triggers the formation of a new outflow or terminates the collimated outflow activity. The outflow knots then represent variations on shorter timescales.
One possible caveat in this comparison is that the large-scale outflows are mostly driven by sources in an early evolutionary stage, whereas our analysis is biased towards Class II sources. It remains to be explored whether the frequency of eruptive variables increases significantly in the Class I stage.
Comparison with constraints from protostellar luminosities
The statistics of the protostellar luminosities does not provide a direct constraint on the burst frequency, as defined in this paper, but can be used to estimate the duty cycle. The best observational limit for this number comes from the Spitzer C2D survey. Evans et al. (2009) estimate that for a specific model stars accrete half of their mass in ∼ 40000 yr, which corresponds to 7% of the Class I lifetime of ∼ 0.5 Myr.
On the other hand, by comparing the C2D dataset with models for episodic accretion driven by gravitational instabilities, Dunham & Vorobyov (2012) find that YSOs spend on average only 1.3% of their total time of ∼ 1 Myr in accretion bursts (0-12%), i.e. around 13000 yr. The protostellar lifetimes in these estimates (0.5 and 1.0 Myr) are comparable to the typical ages of the clusters and star forming regions covered in our analysis, i.e. a comparison with our results is valid. For that purpose, however, we need to assume a typical duration for the bursts. Assuming that the bursts occur over 0.5 Myr, the burst interval of 10⁴ yr is consistent with a duty cycle of 7% if the burst duration is of the order of 700 yr. For 1 Myr and a 1.3% duty cycle, the burst duration has to be 130 yr to be consistent with our interval. These values are plausible given the slow decline observed in the most extreme known FU Ori bursts. Thus, assuming burst durations of hundreds of years our constraint is consistent with the ones derived from protostellar luminosities.
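The duty-cycle arithmetic reduces to duration ≈ duty cycle × interval. A minimal check, with the lifetimes and duty cycles quoted above:

```python
def implied_burst_duration(lifetime_yr, duty_cycle, interval_yr):
    """Duration of a single burst implied by a duty cycle: with
    lifetime/interval bursts in total, each one lasts
    duty_cycle * lifetime / (lifetime / interval) = duty_cycle * interval."""
    n_bursts = lifetime_yr / interval_yr
    return duty_cycle * lifetime_yr / n_bursts

# Evans et al. (2009): 0.5 Myr lifetime, 7% duty cycle, 1e4 yr interval
d_evans = implied_burst_duration(5e5, 0.07, 1e4)    # ~700 yr
# Dunham & Vorobyov (2012): 1 Myr lifetime, 1.3% duty cycle
d_dv = implied_burst_duration(1e6, 0.013, 1e4)      # ~130 yr
```

Note that the lifetime cancels out, so the implied duration depends only on duty cycle and interval.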
Further discussion
While our estimate is robust for the assumptions given in Sect. 2, two additional caveats should be kept in mind when interpreting our findings. First, it is conceivable that episodic accretion does not affect all stars in the same way. Some of the bursts could be triggered by mechanisms that are not applicable to all known YSOs, for example, the presence of a companion or disk-planet interaction (see Sect. 1). This would imply that the frequency of bursts is strongly variable among protostars and it is not valid to extrapolate from the sample of known bursts.
Second, accretion bursts are often thought to occur mostly in the Class I stage of the protostellar evolution and less frequently in the Class II stage. This is supported by the finding that the known FU Ori-type objects tend to be more comparable to Class I objects in terms of their disk/envelope properties (Sandell & Weintraub 2001). Our samples include objects in these early stages - probably around one quarter to one third - but they are still dominated by the slightly older Class II objects. If we limited the statistical analysis to the Class I objects, the burst interval could be a factor of 3-4 lower than in our estimate. This, however, would conflict with the constraint from the known FU Oris. Therefore, we do not think that the burst interval will be significantly below 5000 yr, even if bursts only occur in the Class I stage.
Constraining the burst interval for a simple model of accretion bursts as shown in Sect. 2.2 is only the first step in a characterisation of the accretion history of YSOs. Various arguments suggest that the accretion history is in fact more complex than the simple model that is tested and constrained here (e.g. Offner & McKee 2011;Dunham & Vorobyov 2012). Indeed, as already acknowledged in Sect. 2.2, the known accretion bursts show a significant degree of diversity in rise time, decline time, and amplitude. Thus, the long-term goal should be to derive the frequency spectrum of bursts, and not only the interval. In addition, episodic accretion events may be combined with more gradual trends in the mass accretion rate that cannot be captured on timescales of years.
With only few epochs of photometry available for most of the YSOs in the solar neighbourhood, deriving direct observational constraints for these more complex scenarios is not feasible at the moment. With our approach, we only probe the contrast between the strong bursts and the quiescent phases. Substantial accretion rate variations in the quiescent phases would mask the signals and prevent a detection. Long-term monitoring of large samples or follow-up on the variable objects below our threshold of 1.0 mag will yield more information about the presence and characteristics of additional variations in the accretion histories of YSOs. The observational record of accretion histories will become more complete with new time-domain surveys like Pan-Starrs, VISTA/VVV, Gaia, and ultimately LSST. Out of these four, however, only VVV operates in the infrared and has access to the embedded, strongly reddened populations of YSOs.
CONCLUSIONS
We have searched for eruptive variables among YSOs by comparing Spitzer and WISE photometry. In our first sample of ∼ 4000 nearby YSOs, we find one previously known outbursting protostar and three more possible variables with an eruption of > 1 mag at 3.6 and 4.5 µm. In a second sample of ∼ 4000 YSOs in the Galactic plane we find two new eruptive variables which may be outbursting protostars. Based on the statistics of these findings, we estimate that long-lasting, strong accretion bursts in protostars occur at intervals of ∼ 10⁴ yr, with high confidence between 5000 and 50000 yr. For this estimate we assume that additional variability is small compared with these events and that episodic accretion affects all stars in the same way. The estimate is consistent with constraints from protostellar luminosities. It is also comparable to the dynamical timescales of protostellar outflows, indicating that accretion bursts may be responsible for either triggering or terminating large-scale outflows.
ACKNOWLEDGMENTS
This publication makes use of data products from the Widefield Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. We also use data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work also makes use of observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Part of this work was funded by the Science Foundation Ireland through grant no. 10/RFP/AST2780.
APPENDIX A: THE ROBITAILLE CATALOGUE -SEPARATION OF YSOS AND AGB STARS
In order to estimate the fraction of AGB stars in the Robitaille sample, we started with our full sample of 7101 objects as defined in Sect. 3.4, and selected all objects with near-infrared colours from 2MASS in each band. Using the (H-K, J-H) colour-colour diagram for this subsample of 3709 objects, we can establish three distinct groups (see Fig. A1). Objects below the 'standard' reddening band (defined as J-H < -0.3 + 1.7 × (H-K), hereafter separator1) are clearly YSOs with K-band excess emission (hereafter YSO1a; 1389 objects). The group at the top of the reddening band (J-H > 0.1 + 1.7 × (H-K), hereafter separator2) are clearly AGB stars (hereafter AGB1a; 1269 objects). The objects in between are probably a mix of AGB stars and YSOs (hereafter YSO2a and AGB2a; 1051 objects).
We investigate whether there is a further way to separate these sources by plotting the K-band luminosity functions of YSO1a and AGB1a (see Fig. A2). While YSO1a shows a well defined peak between 11 and 13 mag in K, the AGB1a sample is brighter, mostly confined to 8-10 mag with a tail of fainter objects. The K-band luminosity function of AGB2a and YSO2a together (also shown in Fig. A2) is a clear mix of the other two, with two peaks. Based on this plot, we defined four 'clean' samples of objects:
(i) YSO1: below separator1, 11 < K < 13 (819 objects)
(ii) AGB1: above separator2, 8 < K < 10 (696 objects)
(iii) YSO2: between separator1 and separator2, 11 < K < 13 (428 objects)
(iv) AGB2: between separator1 and separator2, 8 < K < 10 (399 objects)
These objects are drawn from the 3709 objects which have a JHK detection as well as no 'bright' neighbours in Glimpse, as discussed above (Sect. 3.4). With the described magnitude cuts there are 2342 objects remaining, of which 1247 (53%) are YSOs and 1095 (47%) are AGB stars, in line with the percentages given in Robitaille et al. (2008). We obtain a very similar result using only the YSO1a and AGB1a samples without K-band magnitude cuts (1389 YSOs vs. 1269 AGBs, i.e. 52.2% YSOs). We expect that the total sample of 7101 objects has a comparable YSO fraction of 52-53%, i.e. the sample contains 3700-3800 YSOs.
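For concreteness, the colour and magnitude cuts above can be written as a small classification routine. This is a sketch in our own notation: the function name and control flow are ours, while the separator definitions and K-band cuts are those quoted in the text.

```python
def classify(j, h, k):
    """Assign a 2MASS source to the YSO1/AGB1/YSO2/AGB2 'clean' samples
    from its J-H and H-K colours and K magnitude (cuts as in Appendix A)."""
    jh, hk = j - h, h - k
    below_sep1 = jh < -0.3 + 1.7 * hk   # below reddening band: K-band excess, YSO-like
    above_sep2 = jh > 0.1 + 1.7 * hk    # above reddening band: AGB-like
    yso_mag = 11.0 < k < 13.0           # K-band peak of the YSO1a sample
    agb_mag = 8.0 < k < 10.0            # K-band peak of the AGB1a sample
    if below_sep1 and yso_mag:
        return "YSO1"
    if above_sep2 and agb_mag:
        return "AGB1"
    if not below_sep1 and not above_sep2:   # inside the reddening band
        if yso_mag:
            return "YSO2"
        if agb_mag:
            return "AGB2"
    return None  # outside the 'clean' samples
```

Applying such a routine to the 3709 JHK-detected sources reproduces the sample sizes listed above.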
The contamination among the burst candidates is significantly lower. In the subsamples YSO1, YSO2, AGB1 and AGB2, as defined above, there are 27 burst candidates with a brightness increase of > 1.0 mag in the 3.6 and 4.5 µm bands; 17 of them are from subsample YSO1 and 10 from YSO2, none from the AGB samples. Despite making up almost half of the entire sample, there are no AGB stars which are highly variable. Thus, it is likely that the fraction of AGB stars among the 77 burst candidates in the total sample is negligible. More quantitatively, we expect that the upper limit for the fraction of AGB stars among burst candidates is 4% (1/27), which corresponds to 3 objects in the sample of 77 burst candidates.
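The scaling behind the quoted upper limit is a one-line computation; the sketch below only reproduces that arithmetic, reading "0 AGB bursts out of 27 classified candidates" as "at most 1 in 27", as in the text.

```python
def agb_contamination_upper_limit(n_clean=27, n_total=77):
    """Upper limit on AGB contamination among burst candidates:
    zero AGB bursts among n_clean classified candidates is treated as
    'at most 1 in n_clean', then scaled to the full candidate sample."""
    frac = 1.0 / n_clean               # upper-limit fraction, ~4 per cent
    return frac, round(frac * n_total) # expected number in the full sample
```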
Figure 1. Spectral energy distributions from radiative transfer modeling for prototypical YSOs with varying accretion rates. Left panel: 'Class I prototype' with disk and massive envelope. Right panel: 'Class II prototype' with disk only. For each prototype, 5 SEDs are shown for accretion rates of 10^-5, 10^-6, 10^-7, 10^-8 and 0.0 M⊙ yr^-1 (from top to bottom). The dashed line is the photospheric SED.

Figure 2. Colours of objects in the C2D sample. Burst candidates are marked with large red squares. Left panel: IRAC colour-colour plot for all objects with robust detections in all 4 IRAC channels (1296 out of the total sample of 1301). The Class III and Class II loci are shown as dotted blue boxes around the origin and right of the origin. Right panel: near-infrared colour-magnitude diagram for the subsample with 2MASS photometry (1228 objects). BCAH isochrones for an age of 1 Myr, distances of 150 and 300 pc, and extinctions of A_V = 0, 10, 20 mag are overplotted.

Figure 3. Variability in the C2D sample (left) and the Cluster sample (right). The variations are calculated as the difference between C2D magnitudes and WISE magnitudes, i.e. positive values indicate a brightening. Objects in the upper right corner (large symbols) show a brightness increase by more than 1 mag in the two mid-infrared bands.

Figure 4. Colours of objects in the Cluster sample. Burst candidates are marked with large symbols. Left panel: IRAC colour-colour plot for all objects with robust detections in all 4 IRAC channels (1642 out of the total sample of 1745). The Class III and Class II loci are shown as dotted blue boxes around the origin and right of the origin. Right panel: near-infrared colour-magnitude diagram for the subsample with 2MASS photometry (1587 objects). BCAH isochrones for an age of 1 Myr, distances of 200 and 800 pc, and extinctions of A_V = 0, 10, 20 mag are overplotted.

Figure 6. Variability in the Robitaille sample (only subsamples YSO1 and YSO2, see Appendix A). The variations are calculated as the difference between C2D magnitudes and WISE magnitudes, i.e. positive values indicate a brightening. Objects in the upper right corner (large symbols) show a brightness increase by more than 1 mag in the two mid-infrared bands.

Figure 5. Colours of objects in the Robitaille sample (only subsamples YSO1 and YSO2, see Appendix A). Burst candidates are marked with large symbols. Left panel: IRAC colour-colour plot; the Class III and Class II loci are shown as blue dotted boxes around the origin and right of the origin. Right panel: near-infrared colour-magnitude diagram for the subsample with 2MASS photometry.
Figure 8. Monte-Carlo simulations of burst statistics for the Spitzer-WISE comparison. Probability to find 1 burst as a function of burst interval for a total sample of 4000 stars and an epoch difference of 5 yr. The 95% lower limit is indicated by the dotted line.

Figure 7. Spectral energy distributions for the two newly identified eruptive YSO candidates, 2M1644-4604 (left panel) and 2M1511-5902 (right panel). Blue datapoints show photometry pre-2010 from 2MASS and Spitzer. Green datapoints show the 2010 data from VVV and WISE. Red datapoints are from our own observations in 2012. The photometric errorbars are < 10% and thus small compared with the size of the symbols. Note that the datapoints plotted in blue are not from the same year (see Tables 5 and 6 for the complete list of epochs).

Figure 9. Monte-Carlo simulations of burst statistics for the known FU Ori events. Probability to find 10 bursts as a function of burst interval assuming a total sample size of 1000 and an epoch difference of 100 yr. The 95% upper limit is indicated by the dotted line.
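The statistics behind the Monte-Carlo burst-interval constraints (Figs. 8 and 9) can be reproduced in outline as follows. We assume each star bursts independently with probability Δt/τ during a baseline of Δt years, for a mean burst interval τ; this simple recurrence model and all parameter choices are our own sketch, not the paper's exact simulation.

```python
import random

def p_at_least_one_burst(n_stars, baseline_yr, interval_yr,
                         n_trials=400, seed=1):
    """Monte-Carlo probability that >= 1 of n_stars bursts within
    baseline_yr, if bursts recur with mean interval interval_yr."""
    rng = random.Random(seed)
    p_star = baseline_yr / interval_yr       # per-star burst probability
    hits = sum(
        any(rng.random() < p_star for _ in range(n_stars))
        for _ in range(n_trials)
    )
    return hits / n_trials

def p_analytic(n_stars, baseline_yr, interval_yr):
    """Closed form of the same model: 1 - (1 - p)^N."""
    return 1.0 - (1.0 - baseline_yr / interval_yr) ** n_stars
```

Under this toy model, demanding a 95% chance of at least one detection with 4000 stars over 5 yr forces the mean burst interval below roughly 4000 × 5 / ln 20 ≈ 6700 yr, the kind of lower limit marked by the dotted line in Fig. 8.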
Figure A1. Near-infrared J-H vs H-K colour-colour diagram of the 3709 objects in the Robitaille sample with JHK detection. The two solid lines indicate the two separators used to identify YSOs (below the bottom line) and AGB stars (above the top line). See text for more information.

Figure A2. K-band histogram of the JHK-detected objects in the Robitaille sample. The red dotted line is the YSO1 sample, the blue dashed line the AGB1 sample and the solid black line the YSO2+AGB2 sample.
Table 1. Summary of samples used in this paper

Sample                          No.
C2D total (Sect. 3.1)           1478
- with WISE                     1301
- highly variable               23
- burst candidates              0
Cluster total (Sect. 3.2)       2548
- with WISE                     1745
- highly variable               24
- burst candidates              5
Complementary (Sect. 3.3)
- NGC2264 with WISE             355
- Taurus with WISE              320
- NaP with WISE                 935
- highly variable               10
- burst candidates              2
Robitaille (Sect. 3.4)          18949
- cleaned, with WISE            7101
- highly variable               77
- burst candidates              10
Table 3. Burst candidates found in this study by comparing Spitzer and WISE photometry. The primary criterion is a brightness increase of > 1.0 mag in the 3.6 and 4.5 µm bands.

ISO-Oph-50: As pointed out in Sect. 3.2, one of the candidates from the Cluster sample, ISO-Oph-50 (or CFHTWIR-Oph 30), was previously suspected to be an outbursting YSO (Alves de Oliveira & Casali 2008), maybe of EXor type. Alves de Oliveira et al. (2012) measure an optical spectral type of M3.25 for this object. In Table 4 we list the available photometry in the H-band (the band with the most measurements) for this object, including a new value obtained from our SMARTS imaging on August 8 2012. Out of 6 epochs, 3 are around H = 14 mag, while the others are around H = 16 mag. In addition, there is evidence for significant variability on short timescales of days and weeks (Alves de Oliveira & Casali 2008).
Table 4. H-band photometry for ISO-Oph-50

Epoch      H (mag)     Comments
1993-94    13.93       Barsony et al. (1997)
Apr 1999   16.01       2MASS
Apr 2005   15.91       UKIDSS/GCS
May 2005   15.9        Alves de Oliveira et al. (2008)
Jun 2006   13.3-14.7   Alves de Oliveira et al. (2008)
Aug 2012   14.1        SMARTS (also, J ∼ 16.4 mag)
Table 5. Photometry for 2M1644-4604

Epoch           Band     Magnitude   Comments
1999-05-20      J        >17.34      2MASS
1999-05-20      H        >16.06      2MASS
1999-05-20      Ks       13.75       2MASS
2010-05-09      J        13.32       VVV
2010-05-09      H        12.67       VVV
2010-08-18      Ks       11.49       VVV
2012-07-28      J        13.58       SMARTS
2012-07-28      H        11.71       SMARTS
2012-09-15      J        13.53       SMARTS
2012-09-15      Ks       10.42       SMARTS
2004-09-05      3.6 µm   10.74       Glimpse
2004-09-05      4.5 µm   10.11       Glimpse
2004-09-05      5.8 µm   9.27        Glimpse
2004-09-05      8.0 µm   8.80        Glimpse
2006-10-03      24 µm    3.29        Robitaille et al. (2008)
2010-06-02 (1)  3.6 µm   9.548       WISE
2010-06-02 (1)  4.5 µm   8.644       WISE
2010-06-02 (1)  12 µm    6.925       WISE
2010-06-02 (1)  22 µm    3.071       WISE

(1) Several epochs from 2010-06-02 to 2010-06-05
Table 6. Photometry for 2M1511-5902

Epoch           Band     Magnitude   Comments
1999-07-07      J        >17.59      2MASS
1999-05-07      H        >16.03      2MASS
1999-07-07      Ks       12.54      2MASS
2010-04-11      J        16.64       VVV
2010-04-11      H        13.39       VVV
2010-08-14      Ks       11.09       VVV
2012-08-14      J        16.76       SMARTS
2012-08-14      Ks       10.86       SMARTS
2012-09-12      Ks       11.16       SMARTS
2004-03-12      3.6 µm   9.21        Glimpse
2004-03-12      4.5 µm   7.93        Glimpse
2004-03-12      5.8 µm   6.91        Glimpse
2004-03-12      8.0 µm   6.10        Glimpse
2006-04-11      24 µm    4.41        Robitaille et al. (2008)
2010-02-21 (1)  3.6 µm   8.148       WISE
2010-02-21 (1)  4.5 µm   6.578       WISE
2010-02-21 (1)  12 µm    4.811       WISE
2010-02-21 (1)  22 µm    3.761       WISE

(1) Several epochs from 2010-02-21 to 2010-02-23
© 2002 RAS, MNRAS 000, 1-14
Alves de Oliveira C., Casali M., 2008, A&A, 485, 155
Alves de Oliveira C., Moraux E., Bouvier J., Bouy H., 2012, A&A, 539, A151
Armitage P. J., Livio M., Pringle J. E., 2001, MNRAS, 324, 705
Bally J., Reipurth B., Davis C. J., 2007, Protostars and Planets V, pp 215-230
Baraffe I., Chabrier G., Allard F., Hauschildt P. H., 1998, A&A, 337, 403
Barsony M., Kenyon S. J., Lada E. A., Teuben P. J., 1997, ApJS, 112, 109
Bell K. R., Lin D. N. C., 1994, ApJ, 427, 987
Benjamin R. A., Churchwell E., Babler B. L., Bania T. M., Clemens D. P., Cohen M., Dickey J. M., Indebetouw R., Jackson J. M., Kobulnicky H. A., et al., 2003, PASP, 115, 953
Boley A. C., Durisen R. H., 2008, ApJ, 685, 1193
Bonnell I., Bastien P., 1992, ApJ, 401, L31
Caratti o Garatti A., Garcia Lopez R., Scholz A., Giannini T., Eislöffel J., Nisini B., Massi F., Antoniucci S., Ray T. P., 2011, A&A, 526, L1+
Carpenter J. M., Hillenbrand L. A., Skrutskie M. F., 2001, AJ, 121, 3160
Churchwell E., Babler B. L., Meade M. R., Whitney B. A., Benjamin R., Indebetouw R., Cyganowski C., Robitaille T. P., Povich M., Watson C., Bracker S., 2009, PASP, 121, 213
Clarke C., Lodato G., Melnikov S. Y., Ibrahimov M. A., 2005, MNRAS, 361, 942
Covey K. R., Hillenbrand L. A., Miller A. A., Poznanski D., Cenko S. B., Silverman J. M., Bloom J. S., Kasliwal M. M., Fischer W., Rayner J., Rebull L. M., Butler N. R., Filippenko A. V., Law N. M., Ofek E. O., Agüeros M., Dekany R. G., et al., 2011, AJ, 141, 40
Dahm S. E., Simon T., 2005, AJ, 129, 829
Dunham M. M., Vorobyov E. I., 2012, ApJ, 747, 52
Evans N. J., Dunham M. M., Jørgensen J. K., Enoch M. L., Merín B., van Dishoeck E. F., Alcalá J. M., Myers P. C., Stapelfeldt K. R., et al., 2009, ApJS, 181, 321
Evans II N. J., Balkum S., Levreault R. M., Hartmann L., Kenyon S., 1994, ApJ, 424, 793
Flaherty K. M., Muzerolle J., Rieke G., Gutermuth R., Balog Z., Herbst W., Megeath S. T., Kun M., 2012, ApJ, 748, 71
Forgan D., Rice K., 2010, MNRAS, 402, 1349
Froebrich D., Davis C. J., Ioannidis G., Gledhill T. M., Takami M., Chrysostomou A., Drew J., Eislöffel J., Gosling A., Gredel R., Hatchell J., Hodapp K. W., Kumar M. S. N., Lucas P. W., Matthews H., et al., 2011, MNRAS, 413, 480
Gutermuth R. A., Megeath S. T., Myers P. C., Allen L. E., Pipher J. L., Fazio G. G., 2009, ApJS, 184, 18
Hartmann L., Kenyon S. J., 1996, ARA&A, 34, 207
Herbig G. H., 1977, ApJ, 217, 693
Herbig G. H., 2007, AJ, 133, 2679
Ioannidis G., Froebrich D., 2012, ArXiv e-prints
Kóspál Á., Ábrahám P., Acosta-Pulido J. A., Arévalo Morales M. J., Carnerero M. I., Elek E., Kelemen J., Kun M., Pál A., Szakáts R., Vida K., 2011, A&A, 527, A133
Lada C. J., Lada E. A., 2003, ARA&A, 41, 57
Lodato G., Clarke C. J., 2004, MNRAS, 353, 841
Martin R. G., Lubow S. H., 2011, ApJ, 740, L6
Miller A. A., Hillenbrand L. A., Covey K. R., Poznanski D., Silverman J. M., Kleiser I. K. W., Rojas-Ayala B., Muirhead P. S., Cenko S. B., Bloom J. S., Kasliwal M. M., Filippenko A. V., Law N. M., Ofek E. O., Dekany R. G., Rahmer G., et al., 2011, ApJ, 730, 80
Morales-Calderón M., Stauffer J. R., Hillenbrand L. A., Gutermuth R., Song I., Rebull L. M., Plavchan P., Carpenter J. M., Whitney B. A., Covey K., Alves de Oliveira C., Winston E., McCaughrean M. J., et al., 2011, ApJ, 733, 50
Natta A., Testi L., Randich S., 2006, A&A, 452, 245
Offner S. S. R., McKee C. F., 2011, ApJ, 736, 53
Pfalzner S., 2008, A&A, 492, 735
Rebull L. M., Guieu S., Stauffer J. R., Hillenbrand L. A., Noriega-Crespo A., Stapelfeldt K. R., Carey S. J., Carpenter J. M., Cole D. M., Padgett D. L., Strom S. E., Wolff S. C., 2011, ApJS, 193, 25
Rebull L. M., Padgett D. L., McCabe C.-E., Hillenbrand L. A., Stapelfeldt K. R., Noriega-Crespo A., Carey S. J., Brooke T., Huard T., Terebey S., Audard M., Monin J.-L., Fukagawa M., Güdel M., et al., 2010, ApJS, 186, 259
Reipurth B., 1985, A&A, 143, 435
Reipurth B., Aspin C., 2010, in Harutyunian H. A., Mickaelian A. M., Terzian Y., eds, Evolution of Cosmic Objects through their Physical Activity, FUors and Early Stellar Evolution, pp 19-38
Reipurth B., Aspin C., Herbig G. H., 2012, ApJ, 748, L5
Robitaille T. P., Meade M. R., Babler B. L., Whitney B. A., Johnston K. G., Indebetouw R., Cohen M., Povich M. S., Sewilo M., Benjamin R. A., Churchwell E., 2008, AJ, 136, 2413
Robitaille T. P., Whitney B. A., Indebetouw R., Wood K., Denzmore P., 2006, ApJS, 167, 256
Saito R. K., Hempel M., Minniti D., Lucas P. W., Rejkuba M., Toledo I., Gonzalez O. A., Alonso-García J., Irwin M. J., Gonzalez-Solares E., Hodgkin S. T., Lewis J. R., Cross N., Ivanov V. D., Kerins E., et al., 2012, A&A, 537, A107
Sandell G., Weintraub D. A., 2001, ApJS, 134, 115
Scholz A., 2012, MNRAS, 420, 1495
Scholz A., Jayawardhana R., Wood K., 2006, ApJ, 645, 1498
Stamatellos D., Whitworth A. P., Hubber D. A., 2011, ApJ, 730, 32
Sung H., Stauffer J. R., Bessell M. S., 2009, AJ, 138, 1116
Vorobyov E. I., Basu S., 2005, ApJ, 633, L137
Welin G., 1971, A&A, 12, 312
Wright E. L., Eisenhardt P. R. M., Mainzer A. K., Ressler M. E., Cutri R. M., Jarrett T., Kirkpatrick J. D., Padgett D., McMillan R. S., Skrutskie M., Stanford S. A., Cohen M., Walker R. G., Mather J. C., et al., 2010, AJ, 140, 1868
Zhu Z., Hartmann L., Gammie C., McKinney J. C., 2009, ApJ, 701, 620
One dimensional reflected BSDEs with two barriers under logarithmic growth and applications
10 Feb 2022
Brahim El Asri
Khalid Oufdil
Nacer Ourkiya
arXiv:2202.04940v1 [math.PR] 10 Feb 2022
Keywords: Reflected BSDEs; Mixed zero-sum stochastic differential game; Penalization; Viscosity solution
AMS Subject Classifications (2010): 91A60, 91A15, 60H10, 60H30
In this paper we deal with the problem of the existence and the uniqueness of a solution for one dimensional reflected backward stochastic differential equations with two strictly separated barriers when the generator allows a logarithmic growth (|y| |ln|y|| + |z| √(|ln|z||)) in the state variables y and z. The terminal value ξ and the obstacle processes (L_t)_{0≤t≤T} and (U_t)_{0≤t≤T} are L^p-integrable for a suitable p > 2. The main idea is to use the concept of local solution to construct the global one. As applications, we broaden the class of functions for which mixed zero-sum stochastic differential games admit an optimal strategy and the related double obstacle partial differential equation problem has a unique viscosity solution.
Introduction
In this paper we are concerned with the problem of the existence and uniqueness of a solution for one dimensional reflected backward stochastic differential equations (BSDEs for short) driven by a Brownian motion (B_t)_{t≤T}, with two continuous reflecting barriers L := (L_t)_{t≤T} and U := (U_t)_{t≤T}, and whose coefficient and terminal value are f and ξ respectively. That is, we want to show the existence of a unique quadruple (Y, Z, K^+, K^-) of F_t-adapted processes such that:

    Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds + (K^+_T − K^+_t) − (K^-_T − K^-_t) − ∫_t^T Z_s dB_s,  t ∈ [0, T];
    L_t ≤ Y_t ≤ U_t,  ∀t ∈ [0, T];
    ∫_0^T (Y_s − L_s) dK^+_s = ∫_0^T (U_s − Y_s) dK^-_s = 0.    (1.1)
In the framework of a Brownian filtration, the notion of BSDEs was first introduced by Pardoux and Peng [18]. Then in [11], El Karoui et al. introduced BSDEs with a lower obstacle L := (L_t)_{t≤T}, where the solution Y is required to stay above L; afterwards, Cvitanic and Karatzas [4] generalized these results to BSDEs with two barriers (upper and lower). These equations appear in many problems in finance, such as the model behind the Black and Scholes formula for the pricing and hedging of options, as well as in several other problems: optimal switching, stochastic games, non-linear PDEs, etc. (see [7,11,12,17] and the references therein). Many authors have attempted to improve the result of [4] and establish the existence and the uniqueness of the solution by weakening the Lipschitz property of the coefficient or the square integrability of the data (see [8] for the latter).
The main objective of this paper is to show the existence and the uniqueness of the solution for BSDEs with two reflecting barriers with a generator allowing a logarithmic growth in the state variables y and z:

    |f(t, ω, y, z)| ≤ |η_t| + c_0 |y| |ln|y|| + c_1 |z| √(|ln|z||),  ∀(t, ω, y, z) ∈ [0, T] × Ω × R × R^d,

with the terminal data ξ and the barriers being merely p-integrable (with p > 2). For example, let f(y) = −Ky ln|y|, and consider the following BSDE:

    Y_t = ξ + ∫_t^T f(Y_s) ds + (K^+_T − K^+_t) − (K^-_T − K^-_t) − ∫_t^T Z_s dB_s,  t ∈ [0, T];
    L_t ≤ Y_t ≤ U_t,  ∀t ∈ [0, T];
    ∫_0^T (Y_s − L_s) dK^+_s = ∫_0^T (U_s − Y_s) dK^-_s = 0.    (1.2)

The generator in (1.2) is neither locally monotone nor of sublinear growth in the y-variable; indeed, it grows faster than linearly in y. The logarithmic nonlinearity y ln|y| which appears in (1.2) is interesting in itself and, to our knowledge, it has not been covered yet; the same applies to f(z) = |z| √(|ln|z||).
As one can see, our assumption covers both cases f(y) = −Ky ln|y| and f(z) = |z| √(|ln|z||). Moreover, we also impose another assumption on f (see (H.4) below) which is local in y, z and also in ω; this enables us to cover certain BSDEs with stochastic monotone generators.
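To see why f(y) = −Ky ln|y| falls outside the classical Lipschitz framework while still satisfying the growth condition above, one can note the following short computation (ours, not quoted from the paper):

```latex
% f(y) = -Ky\ln|y| satisfies the logarithmic growth bound with \eta = 0, c_0 = K:
%   |f(y)| = K\,|y|\,|\ln|y||.
% It is not Lipschitz near the origin: for y > 0,
%   f'(y) = -K(\ln y + 1), \qquad |f'(y)| \to +\infty \text{ as } y \to 0^+,
% so no global Lipschitz constant exists. It is also not of linear growth:
%   |f(y)|/|y| = K\,|\ln|y|| \to +\infty \text{ as } |y| \to \infty,
% yet |f(y)| \le K_\varepsilon\,(1 + |y|^{1+\varepsilon}) for every \varepsilon > 0.
```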
There are mainly two reasons why we study this kind of problem. The first one is zero-sum games, of Dynkin type or mixed ones, where we broaden the class of data for which those games have a value. It is well known that double barrier reflected BSDEs are connected with mixed zero-sum games, which we describe briefly. Assume that we have a stochastic system whose dynamics (x_t)_{t≤T} satisfy:

    x_t = x_0 + ∫_0^t ϕ(s, x_s, u_s, v_s) ds + ∫_0^t σ(s, x_s) dB_s,  t ∈ [0, T] and x_0 ∈ R^d,

where ϕ is the drift of the system and the stochastic processes (u_t)_{t≤T} and (v_t)_{t≤T} are adapted and stand for, respectively, the intervention functions of two agents A_1 and A_2 on that system (the system could be, for example, a stock market and A_1 and A_2 two traders). Moreover, the two agents can exit the system whenever they want, meaning they can stop controlling at stopping times τ and σ. However, their actions are not free and their advantages are antagonistic, i.e. there is a payoff J(u, τ; v, σ) between them of the form

    J(u, τ; v, σ) = E^{(u,v)}[ ∫_0^{σ∧τ} h(s, x_s, u_s, v_s) ds + L_σ 1_{[σ<τ]} + U_τ 1_{[τ≤σ, τ<T]} + ξ 1_{[τ=σ=T]} ],

where h is the instantaneous reward of A_2, L (resp. U) is the reward if A_2 decides to stop at σ (resp. τ) before the terminal time T, and ξ is the reward if he decides to stay until T.
The first (resp. second) player chooses a pair (u, τ) (resp. (v, σ)) of continuous control and stopping time, and seeks to minimize (resp. maximize) this payoff; that is, we aim to find a pair of strategies (u*, τ*) and (v*, σ*) for A_1 and A_2 respectively such that J(u*, τ*; v, σ) ≤ J(u*, τ*; v*, σ*) ≤ J(u, τ; v*, σ*).
The second reason for considering this problem is to weaken the hypotheses under which the two obstacle parabolic partial differential variational inequality has a unique solution in the viscosity sense. We consider for example the Markovian of the BSDEs (1.2), which is defined by the system SDE-BSDE:
    X^{t,x}_s = x + ∫_t^s b(u, X^{t,x}_u) du + ∫_t^s σ(u, X^{t,x}_u) dB_u,
    Y^{t,x}_s = g(X^{t,x}_T) − K ∫_s^T Y^{t,x}_u ln|Y^{t,x}_u| du + ∫_s^T dK^{+,t,x}_u − ∫_s^T dK^{-,t,x}_u − ∫_s^T Z^{t,x}_u dB_u,  ∀s ∈ [t, T],
    h(s, X^{t,x}_s) ≤ Y^{t,x}_s ≤ h′(s, X^{t,x}_s),
    ∫_t^T (Y^{t,x}_s − h(s, X^{t,x}_s)) dK^{+,t,x}_s = ∫_t^T (h′(s, X^{t,x}_s) − Y^{t,x}_s) dK^{-,t,x}_s = 0.    (1.3)
The double obstacle variational inequality associated with (1.3) is given by
    min{ u(t, x) − h(t, x), max[ −∂u/∂t(t, x) − Lu(t, x) + K u(t, x) ln|u(t, x)|, u(t, x) − h′(t, x) ] } = 0,  (t, x) ∈ [0, T) × R^d;
    u(T, x) = g(x),  ∀x ∈ R^d,    (1.4)

where

    L = (1/2) Σ_{i,j=1}^d ((σσ*)(t, x))_{i,j} ∂²/(∂x_i ∂x_j) + Σ_{i=1}^d (b(t, x))_i ∂/∂x_i.
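For intuition only, the double obstacle inequality (1.4) can be approximated in one space dimension by an explicit finite-difference scheme in which the free update is projected back between the obstacles at each backward step. The sketch below is our own illustration with constant coefficients and toy data g, h, h'; it is not the paper's method, which relies on viscosity-solution arguments.

```python
import math

def solve_double_obstacle_1d(g, h, hp, K=1.0, sigma=1.0, T=1.0,
                             x_min=-2.0, x_max=2.0, nx=81, nt=400):
    """Backward explicit scheme for
    min(u - h, max(-u_t - 0.5*sigma^2*u_xx + K*u*ln|u|, u - hp)) = 0,
    enforcing the projection h <= u <= hp at every time step."""
    dx = (x_max - x_min) / (nx - 1)
    dt = T / nt
    assert sigma ** 2 * dt / dx ** 2 <= 1.0 + 1e-12, "explicit scheme unstable"
    xs = [x_min + i * dx for i in range(nx)]
    u = [g(x) for x in xs]                       # terminal condition u(T, .) = g
    for n in range(nt):
        t = T - (n + 1) * dt
        new = u[:]
        for i in range(1, nx - 1):
            uxx = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2
            drift = -K * u[i] * math.log(abs(u[i])) if u[i] != 0.0 else 0.0
            new[i] = u[i] + dt * (0.5 * sigma ** 2 * uxx + drift)
        for i in range(nx):                      # project between the obstacles
            new[i] = min(max(new[i], h(xs[i], t)), hp(xs[i], t))
        u = new
    return xs, u
```

The projection step is exactly where the increasing processes K^{+,t,x} and K^{-,t,x} of (1.3) act: they record how much the free dynamics had to be pushed up to h or down to h'.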
The logarithmic nonlinearity u ln|u| is interesting in its own right, since it is neither locally Lipschitz nor uniformly continuous. This paper is organized as follows. In Section 2, we present the notations and the assumptions used throughout the paper, and we give some preliminary results that will be useful later. In Section 3, we show the existence of a local solution for the two barrier reflected BSDE, and then the existence and the uniqueness of the solution for (1.1). In Section 4, we apply the obtained results and prove that the value function of a mixed zero-sum stochastic differential game can be characterized as the solution of a specific BSDE with two barriers. In Section 5, we show that, provided the problem is formulated within a Markovian framework, the solution of the reflected BSDE provides a probabilistic representation for the unique viscosity solution of the related obstacle parabolic partial differential variational inequality.
Notations, Assumptions and Preliminary results
Notations
Let (Ω, F, P) be a fixed probability space on which is defined a standard d-dimensional Brownian motion B = (B_t)_{0≤t≤T} whose natural filtration is (F^0_t := σ{B_s, s ≤ t})_{0≤t≤T}. Let F = (F_t)_{0≤t≤T} be the completed filtration of (F^0_t)_{0≤t≤T} with the P-null sets of F.

Next, for any p > 0:

• S^p is the space of R-valued, F_t-adapted and continuous processes (Y_t)_{t∈[0,T]} such that

    ||Y||_{S^p} = ( E[ sup_{t≤T} |Y_t|^p ] )^{1/p} < +∞.

• M denotes the set of P-measurable processes (Z_t)_{t∈[0,T]} with values in R^d such that

    ∫_0^T |Z_s|² ds < +∞, P-a.s.,

and M^p is the subset of M such that

    ||Z||_{M^p} = ( E[ ( ∫_0^T |Z_s|² ds )^{p/2} ] )^{1/p} < +∞.

• A is the set of adapted continuous non-decreasing processes (K_t)_{t∈[0,T]} such that K_0 = 0 and K_T < +∞, P-a.s., and A^p is the subset of A such that E[K^p_T] < +∞.
Assumptions
Now we are given four data:

• ξ is an R-valued and F_T-measurable random variable.

• f : [0, T] × Ω × R × R^d → R is a random function which associates (t, ω, y, z) with f(t, ω, y, z).

• L := (L_t)_{0≤t≤T} and U := (U_t)_{0≤t≤T} are two continuous, progressively measurable, R-valued processes.

On the data ξ, f, L and U we make the following assumptions:
(H.1) There exists a positive constant λ, large enough, such that E[ |ξ|^{e^{λT}+1} ] < +∞.

(H.2) The two barriers (L_t)_{0≤t≤T} and (U_t)_{0≤t≤T} satisfy L_t < U_t, ∀t ∈ [0, T], and L_T ≤ ξ ≤ U_T. In addition, for p ∈ ]1, 2[ we have

    E[ sup_{0≤t≤T} (L^+_t)^{(e^{λT}+1) p/(p−1)} ] < +∞  and  E[ sup_{0≤t≤T} (U^-_t)^{(e^{λT}+1) p/(p−1)} ] < +∞,

where L^+ = L ∨ 0 and U^- = (−U) ∨ 0.

(H.3) (i) f is continuous in (y, z) for almost all (t, ω).
(ii) There exist three positive constants c_0, λ (large enough) and c_1, and a process (η_t)_{t≤T}, such that

    |f(t, ω, y, z)| ≤ |η_t| + c_0 |y| |ln|y|| + c_1 |z| √(|ln|z||),  ∀(t, ω, y, z) ∈ [0, T] × Ω × R × R^d,

and E[ ∫_0^T |η_s|^{e^{λT}+1} ds ] < +∞.

(H.4) There exist v ∈ L^{q′}(Ω × [0, T]; R_+) (for some q′ > 0), a real valued sequence (A_N)_{N>1} and constants M ∈ R_+, r > 0 such that:

(i) ∀N > 1, 1 < A_N ≤ N^r.
(ii) lim_{N→∞} A_N = +∞.
(iii) For every N ∈ N and every y, y′, z, z′ such that |y|, |y′|, |z|, |z′| ≤ N, we have

    (y − y′)( f(t, ω, y, z) − f(t, ω, y′, z′) ) 1_{{v_t(ω)≤N}} ≤ M ( |y − y′|² ln A_N + |y − y′| |z − z′| √(ln A_N) + (ln A_N)/A_N ).
Preliminary results
Now let us define the notion of the local and the global solution of the reflected BSDE associated with the quadruple (ξ, f, L, U ) which we consider throughout this paper. We start with the global solution.
Definition 2.1. We say that {(Y_t, Z_t, K^+_t, K^-_t); 0 ≤ t ≤ T} is a solution of the reflected BSDE associated with two continuous barriers L and U, a terminal condition ξ and a generator f if the following hold:

    Y ∈ S^{e^{λT}+1}, Z ∈ M, K^± ∈ A;
    Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds + (K^+_T − K^+_t) − (K^-_T − K^-_t) − ∫_t^T Z_s dB_s,  t ∈ [0, T];
    L_t ≤ Y_t ≤ U_t,  ∀t ∈ [0, T];
    ∫_0^T (Y_s − L_s) dK^+_s = ∫_0^T (U_s − Y_s) dK^-_s = 0.    (2.1)
Since in many applications, especially in stochastic games or mathematical finance, strong integrability conditions on Z and K^± are not needed, we do not require them, as one can notice from Definition 2.1. Now we define the local solution. In the following, p ∈ ]1, 2[.

Definition 2.2. Let τ and γ be two stopping times such that τ ≤ γ, P-a.s. We say that (Y_t, Z_t, K^+_t, K^-_t)_{0≤t≤T} is a local solution on [τ, γ] of the reflected BSDE associated with two continuous barriers L and U, a terminal condition ξ and a generator f if the following hold:

    Y ∈ S^{e^{λT}+1}, Z ∈ M², K^± ∈ A^p;
    Y_t = Y_γ + ∫_t^γ f(s, Y_s, Z_s) ds + (K^+_γ − K^+_t) − (K^-_γ − K^-_t) − ∫_t^γ Z_s dB_s,  ∀t ∈ [τ, γ];
    Y_T = ξ;
    L_t ≤ Y_t ≤ U_t, ∀t ∈ [τ, γ],  and  ∫_τ^γ (Y_s − L_s) dK^+_s = ∫_τ^γ (U_s − Y_s) dK^-_s = 0.    (2.2)
We first give an estimate on f, which can easily be proved: for α ∈ ]0, 1[,

    E[ ∫_0^T |f(s, Y_s, Z_s)|^{2α} ds ] ≤ K E[ ∫_0^T |η_s|² ds + sup_{s≤T} |Y_s|^{4α} + ∫_0^T |Z_s|² ds ],

where K is a positive constant that depends on c_0 and T.
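A possible route to this bound, showing where the exponents 2α and 4α come from, is the following sketch (ours; the elementary inequalities used are assumptions of the sketch, not quoted from the paper):

```latex
% From (H.3)(ii) and (a+b+c)^{2\alpha} \le 3\,(a^{2\alpha}+b^{2\alpha}+c^{2\alpha}):
%   |f(s,Y_s,Z_s)|^{2\alpha}
%     \le 3\Big(|\eta_s|^{2\alpha}
%            + c_0^{2\alpha}\,|Y_s|^{2\alpha}|\ln|Y_s||^{2\alpha}
%            + c_1^{2\alpha}\,|Z_s|^{2\alpha}|\ln|Z_s||^{\alpha}\Big).
% For 0 < \alpha < 1, since |x\ln|x|| \le C(1+x^2) and x^{2\alpha}(\ln x)^{\alpha} \le C(1+x^2):
%   |Y_s|^{2\alpha}|\ln|Y_s||^{2\alpha} \le C\,(1+|Y_s|^{4\alpha}), \qquad
%   |Z_s|^{2\alpha}|\ln|Z_s||^{\alpha} \le C\,(1+|Z_s|^{2}),
% and |\eta_s|^{2\alpha} \le 1+|\eta_s|^2. Integrating over [0,T] and taking
% expectations gives the stated bound.
```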
(i) ξ ≤ ξ ′ P -a.s. (ii) f (t, y, z) ≤ f ′ (t, y, z) dP × dt a.e., ∀(t, y, z) ∈ [0, T ] × R × R d . (iii) L t ≤ L ′ t ; ∀t ∈ [0, T ] P -a.s.
Let (Y, Z, K + ) be the solution of the reflected BSDE with one lower barrier associated with (ξ, f, L)
i.e. Y t = ξ + T t f (s, Y s , Z s )ds + K + T − K + t − T t Z s dB s , t ∈ [0, T ]; ∀t ∈ [0, T ] L t ≤ Y t ; T 0 (Y s − L s ) dK + s = 0,
and (Y ′ , Z ′ , K ′+ ) the solution of the reflected BSDE with one lower barrier associated with (ξ ′ , f ′ , L ′ ). Then,
Y t ≤ Y ′ t , 0 ≤ t ≤ T P -a.s.f (t, y, z) ≤ f ′ (t, y, z) and U ≤ U ′ , then P -a.s., Y t ≤ Y ′ t , ∀ 0 ≤ t ≤ T , where (Y, Z, K − )
is the solution of the one upper barrier reflected BSDEs associated with (ξ, f, U ) i.e.
Y t = ξ + T t f (s, Y s , Z s )ds − K − T + K − t − T t Z s dB s , t ∈ [0, T ]; ∀t ∈ [0, T ] U t ≥ Y t ; T 0 (U s − Y s ) dK − s = 0, and (Y ′ , Z ′ , K ′− )
is the solution of the one upper barrier reflected BSDEs associated with (ξ ′ , f ′ , U ′ ).
• If L = −∞, then K + = 0 and the comparison theorem holds in the case with no barrier.
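In the degenerate case with no barrier and no noise the BSDE reduces to a backward ODE, and the comparison statement above can be sanity-checked numerically (a toy sketch under these simplifying assumptions; the general result is the one quoted from [9]):

```python
def backward_ode(f, xi, T=1.0, steps=1000):
    """Backward Euler for the noise-free, barrier-free BSDE
    y_t = xi + int_t^T f(s, y_s) ds (so z = 0 and K^+ = 0)."""
    dt = T / steps
    y = xi
    for i in reversed(range(steps)):
        y = y + f(i * dt, y) * dt   # step backward from t+dt to t
    return y
```

With f(t, y) = −y and f′(t, y) = −y + 1 (so f ≤ f′) and the same terminal value ξ = 1, the computed values come out ordered as the comparison theorem predicts.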
Existence and uniqueness of the solution
In this section we show the existence and uniqueness of the solution of (2.1). We first show that it admits a local solution in the sense of Definition 2.2, and then show that this solution is in fact a global one when the barriers are completely separated. The main difficulty in this section is to show that the solution of the one-barrier reflected BSDE studied in [9] can be obtained by the penalization method and the comparison theorem, since the authors of [9] used a localization technique to get that result. Actually, we have the following theorem.
Theorem 3.1. For any n ≥ 0, let (y^n_t, z^n_t)_{t≤T} be the unique solution of the BSDE

y^n_t = ξ + ∫_t^T [ f(s, y^n_s, z^n_s) + n(L_s − y^n_s)^+ ] ds − ∫_t^T z^n_s dB_s, t ∈ [0, T].   (3.1)

Then (y^n_s, z^n_s, n ∫_0^s (L_r − y^n_r)^+ dr)_{s≤T} converges to (y_s, z_s, k_s)_{s≤T}, the solution of

E[ sup_{0≤s≤T} |y_s|^{e^{λT}+1} + ∫_0^T |z_s|² ds + k_T^p ] < +∞;
y_t = ξ + ∫_t^T f(s, y_s, z_s) ds + k_T − k_t − ∫_t^T z_s dB_s, t ∈ [0, T];
∀t ∈ [0, T], L_t ≤ y_t, and ∫_0^T (y_s − L_s) dk_s = 0.   (3.2)
Proof. First, one has the a priori estimate

E[ sup_{0≤t≤T} |y_t|^{e^{λt}+1} + ∫_0^T |z_s|² ds + k_T^p ] ≤ C(λ, p, c₀, c₁, T) E[ 1 + |ξ|^{e^{λT}+1} + ∫_0^T |η_s|^{e^{λs}+1} ds + sup_{0≤t≤T} ((L^+_t)^{e^{λt}})^{p/(p−1)} ].   (3.3)

Next, we define k^n_t = n ∫_0^t (L_s − y^n_s)^+ ds, t ∈ [0, T]. Hence, from (3.3), we have that for p ∈ ]1, 2[,

E[ sup_{0≤s≤T} |y^n_s|^{e^{λT}+1} + ∫_0^T |z^n_s|² ds + (k^n_T)^p ] < +∞, ∀n ≥ 0.   (3.4)

Note that if we define f_n(t, y, z) = f(t, y, z) + n(L_t − y)^+, then f_n(t, y, z) ≤ f_{n+1}(t, y, z).
Using the comparison Theorem in [9], it follows that y n t ≤ y n+1 t , 0 ≤ t ≤ T , a.s. Therefore, by dominated convergence we have
E T 0 (y t − y n t ) e λT +1 dt → 0 as n → +∞. (3.5)
The rest of the proof will be divided into two steps.
Step 1. We will show that, for p ∈ ](e^{λT} + 1)/(e^{λT} + 1 − 1), 2[,

E[ sup_{0≤s≤T} ((L_s − y^n_s)^+)^{p/(p−1)} ]^{(p−1)/p} → 0 as n → +∞.   (3.6)
For any n ≥ 0 and t ≤ T , we have
y n t = ξ + T t f (s, y n s , z n s )ds + k n t − T t z n s dB s . (3.7)
Setting g^n_s = f(s, y^n_s, z^n_s) and writing (3.7) forward in time, we get
k n t = y n 0 − y n t − t 0 g n s ds + t 0 z n s dB s .
Since from Lemma 2.1 and (3.4) for any n ≥ 0
E sup 0≤t≤T |y n t | e λT +1 + T 0 |g n s | 2 α ds + T 0 |z n s | 2 ds < +∞,
then there exist subsequences, still denoted the same way, and processes (g_t)_{0≤t≤T} and (z_t)_{0≤t≤T} which are the weak limits of (g^n_t)_{0≤t≤T} and (z^n_t)_{0≤t≤T} respectively. Hence, for any stopping time τ̄ ≤ T, the following weak convergence holds:
k n τ → kτ = y 0 − yτ − τ 0 g s ds + τ 0 z s dB s .
Now, for any stopping times σ̄ ≤ τ̄ ≤ T we have k^n_σ̄ ≤ k^n_τ̄; therefore it holds that k_σ̄ ≤ k_τ̄. Hence, (k_t)_{0≤t≤T} is an increasing process. Additionally, E[(k_T)^p] ≤ lim inf_{n→+∞} E[(k^n_T)^p] < +∞.
Hence, thanks to the monotonic limit theorem of Peng [19, Lemma 2.2], the processes (y_t)_{0≤t≤T} and (k_t)_{0≤t≤T} are RCLL.
Next, since sup_{n≥0} E[(k^n_T)^p] < +∞, we deduce, taking the limit n → +∞, that E[ ∫_0^T (L_s − y_s)^+ ds ] = 0. Therefore, P-a.s., y_t ≥ L_t for any t < T. But ξ ≥ L_T, so y ≥ L. Hence (L_t − y^n_t)^+ ↓ 0, 0 ≤ t ≤ T, a.s., and from Dini's theorem the convergence is uniform in t. Since (L_t − y^n_t)^+ ≤ |L_t| + |y^0_t|, the result follows.
Step 2. We will show that (y n , z n , k n ) converges to (y, z, k) solution of (3.2).
Let 0 ≤ T′ ≤ T and put ∆_t := |y^n_t − y^m_t|² + (A_N)^{−1} and Φ(s) = |y^n_s| + |y^m_s| + |z^n_s| + |z^m_s| + v_s. Then, for C > 0 and 1 < β < min{3 − α, 2}, Itô's formula gives

e^{Ct} ∆_t^{β/2} + C ∫_t^{T′} e^{Cs} ∆_s^{β/2} ds
= e^{CT′} ∆_{T′}^{β/2}
+ β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, f(s, y^n_s, z^n_s) − f(s, y^m_s, z^m_s)⟩ 1_{{Φ(s)>N}} ds
+ β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, f(s, y^n_s, z^n_s) − f(s, y^m_s, z^m_s)⟩ 1_{{Φ(s)≤N}} ds
− (β/2) ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} |z^n_s − z^m_s|² ds
− β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, (z^n_s − z^m_s) dB_s⟩
− β((β − 2)/2) ∫_t^{T′} e^{Cs} ∆_s^{β/2−2} (⟨y^n_s − y^m_s, z^n_s − z^m_s⟩)² ds
+ β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, dk^n_s − dk^m_s⟩.

First let us deal with β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, dk^n_s − dk^m_s⟩. Actually,

β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, dk^n_s − dk^m_s⟩
= β ∫_t^{T′} e^{Cs} (|y^n_s − y^m_s|² + (A_N)^{−1})^{β/2−1} (y^n_s − y^m_s) dk^n_s
+ β ∫_t^{T′} e^{Cs} (|y^m_s − y^n_s|² + (A_N)^{−1})^{β/2−1} (y^m_s − y^n_s) dk^m_s.

Since dk^n_s = 1_{{y^n_s ≤ L_s}} dk^n_s and dk^m_s = 1_{{y^m_s ≤ L_s}} dk^m_s, and the function x ↦ βe^{Cs}(|x − y|² + (A_N)^{−1})^{β/2−1}(x − y) is non-decreasing, it follows that

β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, dk^n_s − dk^m_s⟩
≤ β ∫_t^{T′} e^{Cs} (|L_s − y^m_s|² + (A_N)^{−1})^{β/2−1} (L_s − y^m_s) dk^n_s
+ β ∫_t^{T′} e^{Cs} (|L_s − y^n_s|² + (A_N)^{−1})^{β/2−1} (L_s − y^n_s) dk^m_s
≤ 2βe^{CT′} sup_{0≤s≤T} [ (|L_s − y^m_s|² + (A_N)^{−1})^{β/2−1} (L_s − y^m_s)^+ ] k^n_T
+ 2βe^{CT′} sup_{0≤s≤T} [ (|L_s − y^n_s|² + (A_N)^{−1})^{β/2−1} (L_s − y^n_s)^+ ] k^m_T.

Since β/2 − 1 < 0 and since, for all t ∈ [0, T], (A_N)^{−1} ≤ |L_t − y^m_t|² + (A_N)^{−1} and (A_N)^{−1} ≤ |L_t − y^n_t|² + (A_N)^{−1}, we have

(|L_t − y^n_t|² + (A_N)^{−1})^{β/2−1} ≤ (A_N)^{(2−β)/2} and (|L_t − y^m_t|² + (A_N)^{−1})^{β/2−1} ≤ (A_N)^{(2−β)/2}.

It follows that

β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, dk^n_s − dk^m_s⟩ ≤ 2(A_N)^{(2−β)/2} βe^{CT′} sup_{0≤s≤T} (L_s − y^m_s)^+ k^n_T + 2(A_N)^{(2−β)/2} βe^{CT′} sup_{0≤s≤T} (L_s − y^n_s)^+ k^m_T.
Next we put

J₁ = β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, f(s, y^n_s, z^n_s) − f(s, y^m_s, z^m_s)⟩ 1_{{Φ(s)>N}} ds,
J₂ = β ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, f(s, y^n_s, z^n_s) − f(s, y^m_s, z^m_s)⟩ 1_{{Φ(s)≤N}} ds.

Let κ = 3 − α − β. Since (β − 1)/2 + κ/2 + α/2 = 1, we use Hölder's inequality to obtain

J₁ ≤ βe^{CT′} (1/N^κ) ( ∫_t^{T′} ∆_s ds )^{(β−1)/2} × ( ∫_t^{T′} Φ(s)² ds )^{κ/2} × ( ∫_t^{T′} |f(s, y^n_s, z^n_s) − f(s, y^m_s, z^m_s)|^{2/α} ds )^{α/2}.
For J 2 we use assumption (H.4) and we obtain
J₂ ≤ βM ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} [ |y^n_s − y^m_s|² ln A_N + (ln A_N)/A_N + |y^n_s − y^m_s||z^n_s − z^m_s| ln A_N ] 1_{{Φ(s)≤N}} ds
≤ βM ∫_t^{T′} e^{Cs} ∆_s^{β/2−1} [ ∆_s ln A_N + |y^n_s − y^m_s||z^n_s − z^m_s| ln A_N ] 1_{{Φ(s)≤N}} ds.
We apply Lemma 4.6 in [1] and choose

C = C_N = (2M²β/(β − 1)) ln A_N

to get

e^{C_N t} ∆_t^{β/2} + (β(β − 1)/4) ∫_t^{T′} e^{C_N s} ∆_s^{β/2−1} |z^n_s − z^m_s|² ds
≤ e^{C_N T′} ∆_{T′}^{β/2} − β ∫_t^{T′} e^{C_N s} ∆_s^{β/2−1} ⟨y^n_s − y^m_s, (z^n_s − z^m_s) dB_s⟩
+ βe^{C_N T′} (1/N^κ) ( ∫_t^{T′} ∆_s ds )^{(β−1)/2} × ( ∫_t^{T′} Φ(s)² ds )^{κ/2} × ( ∫_t^{T′} |f(s, y^n_s, z^n_s) − f(s, y^m_s, z^m_s)|^{2/α} ds )^{α/2}
+ 2(A_N)^{(2−β)/2} βe^{CT′} sup_{0≤s≤T} (L_s − y^m_s)^+ k^n_T + 2(A_N)^{(2−β)/2} βe^{CT′} sup_{0≤s≤T} (L_s − y^n_s)^+ k^m_T.
Therefore, from Burkholder's inequality, Hölder's inequality, (3.4) and Lemma 2.1, there exists a universal constant ℓ such that for p ∈ ](e^{λT} + 1)/(e^{λT} + 1 − 1), 2[,

E[ sup_{(T′−δ′)^+ ≤ t ≤ T′} |y^n_t − y^m_t|^β ] + E[ ∫_{(T′−δ′)^+}^{T′} |z^n_s − z^m_s|² / (|y^n_s − y^m_s|² + ν_R)^{(2−β)/2} ds ]
≤ ℓ ( e^{C_N δ′} E[ |y^n_{T′} − y^m_{T′}|^β ] + A_N^{2M²δ′β/(β−1)} / (A_N)^{β/2} + A_N^{2M²δ′β/(β−1)} / (A_N)^{κ/r} )
+ ℓ (A_N)^{(2−β)/2} e^{C_N δ′} E[ sup_{0≤s≤T} ((L_s − y^n_s)^+)^{p/(p−1)} ]^{(p−1)/p}
+ ℓ (A_N)^{(2−β)/2} e^{C_N δ′} E[ sup_{0≤s≤T} ((L_s − y^m_s)^+)^{p/(p−1)} ]^{(p−1)/p},

where ν_R = sup{ (A_N)^{−1}, N ≥ R }. Hence, for δ′ < (β − 1) min{ 1/(4M²), κ/(2rM²β) }, we derive

lim_{N→+∞} A_N^{2M²δ′β/(β−1)} / (A_N)^{β/2} = 0 and lim_{N→+∞} A_N^{2M²δ′β/(β−1)} / (A_N)^{κ/r} = 0.
It then follows from (3.6) that

lim sup_{n,m→+∞} E[ sup_{(T′−δ′)^+ ≤ t ≤ T′} |y^n_t − y^m_t|^β ] ≤ ε + ℓe^{C_N δ′} lim sup_{n,m→+∞} E[ |y^n_{T′} − y^m_{T′}|^β ].

It follows from Itô's formula that

|y^n_0 − y^m_0|² + ∫_0^T |z^n_s − z^m_s|² ds   (3.11)
= 2 ∫_0^T ⟨y^n_s − y^m_s, f(s, y^n_s, z^n_s) − f(s, y^m_s, z^m_s)⟩ ds + 2 ∫_0^T (y^n_s − y^m_s)(dk^n_s − dk^m_s) − 2 ∫_0^T ⟨y^n_s − y^m_s, (z^n_s − z^m_s) dB_s⟩.
First we argue that the third term of the right side in (3.11) is a martingale. We can deduce from Burkholder-Davis-Gundy's inequality and Lemma 3.4 that there exists a constant c > 0 such that: .
E sup 0≤t≤T t 0 (y n s − y m s ) (z n s − z m s ) dB s (3.12) ≤ cE sup 0≤s≤T |y n s − y m s | 2 + cE T 0 |z n s − z m s | 2 ds < +∞.
Now we deal with the term
We plug the last inequality in (3.14) and we get
E T 0 |z n s − z m s | 2 ds (3.16) ≤ cE T 0 |y n s − y m s | 2 2−α ds 2−α 2 × E T 0 f (s, y n s , z n s ) − f (s, y m s , z m s ) 2 α ds α 2 +cE sup 0≤s≤T | L s − y m s + | p p−1 E [(k n ) p ] + cE sup 0≤s≤T | L s − y n s + | p p−1 E [(k m ) p ] .
Then, from Lemma 2.1, (3.4), (3.5) and (3.6) (for λ large enough and 1 < α < 2 − 2/(e^{λT} + 1)),
E T 0 |z n s − z m s | 2 ds → 0 as (n, m) → +∞. (3.17)
Consequently, from (3.1),
E sup 0≤t≤T |k n t − k m t | → 0 as (n, m) → +∞.
Therefore, there exists a pair (z, k) of progressively measurable processes such that
E T 0 |z n s − z s | 2 ds + sup 0≤t≤T |k n t − k t | → 0 as n → +∞.
It remains to show that T 0 (y s − L s )dk s = 0.
Clearly, (k_t)_{0≤t≤T} is increasing. Moreover, (y^n, k^n) tends to (y, k) uniformly in t in probability. Hence, (y, z, k) solves the reflected BSDE associated with (ξ, f, L).
We now focus on the uniqueness of the solution of the BSDE (2.1). Actually, we have the following proposition. Proof. Suppose that there exist two solutions (Y, Z, K^+, K^−) and (Y′, Z′, K′^+, K′^−) of (2.1), and for N ∈ ℕ* set ∆_t := |Y_t − Y′_t|² + (A_N)^{−1}.
Following the same argument as in step 2 in the proof of Proposition 3.1, one can prove that for every R ∈ N and for every ε > 0 there exists N 0 such that for every N > N 0
E[ sup_{(T′−δ′)^+ ≤ t ≤ T′} |Y_t − Y′_t|^β ] + E[ ∫_{(T′−δ′)^+}^{T′} |Z_s − Z′_s|² / (|Y_s − Y′_s|² + ν_R)^{(2−β)/2} ds ] ≤ ℓe^{C_N δ′} E[ |Y_{T′} − Y′_{T′}|^β ] + ε,   (3.18)

where ν_R = sup{ (A_N)^{−1}, N ≥ R } and ℓ is a universal constant. Taking successively T′ = T, T′ = (T − δ′)^+, T′ = (T − 2δ′)^+, ... in (3.18), we obtain Y = Y′, Z = Z′, and K^+ − K^− = K′^+ − K′^−.
Finally, let us show that K + = K ′+ and
K − = K ′− . For any t ≤ T , t 0 (Y s − L s )dK s = t 0 (Y s − L s )dK ′ s , where K = K + − K − and K ′ = K ′+ − K ′− . But t 0 (Y s − L s )dK s = − t 0 (U s − L s )dK − s and t 0 (Y s − L s )dK ′ s = − t 0 (U s − L s )dK ′− s . Then t 0 (U s − L s )dK − s = t 0 (U s − L s )dK ′− s , ∀t ≤ T.
Since K − 0 = K ′− 0 = 0 and L t < U t , ∀t ≤ T it follows that K − = K ′− , and we also obtain that K + = K ′+ , which completes the proof.
After overcoming the main difficulty of this section (Theorem 3.1), we can now address the question of the existence of a local solution of (2.1). Actually, we have the following theorem.

(ii) For any stopping time τ there exists another stopping time λ_τ ≥ τ, P-a.s., and a triplet of processes (Z^τ, K^{τ,+}, K^{τ,−}) ∈ M² × A^p × A^p, with K^{τ,±}_τ = 0, such that P-a.s.

Y_t = Y_{λ_τ} + ∫_t^{λ_τ} f(s, Y_s, Z^τ_s) ds + (K^{τ,+}_{λ_τ} − K^{τ,+}_t) − (K^{τ,−}_{λ_τ} − K^{τ,−}_t) − ∫_t^{λ_τ} Z^τ_s dB_s, t ∈ [τ, λ_τ];
∫_τ^{λ_τ} (Y_s − L_s) dK^{τ,+}_s = ∫_τ^{λ_τ} (U_s − Y_s) dK^{τ,−}_s = 0.

(iii) If ν_τ and π_τ are the two stopping times

ν_τ = inf{s ≥ τ, Y_s = U_s} ∧ T and π_τ = inf{s ≥ τ, Y_s = L_s} ∧ T,

then P-a.s., ν_τ ∨ π_τ ≤ λ_τ.
Proof. After having proved Theorem 3.1, the remaining steps in the proof of Theorem 3.2 are the same as in [8]. Thus, to avoid repetition, we only give a sketch of the proof and refer the reader to [8], pages 914-924, for more details. First, we analyze the following increasing penalization scheme: for any n ≥ 0,

Y^n ∈ S^{e^{λT}+1}, Z^n ∈ M², K^{n,−} ∈ A^p;
Y^n_t = ξ + ∫_t^T ( f(s, Y^n_s, Z^n_s) + n(L_s − Y^n_s)^+ ) ds − (K^{n,−}_T − K^{n,−}_t) − ∫_t^T Z^n_s dB_s, t ∈ [0, T];
Y^n_t ≤ U_t, ∀t ∈ [0, T], and ∫_0^T (U_s − Y^n_s) dK^{n,−}_s = 0.

Next, since the sequence f_n(t, y, z) = f(t, y, z) + n(L_t − y)^+ is increasing, it follows from Remark 2.1 that for any n ≥ 0, Y^n ≤ Y^{n+1} ≤ U. Then (Y^n_t)_{n≥0} converges to a lower semi-continuous optional process Y = (Y_t)_{0≤t≤T} which satisfies Y_t ≤ U_t, ∀t ≤ T, P-a.s., and E[ sup_{t≤T} |Y_t|^{e^{λT}+1} ] < +∞.
Next, we put θ^n_τ = inf{s ≥ τ, Y^n_s = U_s} ∧ T and θ_τ = lim_{n→+∞} θ^n_τ. It follows that K^{n,+}_γ̄ → K^+_γ̄ and

Y_t = Y_τ − ∫_τ^t g_s ds − K^+_t + ∫_τ^t Z_s dB_s,

with E[(K^+_{θ_τ})^p] ≤ lim inf_{n→+∞} E[(K^{n,+}_{θ_τ})^p] < +∞, since Y^n ≤ Y^{n+1}.

(ii) There exist two adapted processes (K̄^{τ,+}_t)_{0≤t≤T} and (Z̄^τ_t)_{0≤t≤T} such that (Y_t, Z̄^τ_t, K̄^{τ,+}_t, 0)_{0≤t≤T} is a local solution of the reflected BSDE (2.1) on [τ, θ_τ], which means it satisfies the following:

Z̄^τ ∈ M², K̄^{τ,+} ∈ A^p;
Y_t = Y_{θ_τ} + ∫_t^{θ_τ} f(s, Y_s, Z̄^τ_s) ds + (K̄^{τ,+}_{θ_τ} − K̄^{τ,+}_t) − ∫_t^{θ_τ} Z̄^τ_s dB_s, ∀t ∈ [τ, θ_τ];
Y_T = ξ; L_t ≤ Y_t ≤ U_t, ∀t ∈ [τ, θ_τ]; and ∫_τ^{θ_τ} (Y_s − L_s) dK̄^{τ,+}_s = 0.   (3.21)

(iii) If v_τ = inf{s ≥ τ, Y_s = U_s} ∧ T, then v_τ ≤ θ_τ.
Now, by analyzing the decreasing penalization scheme, that is, for any m ≥ 0,

E[ sup_{0≤s≤T} |Ỹ^m_s|^{e^{λT}+1} + ∫_0^T |Z̃^m_s|² ds + (K^{m,+}_T)^p ] < +∞;
Ỹ^m_t = ξ + ∫_t^T ( f(s, Ỹ^m_s, Z̃^m_s) − m(Ỹ^m_s − U_s)^+ ) ds + (K^{m,+}_T − K^{m,+}_t) − ∫_t^T Z̃^m_s dB_s, t ∈ [0, T];
Ỹ^m_t ≥ L_t, t ∈ [0, T], and ∫_0^T (Ỹ^m_s − L_s) dK^{m,+}_s = 0,

(ii) there exists a pair of adapted processes (Z̄^τ_t, K̄^{τ,−}_t)_{t≤T} such that the quadruple (Ỹ_t, Z̄^τ_t, 0, K̄^{τ,−}_t)_{t≤T} satisfies

Z̄^τ ∈ M², K̄^{τ,−} ∈ A^p;
Ỹ_t = Ỹ_{δ_τ} + ∫_t^{δ_τ} f(s, Ỹ_s, Z̄^τ_s) ds − (K̄^{τ,−}_{δ_τ} − K̄^{τ,−}_t) − ∫_t^{δ_τ} Z̄^τ_s dB_s, t ∈ [τ, δ_τ];
Ỹ_T = ξ; L_t ≤ Ỹ_t ≤ U_t, ∀t ∈ [τ, δ_τ], and ∫_τ^{δ_τ} (U_s − Ỹ_s) dK̄^{τ,−}_s = 0.

Next, using the comparison result and the technique in [8], page 923, we can prove that P-a.s., for any t ≤ T, Y_t = Ỹ_t. Finally, we proceed, once again, as in [8], page 924, to finish the proof.
Next, we can proceed as in [14, Theorem 3.7] to show that the local solution is actually a global one; that is, we have the following theorem:
∀(t, ω, ω ′ ) ∈ [0, T ] × Ω × Ω |σ(t, ω) − σ(t, ω ′ )| ≤ C||ω − ω ′ || t , |σ(t, ω)| ≤ C(1 + ||ω|| t ) and |σ −1 (t, ω)| ≤ C.
Let x 0 ∈ R d and x = (x t ) t≤T be the solution of the following standard functional differential equation:
x t = x 0 + t 0 σ(s, x)dB s , t ≤ T ; (4.1)
The assumptions on σ imply that equation (4.1) has a unique solution x (see [20]). Let us now consider a compact metric space Ū (resp. V) and U (resp. V) the space of all P-measurable processes with values in Ū (resp. V), and let ϕ :
[0, T ] × R d ×Ū × V → R d and h : [0, T ] × R d ×Ū × V → R d be such that: (A2) (i) For each (u, v) ∈Ū × V , the function (t, x) → ϕ(t, x, u, v) is predictable. (ii) ∀(t, x) ∈ [0, T ] × R d , ϕ(t,
x, ., .) and h(t, x, ., .) are continuous onŪ × V .
(iii) There exists a real constant K > 0 such that
|h(t, x, u, v)| + |ϕ(t, x, u, v)| ≤ K(1 + ||x|| t ), ∀(t, x, u, v) ∈ [0, T ] × R d ×Ū × V. (4.3)
Under the previous assumption, and for any (u, v) ∈ U × V, we define a probability on (Ω, F) by
dP (u,v) dP = exp T 0 σ −1 (s, x)ϕ(s, x, u s , v s )dB s − 1 2 T 0 |σ −1 (s, x)ϕ(s, x, u s , v s )| 2 ds .
We now consider the payoff
J(u, τ ; v, σ) = E (u,v) τ ∧σ 0 h(s, x, u s , v s )ds + L σ 1 1 {σ≤τ <T } + U τ 1 1 {τ <σ} + ξ1 1 {τ ∧σ=T } ,(4.4)
where L, U and ξ are those of the previous sections. The problem we are interested in is finding a saddle-point for the payoff functional J(u, τ; v, σ), meaning that we are looking for two intervention strategies (u*, τ*) and (v*, σ*) that satisfy
J(u * , τ * ; v, σ) ≤ J(u * , τ * ; v * , σ * ) ≤ J(u, τ ; v * , σ * ). (4.5)
Now we define the Hamiltonian associated with this mixed stochastic game problem by
H(t, x, z, u, v) := zσ −1 (t, x)ϕ(t, x, u, v) + h(t, x, u, v) ∀(t, x, z, u, v) ∈ [0, T ] × R d × R d ×Ū × V.
Under Isaacs's condition and Benes's theorem [3], there exists a couple of P ⊗ B-measurable functions u* ≡ u*(t, x, z) and v* ≡ v*(t, x, z), with values in Ū and V respectively, such that:
∀(t, x, u, v) ∈ [0, T] × R^d × Ū × V,

H*(t, x, z) = H(t, x, z, u*(t, x, z), v*(t, x, z)) = inf_{u∈Ū} sup_{v∈V} H(t, x, z, u, v).

Under the assumptions (A1) and (A2), there exists a quadruple of adapted processes (Y*, Z*, K*,+, K*,−) which is the unique solution of the finite-horizon reflected BSDE associated with (ξ, H*, L, U). We denote by τ* and σ* the stopping times defined as follows:
σ * = inf{t ≥ 0, Y * t = L t } ∧ T and τ * = inf{t ≥ 0, Y * t = U t } ∧ T.
Then, Y * 0 = J(u * , τ * ; v * , σ * ) and (u * , τ * ; v * , σ * ) is a saddle-point for the mixed stochastic game problem.
Proof. Since H* satisfies (H.3) and (H.4) (see [9]), the quadruple (Y*, Z*, K*,+, K*,−) exists and is unique. The rest of the proof is classical; thus we leave it to the reader.
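The coincidence of the upper and lower Hamiltonians under Isaacs's condition can be checked on a grid for a toy separable H with z frozen (the particular H below is a made-up example; separability is just one simple way to force Isaacs's condition):

```python
def inf_sup(H, U, V):
    """Upper value: inf over controls u of the sup over v."""
    return min(max(H(u, v) for v in V) for u in U)

def sup_inf(H, U, V):
    """Lower value: sup over controls v of the inf over u."""
    return max(min(H(u, v) for u in U) for v in V)

# A made-up separable Hamiltonian H(u, v) = u^2 + z*u - v^2 + v, z frozen.
z = 0.3
H = lambda u, v: u * u + z * u - v * v + v
U_grid = [i / 10.0 - 1.0 for i in range(21)]   # grid on the compact set Ū
V_grid = [i / 10.0 for i in range(11)]         # grid on V
```

For separable H the two iterated optimizations decouple, so the grid values of inf sup and sup inf agree.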
Connection with Double Obstacle Variational Inequalities
Let b : [0, T ] × R d → R d , σ : [0, T ] × R d → R d×d
be two globally Lipschitz functions and let us consider the following SDE:
dX t = b(t, X t )dt + σ(t, X t )dB t , t ≤ T.
We denote by (X^{t,x}_s)_{s≥t} the unique solution of the previous SDE starting from x at time s = t. Now we are given four functions
f : [0, T ] × R d × R × R d → R, g : R d → R, and h, h ′ : [0, T ] × R d → R,(H'.2) ∀(t, x) ∈ [0, T ] × R d , h(t, x) < h ′ (t, x) and h(T, x) ≤ g(x) ≤ h ′ (T, x), in addition there exists a constant C > 0 such that |h ′ (t, x)| + |h(t, x)| + |g(x)| ≤ C(1 + ||x|| t ).
Connection with One Obstacle Variational Inequalities
Let us define (Y^{t,x}_s, Z^{t,x}_s, K^{t,x}_s)_{s∈[t,T]} as the solution of the following reflected BSDE:

Y^{t,x}_s = g(X^{t,x}_T) + ∫_s^T f(u, X^{t,x}_u, Y^{t,x}_u, Z^{t,x}_u) du + K^{t,x}_T − K^{t,x}_s − ∫_s^T Z^{t,x}_u dB_u, ∀s ∈ [t, T];
Y^{t,x}_s ≥ h(s, X^{t,x}_s), and ∫_t^T (h(u, X^{t,x}_u) − Y^{t,x}_u) dK^{t,x}_u = 0.   (5.1)

Moreover, on [0, t] we set Y^{t,x}_s = Y^{t,x}_t and Z^{t,x}_s = K^{t,x}_s = 0. For every (t, x) we will show that Y^{t,x}_t is deterministic, and we define a function

u(t, x) = Y^{t,x}_t.   (5.2)
First we will prove that u is continuous and is a viscosity solution of the following obstacle problem.
∀(t, x) ∈ [0, T ] × R d min u(t, x) − h(t, x), − ∂u ∂t (t, x) − Lu(t, x) − f (t, x, u(t, x), σ(t, x)∇u(t, x)) = 0,(5.3)
with u(T, x) = g(x), x ∈ R d . Then we will prove that it is the unique continuous viscosity solution that belongs to some class of functions.
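A standard way to visualise (5.3) numerically is a finite-difference analogue: step the parabolic operator backward in time and project onto {u ≥ h}. The sketch below assumes f = 0, no drift, and L = (σ²/2)∂²/∂x² in one dimension; it is only an illustration, not the probabilistic representation (5.2).

```python
def obstacle_fd(g, h, sigma, T=1.0, xmax=2.0, nx=41, nt=200):
    """Explicit finite differences for min(u - h, -u_t - (sigma^2/2) u_xx) = 0
    with u(T, x) = g(x): one backward heat step, then projection onto
    {u >= h}.  A toy sketch, not the paper's probabilistic construction."""
    dx = 2.0 * xmax / (nx - 1)
    dt = T / nt
    xs = [-xmax + i * dx for i in range(nx)]
    u = [g(x) for x in xs]
    lam = 0.5 * sigma ** 2 * dt / dx ** 2
    assert lam <= 0.5, "CFL stability condition for the explicit scheme"
    for n in range(nt):
        t = T - (n + 1) * dt
        v = u[:]
        for i in range(1, nx - 1):
            v[i] = u[i] + lam * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = [max(v[i], h(t, xs[i])) for i in range(nx)]   # project onto {u >= h}
    return xs, u
```

By construction the output dominates the obstacle everywhere, mirroring the constraint u ≥ h in (5.3).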
Continuity
Theorem 5.1. For every (t, x) ∈ [0, T] × R^d, the function u(t, x) = Y^{t,x}_t is continuous and of polynomial growth.
Proof. Let (t_n, x_n) → (t, x). Since |Y^{t,x}_t − Y^{t_n,x_n}_{t_n}| is deterministic, we have

|Y^{t,x}_t − Y^{t_n,x_n}_{t_n}| = E[ |Y^{t,x}_t − Y^{t_n,x_n}_{t_n}| ] ≤ E[ |Y^{t,x}_t − Y^{t,x}_{t_n}| ] + E[ |Y^{t,x}_{t_n} − Y^{t_n,x_n}_{t_n}| ].
Then, from (3.3) and Lemma 2.1, we get lim_{n→+∞} E[ |Y^{t,x}_t − Y^{t,x}_{t_n}| ] = 0.
Now we shall show that lim_{n→+∞} E[ |Y^{t,x}_{t_n} − Y^{t_n,x_n}_{t_n}| ] = 0; for that, we use the fact that
E |Y t,x tn − Y tn,xn tn | ≤ E sup 0≤s≤T |Y t,x s − Y tn,xn s | .
We proceed as in the proof of Step 2 of Proposition 3.1, which gives that for β ∈ ]1, min(3 − α, 2)[ we have:
E sup (T ′ −δ ′ ) + ≤s≤T ′ |Y t,x s − Y tn,xn s | β + E T ′ (T ′ −δ ′ ) + Z t,x s − Z tn,xn s 2 |Y t,x s − Y tn,xn s | 2 + ν R 2−β 2 ds ≤ ℓe C N δ ′ E |g X t,x T ′ − g X tn,xn T ′ | β + ℓ A 2M 2 δ ′ β β−1 N (A N ) β 2 + A 2M 2 δ ′ β β−1 N (A N ) κ r + ℓe C N δ ′ β[2N 2 + ν 1 ] β−1 2 × E T ′ t |f (u, X t,x u , Y t,x u , Z t,x u ) − f (u, X tn,xn u , Y t,x u , Z t,x u )|du + ℓe C N δ ′ E supT ′ = (T − 2δ ′ ) + ... we get for every β ∈]1, min (3 − α, 2) [, lim n→+∞ E sup 0≤s≤T |Y t,x s − Y tn,xn s | β = 0.
Finally, since β > 1, the result follows by using Hölder's inequality. The polynomial growth of u follows from (3.3). Proof. Let us consider the following penalized BSDE:
Y t,x,n s = g(X t,x T ) + T s f n (r, X t,x r , Y t,x,n r , Z t,x,n r )dr − T s Z t,x,n r dB r , (5.4) where f n (r, X t,x r , Y t,x,n r , Z t,x,n r ) = f (r, X t,x r , Y t,x,n r , Z t,x,n r ) + n Y t,x,n r − h(r, X t,x r ) − .
Then, from [2], u n (t, x) = Y t,x,n t is the viscosity solution of ∂u n ∂t (t, x) + Lu n (t, x) + f n (t, x, u n (t, x), σ(t, x)∇u n (t, x)) = 0. (5.5)
From the comparison theorem we have that u_n is increasing, and we can argue as in [11] to show that u_n converges to u, the solution of (5.3).
Connection with Double Obstacle Variational Inequalities
Let (Y^{t,x}_s, Z^{t,x}_s, K^{+,t,x}_s, K^{−,t,x}_s)_{t≤s≤T} be the solution of the following reflected BSDE:
Y t,x s = g(X t,x T ) + T s f (u, X t,x u , Y t,x u , Z t,x u )du + T s dK +,t,x u − T s dK −,t,x u − T s Z t,x u dB u , ∀s ∈ [t, T ], h(s, X t,x s ) ≤ Y t,x s ≤ h ′ (s, X t,x s ), T t Y t,x u − h(u, X t,x u ) dK +,t,x u = T t h ′ (u, X t,x u ) − Y t,x u dK −,t,x u = 0. (5.6)
The objective of this section is to show that u(t, x) = Y t,x t is continuous and it is the solution in the viscosity sense of the following obstacle problem:
min u(t, x) − h(t, x), max − ∂u ∂t (t, x) − Lu(t, x) −f (t, x, u(t, x), σ(t, x)∇u(t, x)), u(t, x) − h ′ (t, x) = 0; (t, x) ∈ [0, T ) × R d u(T, x) = g(x), ∀x ∈ R d .
(5.7)
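Analogously, (5.7) with f = 0 can be mimicked by one backward heat step followed by projection onto the band [h, h′] (a one-dimensional sketch with constant barriers; every parameter below is an illustrative assumption):

```python
def double_obstacle_fd(g, h, hp, sigma=1.0, T=1.0, xmax=2.0, nx=41, nt=400):
    """Explicit scheme for the double obstacle problem (5.7) with f = 0:
    one backward heat step, then projection onto the band [h, h'].
    An illustrative sketch only (one dimension, no drift)."""
    dx = 2.0 * xmax / (nx - 1)
    dt = T / nt
    xs = [-xmax + i * dx for i in range(nx)]
    u = [g(x) for x in xs]
    lam = 0.5 * sigma ** 2 * dt / dx ** 2
    assert lam <= 0.5, "CFL stability condition"
    for n in range(nt):
        t = T - (n + 1) * dt
        v = u[:]
        for i in range(1, nx - 1):
            v[i] = u[i] + lam * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = [min(max(v[i], h(t, xs[i])), hp(t, xs[i])) for i in range(nx)]
    return xs, u
```

The band projection plays the role of the two reflecting processes K^{+,t,x} and K^{−,t,x} in (5.6).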
The continuity of the viscosity solution
Proposition 5.1. For every (t, x) ∈ [0, T ] × R d , the function u(t, x) = Y t,x t is continuous.
Proof. For any n ≥ 0, let (Y^{t,x,n}_s)_{s≤T} (resp. (Ȳ^{t,x,n}_s)_{s≤T}) be the first component of the unique solution of the BSDE with one reflecting lower (resp. upper) barrier associated with (g(X^{t,x}_T), f(s, X^{t,x}_s, y, z) − n(h′(s, X^{t,x}_s) − y)^−, h(s, X^{t,x}_s)) (resp. (g(X^{t,x}_T), f(s, X^{t,x}_s, y, z) + n(h(s, X^{t,x}_s) − y)^+, h′(s, X^{t,x}_s))). As shown in the previous subsection, for any n ≥ 0 there exist two deterministic functions u_n(t, x) = Y^{t,x,n}_t and ū_n(t, x) = Ȳ^{t,x,n}_t such that they are the viscosity solutions of
min u(t, x) − h(t, x), − ∂u ∂t (t, x) − Lu(t, x) − f (t, x, u(t, x), σ(t, x)∇u(t, x)) (5.8) +n(h ′ (t, x) − u(t, x)) − ] = 0, (t, x) ∈ [0, T ) × R d ; u(T, x) = g(x), and max u(t, x) − h ′ (t, x), − ∂u ∂t (t, x) − Lu(t, x) − f (t, x, u(t, x), σ(t, x)∇u(t, x)) (5.9) −n(h(t, x) − u(t, x)) + ] = 0, (t, x) ∈ [0, T ) × R d ; u(T, x) = g(x)
respectively. Now, thanks to the results of the previous sections, the sequence (Ȳ^{t,x,n})_{n≥0} converges increasingly to Y^{t,x} and the sequence (Y^{t,x,n})_{n≥0} converges decreasingly to the same Y^{t,x}, meaning that u_n(t, x) ց u(t, x) and ū_n(t, x) ր u(t, x). Since u_n and ū_n are both continuous, u is at the same time lower and upper semi-continuous; therefore, it is continuous.
Existence of the solution
Let us now show that u is a viscosity subsolution of (5.7). Let φ ∈ C^{1,2}((0, T) × R^d), and let (t_n, x_n) be a sequence of local maximum points of u_n − φ that converges to (t, x). For n large enough we have u_n(t_n, x_n) > h(t_n, x_n), and since u_n is a viscosity solution of (5.8) we have:
− ∂φ/∂t(t_n, x_n) − Lφ(t_n, x_n) − f(t_n, x_n, u_n(t_n, x_n), σ(t_n, x_n)∇φ(t_n, x_n)) + n(h′(t_n, x_n) − u_n(t_n, x_n))^− ≤ 0,

and hence

− ∂φ/∂t(t_n, x_n) − Lφ(t_n, x_n) − f(t_n, x_n, u_n(t_n, x_n), σ(t_n, x_n)∇φ(t_n, x_n)) ≤ 0.
Now due to the continuity of the functions and the uniform convergence of u n we obtain
− ∂φ/∂t(t, x) − Lφ(t, x) − f(t, x, u(t, x), σ(t, x)∇φ(t, x)) ≤ 0.
Since u(T, x) = g(x) and h(t, x) ≤ u(t, x) ≤ h ′ (t, x), u is a viscosity subsolution of (5.7). In the same way, with converse inequalities, we show that u is also a viscosity supersolution of (5.7).
Uniqueness of the viscosity solution
We now address the question of the uniqueness of the viscosity solution of (5.7). But first, we recall the following proposition.
Proposition 5.2. w is a viscosity solution of min w(t, x) − h(t, x), − ∂w ∂t (t, x) − Lw(t, x) −f (t, x, w(t, x), σ(t, x)∇w(t, x))] = 0, (t, x) ∈ [0, T [×R d w(T, x) = g(x), x ∈ R d , (5.10)
iff w(t, x) = e t w(t, x), for any t ∈ [0, T ] and x ∈ R d , is a viscosity solution of
min w(t, x) − e t h(t, x), − ∂w ∂t (t, x) + w(t, x) − Lw(t, x) −e t f (t, x, e −t w(t, x), σ(t, x)∇(e −t w(t, x))) = 0, (t, x) ∈ [0, T [×R d w(T, x) = e T g(x), x ∈ R d . (5.11)
We now have the following theorem. Proof. In order to prove the uniqueness of the solution it is enough to show that if v and u are viscosity supersolution and subsolution of (5.7) respectively, then
u(t, x) ≤ v(t, x), ∀(t, x) ∈ [0, T ] × R d .
First, note that v ≥ h and u ≤ h′, and set v̄ := v ∧ h′ and ū := u ∨ h. Then ū (resp. v̄) is a viscosity subsolution (resp. supersolution) of (5.7). It follows that ū (resp. v̄) is a viscosity subsolution (resp. supersolution) of (5.3)
Now we show that v̄ and ū satisfy ū ≤ v̄. Suppose, on the contrary, that for some R > 0 there exists (t̄, x̄) ∈ [0, T] × B_R (B_R := {x ∈ R^d; |x| < R}) such that

max_{t,x} ( u′(t, x) − v′(t, x) ) = u′(t̄, x̄) − v′(t̄, x̄) = η > 0,   (5.12)

where v′(t, x) = e^t v̄(t, x) and u′(t, x) = e^t ū(t, x) for any t ∈ [0, T] and x ∈ R^d. Let us take θ, λ and β ∈ (0, 1] small enough. Then, for a small ǫ > 0, let us define:

Φ_ǫ(t, x, y) = (1 − λ)u′(t, x) − v′(t, y) − (1/(2ǫ))|x − y|⁴ − θ(|x − x̄|⁴ + |y − x̄|⁴) − β(t − t̄)².   (5.13)
Since u′ and v′ are bounded, there exists (t_ǫ, x_ǫ, y_ǫ) ∈ [0, T] × B_R × B_R, for R large enough, such that:
Φ ǫ (t ǫ , x ǫ , y ǫ ) = max (t,x,y) Φ ǫ (t, x, y).
On the other hand, from 2Φ ǫ (t ǫ , x ǫ , y ǫ ) ≥ Φ ǫ (t ǫ , x ǫ , x ǫ ) + Φ ǫ (t ǫ , y ǫ , y ǫ ), we have
1 ǫ |x ǫ − y ǫ | 4 ≤ (1 − λ)(u ′ (t ǫ , x ǫ ) − u ′ (t ǫ , y ǫ )) + (v ′ (t ǫ , x ǫ ) − v ′ (t ǫ , y ǫ )),
and consequently 1 ǫ |x ǫ − y ǫ | 4 is bounded, and as ǫ → 0, |x ǫ − y ǫ | → 0. Since u ′ and v ′ are uniformly continuous on [0, T ] × B R , then 1 2ǫ |x ǫ − y ǫ | 4 → 0 as ǫ → 0.
Since

(1 − λ)u′(t̄, x̄) − v′(t̄, x̄) ≤ Φ_ǫ(t_ǫ, x_ǫ, y_ǫ) ≤ (1 − λ)u′(t_ǫ, x_ǫ) − v′(t_ǫ, y_ǫ),

it follows, as λ → 0 and by the continuity of u′ and v′, that, up to a subsequence,

(t_ǫ, x_ǫ, y_ǫ) → (t̄, x̄, x̄).   (5.14)
Next let us show that t_ǫ < T. Actually, if t_ǫ = T then

Φ_ǫ(t̄, x̄, x̄) ≤ Φ_ǫ(T, x_ǫ, y_ǫ),

and

(1 − λ)u′(t̄, x̄) − v′(t̄, x̄) ≤ (1 − λ)e^T g(x_ǫ) − e^T g(y_ǫ) − β(T − t̄)²,

since u′(T, x_ǫ) = e^T g(x_ǫ), v′(T, y_ǫ) = e^T g(y_ǫ) and g is uniformly continuous on B_R. Then, as λ → 0, we get η ≤ −β(T − t̄)² ≤ 0 < η, which yields a contradiction, and we have t_ǫ ∈ [0, T). Now we claim that u′(t_ǫ, x_ǫ) − e^{t_ǫ} h(t_ǫ, x_ǫ) > 0. If not, there exists a subsequence such that u′(t_ǫ, x_ǫ) − e^{t_ǫ} h(t_ǫ, x_ǫ) ≤ 0; then, as λ → 0, we have u′(t̄, x̄) − e^{t̄} h(t̄, x̄) ≤ 0. But from the assumption u′(t̄, x̄) − v′(t̄, x̄) > 0, we deduce that 0 ≥ u′(t̄, x̄) − e^{t̄} h(t̄, x̄) > v′(t̄, x̄) − e^{t̄} h(t̄, x̄). Therefore we have v′(t̄, x̄) − e^{t̄} h(t̄, x̄) < 0, which leads to a contradiction with (5.11). Next let us denote

ψ_ǫ(t, x, y) = (1/(2ǫ))|x − y|⁴ + θ(|x − x̄|⁴ + |y − x̄|⁴) + β(t − t̄)².
Then we have:
D_t ψ_ǫ(t, x, y) = 2β(t − t̄),
D_x ψ_ǫ(t, x, y) = (2/ǫ)(x − y)|x − y|² + 4θ(x − x̄)|x − x̄|²,
D_y ψ_ǫ(t, x, y) = −(2/ǫ)(x − y)|x − y|² + 4θ(y − x̄)|y − x̄|²,

B(t, x, y) = D²_{x,y} ψ_ǫ(t, x, y) = (1/ǫ) [ a₁(x, y)  −a₁(x, y) ; −a₁(x, y)  a₁(x, y) ] + [ a₂(x)  0 ; 0  a₂(y) ].
(1 − λ)u ′ (t, x) − v ′ (t, y) − ψ ǫ (t, x, y)
at the point (t ǫ , x ǫ , y ǫ ), for any ǫ 1 > 0, we can find c, c 1 ∈ R and X, Y ∈ S(d), such that:
(c, 2 ǫ (x ǫ − y ǫ )|x ǫ − y ǫ | 2 + 4θ(x ǫ − x)|x ǫ − x| 2 , X) ∈ J 2,+ ((1 − λ)u ′ (t ǫ , x ǫ )), (−c 1 , 2 ǫ (x ǫ − y ǫ )|x ǫ − y ǫ | 2 − 4θ(y ǫ − x)|y ǫ − x| 2 , Y ) ∈ J 2,− (v ′ (t ǫ , y ǫ )), c + c 1 = D t ψ ǫ (t ǫ , x ǫ , y ǫ ) = 2β(t ǫ − t) and finally −( 1 ǫ 1 + ||B(t ǫ , x ǫ , y ǫ )||)I ≤ X 0 0 −Y ≤ B(t ǫ , x ǫ , y ǫ ) + ǫ 1 B(t ǫ , x ǫ , y ǫ ) 2 .
(5.17)
Taking now into account (5.15), and the definition of viscosity solution, we get:
−c − (1/2) Tr[σ*(t_ǫ, x_ǫ) X σ(t_ǫ, x_ǫ)] − ⟨(2/ǫ)(x_ǫ − y_ǫ)|x_ǫ − y_ǫ|² + 4θ(x_ǫ − x̄)|x_ǫ − x̄|², b(t_ǫ, x_ǫ)⟩ + (1 − λ)u′(t_ǫ, x_ǫ) − (1 − λ)e^{t_ǫ} f(t_ǫ, x_ǫ, e^{−t_ǫ} u′(t_ǫ, x_ǫ), σ(t_ǫ, x_ǫ)∇(e^{−t_ǫ} u′(t_ǫ, x_ǫ))) ≤ 0

and

c₁ − (1/2) Tr[σ*(t_ǫ, y_ǫ) Y σ(t_ǫ, y_ǫ)] − ⟨(2/ǫ)(x_ǫ − y_ǫ)|x_ǫ − y_ǫ|² − 4θ(y_ǫ − x̄)|y_ǫ − x̄|², b(t_ǫ, y_ǫ)⟩ + v′(t_ǫ, y_ǫ) − e^{t_ǫ} f(t_ǫ, y_ǫ, e^{−t_ǫ} v′(t_ǫ, y_ǫ), σ(t_ǫ, y_ǫ)∇(e^{−t_ǫ} v′(t_ǫ, y_ǫ))) ≥ 0,

which implies that:

(1 − λ)u′(t_ǫ, x_ǫ) − v′(t_ǫ, y_ǫ) − c − c₁
≤ (1/2) Tr[σ*(t_ǫ, x_ǫ) X σ(t_ǫ, x_ǫ) − σ*(t_ǫ, y_ǫ) Y σ(t_ǫ, y_ǫ)]
+ ⟨(2/ǫ)(x_ǫ − y_ǫ)|x_ǫ − y_ǫ|², b(t_ǫ, x_ǫ) − b(t_ǫ, y_ǫ)⟩
+ ⟨4θ(x_ǫ − x̄)|x_ǫ − x̄|², b(t_ǫ, x_ǫ)⟩ + ⟨4θ(y_ǫ − x̄)|y_ǫ − x̄|², b(t_ǫ, y_ǫ)⟩
+ (1 − λ)e^{t_ǫ} f(t_ǫ, x_ǫ, e^{−t_ǫ} u′(t_ǫ, x_ǫ), σ(t_ǫ, x_ǫ)∇(e^{−t_ǫ} u′(t_ǫ, x_ǫ)))
− e^{t_ǫ} f(t_ǫ, y_ǫ, e^{−t_ǫ} v′(t_ǫ, y_ǫ), σ(t_ǫ, y_ǫ)∇(e^{−t_ǫ} v′(t_ǫ, y_ǫ))).
(5.18)
But from (5.16) there exist two constants C and C 1 such that: ||a 1 (x ǫ , y ǫ )|| ≤ C|x ǫ − y ǫ | 2 and (||a 2 (x ǫ )|| ∨ ||a 2 (y ǫ )||) ≤ C 1 θ.
As

B = B(t_ǫ, x_ǫ, y_ǫ) = (1/ǫ) [ a₁(x_ǫ, y_ǫ)  −a₁(x_ǫ, y_ǫ) ; −a₁(x_ǫ, y_ǫ)  a₁(x_ǫ, y_ǫ) ] + [ a₂(x_ǫ)  0 ; 0  a₂(y_ǫ) ],

then

B ≤ (C/ǫ)|x_ǫ − y_ǫ|² [ I  −I ; −I  I ] + C₁θ [ I  0 ; 0  I ].

Now, from the Lipschitz continuity of σ, (5.17) and (5.19), it follows that

(1/2) Tr[σ*(t_ǫ, x_ǫ) X σ(t_ǫ, x_ǫ) − σ*(t_ǫ, y_ǫ) Y σ(t_ǫ, y_ǫ)] ≤ (C/ǫ)(|x_ǫ − y_ǫ|⁴ + |x_ǫ − y_ǫ|⁶) + C₁θ.
Next by plugging into (5.18) we obtain:
(1 − λ)u′(t_ǫ, x_ǫ) − v′(t_ǫ, y_ǫ) − 2β(t_ǫ − t̄)
≤ (1 − λ)e^{t_ǫ} f(t_ǫ, x_ǫ, e^{−t_ǫ} u′(t_ǫ, x_ǫ), σ(t_ǫ, x_ǫ)∇(e^{−t_ǫ} u′(t_ǫ, x_ǫ)))
− e^{t_ǫ} f(t_ǫ, y_ǫ, e^{−t_ǫ} v′(t_ǫ, y_ǫ), σ(t_ǫ, y_ǫ)∇(e^{−t_ǫ} v′(t_ǫ, y_ǫ))) + (C/ǫ)(|x_ǫ − y_ǫ|⁴ + |x_ǫ − y_ǫ|⁶) + C₁θ.
By sending ǫ → 0, λ → 0, θ → 0, and taking into account the continuity of f, we obtain η ≤ 0, which is a contradiction.
Now we have u ≤ u ∨ h ≤ v ∧ h′ ≤ v, which means that if u and ǔ are two solutions of (5.7), then u ≤ ǔ and ǔ ≤ u. Hence, obviously, u = ǔ.
We now introduce the comparison result established in [9, Theorem 4.1], which also holds in our setting. Proposition 2.1. Let (ξ, f, L) and (ξ′, f′, L′) be two sets of data, each satisfying all the assumptions (H.1), (H.2), (H.3) and (H.4). Suppose in addition the following:
Taking successively T′ = T, T′ = (T − δ′)^+, T′ = (T − 2δ′)^+, ... in (3.8), we get lim_{n,m→+∞} E[ sup_{0≤t≤T} |y^n_t − y^m_t|^β ] = 0.
∫_0^T ⟨y^n_s − y^m_s, f(s, y^n_s, z^n_s) − f(s, y^m_s, z^m_s)⟩ ds.   (3.13)

Combining (3.11), (3.12) and (3.13), we obtain that there exists a constant c such that:
∫_0^T (y^n_s − L_s) dk^n_s → ∫_0^T (y_s − L_s) dk_s in probability as n → +∞. Therefore, since ∫_0^T (y^n_s − L_s) dk^n_s ≤ 0 for every n ∈ ℕ, we have ∫_0^T (y_s − L_s) dk_s ≤ 0. On the other hand, ∫_0^T (y_s − L_s) dk_s ≥ 0. Thus, ∫_0^T (y_s − L_s) dk_s = 0, a.s.
Proposition 3.1. Assume that (H.1)-(H.4) are satisfied. Then the reflected BSDE associated with (ξ, f, L, U) has at most one solution.
Theorem 3.2. There exists a unique continuous process Y = (Y_t)_{t∈[0,T]} such that:
(i) E[ sup_{s≤T} |Y_s|^{e^{λT}+1} ] < +∞, and Y satisfies L ≤ Y ≤ U and Y_T = ξ.
Note that (Y^n, Z^n, K^{n,−}) exists due to Theorem 3.1 and the fact that (Y, Z, K) is a solution of the reflected BSDE with a lower obstacle associated with (ξ, f, L) iff (−Y, −Z, K) is a solution of the reflected BSDE with an upper obstacle associated with (−ξ, −f(t, −y, −z), −L).
We now show that Y is RCLL on [τ, θ_τ]. Indeed, K^{n,−} does not increase before θ_τ on [τ, θ_τ]. Then, as a result of Lemma 2.1 and (3.4), there exist subsequences of ((g^n_s 1_{[τ,θ_τ]}(s))_{s≤T})_{n≥0} and ((Z^n_s 1_{[τ,θ_τ]}(s))_{s≤T})_{n≥0}, which we still index by n, and processes (g_s 1_{[τ,θ_τ]}(s))_{s≤T} and (Z_s 1_{[τ,θ_τ]}(s))_{s≤T}, such that for any stopping time γ̄ satisfying τ ≤ γ̄ ≤ θ_τ the following weak convergence holds: ∫_τ^{γ̄} g^n_s ds → ∫_τ^{γ̄} g_s ds, as n → +∞.
Proposition 3.2. Assume that (H.1)-(H.4) are satisfied. Then the following hold:
(i) P-a.s., Y_{θ_τ} 1_{{θ_τ<T}} = U_{θ_τ} 1_{{θ_τ<T}}, and P-a.s., ∀t ≤ T, L_t ≤ Y_t.
m ,Z m , K m,+ ) exists due to Theorem 3.1) we can also show that Proposition 3.3. The following hold:(i) P -a.s.,Ỹ δτ 1 1 {δτ <T } = L δτ 1 1 {δτ <T } and P -a.s., ∀t ≤ T , we haveỸ t ≤ U t .
(ii) Put µ_τ = inf{s ≥ τ : Ỹ_s = L_s} ∧ T; then µ_τ ≤ δ_τ, where Ỹ = lim_{m→+∞} Ỹ^m and δ_τ = lim_{m→+∞} δ^m_τ with δ^m_τ = inf{s ≥ τ : Ỹ^m_s = L_s} ∧ T for all m ≥ 0.
Theorem 3.3. Under the assumptions (H.1), (H.2), (H.3) and (H.4), the reflected BSDE (2.1) associated with (ξ, f, L, U) has a unique solution, namely the quadruple (Y, Z, K^+, K^−).
4. Mixed zero-sum stochastic differential game problem

Now we deal with an application of the double-barrier reflected BSDE tool to solving stochastic mixed game problems. First, let us briefly describe the setting of the considered problem. In the sequel, Ω = C([0, T], R^d) is the space of continuous functions from [0, T] to R^d. Put ||ω||_t = sup_{s≤t} |ω_s| and let us consider a mapping σ : (t, ω) ∈ [0, T] × Ω → σ(t, ω) ∈ R^{d×d} satisfying the following assumptions: (A1) (i) σ is P-measurable and invertible.
H(t, x, z) = sup_{v∈V} inf_{u∈Ū} H(t, x, z, u, v).
such that the following holds: (H'.1) f satisfies assumptions (H.3) and (H.4); moreover, there exists p > 1 such that for every (t, x) ∈ [0, T] × R^d, E∫_0^T ...
Theorem 5.2. Assume that (H'.1) and (H'.2) are satisfied; then the function u : (t, x) → u(t, x) = Y^{t,x}_t is a viscosity solution of the obstacle problem in (5.3).
Theorem 5.3. Assume that (H'.1) and (H'.2) are satisfied; then the function u : (t, x) → u(t, x) = Y^{t,x}_t is a viscosity solution of the obstacle problem in (5.7).
Theorem 5.4. Under (H'.1) and (H'.2), the equation (5.7) has at most one solution.
a_1(x, y) = 2|x − y|^2 I + 4(x − y)(x − y)* and a_2(x) = 4θ|x − x|^2 I + 8xx*. Taking into account (5.15) and then applying the result by Crandall et al. (Theorem 8.3, [5]) to the test function built from ǫ_1 and ǫ_2, we obtain an estimate in terms of |x_ǫ − y_ǫ|^2 and |x_ǫ − y_ǫ|^4, where C and C_1 hereafter may change from line to line. Choosing now ǫ_1 = ǫ yields the relation

B + ǫ_1 B^2 ≤ (C/ǫ)(|x_ǫ − y_ǫ|^2 + |x_ǫ − y_ǫ|^4) [ I  −I ; −I  I ] + C_1 θ [ I  0 ; 0  I ].    (5.19)
we can deduce from a result by S. Peng [[19], Lemma 2.2] that Y is RCLL on [τ, θ_τ]. Next, we can show, as in [8], that we have the following proposition, which can be considered as one of the steps of the proof.
[1] Bahlali, K. and El Asri, B. (2012). Stochastic optimal control and BSDEs with logarithmic growth. Bulletin des Sciences Mathématiques, 136(6), 617-637.
[2] Bahlali, K., Kebiri, O., Khelfallah, N. and Moussaoui, H. (2017). One dimensional BSDEs with logarithmic growth application to PDEs. Stochastics, 89(6-7), 1061-1081.
[3] Benes, V. E. (1970). Existence of optimal stochastic control laws. SIAM Journal on Control and Optimization, 8, 179-188.
[4] Cvitanic, J. and Karatzas, I. (1996). Backward stochastic differential equations with reflection and Dynkin games. The Annals of Probability, 2024-2056.
[5] Crandall, M. G., Ishii, H. and Lions, P. L. (1992). User's guide to viscosity solutions of second order partial differential equations. Bulletin of the American Mathematical Society, 27(1), 1-67.
[6] Dellacherie, C. and Meyer, P. A. (1975). Probabilités et Potentiel I-IV. Hermann, Paris.
[7] Djehiche, B., Hamadène, S. and Popier, A. (2009). A finite horizon optimal multiple switching problem. SIAM Journal on Control and Optimization, 48(4), 2751-2770.
[8] El Asri, B., Hamadène, S. and Wang, H. (2011). L^p-solutions for doubly reflected backward stochastic differential equations. Stochastic Analysis and Applications, 29(6), 907-932.
[9] El Asri, B. and Oufdil, K. (2021). Reflected BSDEs with logarithmic growth and applications in mixed stochastic control problems. Stochastics: An International Journal of Probability and Stochastic Processes (https://doi.org/10.1080/17442508.2022.2034818).
[10] El Asri, B. and Hamadène, S. (2009). The finite horizon optimal multi-modes switching problem: the viscosity solution approach. Applied Mathematics and Optimization, 60(2), 213-235.
[11] El Karoui, N., Kapoudjian, C., Pardoux, E., Peng, S. and Quenez, M. C. (1997). Reflected solutions of backward SDE's and related obstacle problems for PDE's. The Annals of Probability, 25(2), 702-737.
[12] El Karoui, N., Peng, S. and Quenez, M. C. (1997). Backward stochastic differential equations in finance. Mathematical Finance, 7(1), 1-71.
[13] Hamadène, S. (2006). Mixed zero-sum stochastic differential game and American game options. SIAM Journal on Control and Optimization, 45(2), 496-518.
[14] Hamadène, S. and Hassani, M. (2005). BSDEs with two reflecting barriers: the general result. Probability Theory and Related Fields, 132, 237-264.
[15] Hamadène, S. and Lepeltier, J. P. (2000). Reflected BSDEs and mixed game problem. Stochastic Processes and their Applications, 85, 177-188.
[16] Karatzas, I. and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus. Second Edition, Springer-Verlag, New York.
[17] Lepeltier, J. P., Matoussi, A. and Xu, M. (2005). Reflected backward stochastic differential equations under monotonicity and general increasing growth conditions. Advances in Applied Probability, 37(1), 134-159.
[18] Pardoux, E. and Peng, S. (1990). Adapted solution of a backward stochastic differential equation. Systems and Control Letters, 14(1), 55-61.
[19] Peng, S. (1999). Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type. Probability Theory and Related Fields, 113(4), 473-499.
[20] Revuz, D. and Yor, M. (1991). Continuous Martingales and Brownian Motion. Springer-Verlag, Berlin.
| [] |
[
"Enhancing secure key rates of satellite QKD using a quantum dot single-photon source",
"Enhancing secure key rates of satellite QKD using a quantum dot single-photon source"
] | [
"Poompong Chaiwongkhot \nInstitute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n\nDepartment of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n\nDepartment of Physics\nFaculty of Science\nMahidol University\n10400BangkokThailand\n",
"Sara Hosseini \nInstitute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n\nDepartment of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n",
"Arash Ahmadi \nInstitute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n\nDepartment of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n\nWalter Schottky Institute\nTechnische Universität München\n85748GarchingGermany\n",
"Brendon L Higgins \nInstitute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n\nDepartment of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n",
"Dan Dalacu \nNational Research Council of Canada\nK1A 0R6OttawaOntarioCanada\n",
"Philip J Poole \nNational Research Council of Canada\nK1A 0R6OttawaOntarioCanada\n",
"Robin L Williams \nNational Research Council of Canada\nK1A 0R6OttawaOntarioCanada\n",
"Michael E Reimer \nInstitute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n\nDepartment of Electrical & Computer Engineering\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n",
"Thomas Jennewein \nInstitute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n\nDepartment of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada\n"
] | [
"Institute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Department of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Department of Physics\nFaculty of Science\nMahidol University\n10400BangkokThailand",
"Institute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Department of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Institute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Department of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Walter Schottky Institute\nTechnische Universität München\n85748GarchingGermany",
"Institute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Department of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"National Research Council of Canada\nK1A 0R6OttawaOntarioCanada",
"National Research Council of Canada\nK1A 0R6OttawaOntarioCanada",
"National Research Council of Canada\nK1A 0R6OttawaOntarioCanada",
"Institute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Department of Electrical & Computer Engineering\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Institute for Quantum Computing\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada",
"Department of Physics and Astronomy\nUniversity of Waterloo\nN2L 3G1WaterlooONCanada"
] | [] | Global quantum secure communication can be achieved using quantum key distribution (QKD) with orbiting satellites. Established techniques use attenuated lasers as weak coherent pulse (WCP) sources, with so-called decoy-state protocols, to generate the required single-photon-level pulses. While such approaches are elegant, they come at the expense of attainable final key due to inherent multi-photon emission, thereby constraining secure key generation over the high-loss, noisy channels expected for satellite transmissions. In this work we improve on this limitation by using true single-photon pulses generated from a semiconductor quantum dot (QD) embedded in a nanowire, possessing low multi-photon emission (< 10 −6 ) and an extraction system efficiency of −15 dB (or 3.1%). Despite the limited efficiency, the key generated by the QD source is greater than that generated by a WCP source under identical repetition rate and link conditions representative of a satellite pass. We predict that with realistic improvements of the QD extraction efficiency to −4.0 dB (or 40%), the quantum-dot QKD protocol outperforms WCP-decoy-state QKD by almost an order of magnitude. Consequently, a QD source could allow generation of a secure key in conditions where a WCP source would simply fail, such as in the case of high channel losses. Our demonstration is the first specific use case that shows a clear benefit for QD-based single-photon sources in secure quantum communication, and has the potential to enhance the viability and efficiency of satellitebased QKD networks. | null | [
"https://export.arxiv.org/pdf/2009.11818v1.pdf"
] | 221,879,102 | 2009.11818 | 2ebd055818f96c645bf2d45adc6675820345ecdc |
Enhancing secure key rates of satellite QKD using a quantum dot single-photon source
Poompong Chaiwongkhot
Institute for Quantum Computing
University of Waterloo
N2L 3G1WaterlooONCanada
Department of Physics and Astronomy
University of Waterloo
N2L 3G1WaterlooONCanada
Department of Physics
Faculty of Science
Mahidol University
10400BangkokThailand
Sara Hosseini
Institute for Quantum Computing
University of Waterloo
N2L 3G1WaterlooONCanada
Department of Physics and Astronomy
University of Waterloo
N2L 3G1WaterlooONCanada
Arash Ahmadi
Institute for Quantum Computing
University of Waterloo
N2L 3G1WaterlooONCanada
Department of Physics and Astronomy
University of Waterloo
N2L 3G1WaterlooONCanada
Walter Schottky Institute
Technische Universität München
85748GarchingGermany
Brendon L Higgins
Institute for Quantum Computing
University of Waterloo
N2L 3G1WaterlooONCanada
Department of Physics and Astronomy
University of Waterloo
N2L 3G1WaterlooONCanada
Dan Dalacu
National Research Council of Canada
K1A 0R6OttawaOntarioCanada
Philip J Poole
National Research Council of Canada
K1A 0R6OttawaOntarioCanada
Robin L Williams
National Research Council of Canada
K1A 0R6OttawaOntarioCanada
Michael E Reimer
Institute for Quantum Computing
University of Waterloo
N2L 3G1WaterlooONCanada
Department of Electrical & Computer Engineering
University of Waterloo
N2L 3G1WaterlooONCanada
Thomas Jennewein
Institute for Quantum Computing
University of Waterloo
N2L 3G1WaterlooONCanada
Department of Physics and Astronomy
University of Waterloo
N2L 3G1WaterlooONCanada
Enhancing secure key rates of satellite QKD using a quantum dot single-photon source
(Dated: January 25, 2022)
Global quantum secure communication can be achieved using quantum key distribution (QKD) with orbiting satellites. Established techniques use attenuated lasers as weak coherent pulse (WCP) sources, with so-called decoy-state protocols, to generate the required single-photon-level pulses. While such approaches are elegant, they come at the expense of attainable final key due to inherent multi-photon emission, thereby constraining secure key generation over the high-loss, noisy channels expected for satellite transmissions. In this work we improve on this limitation by using true single-photon pulses generated from a semiconductor quantum dot (QD) embedded in a nanowire, possessing low multi-photon emission (< 10 −6 ) and an extraction system efficiency of −15 dB (or 3.1%). Despite the limited efficiency, the key generated by the QD source is greater than that generated by a WCP source under identical repetition rate and link conditions representative of a satellite pass. We predict that with realistic improvements of the QD extraction efficiency to −4.0 dB (or 40%), the quantum-dot QKD protocol outperforms WCP-decoy-state QKD by almost an order of magnitude. Consequently, a QD source could allow generation of a secure key in conditions where a WCP source would simply fail, such as in the case of high channel losses. Our demonstration is the first specific use case that shows a clear benefit for QD-based single-photon sources in secure quantum communication, and has the potential to enhance the viability and efficiency of satellitebased QKD networks.
I. INTRODUCTION
Quantum key distribution (QKD) [1,2] such as the seminal BB84 protocol [3] generates unconditionally secure keys between two distant parties by transmitting encoded photons. While terrestrial implementations have limited range due to inherent channel losses, implementations with orbiting satellites can extend the range of QKD across the globe [4,5]. However, key generation is diminished by high transmission loss, noise, limited contact time, and imperfections [6,7]. In particular, when employing a weak coherent pulse (WCP) source, the secure key length is reduced to account for multi-photon emissions which leak information [8-10], to the extent that a given pass of a QKD satellite might not yield any usable secure key. Because a QKD platform in orbit is costly and only offers limited access time to a ground station, improvements to the rate of secure key generation are crucial for the platform's viability. While channel performance is limited by telescope apertures and atmospheric quality, the use of true single-photon sources to eliminate information leakage could improve satellite QKD transfer. This is now feasible given recent advances in single-photon emitter devices. Here, we study QKD using single-photon pulses generated by a semiconductor quantum dot (QD) [11] embedded in a photonic nanowire (see, e.g., [12]) that ensures efficient and directional light extraction [13] as well as high pulse rate and purity [14,15]. Such devices could be excellent candidates for a ground-to-satellite uplink implementation, such as the Canadian QEYSSat mission [16], where the sizeable cryogenics and pump lasers required to operate the QD are located on the ground. While the development of single-photon sources using semiconductor quantum dots has progressed steadily in recent years, their benefit for QKD has yet to be clearly demonstrated. (Contact: [email protected]; [email protected]; [email protected].)
Previous proof-of-concept studies demonstrated that QKD with QD sources could generate keys up to the sifting step [17-19]. Here, we theoretically and experimentally compare the QKD performance of a QD source and a WCP source, focusing on a regime of high channel loss and including finite-size effects, where statistical estimates possess significant uncertainty (and thereby impact secure key length) due to small sample sizes. We estimate secure key rates using the formalism of Ref. 20 for QD QKD, after taking practical coupling losses with the QD into account [21,22], and the decoy-state model of Refs. 8-10 and 23 for WCP QKD. Note that, to provide a fair comparison of the two sources, we model our calculations assuming the same pulse repetition rates, as well as the same realistic conditions for satellite-based QKD links, including channel losses of 25 to 35 dB, a fly-by pass duration of 100 s, and a background photon noise rate of several hundred Hz. We believe our study is the first of its kind to demonstrate that a quantum dot single-photon source can substantially enhance the performance of a BB84 QKD protocol under such satellite-link conditions.
II. EXPERIMENT SETUP
Our experimental apparatus is shown in Fig. 1. The optical source, QD or WCP, is coupled into multi-mode fiber. The transmitting party, Alice, utilizes a polarizer and a half-wave plate in a motorized rotating stage to encode one of four equally-distributed linear polarizations, horizontal (H), diagonal (D), vertical (V), or anti-diagonal (A), onto each photon pulse from the selected source. The photons then pass through attenuators emulating channel loss before passing through the free-space channel to the receiver. The receiving party, Bob, uses a beam splitter and polarizing beam splitters to discriminate the four polarization states with a passive choice of measurement basis; the passive-basis-choice design is generally favourable for a satellite payload as it avoids active elements, which are more likely to fail in orbit [10].
Alice's photons enter Bob's receiver through a focusing lens (L1, diameter 50 mm, focal length 250 mm) and a collimating lens (L2, diameter 5 mm, focal length 11 mm). The collimated beam of 2 mm diameter then passes through a 50:50 beam splitter (BS), and polarizing beam splitters (PBS1_HV(DA) and PBS2_HV(DA)) are placed in the HV (DA) arm (PBS2 suppresses noise due to the comparatively low polarization extinction ratio in the reflected path of PBS1). Lenses (L3) in each of the four polarized outputs focus the beams into four multi-mode fibers connected to single-photon detectors. The times of detection pulses are recorded using a time-tagging system.
Link parameters are chosen following the study of satellite QKD in Ref. 6. For each source, we record a series of pulses at each channel transmission loss value, one polarization at a time, for 100 s each, which is a typical usable link time of a satellite in low Earth orbit.
The key generation rate is determined using the average observed statistics of each polarization.
III. QUANTUM DOT QKD
We utilize a single-photon signal from a wurtzite indium arsenide phosphide (InAsP) quantum dot embedded in a tapered indium phosphide (InP) nanowire [14,15]. Non-resonant, or incoherent, pulsed pumping is used to excite the quantum dot, effectively releasing carriers above the bandgap of the wurtzite-InP nanowire. The photoluminescence spectrum of the quantum dot, Fig. 2, is captured under off-resonance excitation by 830 nm laser pulses from a titanium:sapphire (Ti:sapph) mode-locked laser at 420 nW power and 76.4 MHz repetition rate. The quantum dot emits exciton photons at 892.67 nm and biexciton photons at 894.2 nm. To separate these two emission lines, we send the quantum dot emission to a polarization-independent transmission grating with 1504 grooves per millimetre. The photons from the excitonic emission are coupled to a multi-mode optical fiber and sent to the QKD state preparation apparatus; these are chosen as they have a higher rate than those from the biexciton emission (see Fig. 2). The QD source has an internal loss of about 15 dB due to imperfect photon generation and collection, resulting in an effective pulse rate of 2.6 MHz.
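As a quick arithmetic check on the quoted figures, an internal loss in dB converts to a linear efficiency as 10^(−loss/10). The short sketch below only restates this conversion, using the numbers from the text:

```python
def db_loss_to_efficiency(loss_db):
    """Convert an optical loss in dB to a linear transmission efficiency."""
    return 10 ** (-loss_db / 10)

# 15 dB internal loss corresponds to ~3.1% efficiency, so a 76.4 MHz
# excitation rate yields roughly 76.4e6 * 0.031 ~ 2.4 MHz of collected
# photons, of the same order as the 2.6 MHz effective rate quoted above.
print(db_loss_to_efficiency(15.0))  # ~0.0316
# The optimistic case of 4.0 dB internal loss is the ~40% efficiency
# figure discussed later in the paper.
print(db_loss_to_efficiency(4.0))   # ~0.398
```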
For QKD security analysis we assume that the phases of each photon pulse are independent, which is a good approximation for QD single photon pulses [24], while any residual phase could be erased by randomizing the pump phase [25]. A secure implementation of a QKD source requires fast, random polarization encoding. This could be achieved by multiple methods, including direct [25] or on-chip phase modulation, fast external electro-optical phase modulators (typical insertion loss of 4 dB at repetition rates of 10 GHz [26]) or passively combining four QDs with dedicated polarization orientations and switching of the excitation laser pulse to address a random QD at each time slot (insertion loss of 3 dB, repetition rates at GHz levels). For our demonstration we use slow variation of the polarization encoding based on a motorized wave-plate.
A low multi-photon emission probability is most critical for a secure QKD implementation. Impressively, the nanowire quantum dot source has a measured second-order correlation g^(2)(0) ≈ 0.015 when excited off-resonance (see Fig. 3). Although in semiconductor quantum optics there is a special emphasis placed on the measurement of g^(2)(0), recently it has been shown that g^(2)(0) < 1/2 does not provide the exact probability of single- or multi-photon emission [27] but only indicates a non-zero single-photon contribution in the quantum state of the light. This means that even at a low g^(2)(0) the source may emit a small fraction of multi-photon pulses, which could permit an adversary to perform a photon-number-splitting attack and potentially gain information about the key.
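For a pulsed source, g^(2)(0) is commonly estimated from a Hanbury Brown-Twiss coincidence histogram as the ratio of the zero-delay peak area to the mean area of the side peaks. The sketch below illustrates only this estimator; the histogram values are invented for illustration, not measured data:

```python
def g2_zero(peak_areas, zero_index):
    """Estimate g2(0) for a pulsed source: area of the zero-delay
    coincidence peak divided by the mean area of the side peaks
    (coincidences between photons from different, uncorrelated pulses)."""
    side = [a for i, a in enumerate(peak_areas) if i != zero_index]
    return peak_areas[zero_index] / (sum(side) / len(side))

# Hypothetical histogram with a strongly suppressed central peak.
areas = [1000, 1010, 990, 15, 1005, 995, 1000]
print(g2_zero(areas, zero_index=3))  # 0.015
```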
To suppress information leakage due to multi-photon emission from the QD, we use the key rate equation for BB84 QKD with an imperfect photon source [28,29],
L_QD ≥ nq { A [1 − H(Ẽ/A)] − f H(E) } − ∆,    (1)
where n is the number of raw key bits, q = 1/2 is the sifting ratio, and E is the observed quantum bit error ratio (QBER). Ẽ = E + √{[2 ln(1/ε_PE) + 2 ln(m + 1)]/(2m)} takes into account the chance that the QBER estimated from a sifted key of size m = qn deviates from the actual value [20,30,31], and ε_PE is the probability that such a deviation occurs. H(x) = −x log_2(x) − (1 − x) log_2(1 − x) is the binary Shannon entropy, and f H(E) is the information leakage during error correction with error-correction code efficiency f. A correction term,

∆ = 7 √(m log_2(2/ε̄)) + 2 log_2(1/ε_PA) + log_2(2/ε_EC),    (2)
accounts for statistical deviations due to finite-size effects [8-10, 23], with security parameter ε = ε̄ + ε_PA + ε_EC. In this experiment, we choose ε_EC = 10^-10, and ε̄ and ε_PA are numerically optimized for the key size under the constraint 1 − ε_EC > ε̄ > ε_PA ≥ 0. The correction term A = (p_det − P_m)/p_det accounts for an adversary's information due to multi-photon pulses [28,29], where p_det is the probability of detection and P_m is the probability of a multi-photon pulse generated by Alice per time slot. Because the photon number distribution of the quantum dot emission is not precisely known, and certainly cannot be presumed to follow that of a coherent state, we employ an alternative method to establish an upper bound for P_m.
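The key-length bound of Eqs. (1) and (2) can be evaluated numerically. The sketch below is only illustrative: it assumes Scarani-Renner-style finite-key forms for the QBER deviation and for ∆, and all parameter values in the usage lines are invented, not the experimental ones:

```python
from math import log, log2, sqrt

def h2(x):
    """Binary Shannon entropy H(x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1.0 - x) * log2(1.0 - x)

def qd_key_length(n, E, p_det, P_m, f=1.2,
                  eps_PE=1e-10, eps_bar=1e-10, eps_PA=1e-10, eps_EC=1e-10):
    """Secure key length for a sub-Poissonian source, in the spirit of Eq. (1)."""
    q = 0.5                    # sifting ratio
    m = q * n                  # sifted key size
    A = (p_det - P_m) / p_det  # detections untouched by multi-photon leakage
    # Observed QBER plus its finite-size statistical deviation.
    E_tilde = E + sqrt((2 * log(1 / eps_PE) + 2 * log(m + 1)) / (2 * m))
    # Finite-size correction term (in bits), Scarani-Renner style.
    delta = (7 * sqrt(m * log2(2 / eps_bar))
             + 2 * log2(1 / eps_PA) + log2(2 / eps_EC))
    L = n * q * (A * (1 - h2(min(E_tilde / A, 0.5))) - f * h2(E)) - delta
    return max(L, 0.0)

# Illustrative numbers: 1e5 raw detections at 2% QBER still yield a key,
# while 12% QBER yields none.
print(qd_key_length(1e5, 0.02, p_det=3.3e-5, P_m=4.5e-6) > 0)   # True
print(qd_key_length(1e5, 0.12, p_det=3.3e-5, P_m=4.5e-6))       # 0.0
```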
First, from the QKD data recorded in Bob's apparatus, we determine the likelihood of a three-way coincidence detection, where more than two detectors 'click' within the same time window (5 ns), to be less than 10^-9. This probability is similar to that of accidental coincidences caused by background noise in the channel and dark counts from the detectors, and implies that source contributions of three or more photons are negligible; we thus do not consider those further.
The remaining two-photon contributions are characterized with the help of a 50:50 beam splitter, where each output is coupled to an APD, with coupling efficiency η_t = 10% and APD detection efficiency η_d = 60%. The experiment is run for a duration of 10 hours to obtain sufficient statistics on the probability of coincident clicks, C, in a 5 ns window. With emission of i-photon Fock states at probabilities given by p_i, the probability of a 2-fold coincidence is
C = (1/2) p_2 η^2 + O(ηD) + O(D^2)    (3)
  = N_C / N,    (4)
where the detection efficiency of the testing device is given by η = η_t η_d, D is the dark count probability, N_C is the total number of coincident detections, and N is the number of time slots during the 10 hours of data collection. We similarly determine the probability of 'solitary' events S, where only one detector clicks within the window,
S = p_1 η + p_2 η (3/2 − η) + O(D)    (5)
  = N_S / N,    (6)
where N_S is the number of solitary detections over the data collection period. In this setup the dark count probability is lower than 10^-7 per detection event; thus, the contribution of dark counts to C and S is negligible. By combining Eq. (4) and Eq. (6), we find
p_2 = 2κ p_1 / (η − 3κ + 2κη),    (7)
where κ ≡ C/S is calculated from the measurement data. Under the assumption that higher photon-number terms can be neglected, we arrive at a bound for P_m,
P_m ≤ 2κR / (η − 3κ + 2κη),    (8)
where the probability of non-empty pulses R = p_1 + p_2 ≥ p_1 can be measured directly from the source. From our measurements, we find κ = 1.1 × 10^-5, η = 0.06, and R = 0.033, and with Eq. (8) we determine the probability of multi-photon emission to be P_m ≤ 4.5 × 10^-6.
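Eqs. (7) and (8) are straightforward to check numerically: generating κ = C/S from the leading-order expressions (3) and (5) and then inverting must recover p_2 exactly, and substituting R ≥ p_1 for p_1 can only increase the result. A minimal sketch with illustrative (not measured) values:

```python
def p2_from_kappa(kappa, p1, eta):
    """Two-photon probability from the coincidence/solitary ratio, Eq. (7)."""
    return 2 * kappa * p1 / (eta - 3 * kappa + 2 * kappa * eta)

def multiphoton_bound(kappa, R, eta):
    """Upper bound on P_m, Eq. (8), using R = p1 + p2 >= p1."""
    return 2 * kappa * R / (eta - 3 * kappa + 2 * kappa * eta)

# Self-consistency check: build kappa from the leading-order coincidence
# and solitary probabilities of Eqs. (3) and (5) (dark counts neglected),
# then invert via Eq. (7).
p1, p2, eta = 0.03, 1e-5, 0.06
C = 0.5 * p2 * eta ** 2                # Eq. (3)
S = p1 * eta + p2 * eta * (1.5 - eta)  # Eq. (5)
kappa = C / S
print(abs(p2_from_kappa(kappa, p1, eta) - p2) < 1e-15)  # True
```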
IV. WEAK COHERENT PULSED QKD
We compare the QD to a WCP source using the decoy-state BB84 protocol, which employs multiple intensity levels to counter photon-number-splitting attacks. The WCP source is realized using pulses from a Ti:sapph mode-locked laser, attenuated to a mean photon number of µ = 0.5 per signal pulse and ν = 0.1 per decoy pulse. Note that because the optical pulses from a Ti:sapph laser have a phase relation, they are not directly suitable for secure QKD without phase randomization. (We omit this step for simplicity.)
The estimated key length of decoy-state BB84 QKD with two intensity levels is
L_WCP ≥ n K_µ q { Y_1^L [1 − H(E_1^U)] − Q_µ f H(E_µ) − Q_µ ∆/n_µ },    (9)
where n is the total number of transmitted pulses, n_µ is the number of detected signal pulses, Q_µ is the gain of the signal state, Y_1^L is the lower bound of the single-photon gain, E_1^U is the upper bound of the single-photon QBER with a correction for the finite-size effects on the decoy-state characterization [23], K_µ = 0.9 is the fraction of pulses that are in a signal state, and ∆ is a correction term for finite-size effects [20,30,31]. Other parameters are as for the QD source.
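For context, the widely used asymptotic vacuum+weak decoy-state bounds (in the form of Ma, Qi, Zhao, and Lo) on the single-photon yield and error rate can be sketched as below. Note that this is not the finite-size estimator of Ref. 23 used here; it omits the statistical deviation terms, and the scenario in the usage note is simulated, not measured:

```python
from math import exp

def decoy_bounds(Q_mu, Q_nu, E_nu, mu, nu, Y0):
    """Asymptotic vacuum+weak decoy bounds: a lower bound Y1_L on the
    single-photon yield and an upper bound e1_U on the single-photon
    error rate (vacuum error rate taken as 1/2)."""
    Y1_L = (mu / (mu * nu - nu ** 2)) * (
        Q_nu * exp(nu)
        - Q_mu * exp(mu) * nu ** 2 / mu ** 2
        - (mu ** 2 - nu ** 2) / mu ** 2 * Y0
    )
    e1_U = (E_nu * Q_nu * exp(nu) - 0.5 * Y0) / (Y1_L * nu)
    return Y1_L, e1_U
```

In a simulated lossy channel with per-photon transmittance t and background yield Y0, the lower bound sits just below the true single-photon yield Y0 + t, and the error bound just above the intrinsic misalignment error, as expected for valid bounds.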
V. RESULTS AND DISCUSSION
Experimental and theoretical results are shown in Fig. 4. With an observed QBER of ≈2%, the QD QKD system can effectively tolerate channel loss up to ≈27 dB (green line in Fig. 4). This tolerable loss of the QD QKD system is notably higher than that of the WCP protocol with the same repetition rate (red dashed line), despite the QD's high internal loss of 15 dB.
The performance of a QD QKD system will be further enhanced by realistic improvements to the source's internal losses. Other demonstrations of QD sources [32,33] have reported an optimistic 4.0 dB (or 60%) internal loss, owing to a 50% fiber coupling efficiency and an 80% practical photon generation efficiency. Such improved internal coupling of the source brings the loss tolerance of the system up to 32 dB (blue line in Fig. 4), even surpassing a decoy-state QKD system with 300 MHz repetition rate (black dashed line). Finally, assuming the QD uses high-efficiency coupling and is also operated at the 300 MHz rate (light blue line), our extrapolation predicts that the QD QKD could tolerate significantly higher channel loss, close to 37 dB, than a WCP QKD (32 dB), or generate up to one order of magnitude greater key length per 100 s satellite pass, significantly outperforming the WCP QKD system under the same channel conditions. The generation of the four QKD states from a QD emitter may introduce internal losses of 3 to 4 dB due to insertion losses from electro-optic modulators or passive coupling, and the key generation rates would be proportionally lower. However, the advantage of QD QKD over WCP QKD still holds.
VI. CONCLUSION
We experimentally and theoretically compared the performance of QD and WCP QKD under finite-size effects for the purposes of secure communication over constrained channels such as satellite links. In particular, we devised a novel method to characterize and determine the upper bound of multi-photon emissions from a QD emitter, and included finite-size effects due to the limited link duration of about 100 s expected for a satellite contact. Remarkably, our results show that a QD QKD system operated at a 76.4 MHz repetition rate, despite 15 dB internal loss, outperforms a decoy-state WCP QKD running at the same repetition rate, especially at high channel transmission loss. The performance of QD QKD could be improved further by reducing the internal loss; at an optimistic, but still practical, 4.0 dB the performance advantage of a QD system is almost an order of magnitude over WCP. This QD sample was driven by an off-resonant excitation scheme; future work using resonant excitation could reduce the multi-photon emission even further, as well as improve the timing jitter and repetition rate of the emitted photon pulses.
State-of-the-art quantum dots coupled to microcavities have shown lifetimes of ≈60 ps [34], and can be driven on resonance by a GHz-rate pulsed laser. This is equivalent to the current clock rate used in high-speed WCP QKD [35][36][37]. Utilizing quantum dot sources in satellite-uplink-based QKD is very appealing, because the bulky components, such as the cryogenic system for the single-photon source, are located on the ground station. Note, however, the wavelength of single-photon pulses in this study (≈890 nm) is not optimal for the satellite uplink [6]. Further study on other quantum dot materials that emit at better wavelengths, or the possibility of using frequency conversion of the quantum dot single-photon source, is needed.
The narrow-band emission of QDs is well suited to the filters used in very light-polluted, or even daylight, environments. In addition, QD devices have the potential to generate entangled photon pairs, or even produce multiplexed emission, all of which could be helpful for interconnecting multiple users and enhancing the QKD rates in the future. Our theoretical analysis should be applicable to QKD systems with other single-photon emitters, and we believe our study can help spark interest in the advancement of QKD with true single-photon sources. As secure communication over long and global distances becomes more important than ever, the enhancement of satellite QKD using true single-photon emitters is anticipated to make a major contribution to help make this happen. Our findings demonstrate that a quantum dot can indeed be a viable and beneficial photon emitter in a QKD system.
VII. ACKNOWLEDGMENT
We thank N. Lütkenhaus, J.P. Bourgoin, and J. Lin for discussions and technical support. This work was supported by the Industry Canada, Canada Fund for Innovation, Ontario MRI, Ontario Research Fund, NSERC (programs Discovery, CryptoWorks21, Strategic Partnership Grant), and the Canadian Space Agency. P.C. acknowledges support by Thai DPST scholarship.
FIG. 1. Experimental apparatus. For QD QKD, the QD is excited with a mode-locked titanium:sapphire (Ti:sapph) laser. A grating and a wedge mirror are used to separate exciton pulses from bi-exciton pulses. The photons are then sent to the QKD system. For WCP QKD, photon pulses are sent directly from the Ti:sapph laser to the QKD system. At Alice, the signal polarization is first cleaned up with a polarizing beam splitter (PBS) and then encoded through a motorized half-wave plate (HWP). Attenuators (Att) are used to simulate channel loss, as well as to select intensity levels for the decoy-state WCP protocol. The signal is sent through a free-space quantum channel and measured by a passive basis choice polarization-encoding QKD receiver at Bob (see text for details).
FIG. 2. Observed emission spectrum of the QD excited at 830 nm, i.e., above-bandgap non-resonant excitation. The spectrum shows three peaks attributed to the exciton (X), biexciton (XX), and charged exciton or trion (T). The spectrum is measured by an imaging spectrometer using a 1200 grooves/mm grating.
FIG. 3. Measured autocorrelation histogram of photon emission from the QD. The data are presented without any background corrections.
FIG. 4. Secret key size over a 100 s exchange, including finite-size effects, for varying channel loss. Symbols indicate experimental results, lines theoretical. Red circles and dashed line, WCP QKD at 76.4 MHz; green pluses and line, QD QKD with 76.4 MHz excitation frequency and 15 dB internal loss; blue line, extrapolation of 76.4 MHz QD QKD to 4.0 dB internal loss; black dashed line, extrapolation of WCP QKD to 300 MHz; light-blue line, extrapolation of QD QKD to 300 MHz excitation and 4.0 dB internal loss.
[1] N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74, 145 (2002).
[2] V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dušek, N. Lütkenhaus, and M. Peev, Rev. Mod. Phys. 81, 1301 (2009).
[3] C. H. Bennett and G. Brassard, in Proc. IEEE International Conference on Computers, Systems, and Signal Processing (Bangalore, India) (IEEE Press, New York, 1984), pp. 175-179.
[4] S.-K. Liao, W.-Q. Cai, W.-Y. Liu, L. Zhang, Y. Li, J.-G. Ren, J. Yin, Q. Shen, Y. Cao, Z.-P. Li, F.-Z. Li, X.-W. Chen, L.-H. Sun, J.-J. Jia, J.-C. Wu, X.-J. Jiang, J.-F. Wang, Y.-M. Huang, Q. Wang, Y.-L. Zhou, L. Deng, T. Xi, L. Ma, T. Hu, Q. Zhang, Y.-A. Chen, N.-L. Liu, X.-B. Wang, Z.-C. Zhu, C.-Y. Lu, R. Shu, C.-Z. Peng, J.-Y. Wang, and J.-W. Pan, Nature 549, 43 (2017).
[5] S.-K. Liao, W.-Q. Cai, J. Handsteiner, B. Liu, J. Yin, L. Zhang, D. Rauch, M. Fink, J.-G. Ren, W.-Y. Liu, Y. Li, Q. Shen, Y. Cao, F.-Z. Li, J.-F. Wang, Y.-M. Huang, L. Deng, T. Xi, L. Ma, T. Hu, L. Li, N.-L. Liu, F. Koidl, P. Wang, Y.-A. Chen, X.-B. Wang, M. Steindorfer, G. Kirchner, C.-Y. Lu, R. Shu, R. Ursin, T. Scheidl, C.-Z. Peng, J.-Y. Wang, A. Zeilinger, and J.-W. Pan, Phys. Rev. Lett. 120, 030501 (2018).
[6] J.-P. Bourgoin, E. Meyer-Scott, B. L. Higgins, B. Helou, C. Erven, H. Hübel, B. Kumar, D. Hudson, I. D'Souza, R. Girard, R. Laflamme, and T. Jennewein, New J. Phys. 15, 023006 (2013).
[7] M. Polnik, L. Mazzarella, M. D. Carlo, D. K. Oi, A. Riccardi, and A. Arulselvan, EPJ Quantum Technol. 7 (2020).
[8] W.-Y. Hwang, Phys. Rev. Lett. 91, 057901 (2003).
[9] X. Ma, B. Qi, Y. Zhao, and H.-K. Lo, Phys. Rev. A 72, 012326 (2005).
[10] J.-P. Bourgoin, N. Gigov, B. L. Higgins, Z. Yan, E. Meyer-Scott, A. K. Khandani, N. Lütkenhaus, and T. Jennewein, Phys. Rev. A 92, 052339 (2015).
[11] P. Senellart, G. Solomon, and A. White, Nat. Nanotechnol. 12, 1026-1039 (2017).
[12] D. Dalacu, P. J. Poole, and R. L. Williams, Nanotechnology 30, 232001 (2019).
[13] M. E. Reimer, G. Bulgarini, A. Fognini, R. W. Heeres, B. J. Witek, M. A. M. Versteegh, A. Rubino, T. Braun, M. Kamp, S. Höfling, D. Dalacu, J. Lapointe, P. J. Poole, and V. Zwiller, Phys. Rev. B 93, 195316 (2016).
[14] M. E. Reimer, G. Bulgarini, N. Akopian, M. Hocevar, M. B. Bavinck, M. A. Verheijen, E. P. A. M. Bakkers, L. P. Kouwenhoven, and V. Zwiller, Nat. Commun. 3, 737 (2012).
[15] D. Dalacu, K. Mnaymneh, J. Lapointe, X. Wu, P. J. Poole, G. Bulgarini, V. Zwiller, and M. E. Reimer, Nano Lett. 11, 5919 (2012).
[16] T. Jennewein, J. P. Bourgoin, B. Higgins, C. Holloway, E. Meyer-Scott, C. Erven, B. Heim, Z. Yan, H. Hübel, G. Weihs, E. Choi, I. D'Souza, D. Hudson, and R. Laflamme, in Advances in Photonics of Quantum Computing, Memory, and Communication VII, Vol. 8997, edited by Z. U. Hasan, P. R. Hemmer, H. Lee, and C. M. Santori (SPIE, 2014), pp. 21-27.
[17] P. M. Intallura, M. B. Ward, O. Z. Karimov, Z. L. Yuan, P. See, and A. J. Shields, Appl. Phys. Lett. 91, 161103 (2007).
[18] T. Heindel, C. A. Kessler, M. Rau, C. Schneider, M. Fürst, F. Hargart, W.-M. Schulz, M. Eichfelder, R. Roßbach, and S. Nauerth, New J. Phys. 14, 083001 (2012).
[19] K. Takemoto, Y. Nambu, T. Miyazawa, Y. Sakuma, T. Yamamoto, S. Yorozu, and Y. Arakawa, Sci. Rep. 5, 14383 (2015).
[20] R. Y. Q. Cai and V. Scarani, New J. Phys. 11, 045024 (2009).
[21] R. Renner, N. Gisin, and B. Kraus, Phys. Rev. A 72, 012332 (2005).
[22] R. König, R. Renner, A. Bariska, and U. Maurer, Phys. Rev. Lett. 98, 140502 (2007).
[23] M. Curty, X. Ma, B. Qi, and T. Moroder, Phys. Rev. A 81, 022310 (2010).
[24] H. Jayakumar, A. Predojević, T. Huber, T. Kauten, G. S. Solomon, and G. Weihs, Phys. Rev. Lett. 110, 135505 (2013).
[25] J. P. Lee, L. M. Wells, B. Villa, S. Kalliakos, R. M. Stevenson, D. J. P. Ellis, I. Farrer, D. A. Ritchie, A. J. Bennett, and A. J. Shields, Phys. Rev. X 8, 021078 (2018).
[26] EOspace, https://www.eospace.com/phase-modulator, visited August 2020.
[27] P. Grünwald, New J. Phys. 21, 093003 (2019).
[28] N. Lütkenhaus, Phys. Rev. A 61, 052304 (2000).
[29] D. Gottesman, H.-K. Lo, N. Lütkenhaus, and J. Preskill, Quantum Inf. Comput. 4, 325 (2004).
[30] R. Renner and R. Koenig, in Second Theory of Cryptography Conference, TCC 2005, LNCS, Vol. 3378, edited by J. Kilian (Springer Verlag, Berlin, 2005), pp. 407-425.
[31] V. Scarani and R. Renner, Phys. Rev. Lett. 100, 200501 (2008).
[32] O. Gazzano, S. Michaelis de Vasconcellos, C. Arnold, A. Nowak, E. Galopin, I. Sagnes, L. Lanco, A. Lemaître, and P. Senellart, Nat. Commun. 4, 1425 (2013).
[33] G. Bulgarini, M. E. Reimer, M. Bouwes Bavinck, K. D. Jöns, D. Dalacu, P. J. Poole, E. P. A. M. Bakkers, and V. Zwiller, Nano Lett. 14, 4102 (2014).
[34] H. Wang, Y. He, T. Chung, et al., Nat. Photonics 13, 770-775 (2019).
[35] K. Gordon, V. Marmol, G. Buller, I. Rech, S. Cova, and P. Townsend, Opt. Express 13, 3015 (2005).
[36] A. R. Dixon, Z. L. Yuan, J. F. Dynes, A. W. Sharpe, and A. J. Shields, Opt. Express 16, 18790 (2008).
[37] S. Wang, W. Chen, J.-F. Guo, Z.-Q. Yin, H.-W. Li, Z. Zhou, G.-C. Guo, and Z.-F. Han, Opt. Lett. 37, 1008 (2012).
[
"Accelerating Multi-Model Inference by Merging DNNs of Different Weights"
] | [
"Joo Seong Jeong \nSeoul National University\n",
"Soojeong Kim \nSeoul National University\n",
"Gyeong-In Yu \nSeoul National University\n",
"Yunseong Lee \nSeoul National University\n",
"Byung-Gon Chun \nSeoul National University\n"
] | [
"Seoul National University\n"
] | [] | Standardized DNN models that have been proved to perform well on machine learning tasks are widely used and often adopted as-is to solve downstream tasks, forming the transfer learning paradigm. However, when serving multiple instances of such DNN models from a cluster of GPU servers, existing techniques to improve GPU utilization such as batching are inapplicable because models often do not share weights due to fine-tuning. We propose NETFUSE, a technique of merging multiple DNN models that share the same architecture but have different weights and different inputs. NETFUSE is made possible by replacing operations with more general counterparts that allow a set of weights to be associated with only a certain set of inputs. Experiments on ResNet-50, ResNeXt-50, BERT, and XLNet show that NETFUSE can speed up DNN inference time up to 3.6× on a NVIDIA V100 GPU, and up to 3.0× on a TITAN Xp GPU when merging 32 model instances, while only using up a small additional amount of GPU memory. * Yunseong is currently affiliated with Qualcomm AI Research.Preprint. Under review. | null | [
"https://arxiv.org/pdf/2009.13062v1.pdf"
] | 221,970,804 | 2009.13062 | 41711661c5142f3f83be714bd486653fcaa5a1a6 |
Accelerating Multi-Model Inference by Merging DNNs of Different Weights
Joo Seong Jeong
Soojeong Kim
Gyeong-In Yu
Yunseong Lee
Byung-Gon Chun

Seoul National University
Standardized DNN models that have been proved to perform well on machine learning tasks are widely used and often adopted as-is to solve downstream tasks, forming the transfer learning paradigm. However, when serving multiple instances of such DNN models from a cluster of GPU servers, existing techniques to improve GPU utilization such as batching are inapplicable because models often do not share weights due to fine-tuning. We propose NETFUSE, a technique of merging multiple DNN models that share the same architecture but have different weights and different inputs. NETFUSE is made possible by replacing operations with more general counterparts that allow a set of weights to be associated with only a certain set of inputs. Experiments on ResNet-50, ResNeXt-50, BERT, and XLNet show that NETFUSE can speed up DNN inference time up to 3.6× on a NVIDIA V100 GPU, and up to 3.0× on a TITAN Xp GPU when merging 32 model instances, while only using up a small additional amount of GPU memory. * Yunseong is currently affiliated with Qualcomm AI Research.Preprint. Under review.
Introduction
Various standardized deep neural network (DNN) models exist for modern machine learning tasks. For example, attention models such as BERT and XLNet have recently been proven to be particularly effective for language understanding tasks [8,32]. Meanwhile, the ResNet, Inception, and ResNeXt models are widely used for image classification tasks [11,28,31]. Such models are recognized for their ability to learn general representations for their respective input data distributions, and are often used to solve new tasks with little to no modification in model architecture. The applicability of these representative models is backed by the transfer learning paradigm: knowledge gained by a model while training on one task can be passed down to another model training on some other related task. In fact, pretrained parameters for these models (on major datasets) are publicly available online in the form of model zoos.
Before being deployed to serving systems, such models undergo a fine-tuning process in which the parameters of the models are altered specifically for the task at hand. This process is necessary because the model output required by a specific task often does not align with that of the general task employed to learn the pretrained parameters. For example, a ResNet [11] trained on a 1000-class dataset [26] cannot be adopted as-is for a 10-class dataset [18]; the last classification layer must be replaced with an appropriate substitute, followed by additional training. Fine-tuning also brings the benefit of pushing the model parameters towards the specific task, making the model lose generality but gain specialty. This results in a situation where many specialized models have similar architectures but different internal parameters.

Figure 1: Comparison of serving systems that employ computation merging. NETFUSE is the only one that is able to merge computations when both inputs and weights are different.
Many systems for serving DNN models on CPUs and GPUs have been proposed, each focusing on a different aspect of DNN serving, or DNN inference. When it comes to serving multiple models from a cluster of servers, DNN inference is known to have poor resource utilization, especially on GPUs [15]. This is mainly because the computation capability of modern GPUs is usually more than enough to handle the actual amount of computation required during inference. Unlike training, DNN inference does not involve the usual "backward pass" in which gradients are calculated from a certain loss function, resulting in a much smaller amount of floating point operations (FLOPs).
A common technique to improve resource utilization during DNN inference is batching [7,22,10]. Thanks to the sheer amount of cores within a GPU, simply providing the GPU with many, mutually independent operations at once by batching inputs is actually a good approach to boost GPU utilization. However, conventional batching is confined to only single-model settings, limiting the applicability of batching in the context of serving systems that deal with multiple models. There have been recent systems that propose multi-model batching [17,27] as an alternative, but this is not feasible when models have completely different parameters, which is in fact a relatively common case considering transfer learning and fine-tuning. Without batching (both single-model and multi-model), models must be run in isolation from one another, missing out on optimization opportunities.
In this paper, we propose NETFUSE, a technique of coalescing weights to merge multiple DNN models with different parameters 2 as well as different inputs. Our technique fuses several instances of a DNN operation that do not share weights into one larger operation by carefully aligning the weights so that a set of weights is only associated with its original corresponding input. Since the act of associating a set of weights with only a certain input is not allowed for some types of operations, we substitute such operations with more general counterparts if necessary, e.g., we replace layer normalization with group normalization, and convolution with grouped convolution. Unlike batching, NETFUSE does not require models to share parameters or inputs.
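To make the normalization substitution concrete, the following is a minimal NumPy sketch (an illustration under our own simplifying assumptions, not the NETFUSE implementation; shapes and names are arbitrary). Two per-model layer normalizations are replaced by a single group normalization over the concatenated features, so normalization statistics never mix across models:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each row over its feature dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def group_norm(x, groups, eps=1e-5):
    # Split the features into disjoint groups and normalize each group
    # independently, then concatenate the results back together.
    parts = np.split(x, groups, axis=-1)
    return np.concatenate([layer_norm(p, eps) for p in parts], axis=-1)

rng = np.random.default_rng(0)
x_a = rng.standard_normal((4, 16))  # inputs of model A
x_b = rng.standard_normal((4, 16))  # inputs of model B

# Separate execution: one layer normalization per model.
y_a, y_b = layer_norm(x_a), layer_norm(x_b)

# Merged execution: concatenate along the feature axis and run one
# group normalization with groups=2, one group per original model.
y = group_norm(np.concatenate([x_a, x_b], axis=-1), groups=2)

assert np.allclose(y[:, :16], y_a) and np.allclose(y[:, 16:], y_b)
```

The per-group statistics are exactly the per-model layer norm statistics, which is why the substitution is exact (affine scale/shift parameters, omitted here, can likewise be concatenated per group since they act elementwise).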
We evaluate NETFUSE using four representative models (ResNet-50, ResNeXt-50, BERT, XLNet) for vision and language tasks on two types of NVIDIA GPUs (V100, TITAN Xp). Under various settings that merge different number of models with different batch sizes, NETFUSE outperforms other baselines by up to 3.6× in terms of inference time.
2 Related Work
Multi-Model Inference
Many inference systems have proposed solutions for improving the GPU utilization of DNN inference by either reusing computations when possible, or merging several computations into one large computation [7,22,10,17,27,23]. Different systems assume different input and model settings, leading to different merging scenarios and strategies ( Figure 1).
Same inputs, same weights. The inference result for a certain query can be reused as-is for another query if both the query input and the target DNN weights are identical. Referred to as model sharing by MCDNN [10] and Mainstream [17], this technique can be applied to cases where a single input is processed by multiple DNNs of similar tasks, e.g., age classification and gender classification on human faces. The system identifies a common subnetwork from multiple DNNs and runs the input through the common subnetwork only once so that the same computation is not repeated needlessly.

Different inputs, same weights. Inference queries on different inputs can also be merged if the weights of the target DNN are the same, in the form of batching. Batching several DNN inputs together is a classic technique for exploiting the GPU's parallel computing power, and many DNN operations are implemented with batch size in mind. Systems such as Clipper [7] and PRETZEL [22] employ batching by delaying the processing of a certain query and merging the query with subsequent queries, improving inference throughput at the cost of sacrificing latency. Another system, Nexus [27], demonstrates a similar batching technique of aggregating inputs from multiple queries on different DNNs, assuming the DNNs have large subnetworks in common.

2 The terms weight and parameter are interchangeable within the context of DNNs. We use both terms throughout this paper.
Same inputs, different weights. HiveMind [23] introduces cross-model layer fusion, a technique of merging DNNs of different weights. Instead of batching inputs, HiveMind batches the weights and applies them to the same input. In essence, cross-model layer fusion can be regarded as the weight-input counterpart of batching.
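The weight-batching idea can be illustrated with NumPy broadcasting (an illustrative sketch under our own assumptions, not HiveMind's actual implementation): a single shared input is multiplied against a stack of weight tensors in one batched call.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))      # one input, shared by both models
w = rng.standard_normal((2, 8, 16))  # stacked weights of two different models

# One batched matmul: x broadcasts against the leading weight dimension,
# so y[i] equals x @ w[i] for each model i.
y = np.matmul(x, w)                  # shape (2, 4, 16)

assert np.allclose(y[0], x @ w[0])
assert np.allclose(y[1], x @ w[1])
```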
Different inputs, different weights. All aforementioned systems require one major condition: when merging computations, either the inputs or the DNN weights (or both) must be identical. This condition prohibits merging computations of different inputs and different weights.
Consider the following situation: a server is serving instances of the BERT [8] model for several natural language processing tasks such as question answering, sentence prediction, and text generation. Each task demands its own fine-tuning procedure, i.e., the DNN weights of each task are different. Moreover, given the nature of the tasks, each task is associated with a different input stream. Existing approaches on computation merging are inapplicable here, because of the input/weight differences.
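The failure mode can be seen directly in a small NumPy illustration (shapes are arbitrary; this is a sketch of the problem, not of any particular system): conventional input batching presumes a single shared weight tensor, so applying it to models with different weights silently computes the wrong result for one of them.

```python
import numpy as np

rng = np.random.default_rng(0)
x_a, x_b = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
w_a, w_b = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))

# "Different inputs, same weights": batching works, because every
# batched input is meant to meet the same weight tensor.
batched = np.concatenate([x_a, x_b]) @ w_a
assert np.allclose(batched[:4], x_a @ w_a)
assert np.allclose(batched[4:], x_b @ w_a)  # correct only if B uses w_a

# "Different inputs, different weights": the same trick silently
# applies model A's weights to model B's inputs.
assert not np.allclose(batched[4:], x_b @ w_b)
```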
Researchers have demonstrated various benefits of fine-tuning all DNN weights in many real-world use cases. For example, in natural language processing, state-of-the-art neural networks fine-tune all parameters of a pre-trained model for each downstream task [8,25,32,13]. Fine-tuning has also been shown to be effective in computer vision [33,6,34]. Classic model ensemble techniques [21,20] utilize models of the same architecture and different weights as well; at test time, the inference results of each model are aggregated to produce the final result. Some works have even proposed branched models [9,29], which contain specialized subnetworks of the same architecture. Previous computation merging techniques are invalid for all such cases, despite the DNN architecture being largely unchanged.

Graph Rewriting Frameworks

A recent line of frameworks, including TASO [16], TVM [5], TensorFlow XLA [2], and TensorRT [1], proposes graph rewriting as a method for optimizing DNN models. Graph rewriting frameworks apply graph substitution rules, either hand-written or automatically generated, to a DNN model and generate a new model that outputs mathematically equivalent results but can be executed faster on accelerators. At first glance, graph rewriting seems like a good approach for optimizing the aforementioned use case of models with different inputs and weights. Since a set of M disjoint models, with one input each, can be considered as one large model with M inputs, we can simply feed the models as-is into a graph rewriting framework for optimization and hope the framework cleverly merges operations across models.
Unfortunately, we find that existing graph rewriting frameworks are ineffective for supporting multi-model inference. First, the greedy search strategies of existing frameworks prefer single-model optimizations over multi-model optimizations, as multi-model optimizations are often hidden behind overheads. Figure 2 depicts an instance in which the execution of two convolutions from two separate models in (a) can be accelerated by concatenating the inputs/outputs and merging the convolutions into a grouped convolution, as shown in (b). TASO [16], a state-of-the-art graph rewriting framework, is unable to discover this optimization, despite the grouped convolution being faster. We can manipulate TASO into finding this optimization by giving it extremely aggressive search hyperparameters, but this leads us to the next problem. Second, frameworks that automatically generate and apply substitution rules experience scalability issues when optimizing multiple disjoint models. For example, TASO takes more than 30 hours to fully explore the optimization space (even under conservative search hyperparameters) when given four instances of ResNeXt-50 [31], and flat-out runs out of memory when given eight instances. Even then, we found that TASO does not apply any kind of significant multi-model optimization, because TASO's default substitution rules do not cover the multi-model inference case at hand.
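The rewrite in Figure 2 can be checked numerically. The sketch below is a naive NumPy reference implementation (our own illustration, not TASO's or NETFUSE's code): two ordinary convolutions are merged into one grouped convolution by concatenating the inputs along the channel axis and the weights along the output-channel axis, so each weight group only ever touches its own model's input channels.

```python
import numpy as np

def conv2d(x, w):
    # Naive direct convolution: x is (C_in, H, W), w is (C_out, C_in, kH, kW);
    # valid padding, stride 1.
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o])
    return out

def grouped_conv2d(x, w, groups):
    # Split channels into `groups` disjoint input-weight pairs, convolve
    # each pair independently, and concatenate the outputs.
    xs = np.split(x, groups, axis=0)
    ws = np.split(w, groups, axis=0)
    return np.concatenate([conv2d(xi, wi) for xi, wi in zip(xs, ws)], axis=0)

rng = np.random.default_rng(0)
x_a, x_b = rng.standard_normal((3, 8, 8)), rng.standard_normal((3, 8, 8))
w_a, w_b = rng.standard_normal((4, 3, 3, 3)), rng.standard_normal((4, 3, 3, 3))

# Separate models: two ordinary convolutions.
y_a, y_b = conv2d(x_a, w_a), conv2d(x_b, w_b)

# Merged: one grouped convolution with groups=2 over the concatenation,
# so w_a is only applied to x_a and w_b only to x_b.
y = grouped_conv2d(np.concatenate([x_a, x_b]),
                   np.concatenate([w_a, w_b]), groups=2)

assert np.allclose(y[:4], y_a) and np.allclose(y[4:], y_b)
```

On a GPU, the same merge would map to a single grouped-convolution kernel call (e.g., a convolution with a `groups` argument in mainstream frameworks) rather than this Python loop.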
NETFUSE
The key to merging multiple instances of an operation with different inputs and different weights is to find a more general counterpart of the operation that allows a set of weights to be paired with only a certain set of inputs. Throughout this section, we will use the term input-weight pair to indicate a tuple of inputs and weights that are used together, exclusively. We will also use the term input-weight local computation when referring to the computation of a specific input-weight pair.
We highlight in detail why operations that incorporate input-weight local computations are necessary when merging operations, with Figure 3. Figure 3a depicts an abstract illustration of DNN operations whose inputs are associated with all available weights, symbolizing operations such as matrix multiplication (fully connected layers) and convolution. In the figure, we are trying to merge an operation of inputs x_A and weights w_A with another operation of inputs x_B and weights w_B. Without parting from the structure shown in Figure 3a, there is virtually no way of preventing inputs x_B from being processed by w_A, because all inputs are associated with all weights. In order to separate x_A and w_A from x_B and w_B, we need another operation that involves input-weight local computations. Figure 3b portrays another category of DNN operations that consists of multiple input-weight pairs. With this structure, it is possible to isolate a set of inputs x_A and weights w_A from the other set of inputs and weights. In fact, major DNN operations all have some form of counterpart operation that performs the same type of computation as the original operation, but additionally allows a certain degree of local computation among inputs and weights, as shown in Table 1.
Such operations were not originally designed to be used for merging operations of different weights.
For instance, one of the first well-known usages of the grouped convolution operation is none other than the acclaimed AlexNet [19]; the authors describe in their paper that the restriction on convolution channels was actually a compromise they had to make because of the memory limitations of the GPUs at that time. Another use of the operation was MobileNet [12], in which the depthwise convolution (an extreme version of grouped convolution) operation was employed in place of an ordinary convolution to reduce FLOPs and allow deployment on mobile devices, at the cost of sacrificing accuracy. Interestingly, another DNN, ResNeXt [31], has more recently experimented with grouped convolutions to improve model accuracy. While existing applications of grouped convolution can all be classified as attempts to alter the properties (GPU memory, FLOPs, accuracy) of a single convolution operation, we approach from a different point of view and instead apply the operation as a means of merging multiple operations.
For the rest of this section, we look into merging widely used DNN operations and what general counterpart operations they require (Section 3.1). We then elucidate how operation merging can be extended to entire DNNs (Section 3.2).
Merging Individual Operations
Matrix multiplication Multiple matrix multiplications, i.e., fully connected layers, can be merged into a batch matrix multiplication. Batch matrix multiplication is simply matrix multiplication with a batch of inputs and a batch of weight tensors. Each input is multiplied with only one weight tensor, which is exactly how we want to merge operations. In fact, the kernel implementations of matrix multiplication in frameworks such as TensorFlow [3] and PyTorch [24] support batch matrix multiplication by default. Merging several instances of matrix multiplication is done by first concatenating the inputs and weights along the batch dimension into one big input batch and one big weight batch, respectively, and then replacing the individual operations with a single batch matrix multiplication operation.
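As an illustration of this merging scheme (a NumPy sketch of our own with toy shapes, not the paper's code), two fully connected layers with different inputs and weights can be replaced by one batched matrix multiplication, reproducing both outputs exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two fully connected layers with different weights, serving different inputs.
x_a, x_b = rng.standard_normal((1, 4)), rng.standard_normal((1, 4))
w_a, w_b = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))

# Inference on each layer individually.
y_a, y_b = x_a @ w_a, x_b @ w_b

# Merged: stack inputs and weights along a new batch dimension and run one
# batched matmul; each input is multiplied with only its own weight tensor.
x = np.stack([x_a, x_b])   # shape (2, 1, 4)
w = np.stack([w_a, w_b])   # shape (2, 4, 3)
y = np.matmul(x, w)        # shape (2, 1, 3), one matmul per input-weight pair

assert np.allclose(y[0], y_a) and np.allclose(y[1], y_b)
```

The same layout carries over to framework kernels: `torch.bmm` and batched `np.matmul` pair the i-th input slice with the i-th weight slice, which is exactly the input-weight pairing described above.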
It is noteworthy that since a matrix multiplication can be converted to a mathematically equivalent 1x1 convolution, it is also possible to merge several matrix multiplications as if they were convolutions. However, we have found that this results in very slow inference speed for even moderately sized DNNs, because convolution kernel implementations are not optimized for what is effectively a single matrix multiplication.
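The equivalence itself is easy to verify numerically. The following NumPy sketch (our own illustration with toy shapes) treats the features of a fully connected layer as channels of a 1x1 "image" and the weights as 1x1 kernels:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((1, 4))    # one sample with 4 features
w = rng.standard_normal((4, 3))    # fully connected weights: 4 -> 3

# View the 4 features as 4 channels of a 1x1 image, and the FC weights as
# 3 output kernels of shape (C_in=4, 1, 1); a valid cross-correlation with
# a 1x1 kernel reduces to a dot product over the input channels.
x_img = x.reshape(4, 1, 1)
kernels = w.T.reshape(3, 4, 1, 1)
y_conv = np.stack([(k * x_img).sum() for k in kernels]).reshape(1, 3)

assert np.allclose(y_conv, x @ w)
```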
Convolution The convolution operation, unlike matrix multiplication, does not have a straightforward "batched" version. Instead, we make use of the more general grouped convolution operation, which is similar to the original convolution operation but has the restriction that each output channel is calculated from only a certain group of input channels rather than all input channels.
In the context of operation merging, we discovered that maintaining several groups that are isolated from each other corresponds exactly to maintaining isolated input-weight pairs: each group corresponds to one input-weight pair. We merge two convolutions by concatenating the inputs along the channel dimension (and likewise for the weights), then placing a grouped convolution whose number of groups equals the number of merged convolutions (i.e., the number of input-weight pairs). A formal derivation showing that a grouped convolution can produce the exact same results as a set of ordinary convolutions is given in Appendix A.
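As a numerical check of this scheme (a naive NumPy sketch of our own, not the paper's implementation), the grouped convolution below computes each output channel only from its own group of input channels, and merging two convolutions into a 2-group convolution reproduces both individual outputs:

```python
import numpy as np

def conv2d(x, w):
    """Naive valid cross-correlation. x: (C_in, H, W), w: (C_out, C_in, K, K)."""
    c_out, c_in, k, _ = w.shape
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    y = np.zeros((c_out, h, wd))
    for c in range(c_out):
        for i in range(h):
            for j in range(wd):
                y[c, i, j] = np.sum(w[c] * x[:, i:i + k, j:j + k])
    return y

def grouped_conv2d(x, w, groups):
    """Grouped convolution: output channel c sees only its group's inputs."""
    c_in, k = w.shape[1], w.shape[2]
    c_out = w.shape[0] // groups          # output channels per group
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    y = np.zeros((w.shape[0], h, wd))
    for c in range(w.shape[0]):
        g = c_in * (c // c_out)           # input-channel offset of c's group
        for i in range(h):
            for j in range(wd):
                y[c, i, j] = np.sum(w[c] * x[g:g + c_in, i:i + k, j:j + k])
    return y

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal((2, 5, 5)), rng.standard_normal((2, 5, 5))
w1, w2 = rng.standard_normal((3, 2, 3, 3)), rng.standard_normal((3, 2, 3, 3))

# Merge: concatenate inputs and weights along the channel dimension and run
# one grouped convolution with as many groups as merged convolutions.
y = grouped_conv2d(np.concatenate([x1, x2]), np.concatenate([w1, w2]), groups=2)

assert np.allclose(y[:3], conv2d(x1, w1))
assert np.allclose(y[3:], conv2d(x2, w2))
```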
Layer normalization Layer normalization [4] instances can be merged into a single group normalization. Because all input channels are aggregated and normalized at once for layer normalization, simply concatenating the inputs and then using a larger layer normalization instance does not suffice; separate sets of inputs would not be isolated from each other. Instead, we turn our eyes toward another normalization method, group normalization [30], that breaks up channels into disjoint groups, akin to grouped convolution. This enables merging layer normalization instances in a manner similar to convolution; we concatenate the inputs and weights along the channel dimension to generate a large input tensor and a large weight tensor, and create a group normalization instance with a number of groups equal to the number of merged layer normalizations.
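The layer-norm-to-group-norm merge can be checked the same way. In this NumPy sketch (our own illustration over per-channel vectors), a group normalization with two groups over concatenated channels matches each original layer normalization:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize over all channels of one sample; x, gamma, beta: shape (C,).
    mu, var = x.mean(), x.var()
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def group_norm(x, gamma, beta, groups, eps=1e-5):
    # Split channels into disjoint groups and normalize each independently.
    xg = x.reshape(groups, -1)
    mu = xg.mean(axis=1, keepdims=True)
    var = xg.var(axis=1, keepdims=True)
    out = (xg - mu) / np.sqrt(var + eps)
    return gamma * out.reshape(-1) + beta

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
g1, g2 = rng.standard_normal(4), rng.standard_normal(4)
b1, b2 = rng.standard_normal(4), rng.standard_normal(4)

# Merge: concatenate inputs and affine parameters along the channel
# dimension; one group per merged layer normalization.
merged = group_norm(np.concatenate([x1, x2]),
                    np.concatenate([g1, g2]),
                    np.concatenate([b1, b2]), groups=2)

assert np.allclose(merged[:4], layer_norm(x1, g1, b1))
assert np.allclose(merged[4:], layer_norm(x2, g2, b2))
```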
Operations with input-weight local computations The general counterpart operations mentioned in the previous sections -i.e., batch matrix multiplication, grouped convolution, and group normalization -can be merged without changing the operation type. Since these operations allow input-weight local computations by nature, multiple instances of such operations can be merged by concatenating inputs and weights, and increasing the number of local computation groups. For example, merging 4 grouped convolutions that use 2 groups each would result in a large grouped convolution of 4 × 2 = 8 groups. Batch normalization [14] can also be merged without special manipulations; the calculations of batch normalization are done in a per-channel manner, so inputs and weights just need to be concatenated along the channel dimension.
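For batch normalization, the per-channel nature of the computation makes the merge a pure concatenation. A NumPy sketch (our own, inference-mode only, with per-channel toy vectors):

```python
import numpy as np

def bn_infer(x, gamma, beta, mean, var, eps=1e-5):
    # Inference-mode batch norm: every quantity is per-channel.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
# Per-model parameters: gamma, beta, running mean, running var (positive).
p1 = [rng.standard_normal(3), rng.standard_normal(3),
      rng.standard_normal(3), rng.random(3) + 0.1]
p2 = [rng.standard_normal(3), rng.standard_normal(3),
      rng.standard_normal(3), rng.random(3) + 0.1]

# Merge: concatenate the inputs and every parameter along the channel dim.
merged = bn_infer(np.concatenate([x1, x2]),
                  *[np.concatenate([a, b]) for a, b in zip(p1, p2)])

assert np.allclose(merged[:3], bn_infer(x1, *p1))
assert np.allclose(merged[3:], bn_infer(x2, *p2))
```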
Non-trainable operations All non-trainable operations can be merged seamlessly, as there are no weights to be merged. This includes activation functions (e.g., ReLU, Swish, Tanh), max-pooling, mean-pooling, and also other element-wise operations such as plain addition or multiplication.
End-to-end DNN Merging
We now extend our discussion to entire DNNs. More specifically, we show how we merge multiple DNNs that share the same architecture -i.e., the same sequence of operations -but incorporate different inputs and different weights. DNNs can be merged by first merging operations independently, and then reshaping and transposing intermediate tensors between merged operations where necessary. Whether to add reshaping and transposing operations depends on the tensor dimension along which each operation was merged.
We demonstrate DNN merging with an example of a classical feedforward neural network (FFNN) consisting of a fully connected layer (i.e., matrix multiplication) followed by a layer normalization layer. Figure 4 shows two instances of a basic FFNN that share the same network architecture, but contain different weights and serve different inputs (depicted by the difference in shades). First, the matrix multiplications can be merged into a batch matrix multiplication, given that the inputs are correctly concatenated along the batch dimension. Next, the layer normalizations can be merged into a group normalization of two groups, but the fact that the previously merged operation produces tensors packed along the batch dimension conflicts with group normalization's merging condition of tensors being concatenated along the channel dimension. Therefore, we insert a reshaping operation between the preceding batch matrix multiplication and the new group normalization to ensure that the input tensor of the group normalization has the expected shape. The rest of the operations in the network are all non-trainable operations, and thus do not require any particular reshaping.
We formally describe the merging process in Algorithm 1. Essentially, the algorithm is a BFS graph traversal with a time complexity of O(|V| + |E|). Visiting each operation, we first check the required merge dimension of the operation (lines 12-16). Merging matrix multiplications into a batch matrix multiplication demands tensors to be concatenated along the Batch dimension, while grouped convolution, layer normalization, and batch normalization demand concatenation along the Channel dimension. Non-trainable operations do not require a specific concatenation scheme, hence we set the dimension as DontCare. Next, we check whether the merge dimension of the operation is compatible with that of the parent operations (lines 29-31). If the merge dimensions are not compatible (either d_j = Batch, d_i = Channel or vice versa), then we insert a reshape operation before the merged operation (lines 32-36).
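The compatibility check at the heart of this process can be sketched for a linear chain of operations (a simplification of Algorithm 1, which traverses a DAG by BFS; the operation names and dimension tags here are our own illustration):

```python
# Hypothetical merge-dimension tags, following Section 3.2.
BATCH, CHANNEL, DONT_CARE = "Batch", "Channel", "DontCare"

MERGE_DIM = {
    "matmul": BATCH,        # merged into batch matrix multiplication
    "conv": CHANNEL,        # merged into grouped convolution
    "layer_norm": CHANNEL,  # merged into group normalization
    "batch_norm": CHANNEL,
    "relu": DONT_CARE,      # non-trainable: no concatenation constraint
}

def needs_reshape(parent_dim, child_dim):
    # A reshape is inserted only when one side demands Batch and the other
    # demands Channel; DontCare is compatible with everything.
    return {parent_dim, child_dim} == {BATCH, CHANNEL}

def merge_plan(ops):
    """Return the merged op sequence, inserting "reshape" on dim conflicts."""
    plan, prev_dim = [], DONT_CARE
    for op in ops:
        dim = MERGE_DIM[op]
        if needs_reshape(prev_dim, dim):
            plan.append("reshape")
        plan.append(op)
        if dim != DONT_CARE:
            prev_dim = dim
    return plan

# The FFNN of Figure 4: batch matmul packs along Batch, group norm expects
# Channel, so a reshape is placed between them.
assert merge_plan(["matmul", "layer_norm", "relu"]) == \
    ["matmul", "reshape", "layer_norm", "relu"]
```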
Although we focused our discussion on DNNs of the exact same architecture, NETFUSE is also applicable to DNNs that share common backbones. In such cases, we merge only the common backbones via Algorithm 1, and do not merge the other layers.
Implementation
We have implemented NETFUSE as an automated tool on PyTorch 1.3.1 [24]. NETFUSE receives a computation graph (widely employed by modern DL frameworks, including PyTorch) of a DNN model and the number of model instances to merge as input, and outputs a merged version of the computation graph. Instead of using the common PyTorch model format (nn.Module) as-is, we prepare a PyTorch model in the form of a Torchscript graph, which can be generated from nn.Modules via the Torchscript API. The Torchscript graph format allows us to implement the graph traversal of Algorithm 1, whereas nn.Modules do not due to the imperative programming model of PyTorch. The overall merging mechanism follows the process described in Section 3.2.
The merging process occurs only once per model, offline. At inference time, the merged model can be run repeatedly without having to go through the merging process again. In other words, the time overhead for merging models can be amortized across multiple runs. The largest merging overhead we observed during our experiments was 600 milliseconds for merging 32 ResNeXt-50 instances.
The overhead mostly comes from graph traversal, and does not scale linearly with the number of model instances.
Some Torchscript details require us to treat certain operations with specific measures. Torchscript considers convolution operations and grouped convolution operations as the same type, aten::_convolution, and differentiates them by the corresponding integer attribute value for the number of convolution groups. In other words, a normal convolution op is of aten::_convolution type with the value 1 for the num_groups attribute, while a grouped convolution op is of aten::_convolution type with the number of convolution groups for the num_groups attribute. Thus, when converting a convolution op into a grouped convolution op, we don't actually change the op type, but rather adjust the num_groups attribute instead.
On the other hand, matrix multiplication operations do not always share the same type as batch matrix multiplications. PyTorch provides several ways to define a matrix multiplication operation (aten::addmm, aten::baddbmm, etc.). Therefore, depending on which PyTorch interface is used, it may or may not be possible to convert a matrix multiplication op into a batch matrix multiplication op with a simple tweak in attribute values. When a type conversion is needed, we not only change the op type but also rewire inputs and outputs, according to the op signature.
Experimental Results
In this section, we evaluate NETFUSE by measuring the inference time of DNNs merged via NETFUSE while varying the number of DNNs, the DNN model, the inference batch size, and GPU hardware. We also check the memory usage of DNN inference and perform other experiments to further understand the characteristics of NETFUSE. NETFUSE does not alter the computation results in any way and thus inference accuracy is not affected by our technique.
Evaluation Setup
Environment. We implemented NETFUSE on PyTorch 1.3.1 and used NVIDIA CUDA 10.0 and cuDNN 7.6 to run GPU kernels. We use an AWS EC2 p3.2xlarge instance, which includes an NVIDIA V100 GPU. We also use an NVIDIA TITAN Xp on our server machine with two 18-core Intel Xeon E5-2695 @ 2.10 GHz processors and 256 GB RAM; the corresponding experiment results are shown in Appendix B.
Models. We first experiment on ResNet-50 [11] and ResNeXt-50 [31], representative CNNs that are widely used in computer vision. As ResNet and ResNeXt are mainly used for image classification, we replace the final layer with a fully connected layer of varying output classes to correctly represent multiple classification tasks that have all undergone their own fine-tuning processes. Excluding the final fully connected layer, all other layers can be merged via NETFUSE. We use synthetic 224x224 RGB images as inputs.
We also perform experiments on BERT [8] and XLNet [32] as representatives of natural language processing models. Following the original papers' guidelines, we run inference tasks by feeding the output of BERT and XLNet to additional fully connected layers. Each type of task (e.g., question answering, named entity recognition, and sentence/token classification) is associated with its own inputs and number of outputs. The models themselves are merged via NETFUSE. We use synthetic embeddings of length 128 as inputs.
Baselines. As stated in Section 2, no existing system attempts to merge computations of multiple DNNs when both inputs and weights are completely different. We implement various baselines on PyTorch that represent a serving system's behavior and compare NETFUSE with the baselines:
• Sequential: Selects a DNN from the given DNNs in a round-robin fashion and performs inference on each DNN one by one.
• Concurrent: Assigns a process per DNN and lets the processes perform inference on their corresponding DNN without any synchronization across other processes.
• Hybrid: Concurrently runs as many models as the GPU memory allows, and then sequentially runs the remaining models in subsequent batches.

Inference Time

Figure 5 presents the inference time of performing inference on various models for NETFUSE and the sequential and concurrent baselines, on a V100 GPU. Each bar represents the mean inference time of 1,000 runs for the corresponding configuration.
Experiment results on ResNet-50 are depicted in Figure 5a. As we increase the number of models, the inference time of the sequential baseline grows linearly because it must process each inference sequentially, without overlapping any computations across models. The concurrent baseline performs better than the sequential baseline as the GPU is fed with more requests, but fails to reach the speed of NETFUSE because the computations of different models are still launched as independent kernels. In fact, the concurrent baseline runs out of GPU memory for large numbers of models (explained in Section 5.3). On the other hand, NETFUSE is able to merge computations of different models and achieve lower inference time than the other baselines. A similar trend is repeated for ResNeXt-50 in Figure 5b and BERT in Figure 5c. Interestingly, the concurrent baseline is the slowest for XLNet, as shown in Figure 5d. We conjecture that the extra computations used in XLNet's base architecture, Transformer-XL, compared to BERT's base architecture, Transformer, render concurrent executions more ineffective. The inference time speedup is up to 2.6×, 3.4×, 2.7×, and 3.6× for ResNet-50, ResNeXt-50, BERT, and XLNet, respectively.
In order to examine how batch size affects NETFUSE's effectiveness, we repeated the previous experiment for BERT with greater batch sizes and plot the results in Figure 6. The inference times of the baselines are shown as numbers relative to NETFUSE (the blue dotted horizontal line at 1x). Although NETFUSE is faster than the other baselines for most configurations, the gap between NETFUSE and the baselines gradually decreases as the batch size increases. There even exists a configuration (batch size 8) where NETFUSE performs more poorly than the baselines. This is because the GPU is already well saturated with a large batch size, and thus further merging computations does not raise the GPU's utilization enough to improve speed. Nevertheless, NETFUSE performs significantly better than the other baselines for small batch sizes and does not experience GPU memory issues like the concurrent baseline.
Memory Footprint
We further investigate the GPU memory issue of the concurrent baseline by measuring the peak GPU memory usage of NETFUSE and the baselines during inference. In Figure 7, we show the maximum amount of memory used by NETFUSE and the baselines for different configurations. The hatched portion of each bar indicates the amount of memory used as inference workspace (weights and activations).

Sequential-concurrent hybrid strategy. The concurrent baseline generally tends to be faster than the sequential baseline, but suffers from memory issues for large numbers of models. Naturally, one can think of a hybrid approach that combines the strengths of the concurrent and sequential baselines: spawn as many concurrent processes (per model) as the GPU memory allows, and make each process run a number of models sequentially. For instance, instead of creating 32 processes to serve 32 models like the concurrent baseline, we can generate 4 processes that run 8 models each. Figure 8 shows the inference time of this hybrid approach for running 32 models, along with the other baselines and NETFUSE. While the concurrent baseline runs out of memory, the hybrid approach is able to avoid this issue by spawning fewer processes. As an example, we can see in Figure 8a that the hybrid configurations of spawning 2, 4, and 8 processes do not run out of memory. At the same time, they exhibit shorter inference times than the purely sequential baseline by running multiple models concurrently. Nonetheless, NETFUSE still outperforms the hybrid baseline by up to 2.5× for ResNeXt-50 and 7.2× for XLNet. Note that the hybrid approach may still be susceptible to memory issues, depending on the model; as can be seen in Figure 8d, even a relatively small number of processes leads to running out of memory for XLNet.
Discussion
Applicability of NETFUSE on training models. NETFUSE can be used to train several models as one large model. As the group operations listed in Table 1 (grouped convolution, batch matrix multiplication, and group normalization) all have proper backpropagation operations, a merged model can be trained on deep learning frameworks just as ordinary models. Indeed, we have confirmed that NETFUSE brings similar performance benefits to both training and inference.
However, NETFUSE may be less effective on training than inference, for a few practical reasons. First, DNN training is typically done in larger batch sizes than inference (10s to 1000s). As presented in Figure 6, NETFUSE's performance degrades when the batch size increases, so NETFUSE should be applied selectively to training models of small batch sizes or training small models. Second, individual models may have different training lengths as well as different hyperparameters and optimizers. In order to accommodate these factors when training merged models, additional measures are required such as excluding models that have finished training and merging the optimizers of each individual model.
Applicability of NETFUSE on models of different architectures. NETFUSE, as well as all previous frameworks [7,22,10,17,27,23] noted in our paper, are not applicable to models with completely different architectures. For instance, we do not consider merging a CNN with an LSTM; the two models are structurally too different. That said, NETFUSE is applicable to models that are not quite completely identical, but do have common backbones. The BERT model is a perfect example -the attention layers are unmodified (only the weights are fine-tuned), but the following fully connected layers are customized depending on the NLP task. In such cases, we merge the backbones, but leave the customized layers (fully connected layers specific for downstream tasks) as-is. In fact, this is how we merged the models in our experiments.
Conclusion
In this paper, we introduce NETFUSE, a merging technique that can be applied to merging DNNs that share the same architecture but house different parameters and different inputs. By finding general counterparts for DNN operations that allow input-weight local computations, NETFUSE is able to merge multiple operations into a single large operation while preserving the same outputs. We also show how merging works for whole DNNs and propose an algorithm for DNN merging. Our experiments show that NETFUSE indeed performs faster than baselines with a small cost of additional GPU memory.
The output y := GroupConv(x, w, G), of shape (C_out G, H_out, W_out), can be expressed as:
$$y[c] = \sum_{c_{in}=0}^{C_{in}-1} w[c, c_{in}] \star x[g + c_{in}], \quad \text{where } g = C_{in}\left\lfloor \frac{c}{C_{out}} \right\rfloor \tag{2}$$
The term $\lfloor c / C_{out} \rfloor$ in g indicates which convolution group c belongs to. For example, the channels c = 0, 1, ..., C_out − 1 compose the first convolution group, and thus $\lfloor c / C_{out} \rfloor = 0$. Note that when G = 1, this becomes an ordinary convolution, i.e., GroupConv(x, w, 1) = Conv(x, w).
Finally, we show that it is possible to perform M convolutions with a single grouped convolution operation. Given M input tensors $\{x_m\}_{m=0}^{M-1}$ (each $x_m$ of shape (C_in, H_in, W_in)) and M weight tensors $\{w_m\}_{m=0}^{M-1}$ (each $w_m$ of shape (C_out, C_in, K, K)), we concatenate the input tensors, along the channel dimension, into a large input tensor x of shape (C_in M, H_in, W_in). This way, a specific subtensor of x corresponds to a specific subtensor of $x_m$:

$$x[C_{in}m + c_{in}] = x_m[c_{in}]$$
We repeat this process for the weight tensors as well to create a large weight tensor w.
With x and w in hand, we define y as the output of performing grouped convolution on x and w with M groups. Since y has a shape of (C_out M, H_out, W_out), we denote the first C_out subtensors of y as $y_0$, the second C_out subtensors of y as $y_1$, and so on:

$$y_m := y[C_{out}m : C_{out}(m+1)] \iff y_m[c_{out}] = y[C_{out}m + c_{out}], \quad m = 0, 1, \ldots, M-1 \tag{3}$$
Using Eq. 2 and Eq. 3, we can derive the following for a specific subtensor of $y_m$:

$$y_m[c_{out}] = y[C_{out}m + c_{out}] = \sum_{c_{in}} w[C_{out}m + c_{out}, c_{in}] \star x[g + c_{in}] = \sum_{c_{in}} w_m[c_{out}, c_{in}] \star x[g + c_{in}] \tag{4}$$
Note that all channels $c_{out}$ of $y_m$ correspond to the m-th convolution group. This can be confirmed by recalculating g in Eq. 2, keeping in mind that c has been replaced with $C_{out}m + c_{out}$ in Eq. 4:

$$g = C_{in}\left\lfloor \frac{C_{out}m + c_{out}}{C_{out}} \right\rfloor = C_{in}\left(m + \left\lfloor \frac{c_{out}}{C_{out}} \right\rfloor\right) = C_{in}m \quad (\because\ 0 \le c_{out} < C_{out}) \tag{5}$$
B Experimental Results for NVIDIA TITAN Xp
In this section, we present the results of experiments executed on an NVIDIA TITAN Xp GPU. Similar to Sections 5.2 and 5.3, we examined both the inference time and memory footprint of NETFUSE and the baselines.
B.1 Inference Time

Figure 9 shows the inference time when performing inference for the four models described in Section 5.2. The height of each bar indicates the mean inference time of 1,000 runs for the corresponding configuration. The overall trend of NETFUSE outperforming the baselines, which was seen in the V100 experiments, is present here as well. The relative performance gains are lower than the gains on V100, which is due to the fact that the V100 GPU has significantly more cores than the TITAN Xp GPU, and thus can more effectively parallelize the processing of merged models.
B.2 Memory Footprint
The peak GPU memory usage of each configuration is plotted in Figure 10. Compared with the results on V100 (Figure 7), the peak memory usage generally remains unchanged. However, we have observed some unexpected results. Unlike the results on V100, the concurrent baseline does not run out of memory when running 16 ResNet-50s and ResNeXt-50s. Moreover, the sequential baseline runs out of memory when merging 32 XLNets. We have not yet identified the root cause, though we conjecture that PyTorch's GPU memory caching allocator is exhibiting inconsistent behavior.
Figure 2: A multi-model optimization example.
Figure 3: Structure of operations regarding the pairing of inputs and weights.
Figure 4: An example of merging two FFNNs. Individual operations are merged according to Section 3. In case of shape inconsistencies regarding merge dimensions (batch or channel), we insert reshape operations to fix the shapes.
Algorithm 1: End-to-end DNN merging

1: Input: A common subgraph (V, E) of M DNNs and weights {w_ij | w_ij is the weight parameter for op_i ∈ V in DNN_j}
2: Output: A graph (V_merge, E_merge) for the merged DNN
3:
4: // start with root ops, with no incoming edges
5: Q ← empty queue
6: Q.EnqueueAll({op ∈ V | ∄(*, op) ∈ E})
7:
8: while not Q.IsEmpty() do
9:   op_i ← Q.Dequeue()
10:  MarkVisited(op_i)
…
Figure 5: Mean inference time of NETFUSE and the sequential and concurrent baselines for a varying number of models on V100. The batch size is set to 1. The error bars indicate the standard deviation.
Figure 6: Inference time of NETFUSE and the sequential and concurrent baselines on BERT, normalized by the inference time of NETFUSE, for varying batch sizes (bs) on V100. The error bars indicate the standard deviation.
Figure 7: Peak GPU memory usage of NETFUSE and the sequential and concurrent baselines for a varying number of models on V100. The batch size is set to 1. For each vertical bar, the hatched portion denotes the amount of memory used as inference workspace to house weights and intermediate activation values, while the solid portion corresponds to the base memory reserved by the framework, PyTorch (500 MB per process). V100 has a total of 16 GB of memory.
Figure 8: Mean inference time of NETFUSE and the sequential, concurrent, and hybrid baselines on V100. The batch size is set to 1. (Ap, Bm) denotes a configuration of A concurrent processes, each running B models sequentially.

The remaining solid portion of each bar in Figure 7 indicates the base memory held by the framework, PyTorch. The main reason for the concurrent baseline running out of memory is this base memory, as PyTorch takes 500 MB per process when using the GPU. Spawning 16 processes to serve 16 models results in PyTorch taking approximately 8 GB, which is already half of the V100 GPU's total memory of 16 GB. Additionally, the memory used by the sequential baseline is the smallest in all cases because the sequential baseline performs only one model's worth of inference at a time, unlike the concurrent baseline and NETFUSE.
At last, substituting $C_{in}m$ for g in Eq. 4 gives us:

$$y_m[c_{out}] = \sum_{c_{in}} w_m[c_{out}, c_{in}] \star x[g + c_{in}] = \sum_{c_{in}} w_m[c_{out}, c_{in}] \star x[C_{in}m + c_{in}] = \sum_{c_{in}} w_m[c_{out}, c_{in}] \star x_m[c_{in}] \implies y_m = \mathrm{Conv}(x_m, w_m) \tag{6}$$

Thus, we are essentially performing all M convolutions with one single grouped convolution and evaluating the exact same results, with no redundant or missing computations.
Figure 9: Mean inference time of NETFUSE and the sequential and concurrent baselines for a varying number of models on TITAN Xp. The batch size is set to 1. The error bars indicate the standard deviation.
Figure 10: Peak GPU memory usage of NETFUSE and the sequential and concurrent baselines for various models on TITAN Xp. The batch size is set to 1. For each vertical bar, the hatched portion denotes the workspace memory, while the solid portion corresponds to the base memory reserved by the framework. TITAN Xp has a total of 12 GB of memory.
Table 1: List of various DNN operations and their input-weight local computation counterparts.

| No local computations | Allows local computations |
|-----------------------|---------------------------|
| Convolution | Grouped Convolution |
| Matrix Multiplication | Batch Matrix Multiplication |
| Layer Normalization | Group Normalization |
| - | Batch Normalization |
| - | Pooling (Max-pooling, ...) |
| - | Activation Functions (ReLU, ...) |
| - | Element-wise Operations |
Acknowledgments

We thank Taebum Kim for his helpful comments on the paper.

Appendix A Convolution Derivation

We now formally show that a grouped convolution can produce the exact same results as a set of ordinary convolutions, when merged correctly. We present our analysis based on 2D convolutions, but we note that this can be readily generalized to both 1D and 3D convolutions. We first go over the definitions of the convolution and grouped convolution operations, and then derive that a grouped convolution of M groups is mathematically equivalent to M convolutions.

Consider a convolution operation which takes a tensor x of shape (C_in, H_in, W_in) as input, where C_in denotes the number of channels (filter maps) and H_in, W_in denote the height and width, respectively. Below, we use the notation x[c] to indicate x's c-th subtensor of shape (H_in, W_in). We also define the weight tensor, w of shape (C_out, C_in, K, K), and the output tensor, y of shape (C_out, H_out, W_out), in a similar manner. C_out indicates the number of output channels, H_out and W_out denote the height and width of the output tensor, and K dictates the kernel size of the weight tensor. Then, a specific subtensor of y := Conv(x, w) can be calculated as follows:

$$y[c_{out}] = \sum_{c_{in}=0}^{C_{in}-1} w[c_{out}, c_{in}] \star x[c_{in}] \tag{1}$$

where $\star$ denotes the valid cross-correlation operator for 2D tensors.

Next, a grouped convolution (G convolution groups) of an input tensor x of shape (C_in G, H_in, W_in) and a weight tensor w of shape (C_out G, C_in, K, K) produces an output tensor y := GroupConv(x, w, G).
References

[1] NVIDIA TensorRT. https://developer.nvidia.com/tensorrt.
[2] TensorFlow XLA. https://www.tensorflow.org/xla.
[3] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zheng, X. TensorFlow: A System for Large-Scale Machine Learning. In OSDI (2016).
[4] Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer Normalization. arXiv:1607.06450 (2016).
[5] Chen, T., Moreau, T., Jiang, Z., Zheng, L., Yan, E., Shen, H., Cowan, M., Wang, L., Hu, Y., Ceze, L., Guestrin, C., and Krishnamurthy, A. TVM: An Automated End-to-End Optimizing Compiler for Deep Learning. In OSDI (2018).
[6] Chu, B., Madhavan, V., Beijbom, O., Hoffman, J., and Darrell, T. Best Practices for Fine-Tuning Visual Classifiers to New Domains. In ECCV TASK-CV Workshop (2016), pp. 435-442.
[7] Crankshaw, D., Wang, X., Zhou, G., Franklin, M. J., Gonzalez, J. E., and Stoica, I. Clipper: A Low-Latency Online Prediction Serving System. In NSDI (2017).
[8] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 (2018).
[9] Guo, Y., Shi, H., Kumar, A., Grauman, K., Rosing, T., and Feris, R. SpotTune: Transfer Learning Through Adaptive Fine-Tuning. In CVPR (2019).
[10] Han, S., Shen, H., Philipose, M., Agarwal, S., Wolman, A., and Krishnamurthy, A. MCDNN: An Approximation-Based Execution Framework for Deep Stream Processing Under Resource Constraints. In MobiSys (2016).
[11] He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. In CVPR (2016).
[12] Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861 (2017).
[13] Howard, J., and Ruder, S. Universal Language Model Fine-tuning for Text Classification. arXiv:1801.06146 (2018).
[14] Ioffe, S., and Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167 (2015).
[15] Jain, P., Mo, X., Jain, A., Subbaraj, H., Durrani, R. S., Tumanov, A., Gonzalez, J., and Stoica, I. Dynamic Space-Time Scheduling for GPU Inference. In Systems for ML Workshop at NeurIPS (2018).
[16] Jia, Z., Padon, O., Thomas, J., Warszawski, T., Zaharia, M., and Aiken, A. TASO: Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions. In SOSP (2019), pp. 47-62.
[17] Jiang, A. H., Wong, D. L., Canel, C., Tang, L., Misra, I., Kaminsky, M., Kozuch, M. A., Pillai, P., Andersen, D. G., and Ganger, G. R. Mainstream: Dynamic Stem-Sharing for Multi-Tenant Video Processing. In USENIX ATC (2018).
[18] Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. Tech Report (2009).
[19] Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS (2012).
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. B Lakshminarayanan, A Pritzel, C Blundell, LAKSHMINARAYANAN, B., PRITZEL, A., AND BLUNDELL, C. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. In NIPS. 2017, pp. 6402-6413.
S Lee, S Purushwalkam, M Cogswell, D Crandall, D Batra, arXiv:1511.06314Why M Heads are Better than One: Training a Diverse Ensemble of Deep Networks. LEE, S., PURUSHWALKAM, S., COGSWELL, M., CRANDALL, D., AND BATRA, D. Why M Heads are Better than One: Training a Diverse Ensemble of Deep Networks. arXiv:1511.06314 (2015).
PRETZEL: Opening the Black Box of Machine Learning Prediction Serving Systems. Y Lee, A Scolari, B.-G Chun, M D Santambrogio, M Weimer, M In-Terlandi, OSDI. LEE, Y., SCOLARI, A., CHUN, B.-G., SANTAMBROGIO, M. D., WEIMER, M., AND IN- TERLANDI, M. PRETZEL: Opening the Black Box of Machine Learning Prediction Serving Systems. In OSDI (2018).
Accelerating Deep Learning Workloads through Efficient Multi-Model Execution. D Narayanan, K Santhanam, A Phanishayee, M Zaharia, Systems for ML Workshop at NeurIPS. NARAYANAN, D., SANTHANAM, K., PHANISHAYEE, A., AND ZAHARIA, M. Accelerating Deep Learning Workloads through Efficient Multi-Model Execution. In Systems for ML Workshop at NeurIPS (2018).
PyTorch: An Imperative Style, High-Performance Deep Learning Library. A Paszke, S Gross, F Massa, A Lerer, J Bradbury, G Chanan, T Killeen, Z Lin, N Gimelshein, L Antiga, Et Al, NIPS. PASZKE, A., GROSS, S., MASSA, F., LERER, A., BRADBURY, J., CHANAN, G., KILLEEN, T., LIN, Z., GIMELSHEIN, N., ANTIGA, L., ET AL. PyTorch: An Imperative Style, High- Performance Deep Learning Library. In NIPS (2019), pp. 8024-8035.
Improving Language Understanding with Unsupervised Learning. A Radford, K Narasimhan, T Salimans, I Sutskever, OpenAITechnical reportRADFORD, A., NARASIMHAN, K., SALIMANS, T., AND SUTSKEVER, I. Improving Language Understanding with Unsupervised Learning. Technical report, OpenAI (2018).
Imagenet large scale visual recognition challenge. O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, M Bernstein, Et Al, International Journal of Computer Vision. 115RUSSAKOVSKY, O., DENG, J., SU, H., KRAUSE, J., SATHEESH, S., MA, S., HUANG, Z., KARPATHY, A., KHOSLA, A., BERNSTEIN, M., ET AL. Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115, 3 (2015), 211-252.
Nexus: A GPU Cluster Engine for Accelerating DNN-Based Video Analysis. H Shen, L Chen, Y Jin, L Zhao, B Kong, M Philipose, A Krishnamurthy, R Sundaram, SOSP. SHEN, H., CHEN, L., JIN, Y., ZHAO, L., KONG, B., PHILIPOSE, M., KRISHNAMURTHY, A., AND SUNDARAM, R. Nexus: A GPU Cluster Engine for Accelerating DNN-Based Video Analysis. In SOSP (2019).
Rethinking the inception architecture for computer vision. C Szegedy, V Vanhoucke, S Ioffe, J Shlens, Z Wojna, CVPR. SZEGEDY, C., VANHOUCKE, V., IOFFE, S., SHLENS, J., AND WOJNA, Z. Rethinking the inception architecture for computer vision. In CVPR (2016).
HydraNets: Specialized Dynamic Architectures for Efficient Inference. R Teja Mullapudi, W R Mark, N Shazeer, K Fatahalian, In CVPR. TEJA MULLAPUDI, R., MARK, W. R., SHAZEER, N., AND FATAHALIAN, K. HydraNets: Specialized Dynamic Architectures for Efficient Inference. In CVPR (2018).
. Y Wu, K He, Group Normalization. In ECCV. WU, Y., AND HE, K. Group Normalization. In ECCV (2018).
Aggregated Residual Transformations for Deep Neural Networks. S Xie, R Girshick, P Dollár, Z Tu, K He, XIE, S., GIRSHICK, R., DOLLÁR, P., TU, Z., AND HE, K. Aggregated Residual Transforma- tions for Deep Neural Networks. In CVPR (2017).
Z Yang, Z Dai, Y Yang, J Carbonell, R R Salakhutdinov, Q V Le, Xlnet, Generalized Autoregressive Pretraining for Language Understanding. In NIPS. YANG, Z., DAI, Z., YANG, Y., CARBONELL, J., SALAKHUTDINOV, R. R., AND LE, Q. V. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In NIPS (2019).
How transferable are features in deep neural networks? In NIPS. J Yosinski, J Clune, Y Bengio, H Lipson, YOSINSKI, J., CLUNE, J., BENGIO, Y., AND LIPSON, H. How transferable are features in deep neural networks? In NIPS. 2014, pp. 3320-3328.
Fine-tuning Convolutional Neural Networks for Biomedical Image Analysis: Actively and Incrementally. Z Zhou, J Shin, L Zhang, S Gurudu, M Gotway, J Liang, CVPR. ZHOU, Z., SHIN, J., ZHANG, L., GURUDU, S., GOTWAY, M., AND LIANG, J. Fine-tuning Convolutional Neural Networks for Biomedical Image Analysis: Actively and Incrementally. In CVPR (2017), pp. 7340-7351.
| [] |
[
"Task Transfer and Domain Adaptation for Zero-Shot Question Answering",
"Task Transfer and Domain Adaptation for Zero-Shot Question Answering"
] | [
"Xiang Pan \nNew York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n\n",
"Alex Sheng [email protected] \nNew York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n\n",
"David Shimshoni \nNew York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n\n",
"Aditya Singhal \nNew York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n\n",
"Sara Rosenthal [email protected] \nNew York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n\n",
"Avirup Sil \nNew York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n\n"
] | [
"New York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n",
"New York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n",
"New York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n",
"New York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n",
"New York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n",
"New York University\nNew York University\nNew York University\nNew York University\nIBM Research AI\nIBM Research AI\n"
] | [
"Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing"
] | Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks. However, when applying machine learning methods to new domains, labeled data may not always be available. To address this, we use supervised pretraining on source-domain data to reduce sample complexity on domain-specific downstream tasks. We evaluate zero-shot performance on domain-specific reading comprehension tasks by combining task transfer with domain adaptation to fine-tune a pretrained model with no labelled data from the target task. Our approach outperforms Domain-Adaptive Pretraining on downstream domain-specific reading comprehension tasks in 3 out of 4 domains. | 10.18653/v1/2022.deeplo-1.12 | [
"https://www.aclanthology.org/2022.deeplo-1.12.pdf"
] | 249,642,460 | 2206.06705 | 1c139f1a5f6b54f0505bd012a00f238d973a0d5c |
Task Transfer and Domain Adaptation for Zero-Shot Question Answering
July 14, 2022
Xiang Pan
Alex Sheng [email protected]
David Shimshoni
Aditya Singhal
Sara Rosenthal [email protected]
Avirup Sil
New York University
IBM Research AI
Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, July 14, 2022
Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks. However, when applying machine learning methods to new domains, labeled data may not always be available. To address this, we use supervised pretraining on source-domain data to reduce sample complexity on domain-specific downstream tasks. We evaluate zero-shot performance on domain-specific reading comprehension tasks by combining task transfer with domain adaptation to fine-tune a pretrained model with no labelled data from the target task. Our approach outperforms Domain-Adaptive Pretraining on downstream domain-specific reading comprehension tasks in 3 out of 4 domains.
Introduction
Pretrained language models (Liu et al., 2019; Wolf et al., 2020) require substantial quantities of labeled data to learn downstream tasks. For domains that are novel or where labeled data is in short supply, supervised learning methods may not be suitable (Zhang et al., 2020; Madasu and Rao, 2020; Rietzler et al., 2020). Collecting sufficient quantities of labeled data for each new application can be resource intensive, especially when aiming for both a specific task type and a specific data domain. With traditional transfer learning methods, it is prohibitively difficult to fine-tune a pretrained model on a domain-specific downstream task for which there is no existing training data. In light of this, we would like to use more readily available labeled in-domain data from unrelated tasks to domain-adapt our fine-tuned model.
In this paper, we consider a problem setting where we have a domain-specific target task (QA) for which we do not have any in-domain training data. However, we assume that we have generic training data for the target task type (SQuAD), and in-domain training data for another task. To address this problem setting, we present Task and Domain Adaptive Pretraining (T+DAPT), a technique that combines domain adaptation and task adaptation to improve performance on downstream target tasks. We evaluate the effectiveness of T+DAPT in zero-shot domain-specific machine reading comprehension (MRC) (Hazen et al., 2019; Reddy et al., 2020; Wiese et al., 2017) by pretraining on in-domain NER data and fine-tuning for generic domain-agnostic MRC on SQuADv1 (Rajpurkar et al., 2018), combining knowledge from the two different tasks to achieve zero-shot learning on the target task. We test the language model's performance on domain-specific reading comprehension data taken from 4 domains: News, Movies, Biomedical, and COVID-19. In our experiments, RoBERTa-Base models trained using our approach perform favorably on domain-specific reading comprehension tasks compared to baseline RoBERTa-Base models trained on SQuAD as well as Domain Adaptive Pretraining (DAPT). Our code is publicly available for reference. 1 We summarize our contributions as follows:
* Equal Contribution
• We propose Task and Domain Adaptive Pretraining (T+DAPT), combining domain adaptation and task adaptation to achieve zero-shot learning on domain-specific downstream tasks.
• We experimentally validate the performance of T+DAPT, showing that our approach performs favorably compared to both a previous approach (DAPT) and a baseline RoBERTa fine-tuning approach.
• We analyze the adaptation performance on different domains, as well as the behavior of DAPT and T+DAPT under various experimental conditions.
Related Work
It has been shown that pretrained language models can be domain-adapted with further pretraining (
Experiments
We aim to achieve zero-shot learning for an unseen domain-specific MRC task by fine-tuning on both a domain transfer task and a generic MRC task. The model is initialized with pretrained RoBERTa weights (Liu et al., 2019), then fine-tuned using our approach on a domain-specific supervised task to augment domain knowledge, and finally trained on SQuAD to learn generic MRC capabilities. This yields zero-shot MRC in the target domain: the model is evaluated on an unseen domain-specific MRC task without explicitly training on that final task. This method is illustrated in Figure 1.
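To make the training order concrete, here is a minimal sketch (our own illustration, not the authors' released code; the stage names are hypothetical labels) of the sequence of fine-tuning stages each method applies to a pretrained RoBERTa checkpoint before zero-shot evaluation:

```python
# Illustrative sketch of the three pipelines compared in this work.
# Stage names are hypothetical, not identifiers from the authors' code.

def training_stages(method: str) -> list:
    """Return the ordered fine-tuning stages applied to pretrained RoBERTa
    before zero-shot evaluation on a domain-specific MRC dev set."""
    adaptation = {
        "baseline": [],                 # no domain adaptation
        "dapt":     ["domain_lm"],      # unsupervised LM on domain text
        "t+dapt":   ["domain_ner"],     # supervised NER on in-domain data
    }
    return (["roberta_pretrained"]      # start from the generic checkpoint
            + adaptation[method]        # optional domain-adaptation stage
            + ["squad_mrc"]             # generic MRC fine-tuning, always last
            + ["zero_shot_domain_mrc_eval"])
```

Note that in all three pipelines SQuAD fine-tuning is the final training stage; the appendix reports that reversing the order (SQuAD first, then NER) performs worse.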
Datasets
We explore the performance of this approach in the Movies, News, Biomedical, and COVID-19 domains.
Methods
We compare our approach (T+DAPT) to a previous approach (DAPT) as well as a baseline model. For the baseline, the pretrained RoBERTa-Base model is fine-tuned on SQuAD and evaluated on domain-specific MRC without any domain adaptation. In the DAPT approach, RoBERTa-Base is first initialized with fine-tuned DAPT weights (NewsRoBERTa and BioRoBERTa), either provided by Gururangan et al. (2020) or implemented ourselves using the methodology described in Gururangan et al. (2020) with different Movies and COVID-19 datasets (Maas et al., 2011; Danescu-Niculescu-Mizil and Lee, 2011; Pang et al., 2019). These models are initialized with DAPT weights from the HuggingFace model hub (Wolf et al., 2020), which have been fine-tuned beforehand on unsupervised text corpora for domain adaptation; they are then fine-tuned on SQuAD and evaluated on domain-specific MRC.
Results
We compare the effectiveness of our approach, which uses NER instead of language modeling (as in DAPT) as the domain adaptation method in a sequential training regime. Our experiments cover every combination of domain (Movies, News, Biomedical, or COVID) and domain adaptation method (T+DAPT, which uses named entity recognition, vs. DAPT, which uses language modeling, vs. a baseline with no domain adaptation at all).
2 https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition
3 https://github.com/davidcampos/covid19-corpus
Our results are presented in Table 2. We use F1 score to evaluate the QA performance of each model in its target domain. In our experiments, DAPT performs competitively with baseline models and outperforms in one domain (CovidQA). Our T+DAPT approach (RoBERTa + Domain NER + SQuAD) outperforms the baseline in three out of four domains (Movies, Biomedical, COVID) and outperforms DAPT in three out of four domains (Movies, News, Biomedical). We also test a combination of DAPT and T+DAPT by retraining DAPT models on domain NER then SQuAD, and find that this combined approach underperforms compared to either T+DAPT alone or DAPT alone in all four domains. We further discuss the possible reasons for these results in Section 4.
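For reference, the F1 metric used here is the standard SQuAD-style token-overlap F1 between a predicted answer span and the gold answer. A minimal sketch follows (the official SQuAD evaluation script additionally lowercases and strips punctuation and articles before comparing):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    # Multiset intersection: each shared token counts at most as often
    # as it appears in both answers.
    num_same = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

An exact match scores 1.0, a partially overlapping span scores between 0 and 1, and a disjoint span scores 0, which is why verbose answers containing the gold span can still receive partial credit.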
Analysis
Specific domains learn from adaptation: Our approach shows promising performance gains when used for zero-shot domain-specific question answering, particularly in the Biomedical, Movies, and COVID domains, where the MRC datasets were designed with the evaluation of domain-specific features in mind. Performance gains are less apparent in the News domain, where the NewsQA dataset was designed primarily to evaluate causal reasoning and inference abilities, which correlate strongly with SQuAD and baseline RoBERTa pretraining, rather than domain-specific features and adaptation. The lack of performance gains from either T+DAPT or DAPT in the News domain could also possibly be attributed to the nature of the domain: Gururangan et al. (2020) found that the News domain had the highest vocabulary overlap of any domain (54.1%) with the RoBERTa pretraining corpus, so the baseline for this domain could have had an advantage that would be lost due to catastrophic forgetting while little relevant knowledge is gained from domain adaptation. We perform follow-up experiments with varying amounts of epochs and training data in SQuAD fine-tuning to analyze the tradeoff between more thorough MRC fine-tuning and better preservation of source-domain knowledge from DAPT and auxiliary domain adaptation tasks. The results from these runs are in the Appendix (Table 4).
When does DAPT succeed or fail: In zero-shot QA, DAPT performs competitively with the baseline in all domains and outperforms it in the COVID domain. This builds upon the results of Gururangan et al. (2020), which reports superior performance on tasks like relation classification, sentiment analysis, and topic modeling, but does not address reading comprehension tasks, which DAPT may not have originally been optimized for. Unsupervised language modeling may not provide readily transferable features for reading comprehension, as opposed to NER, which identifies key tokens and classifies those tokens into specific entities. These entities are also often answer tokens in reading comprehension, lending to transferable representations between NER and reading comprehension. Another possible factor is that RoBERTa was pretrained on the English Wikipedia corpus, the same source that the SQuAD questions were drawn from. Because of this, it is possible that pretrained RoBERTa already has relevant representations that would provide an intrinsic advantage for SQuAD-style reading comprehension, which would be lost due to catastrophic forgetting after retraining on another large language modeling corpus in DAPT.
In the COVID domain, we use the article dataset from Wang et al. (2020). These articles also form the basis for the CovidNER and CovidQA (Möller et al., 2020) datasets, which may explain the large performance improvement from DAPT in this domain. These results suggest that the performance of DAPT is sensitive to the similarity of its language modeling corpus to the target task dataset. 1
Conclusion
We evaluate the performance of our T+DAPT approach with domain-specific NER, achieving positive results in a zero-shot reading comprehension setting on four different domain-specific QA datasets. These results indicate that our T+DAPT approach robustly improves the performance of pretrained language models in zero-shot domain QA across several domains, showing that T+DAPT is a promising approach to domain adaptation in low-resource settings for pretrained language models, particularly when directly training on target task data is difficult.
In future work, we intend to explore various methods to improve the performance of T+DAPT by remedying catastrophic forgetting and maximizing knowledge transfer. For this we hope to emulate the regularization used by Xu et al. (2020) and implement multi-task learning and continual learning methods like AdapterNet (Hazan et al., 2018). In order to improve the transferability of learned features, we will explore different auxiliary tasks such as NLI and sentiment analysis in addition to few-shot learning approaches.
Ethical Considerations
Question answering systems are useful tools in complement to human experts, but the "word-of-machine effect" (Longoni and Cian, 2020) demonstrates the effects of a potentially dangerous overtrust in the results of such systems. While the methods proposed in this paper would allow more thorough usage of existing resources, they also bestow confidence and capabilities on models which may not have much domain expertise. T+DAPT models aim to mimic extensively domain-trained models, which are themselves approximations of real experts or source documents. Use of domain adaptation methods in low-data settings could propagate misinformation from a lack of source data. For example, while making an information-retrieval system for biomedical and COVID information could become quicker and less resource-intensive using our approach, people should not rely on such a system for medical advice without extensive counsel from a qualified medical professional.
BioQA Samples
Q: what sugar is found in rna
DAPT: ribose, whereas the sugar in DNA is deoxyribose
T+DAPT: ribose
Q: normal blood pressure range definition
DAPT: 120 mm Hg1
T+DAPT: a blood pressure of 120 mm Hg1 when the heart beats (systolic) and a blood pressure of 80 mm Hg when the heart relaxes (diastolic)
MoviesQA Samples
Q: what is cyborgs real name
DAPT: Victor Stone/Cyborg is a hero from DC comics most famous for being a member of the Teen Titans
T+DAPT: Victor Stone
Q: who plays klaus baudelaire in the show
DAPT: Liam Aiken played the role of Klaus Baudelaire in the 2004 movie A Series of Unfortunate Events.
T+DAPT: Liam Aiken
Table 3: Samples from BioQA and MoviesQA where T+DAPT achieves exact match with the label answer, and DAPT produces a different answer. Answers from each approach are shown side-by-side for comparison.
Table 5: Zero-shot F1 performance of RoBERTa-Base models on NewsQA following different amounts of SQuAD fine-tuning. For comparison, the score of our News model from the main paper with 2 epochs and all samples is included as an upper bound, alongside a head-tuning baseline where all weights are frozen except the classifier layer.
A.1 Experiment Details and Additional Experiments
Freezing Layers - We tried freezing the bottom layers after NER training and training only the QA layer on SQuAD; the performance is worse than fine-tuning the whole RoBERTa model together with the QA layer. NER and QA may not rely on exactly the same features for the final task, which may be why freezing causes a performance decrease.
Different Training Epochs and Training Examples - When selecting the best-performing model, we use a validation set in the target domain to evaluate performance. In Table 5, we show our trials with different amounts of SQuAD training in the News domain and how they affected performance on NewsQA.
Different Training Order - We also tried a different training order: when we train on the SQuAD1.1 task first and then on NER, the F1 score is 42.15 on CovidQA, which shows some improvement, but QA as the last task performs better.
Another Auxiliary Task - In the COVID domain, we also ran experiments on a more QA-relevant task, question classification (QCLS) (Wei et al., 2020). We show the results in Table 4. The experiments show that the QCLS task yields larger improvements than the NER task. In addition, we test a model trained directly on CovidQA as the performance upper bound.
Figure 1: Sequential transfer learning procedures of T+DAPT, DAPT, and a RoBERTa baseline for zero-shot question answering.
Specifically, our target domain-specific MRC tasks are MoviesQA (Xu et al., 2020), NewsQA (Trischler et al., 2017), BioQA (Xu et al., 2020), and CovidQA (Möller et al., 2020), respectively. We choose to use named entity recognition (NER) as our supervised domain adaptation task for all four target domains, as labeled NER data is widely available across various domains. Furthermore, NER and MRC share functional similarities, as both rely on identifying key tokens in a text as entities or answers. The domain-specific NER tasks are performed using supervised training data from the MIT Movie Corpus (Liu et al., 2013), CoNLL 2003 News NER (Tjong Kim Sang and De Meulder, 2003), NCBI-Disease (Dogan et al., 2014), and COVID-NER 2. The domain-specific language modeling tasks for DAPT are performed using unsupervised text from IMDB (Maas et al., 2011), the RealNews Corpus (Zellers et al., 2020), the Semantic Scholar Open Research Corpus (Lo et al., 2020), and the Covid-19 Corpus 3.
Table 1: Overview of the domain-specific MRC datasets used in our experiments. The number of question-answer pairs in the train set and development set for each domain is shown, along with a sample question-answer pair from each domain. The datasets share the same format as SQuAD.
Dataset | Dev Set | Sample
MoviesQA | 755 | Q: After its re-opening, which types of movies did the Tower Theatre show? A: second and third run movies, along with classic films
NewsQA | 934 | Q: Who is the struggle between in Rwanda? A: The struggle pits ethnic Tutsis, supported by Rwanda, against ethnic Hutu, backed by Congo.
BioQA | 4,790 | Q: What is hemophilia? A: a bleeding disorder characterized by low levels of clotting factor proteins.
CovidQA | 2,019 | Q: What is the molecular structure of bovine coronavirus? A: single-stranded, linear, and nonsegmented RNA
Table 2: F1 score of pretrained RoBERTa-Base models on the dev sets of the MRC datasets for the given domains with the stated retraining regimens.
Table 4: Zero-shot F1 performance of RoBERTa-Base models on dev sets of QA data for the COVID domain with SQuAD1.1 following different intermediate pretraining regimens. The CovidQA upper bound score is attained by training directly on the CovidQA train set.
Table 5 (F1 on NewsQA):
Model | NewsQA
RoBERTa-Base, 1 Epoch, 1000 Samples | 19.9953
RoBERTa-Base, 2 Epochs, 1000 Samples | 35.2666
RoBERTa-Base, 2 Epochs, 5000 Samples | 47.0090
RoBERTa-Base, 2 Epochs, All Samples | 56.9803
RoBERTa-Base, 2 Epochs, All Samples (Head) | 05.5891
NewsRoBERTa (DAPT), 1 Epoch, 1000 Samples | 17.9025
NewsRoBERTa (DAPT), 2 Epochs, 1000 Samples | 28.4453
NewsRoBERTa (DAPT), 2 Epochs, 5000 Samples | 44.1206
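The "(Head)" setting above corresponds to head tuning: all encoder parameters are frozen and only the classifier layer is trained. A minimal sketch of that selection logic (our own illustration; parameter names such as `qa_head` are hypothetical, in the spirit of PyTorch's `requires_grad` flag):

```python
def select_trainable(param_names, mode):
    """Return the parameters updated during fine-tuning.
    mode="full" trains everything; mode="head" freezes the encoder and
    trains only the task head (the "(Head)" setting in Table 5)."""
    if mode == "full":
        return list(param_names)
    # Head tuning: keep only parameters belonging to the task head.
    return [p for p in param_names if p.startswith("qa_head.")]
```

The large gap between the full fine-tuning and head-tuning rows illustrates why the freezing variant discussed in A.1 underperforms.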
https://github.com/adityaarunsinghal/Domain-Adaptation
Additional experiments in the COVID domain with different auxiliary tasks are presented in the Appendix A.1
Jerry Wei, Chengyu Huang, Soroush Vosoughi, and Jason Wei. 2020. What are people asking about covid-19? A question classification dataset. In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020.
Georg Wiese, Dirk Weissenborn, and Mariana Neves. 2017. Neural domain adaptation for biomedical question answering. DOI: 10.18653/v1/K17-1029.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art natural language processing.
Y. Xu, X. Zhong, A. J. J. Yepes, and J. H. Lau. 2020. Forget me not: Reducing catastrophic forgetting for domain adaptation in reading comprehension. arXiv:1911.00202 [cs].
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2020. Defending against neural fake news.
Rong Zhang, Revanth Gangi Reddy, Md Arafat Sultan, Vittorio Castelli, Anthony Ferritto, Radu Florian, Efsun Sarioglu Kayi, Salim Roukos, Avirup Sil, and Todd Ward. 2020. Multi-stage pre-training for low-resource domain adaptation. arXiv:2010.05904 [cs].
| [
"https://github.com/tsantosh7/",
"https://github.com/davidcampos/",
"https://github.com/adityaarunsinghal/"
] |
[
"Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning",
"Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning"
] | [
"Yizhu Jiao \nShanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina\n",
"Yun Xiong \nShanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina\n",
"Jiawei Zhang \nDepartment of Computer Science\nIFM Lab\nFlorida State University\nFLUSA\n",
"Yao Zhang \nShanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina\n",
"Tianqi Zhang \nShanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina\n",
"Yangyong Zhu [email protected]@ifmlab.org \nShanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina\n"
] | [
"Shanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina",
"Shanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina",
"Department of Computer Science\nIFM Lab\nFlorida State University\nFLUSA",
"Shanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina",
"Shanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina",
"Shanghai Key Laboratory of Data Science\nShanghai Institute for Advanced Communication and Data Science\nSchool of Computer Science\nFudan University\nShanghaiChina"
] | Graph representation learning has attracted lots of attention recently. Existing graph neural networks fed with the complete graph data are not scalable due to limited computation and memory costs. Thus, it remains a great challenge to capture rich information in large-scale graph data. Besides, these methods mainly focus on supervised learning and highly depend on node label information, which is expensive to obtain in the real world. As to unsupervised network embedding approaches, they overemphasize node proximity instead, whose learned representations can hardly be used in downstream application tasks directly. In recent years, emerging self-supervised learning provides a potential solution to address the aforementioned problems. However, existing self-supervised works also operate on the complete graph data and are biased to fit either global or very local (1-hop neighborhood) graph structures in defining the mutual information based loss terms. In this paper, a novel self-supervised representation learning method via Sub-graph Contrast, namely SUBG-CON, is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information. Instead of learning on the complete input graph data, with a novel data augmentation strategy, SUBG-CON learns node representations through a contrastive loss defined based on subgraphs sampled from the original graph. Compared with existing graph representation learning approaches, SUBG-CON has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization. Extensive experiments verify both the effectiveness and the efficiency of our work compared with both classic and state-of-the-art graph representation learning approaches on multiple real-world large-scale benchmark datasets from different domains. | 10.1109/icdm50108.2020.00031 | [
"https://arxiv.org/pdf/2009.10273v1.pdf"
] | 221,836,653 | 2009.10273 | c904002770c0e9e8fae5185ec1110380a493f1e9 |
Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning
Yizhu Jiao
Shanghai Key Laboratory of Data Science
Shanghai Institute for Advanced Communication and Data Science
School of Computer Science
Fudan University
ShanghaiChina
Yun Xiong
Shanghai Key Laboratory of Data Science
Shanghai Institute for Advanced Communication and Data Science
School of Computer Science
Fudan University
ShanghaiChina
Jiawei Zhang
Department of Computer Science
IFM Lab
Florida State University
FLUSA
Yao Zhang
Shanghai Key Laboratory of Data Science
Shanghai Institute for Advanced Communication and Data Science
School of Computer Science
Fudan University
ShanghaiChina
Tianqi Zhang
Shanghai Key Laboratory of Data Science
Shanghai Institute for Advanced Communication and Data Science
School of Computer Science
Fudan University
ShanghaiChina
Yangyong Zhu [email protected]@ifmlab.org
Shanghai Key Laboratory of Data Science
Shanghai Institute for Advanced Communication and Data Science
School of Computer Science
Fudan University
ShanghaiChina
Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning
Index Terms-Self-Supervised LearningGraph Representa- tion LearningSubgraph ContrastGraph Neural Networks
Graph representation learning has attracted lots of attention recently. Existing graph neural networks fed with the complete graph data are not scalable due to limited computation and memory costs. Thus, it remains a great challenge to capture rich information in large-scale graph data. Besides, these methods mainly focus on supervised learning and highly depend on node label information, which is expensive to obtain in the real world. As to unsupervised network embedding approaches, they overemphasize node proximity instead, whose learned representations can hardly be used in downstream application tasks directly. In recent years, emerging self-supervised learning provides a potential solution to address the aforementioned problems. However, existing self-supervised works also operate on the complete graph data and are biased to fit either global or very local (1-hop neighborhood) graph structures in defining the mutual information based loss terms.In this paper, a novel self-supervised representation learning method via Sub-graph Contrast, namely SUBG-CON, is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information. Instead of learning on the complete input graph data, with a novel data augmentation strategy, SUBG-CON learns node representations through a contrastive loss defined based on subgraphs sampled from the original graph instead. Compared with existing graph representation learning approaches, SUBG-CON has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization. Extensive experiments verify both the effectiveness and the efficiency of our work compared with both classic and state-of-theart graph representation learning approaches on multiple realworld large-scale benchmark datasets from different domains.
Abstract-Graph representation learning has attracted lots of attention recently. Existing graph neural networks fed with the complete graph data are not scalable due to limited computation and memory costs. Thus, it remains a great challenge to capture rich information in large-scale graph data. Besides, these methods mainly focus on supervised learning and highly depend on node label information, which is expensive to obtain in the real world. As to unsupervised network embedding approaches, they overemphasize node proximity instead, whose learned representations can hardly be used in downstream application tasks directly. In recent years, emerging self-supervised learning provides a potential solution to address the aforementioned problems. However, existing self-supervised works also operate on the complete graph data and are biased to fit either global or very local (1-hop neighborhood) graph structures in defining the mutual information based loss terms.
In this paper, a novel self-supervised representation learning method via Sub-graph Contrast, namely SUBG-CON, is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information. Instead of learning on the complete input graph data, with a novel data augmentation strategy, SUBG-CON learns node representations through a contrastive loss defined based on subgraphs sampled from the original graph instead. Compared with existing graph representation learning approaches, SUBG-CON has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization. Extensive experiments verify both the effectiveness and the efficiency of our work compared with both classic and state-of-the-art graph representation learning approaches on multiple real-world large-scale benchmark datasets from different domains.
Index Terms-Self-Supervised Learning; Graph Representation Learning; Subgraph Contrast; Graph Neural Networks
I. INTRODUCTION
Graph representation learning [1] has attracted much attention recently. Its basic idea is to extract the high-dimensional information in graph-structured data and embed it into lowdimensional vector representations. These node representation vectors can be potentially used in various downstream tasks such as node classification [2], link prediction [3], graph classification [4], and graph alignment [5]. Graph representation learning problems have been studied on graph data from many different domains such as social networks [6], chemical molecular graphs [7], and bio-medical brain graphs [8].
Most existing successful methods are based on graph neural networks (GNNs) [2], [9]- [11] , which learn nodes' contextualized representations via effective neighborhood information aggregation. These methods usually take a complete graph as the input, which can hardly be applied to large-scale graph data, e.g., Facebook and Twitter with millions or even billions of nodes. What's more, the inter-connected graph structure also prevents parallel graph representation learning, which is especially critical for large-sized graph data. In addition, most of these existing graph neural networks focus on supervised learning. They encode the graph structure into representation vectors with the supervision of label information. However, for real-world graph data, manual graph labeling can be very tedious and expensive, which becomes infeasible for largescale graphs. To overcome this challenge, some works try unsupervised learning settings instead. They optimize models with objective functions defined for capturing node proximity [1] or reconstructing graph structures [12]. However, detached from supervision information, representations learned by such unsupervised approaches can hardly work well in downstream applications with specific task objectives [13].
Self-supervised learning [14] has recently emerged as a promising approach to overcome the dilemma of lacking available supervision. Its key idea is defining an annotation-free pretext task and generating surrogate training samples automatically to train an encoder for representation learning. In the field of computer vision, data augmentation [15], such as flipping or cropping, is commonly used for training sample generation, which can improve the generalization of self-supervised learning. However, due to the unordered vertexes and extensive connections in graph data, such existing techniques cannot be applied directly, and new data augmentation methods designed specifically for graph data are needed.
Fig. 1. Comparison of DGI and GMI (left) with the proposed SUBG-CON (right). Little colored squares denote learnt representations. Red nodes denote central nodes of context subgraphs. SUBG-CON utilizes the strong correlation between central nodes and their context subgraphs sampled from the original graph. Note that SUBG-CON encodes the sampled subgraphs while the other two methods take the complete graph as the input. Besides, SUBG-CON captures structure information from regional neighborhoods instead of tending to be biased in fitting either the overall or very local (1-hop neighbor) graph structures.

Self-supervised graph representation learning is a new research problem, but there already exist some prior works on this topic, e.g., Deep Graph Infomax [16] and Graphical Mutual Information [17] (even though these approaches initially pose themselves as unsupervised models). Deep Graph Infomax (DGI) [16] introduces a global-level pretext task to discriminate actual node representations from corrupted ones based on the global graph representation. Graphical Mutual Information (GMI) [17] is centered on local structures, maximizing mutual information between the hidden representation of each node and the original features of its directly adjacent neighbors. As illustrated in the left of Fig. 1, these works tend to be biased in fitting either the overall or very local (1-hop neighbor) graph structures in defining the mutual information based loss terms, which would harm the quality of learned representations. Besides, these self-supervised works adopt a graph neural network as the encoder and also need to take the complete graph as the input, which restricts their scalability on large-sized graphs.
Intuitively, nodes and their regional neighbors are more correlated, while other nodes that are very far away hardly influence them, especially in large-scale graphs. Therefore, subgraphs consisting of regional neighborhoods play a critical role in providing structure contexts for node representation learning. In this paper, we propose a novel scalable self-supervised graph representation learning method via Sub-graph Contrast, SUBG-CON. It takes the strong correlation between central nodes and their regional subgraphs (involving both direct neighbors and other nodes that are further away) into consideration, as illustrated in Fig. 1. More specifically, we first introduce a data augmentation strategy on graphs based on subgraph sampling. The central nodes together with their closely related surrounding nodes are sampled from the original graph to compose context subgraphs. Then, these subgraphs are fed into graph neural network encoders to obtain the representations of central nodes and subgraphs after pooling. Finally, a contrastive loss is introduced in the latent space to train the encoder to distinguish the generated positive and negative samples (to be introduced later), so that nodes with different regional structures can be well-differentiated. Compared with the previous methods operating on the complete graph structure, SUBG-CON can capture regional information in context subgraphs of smaller sizes and simpler structures with lower time and space costs. Besides, based on sampled subgraph instances, SUBG-CON is easy to parallelize, which is critical for large-sized graph data.
Through an empirical assessment on several benchmark graph datasets of different sizes from multiple diverse fields, we demonstrate that the representations learned by our SUBG-CON model are consistently competitive on node classification tasks, often outperforming strong supervised and unsupervised baselines. Besides, we verify the efficiency of SUBG-CON, in both training time and computation memory, compared with state-of-the-art self-supervised methods that work on the complete graph.
To summarize, our major contributions include:
• We propose a novel self-supervised graph representation learning method via sub-graph contrast. It utilizes the correlation of central nodes and context subgraphs to capture regional graph structure information.
• We introduce a data augmentation strategy on graphs, which aims at increasing the training samples from the existing graph using subgraph sampling for self-supervised graph representation learning.
• By training with subgraphs of small sizes and simple structures, the SUBG-CON method requires lower training time and computation memory costs for graph representation learning.
• Based on the sampled subgraph instances, our method enables parallel graph representation learning to further improve efficiency and scalability.
• Extensive experiments verify both the effectiveness and the efficiency of our work compared with prior unsupervised and supervised approaches on multiple real-world graph datasets from different domains.
II. RELATED WORK
A. Graph Neural Networks
Graph neural networks use the graph structure as well as node features to learn node representation vectors. Existing graph neural networks follow a neighborhood aggregation strategy, in which we iteratively update the representation of a node by aggregating the representations of its neighboring nodes and combining them with its own representation [18]. Existing graph neural networks have led to advances in multiple successful applications across different domains [2], [9], [18], [19]. However, they usually take a complete graph as input. Thus, they can hardly be applied to large-scale graph data. What's more, the inter-connected graph structure also prevents parallel graph representation learning, which is especially critical for large-sized graph data. To handle these issues, sampling-based methods are proposed to train GNNs on mini-batches of nodes, which only aggregate the representations of a subset of randomly sampled nodes in the mini-batch [6], [20]. Although this kind of approach reduces the computation cost of each aggregation operation, the total cost can still be large. Besides, these graph neural networks mainly focus on supervised learning and require the supervision of label information. It is intractable for them to handle unlabeled graphs, which are widely available in practice.
B. Unsupervised Node Representation Learning
There is abundant literature on traditional unsupervised representation learning of nodes within graphs. Existing methods optimize models with random walk-based objectives [3], [21], [22] or by reconstructing graph structures [12], [20]. The underlying intuition is to train an encoder network so that nodes that are close in the input graph are also close in the representation space. Although these methods claim to capture node proximity, they still suffer from some limitations. Most prominently, they over-emphasize proximity similarity, making it difficult to capture the inherent graph structural information. Besides, as these encoders already enforce an inductive bias that neighboring nodes have similar representations, it is unclear whether such objectives actually provide any useful signal for training an encoder. Thus, existing methods fail to solve real-world tasks as strongly as supervised methods do.
C. Self-supervised Learning
Self-supervised learning has recently emerged as a promising approach to overcome the dilemma of lacking available supervision. Its key idea is defining an annotation-free pretext task and generating surrogate training samples automatically to train an encoder for representation learning. A wide variety of pretext tasks have been proposed for visual representation learning [23]- [25]. However, there are only a few works on self-supervised methods for graph representation learning so far. Deep Graph Infomax [16] aims to train a node encoder that maximizes mutual information between node representations and the pooled global graph representation. Graphical Mutual Information [17] proposes to maximize the mutual information between the hidden representation of each node and the original features of its 1-hop neighbors. These works tend to be biased in fitting either the overall or very local (1-hop neighbor) graph structures in defining the mutual information based loss terms, which would harm the quality of learned representations. Besides, these self-supervised works also need to take the complete graph as the input, which restricts their scalability on large-sized graphs.
III. METHOD
In this section, we will present our framework in a top-down fashion. It starts with an abstract overview of our specific subgraph-based representation learning setup, followed by an exposition of subgraph sampling based data augmentation, subgraph encoding for representations, and our self-supervised pretext task for model optimization. Finally, we introduce parallel SUBG-CON briefly.
A. Subgraph-Based Self-Supervised Representation Learning
Prior to going further, we first provide the preliminary concepts used in this paper. We assume a general self-supervised graph representation learning setup: For a graph G = (X, A), a set of node features are provided,
X = {x_1, x_2, ..., x_N},
where N is the number of nodes in the graph and x i ∈ R F represents the features of dimension F for node i. We are also provided with relational information between these nodes in the form of an adjacency matrix, A ∈ R N ×N . While A may consist of arbitrary real numbers (or even arbitrary edge features), in all our experiments we will assume the graphs to be unweighted, i.e. A(i, j) = 1 if there exists an edge i → j in the graph and A(i, j) = 0 otherwise.
Traditional graph representation methods aim to train an encoder E : R^{N×F} × R^{N×N} → R^{N×F′} to encode a complete graph, so that latent node representations H = E(X, A) ∈ R^{N×F′} can be produced, where F′ is the dimension of the latent representations. For convenience, we denote the learnt representation of each node i as h_i. These representations are then generated all at once and retrieved for downstream tasks, such as node classification. However, due to limited computation time and memory, it remains a great challenge for traditional methods taking the complete graph structure as the input to handle large-scale graphs.
To overcome the limitation of traditional methods, we propose a novel subgraph-based representation learning approach. For a central node i, a subgraph sampler S, i.e., a proxy of data augmentation, is designed to extract its context subgraph G_i = (X_i, A_i) from the original graph. The context subgraph provides regional structure information for learning the representation of node i. X_i ∈ R^{N′×F} denotes the node features of the i-th context subgraph, A_i ∈ R^{N′×N′} denotes the relational information among node i and its neighbor nodes, and N′ indicates the context subgraph size. We target learning an encoder for context subgraphs, E : R^{N′×F} × R^{N′×N′} → R^{N′×F′}, which serves to acquire node representations within context subgraphs. It should be noted that, different from traditional methods, the inputs of the encoder are context subgraphs whose sizes are much smaller than the original graph, and node representations can be retrieved flexibly based on their context subgraph structures without the complete graph. Thus, by operating on sampled subgraph instances, SUBG-CON has prominent performance advantages in model learning scalability. Besides, it is easy to parallelize, which is critical for large-sized graph data.
Here we will focus on three key points of our subgraph-based self-supervised learning method: context subgraph extraction, subgraph encoding for representations, and the self-supervised pretext task for model optimization.
• For subgraph encoding, the encoder E encodes the structures and features of context subgraphs to produce the central node representations h_i. The other key consequence is summarizing the subgraph centered around node i as the subgraph representation s_i.
• For the self-supervised pretext task, it can optimize the encoder by taking advantage of the strong correlation between central nodes and their context subgraphs, so that the regional information captured from the context subgraphs is embedded into the central node representations.
B. Subgraph Sampling Based Data Augmentation
To overcome the dependence on manual labels, it is important for self-supervised learning to generate surrogate training samples automatically to train an encoder for representation learning. Data augmentation is a popular technique for training sample generation in computer vision. However, due to unordered vertexes and extensive connections, it hasn't been used explicitly on graph data. For self-supervised graph representation learning, we formally introduce the concept of data augmentation on graphs here. There are various transformations for graph data, such as node masking or feature corruption. In this paper, we adopt a subgraph sampling based data augmentation strategy, because, intuitively, nodes and their regional neighborhoods are more correlated while long-distance nodes hardly influence them. This assumption becomes more reasonable as the size of graphs increases. Therefore, we sample a series of subgraphs including regional neighborhoods from the original graph as training data.
The most critical issue now is to sample a context subgraph, which can provide sufficient structure information for learning a high-quality representation for the central node.
Here we follow the subgraph sampling based on the personalized PageRank algorithm [26], as introduced in [27]. Considering that the importance of different neighbors varies, for a specific node i, the subgraph sampler S first measures the importance scores of the other nodes with the personalized PageRank algorithm. Given the relational information between all nodes in the form of an adjacency matrix, A ∈ R^{N×N}, the importance score matrix S can be denoted as
S = α · (I − (1 − α) · Ā)^{−1},    (1)
where I is the identity matrix and α ∈ [0, 1] is a parameter which is always set to 0.15. D denotes the corresponding diagonal degree matrix with D(i, i) = Σ_j A(i, j) on its diagonal, and Ā = AD^{−1} denotes the column-normalized adjacency matrix. S(i, :) is the importance score vector for node i, which indicates its correlation with the other nodes.
It is noted that the importance score matrix S can be precomputed before model training starts. And we implement node-wise PPR to calculate importance scores to reduce computation memory, which makes our method more suitable to work on large-scale graphs.
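As a concrete illustration, the importance matrix of Eq. (1) can be materialized directly for a small graph. The sketch below is ours, not the paper's implementation (which computes node-wise PPR to save memory); it takes the dense matrix inverse implied by the personalized PageRank definition, with α = 0.15 and a column-normalized adjacency matrix:

```python
import numpy as np

def ppr_scores(A, alpha=0.15):
    """Personalized PageRank importance matrix, Eq. (1):
    S = alpha * (I - (1 - alpha) * A_bar)^{-1},
    where A_bar = A D^{-1} is the column-normalized adjacency matrix."""
    N = A.shape[0]
    deg = A.sum(axis=0)                      # column sums D(j, j)
    A_bar = A / np.where(deg == 0, 1, deg)   # column-normalize, guard isolated nodes
    return alpha * np.linalg.inv(np.eye(N) - (1 - alpha) * A_bar)

# toy 4-node path graph 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = ppr_scores(A)
# row S[i] scores every node's importance for central node i;
# each column of S sums to 1, since A_bar is column-stochastic
```

Because Ā is column-stochastic, the Neumann series Σ_k (1−α)^k Ā^k converges and every column of S is a probability distribution, which is what makes S(i, :) usable as importance scores.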
For a specific node i, the subgraph sampler S chooses the top-k most important neighbors to constitute a subgraph according to the score matrix S. The indices of the chosen nodes can be denoted as

idx = top_rank(S(i, :), k),

where top_rank is the function that returns the indices of the top k values and k denotes the size of the context graphs.
The subgraph sampler S will process the original graph with the node indices to obtain the context subgraph G_i of node i. Its feature matrix X_i and adjacency matrix A_i are denoted respectively as

X_i = X_{idx,:},  A_i = A_{idx,idx},
where ·_idx is an indexing operation. X_{idx,:} is the row-wise (i.e. node-wise) indexed feature matrix. A_{idx,idx} is the row-wise and column-wise indexed adjacency matrix corresponding to the induced subgraph.
So far, we can acquire the context subgraph G_i = (X_i, A_i) ∼ S(X, A) for any specific node i. For large-sized input graphs, this procedure can support parallel computing to further improve efficiency. The context subgraphs produced by data augmentation can be decomposed into several mini-batches and fed to train SUBG-CON.
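Putting the steps above together, a minimal sampler might look as follows. This is a dense numpy sketch under our own conventions: top_rank and sample_subgraph mirror the notation above, and the toy cycle graph and feature matrix are ours:

```python
import numpy as np

def top_rank(scores, k):
    # indices of the k largest importance scores; the central node itself
    # is included because S[i, i] dominates row i for small alpha
    return np.argsort(-scores)[:k]

def sample_subgraph(X, A, S, i, k=4):
    """Extract the context subgraph G_i = (X_i, A_i) for central node i."""
    idx = top_rank(S[i], k)
    return X[idx, :], A[np.ix_(idx, idx)], idx

# toy data: 6 nodes on a cycle, 3 features each
rng = np.random.default_rng(0)
N = 6
A = np.zeros((N, N))
for u in range(N):
    A[u, (u + 1) % N] = A[(u + 1) % N, u] = 1.0
X = rng.normal(size=(N, 3))

# importance scores per Eq. (1), alpha = 0.15
S = 0.15 * np.linalg.inv(np.eye(N) - 0.85 * (A / A.sum(axis=0)))

X_i, A_i, idx = sample_subgraph(X, A, S, i=0, k=4)
```

With k = 4 the sampler returns a 4-node induced subgraph around node 0; in practice the paper caps the subgraph size at about 20 nodes.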
C. Encoding Subgraph For Representations
Given the context subgraph G_i = (X_i, A_i) of a central node i, the encoder E : R^{N′×F} × R^{N′×N′} → R^{N′×F′} (N′ being the context subgraph size and F′ the latent dimension) encodes it to obtain the latent representation matrix H_i, denoted as
H i = E(X i , A i ),
Here we adopt graph neural networks (GNNs), a flexible class of node embedding architectures, as the encoder E. Node representations are generated by aggregating information from neighbors. We study the impact of different graph neural networks in the experiments and will discuss it later. The central node embedding h_i is picked from the latent representation matrix H_i:

h_i = C(H_i),
where C denotes the operation picking out the central node embedding.
As mentioned before, the other key consequence is summarizing the subgraph centered around node i as the context subgraph representation s_i. In order to obtain subgraph-level summary vectors, we leverage a readout function, R : R^{N′×F′} → R^{F′}, and use it to summarize the obtained node representations into a subgraph-level representation, s_i, denoted as
s i = R(H i ).
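In code, C and R reduce to very little. The sketch below is ours: it assumes the convention that the sampler places the central node first in the subgraph (the paper does not fix an ordering), and uses the sigmoid-of-mean readout described later in the experimental section:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pick_central(H, pos=0):
    # C(H): the central node's row; we assume the sampler places the
    # central node first (a convention of this sketch, not of the paper)
    return H[pos]

def readout(H):
    # R(H): sigmoid of the mean over node representations, i.e. the
    # simple averaging readout used in the experiments
    return sigmoid(H.mean(axis=0))

H = np.arange(12, dtype=float).reshape(4, 3)   # toy latent matrix, N' = 4, F' = 3
h = pick_central(H)   # central node representation h_i
s = readout(H)        # subgraph-level representation s_i
```

Both h and s live in R^{F′}, so their inner product h·s can be scored directly by the contrastive loss introduced next.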
So far, the representations of central nodes and context subgraphs are produced, which will play a key role in the generation of positive and negative samples for self-supervised pretext tasks.

Fig. 2. Architecture of SUBG-CON. A series of context subgraphs are sampled from the original graph and fed into the encoder to obtain the representations of central nodes and subgraphs after pooling. For a specified node, its context subgraph representation is regarded as the positive sample while other subgraph representations, randomly sampled, are regarded as negative samples. The contrastive loss in the latent space forces the encoder to recognize positive and negative samples so that different nodes can be well-discriminated based on regional structure information.
D. Contrastive Learning via Central Node and Context Subgraph
The key idea of self-supervised contrastive learning is defining an annotation-free pretext task and then generating positive and negative samples. The encoder can be trained by contrasting positive and negative examples. The pretext task is the prerequisite for ensuring the quality of learned representations: if it can fully inherit the rich information in graphs, we can obtain better representations to support subsequent mining tasks without additional guidance.
Intuitively, nodes are dependent on their regional neighborhoods and different nodes have different context subgraphs. This assumption is even more reasonable in large-scale graphs. At the same time, the complete structure of large-scale graphs is still hard to handle by existing node representation learning methods. Therefore, we consider the strong correlation between central nodes and their context subgraphs to design a self-supervision pretext task. The architecture of SUBG-CON is fully summarized by Fig. 2.
Our approach for learning the encoder relies on, for a specific central node, contrasting its real context subgraph with a fake one. Specifically, for the node representation h_i, which captures the regional information in the context subgraph, we regard the context subgraph representation s_i as the positive sample. On the other hand, for a set of subgraph representations, we employ a function, P, to corrupt them and generate negative samples, denoted as

(s̃_1, s̃_2, ..., s̃_m) = P(s_1, s_2, ..., s_m),

where m is the size of the representation set. The corruption strategy determines the differentiation of nodes with different contexts, which is crucial for some downstream tasks, such as node classification.

As to the objective, related works use a noise-contrastive type objective with a standard binary cross-entropy loss between positive and negative examples [16]. However, as these context subgraphs are extracted from the same original graph and overlap with each other, we suppose that it can be harmful for representation learning if positive and negative examples are distinguished absolutely. Therefore, we use the margin triplet loss [28] for model optimization, so that positive and negative samples can be well-discriminated to some extent and high-quality representations can be obtained. The loss is

L = (1/M) Σ_{i=1}^{M} E_{(X,A)} [ max( σ(h_i · s̃_i) − σ(h_i · s_i) + ε, 0 ) ],    (2)

where σ(x) = 1/(1 + exp(−x)) is the sigmoid function and ε is the margin value. We summarize the steps of the procedure of our approach in Algorithm 1.

Algorithm 1 SUBG-CON
3: Sample context subgraphs {(X_1, A_1), (X_2, A_2), ..., (X_m, A_m)}, where (X_i, A_i) ∼ S(X, A) and m is the size of the subgraph set.
4: for each subgraph (X_i, A_i) do
5:   Encode the subgraph to obtain the latent representation matrix H_i = E(X_i, A_i).
6:   Obtain the central node representation h_i = C(H_i).
7:   Summarize the subgraph representation through the readout function s_i = R(H_i).
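The margin triplet objective of Eq. (2) can be sketched as follows. This is our numpy sketch, not the paper's code: the shuffle-based corruption P and the margin ε = 0.75 follow the implementation details in Section IV, and we use the standard hinge sign convention (positive score should exceed the corrupted score by at least ε):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def margin_loss(h, s_pos, s_neg, eps=0.75):
    """Margin triplet loss for one central node: penalize whenever
    sigma(h . s_tilde) comes within eps of sigma(h . s)."""
    pos = sigmoid(h @ s_pos)
    neg = sigmoid(h @ s_neg)
    return max(neg - pos + eps, 0.0)

def batch_loss(H, S_pos, eps=0.75, seed=0):
    # corruption P: shuffle the subgraph representations so that each
    # node is paired with another node's subgraph as its negative sample
    rng = np.random.default_rng(seed)
    S_neg = S_pos[rng.permutation(len(S_pos))]
    return np.mean([margin_loss(h, sp, sn, eps)
                    for h, sp, sn in zip(H, S_pos, S_neg)])
```

When the positive score saturates and the corrupted score vanishes, the per-node loss is exactly zero; when both scores coincide, it equals the margin ε.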
E. Parallelizability
Compared with existing methods that input the complete graph data, operating on context subgraphs is parallelizable. On the one hand, subgraph extraction is easy to parallelize: several random workers (in different threads, processes, or machines) can simultaneously explore different parts of the same graph for context subgraph extraction. On the other hand, without the need for global computation over the whole graph structure, it becomes possible to encode multiple subgraphs synchronously to obtain the representations of central nodes and subgraphs. Benefiting from this parallelizability, our model can be scaled efficiently to larger graphs.
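A minimal sketch of this parallelism (ours, not the paper's code): since sampled subgraphs are independent, they can be encoded concurrently. Here a toy one-layer mean-aggregation function stands in for the GNN encoder E:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def encode_subgraph(args):
    # stand-in encoder: mean aggregation + linear map + nonlinearity
    # (a real implementation would call the trained GNN encoder E here)
    X_i, A_i, W = args
    A_hat = A_i + np.eye(len(A_i))          # self-loops
    d = A_hat.sum(axis=1)
    H_i = np.tanh((A_hat / d[:, None]) @ X_i @ W)
    return H_i[0]                           # central node representation

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
subgraphs = [(rng.normal(size=(5, 3)),
              (rng.random((5, 5)) < 0.4).astype(float),
              W)
             for _ in range(16)]

# subgraphs are independent, so workers can encode them concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    reps = list(pool.map(encode_subgraph, subgraphs))
```

The same pattern applies to subgraph extraction itself, and to process- or machine-level workers when the graph is too large for one node.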
IV. EXPERIMENT
In this section, we conduct extensive experiments to verify both the effectiveness and the efficiency of SUBG-CON on a variety of node classification tasks on multiple real-world datasets from different domains. In each case, SUBG-CON is used to learn node representations in a fully unsupervised manner. We compare our approach with strong prior unsupervised and supervised baselines. Besides, we analyze the design of our architecture, including the encoder architecture and the objective function. We also conduct experiments on efficiency, including training time and memory usage. Reducing the number of training subgraphs and parallelization are studied to further improve efficiency. Lastly, a parameter sensitivity analysis helps to choose suitable parameters for our approach.
A. Datasets
To assess the effectiveness of the representations learned by our work, we conduct experiments on multiple real-world datasets from different domains. We choose three popular small-scale datasets widely used in related works [2] (Cora, Citeseer, and Pubmed) and three large-scale datasets to verify the scalability of our approach (PPI, Flickr, and Reddit) [2], [29]. They include three citation networks, two social networks, and a protein network. All datasets follow fixed-partition splits. Further information on the datasets can be found in Table I. We set up the experiments on the following benchmark tasks: (1) classifying research papers into topics on the Cora, Citeseer, and Pubmed citation networks; (2) classifying protein roles within protein-protein interaction (PPI) networks, requiring generalization to unseen networks; (3) categorizing types of images based on the descriptions and common properties of Flickr online; (4) predicting the community structure of a social network modeled with Reddit posts.
B. Experimental Settings
Encoder design. For the six different datasets, we study the impact of different graph neural networks (described below) and employ distinct encoders appropriate to each setting.
For Cora, Citeseer, Pubmed and PPI, we adopt a one-layer Graph Convolutional Network (GCN) with skip connections [30] as our encoder, with the following propagation rule:
E(X, A) = σ(D̂^{−1/2} Â D̂^{−1/2} X W + Â W_skip),

where Â = A + I_N is the adjacency matrix with inserted self-loops and D̂ is its corresponding degree matrix. For the nonlinearity σ, we apply the parametric ReLU (PReLU) function [31]. W is a learnable linear transformation applied to every node and W_skip is a learnable projection matrix for the skip connections.
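A dense forward pass for this propagation rule might look like the following. This is our sketch under two stated assumptions: we interpret the skip term as Â X W_skip, so that W_skip ∈ R^{F×F′} projects node features (the extracted formula is ambiguous on this point), and we use a fixed slope of 0.25 in place of the learned PReLU parameter:

```python
import numpy as np

def gcn_skip_forward(X, A, W, W_skip):
    """One propagation step of the skip-connection GCN encoder:
    PReLU(D_hat^{-1/2} A_hat D_hat^{-1/2} X W + A_hat X W_skip)."""
    N = X.shape[0]
    A_hat = A + np.eye(N)                          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D_hat^{-1/2} diagonal
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    Z = A_norm @ X @ W + A_hat @ X @ W_skip        # aggregation + skip term
    return np.where(Z > 0, Z, 0.25 * Z)            # PReLU with fixed slope

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
H = gcn_skip_forward(X, A, rng.normal(size=(3, 8)), rng.normal(size=(3, 8)))
```

For the two-layer variant used on Reddit and Flickr, the output of one such step (without the skip term) is simply fed into a second one.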
For Reddit and Flickr, we adopt a two-layer GCN model as our encoder, with the following propagation rule:
GCN(X, A) = σ(D̂^{−1/2} Â D̂^{−1/2} X W),
E(X, A) = GCN(GCN(X, A), A),
where the latent representations produced by the first layer of GCN are fed as the input of the second layer.
Corruption functions. The corruption function generates negative samples for our self-supervised task to make nodes with different contexts well-distinguished, which is important for the node classification task. For convenience of computation, given a set of context subgraph representations, our corruption function shuffles them randomly. The subgraph representations of other central nodes are regarded as negative samples, so that nodes are closely related to their own context subgraphs and weakly associated with other subgraphs. For learning node representations towards other kinds of tasks, the design of appropriate corruption strategies remains an area of open research.
$$R(H) = \sigma\Bigl(\frac{1}{N}\sum_{i=1}^{N} h_i\Bigr),$$
where $\sigma$ is the logistic sigmoid nonlinearity. We assume that this simple readout is sufficient for subgraphs of small size, and we found it to perform best across all our experiments.

Objective functions. We compare the margin loss [28] against other commonly used contrastive loss functions, such as the logistic loss [32] and the Bayesian personalized ranking (BPR) loss [33]. Table III shows these three objective functions; their impacts are discussed later.
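As an illustration, the three objectives of Table III can be written as losses to minimize (signs flipped relative to the maximized objectives). Here `pos` and `neg` denote the scores of a node with its own context subgraph and with a shuffled negative subgraph, and `eps` is the margin (0.75 in the paper).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def margin_loss(pos, neg, eps=0.75):
    # hinge on the sigmoid scores; eps is the margin value
    return max(sigmoid(neg) - sigmoid(pos) + eps, 0.0)

def logistic_loss(pos, neg):
    # positive pairs pushed toward 1, negative pairs toward 0
    return -(math.log(sigmoid(pos)) + math.log(sigmoid(-neg)))

def bpr_loss(pos, neg):
    # Bayesian personalized ranking: positive should outscore negative
    return -math.log(sigmoid(pos - neg))
```

Unlike the logistic and BPR losses, the margin loss stops penalizing a pair once the positive score exceeds the negative score by the margin, which matches the paper's intuition that subgraphs from the same graph should not be separated absolutely.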
Implementation details. All experiments are implemented in PyTorch [34] with Glorot initialization [35] and conducted on 8 NVIDIA TITAN Xp GPUs. SUBG-CON is used to learn node representations in a fully unsupervised manner, followed by evaluating node-level classification with these representations. This is performed by directly using the representations to train and test a simple linear (logistic regression) classifier. In preprocessing, we perform row normalization on Cora, Citeseer, and PubMed following [2], and apply the processing strategy of [20] on Reddit, PPI, and Flickr. In particular, for PPI, as suggested by [16], we standardize the learned embeddings before feeding them into the logistic regression classifier. During training, we use the Adam optimizer [36] with an initial learning rate of 0.001 (specifically, 10^-5 on Citeseer and Reddit). The subgraph size is at most 20 (10 on Citeseer, due to better performance there). The dimension of node representations is 1024. The margin value for the loss function is 0.75.
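The two preprocessing steps mentioned above (row normalization of input features, and standardization of the learned embeddings for PPI) can be sketched as follows; these are the standard operations, not code from the paper.

```python
def row_normalize(X):
    """Scale each node's feature row to sum to 1 (all-zero rows are kept)."""
    out = []
    for row in X:
        s = sum(row)
        out.append([v / s for v in row] if s else list(row))
    return out

def standardize(X):
    """Column-wise zero-mean, unit-variance scaling, as applied to the
    learned PPI embeddings before logistic regression."""
    n, dims = len(X), len(X[0])
    out = [[0.0] * dims for _ in range(n)]
    for j in range(dims):
        vals = [row[j] for row in X]
        mu = sum(vals) / n
        sd = (sum((v - mu) ** 2 for v in vals) / n) ** 0.5 or 1.0
        for i in range(n):
            out[i][j] = (X[i][j] - mu) / sd
    return out
```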
Baselines. We choose two state-of-the-art self-supervised methods, DGI [16] and GMI [17], which both learn graph embeddings by leveraging mutual information maximization. Two traditional unsupervised methods, DeepWalk [21] and the unsupervised variant of GraphSAGE (abbreviated as Unsup-GraphSAGE) [20], are also compared with our model. We additionally provide results for training the logistic regression classifier on the raw input features. Besides, we report experimental results for four supervised graph neural networks: GCN [2], GAT [9], FastGCN [6], and supervised GraphSAGE [20]. Notably, to ensure fair comparison, we either reuse the metrics already reported in the original papers or carefully choose optimal hyper-parameters after reproducing the code of the baselines.
Evaluation metrics. For the classification task, we fit the logistic regression classifier on the learned embeddings of the training set and report results on the validation and test nodes [17]. Following [16], we adopt mean classification accuracy to evaluate performance on the three benchmark citation datasets (Cora, Citeseer, and Pubmed), while the micro-averaged F1 score is used for the other three larger datasets (PPI, Flickr, and Reddit).
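The micro-averaged F1 score used for the multi-label datasets can be computed as in this sketch (labels given as per-sample collections of label ids; this is the standard definition, not code from the paper).

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1: pool true positives, false positives and false
    negatives over all samples, then compute one precision/recall pair."""
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        t, p = set(t), set(p)
        tp += len(t & p)
        fp += len(p - t)
        fn += len(t - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```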
C. Node Classification
The results of our comparative evaluation experiments are summarized in Table II. They demonstrate that strong performance is achieved across all six datasets. Our method successfully outperforms all the competing self-supervised approaches, thus verifying the potential of methods based on graph regional structure information in the node classification domain. We further observe that all self-supervised methods are more competitive than traditional unsupervised baselines that rely on proximity-based objectives. This indicates that our data augmentation strategy for self-supervised learning can help models capture high-level information in complex graphs even though these supervision signals are not directly related to the node classification task. Besides, we particularly note that our approach is competitive with the results reported for the supervised graph neural networks, even exceeding their performance on the Cora, Citeseer, Pubmed, and Reddit datasets. However, on PPI the gap is still large; we believe this is because our encoder is heavily dependent on the original node features, while the available features on PPI are extremely sparse (over 40% of the nodes have all-zero features).

1) Design of Encoder: For better architecture and performance, we conducted experiments on the design of our encoder. We choose four different graph neural networks as the encoder to learn node representations: graph convolutional network (GCN), graph convolutional network with skip connections (GCN + Skip), graph attention network (GAT) [9], and graph isomorphism network (GIN) [18]. The experimental results are listed in Table IV. As can be observed, GCN with skip connections achieves the best performance on Citeseer, Pubmed, and PPI. Although GAT is competitive on Cora, it requires more training time and memory, so we finally choose GCN with skip connections as our encoder.
Note that even though GCN is not the best choice on these three datasets, Table II shows that our method with GCN as the encoder still outperforms supervised GCN. For the other two larger datasets, Flickr and Reddit, a two-layer GCN is the best option. We assume that higher-level information captured in large-scale graphs contributes to improving the quality of the learned representations. To sum up, compared with complete graphs of large scale and complex structure, subgraphs can be encoded well with simple graph neural networks; more expressive GNNs, such as GAT and GIN, are less suitable for these subgraphs.

2) Effectiveness of Objective Function: We compare different objective functions and list the experimental results in Table V. To make the comparisons fair, we tune the hyper-parameters for all loss functions and report their best results. Table V shows that the margin loss achieves the best performance. We believe that, since context subgraphs extracted from the same original graph can be somewhat similar, loss functions that distinguish positive and negative examples absolutely are not suitable here.

1) Train with A Few Subgraphs: As context subgraphs have simple and similar structures, we hypothesize that extracting all subgraphs may be unnecessary for training the encoder well. Therefore, we conducted experiments on training the encoder with only a few subgraphs sampled from the graph. The effect of the number of sampled subgraphs on the six datasets is shown in Fig. 3. We observe that, for Cora, Citeseer and Pubmed, about 500 subgraphs provide sufficient information for the encoder, while the other three datasets, PPI, Flickr, and Reddit, require as few as 50 subgraphs. We believe the sparsity of the graph accounts for this difference: the node degrees in Cora, Citeseer, and Pubmed are small, so subgraphs extracted from these datasets can differ substantially in shape.
On the contrary, PPI, Flickr and Reddit are relatively denser, and their context subgraphs are likely composed of direct neighbors, so the encoder can capture the structure easily. To verify this, we examine the composition of subgraphs in the different datasets, as shown in Fig. 4. This observation guides us to train the encoder with only a few subgraphs to accelerate the convergence of the loss function, which takes much less training time and computation memory.
D. Design of Architectures
E. Efficiency
2) Training time and memory cost: In Table VI and Table VII, we summarize the training time and memory usage of the state-of-the-art self-supervised methods relative to our method on all six datasets. The training time refers to the time for training the encoder (excluding validation). The memory refers to the total memory cost of the model parameters and all hidden representations of a batch. The two self-supervised baselines apply GCN as their encoder on Cora, Citeseer, and Pubmed, which cannot be trained on large-scale graphs due to excessive memory requirements; for the larger graphs, they choose GraphSAGE, a fast sampling-based graph neural network, for node representation learning. We use an early stopping strategy based on the observed results on the validation set, with a patience of 20 epochs (150 epochs for Pubmed). Following the findings of the previous subsection, 500 randomly sampled subgraphs are used to train the encoder for the three small-scale datasets in Table VI, while 50 subgraphs are used for the three larger datasets in Table VII. Our method can clearly be trained much faster and with much less computation memory than these baselines on all datasets. In particular, our efficiency advantage is most prominent on large-scale graphs, especially on Reddit. We believe that, compared to the whole graph structure, subgraphs of much smaller size speed up encoder training; in addition, training with only a few subgraphs further reduces training time and memory usage.
3) Parallel Computation: For complex realistic application scenarios, in cases where training with a small number of subgraphs does not suffice, SUBG-CON can be run efficiently in parallel. We set the number of subgraphs to 20,000 and the number of training epochs to 400, and run experiments using multiple GPUs on the three large-scale datasets, PPI, Flickr and Reddit. Fig. 5 presents the effect of parallelization: the processing speed on all three datasets increases with the number of GPUs (Fig. 5(a)), with no loss of predictive performance relative to running our model serially (Fig. 5(b)). This demonstrates that the technique is highly scalable.
F. Subgraph size analysis
Now we examine the influence of the context subgraph size in our framework on the six datasets. We vary the subgraph size from 2 to 20 (including the central node) and evaluate the results, as shown in Fig. 6. We observe that, in general, our model achieves better performance with larger context subgraphs; we believe this is because more regional structure information contributes to high-quality latent representations. Due to limited computation memory, we set the subgraph size to 20. There is, however, an exception: as the subgraph size increases, the performance on Citeseer first improves, peaks at size 10, and then declines. We consider that, due to the sparsity of Citeseer, subgraphs of 10 nodes already contain sufficient context information, and larger subgraphs with complex structures introduce noise that deteriorates representation learning. Thus, we set the subgraph size to 10 for Citeseer. It is also noted that very small subgraphs affect different datasets differently. Specifically, when we train the encoder with subgraphs containing only two nodes (a central node and its most closely related neighbor), performance degrades on all datasets; on Reddit, the F1 score drops by up to 20 points. This indicates that Reddit is large in scale and complex in structure, so a few neighbors are an insufficient proxy for its relatively informative context, which should be taken into account in model design.
V. CONCLUSION
In this paper, we propose SUBG-CON, a novel scalable self-supervised graph representation learning method based on subgraph contrast. It exploits the strong correlation between central nodes and their regional subgraphs for model optimization. Based on sampled subgraph instances, SUBG-CON has prominent advantages in weaker supervision requirements, model learning scalability, and parallelization.
Through an empirical assessment on multiple benchmark datasets, we demonstrate the effectiveness and efficiency of SUBG-CON compared with both supervised and unsupervised strong baselines. In particular, we show that the encoder can be trained well on the current popular graph datasets with only a little regional information. This indicates either that existing methods may still lack the ability to capture higher-order information, or that existing graph datasets require only low-order information to achieve good performance. We hope that our work can inspire more research on graph structure to explore these questions.
VI. ACKNOWLEDGEMENT
We appreciate the comments from anonymous reviewers which will help further improve our work. This work is funded in part by the National Natural Science Foundation of China Projects No. U1636207 and No. U1936213. This work is also partially supported by NSF through grant IIS-1763365 and by FSU. This work is funded by Ant Financial through the Ant Financial Science Funds for Security Research.
Fig. 1. An illustration of DGI (upper left), GMI (bottom left), and our approach.

Definition 1 (Data Augmentation on Graph): Given a graph $G = (X, A)$, where $X$ denotes node features and $A$ denotes relations, data augmentation is a strategy to produce a series of variant graphs $G' = (X', A')$ using various transformations on the features and relations of $G$.

A set of context subgraphs is sampled, $\{s_1, s_2, ..., s_M\} \sim P(\{s_1, s_2, ..., s_m\})$, and the corruption function shuffles the subgraph representations to generate negative examples for the corresponding node representations: $\{\tilde{s}_1, \tilde{s}_2, ..., \tilde{s}_M\} = P(s_1, s_2, ..., s_M)$.

For all six experimental datasets, we employ the identical readout function, a simple averaging of all the node features.

Fig. 3. The effectiveness of training the encoder with different numbers of sampled subgraphs.

Fig. 4. Composition of context subgraphs for different datasets. The pie chart indicates the proportion of neighbors of different distances from central nodes in the context subgraphs.

Fig. 6. Subgraph size analysis.

Algorithm 1 Optimization Algorithm.
Input: a graph $G$ with input features $X$ and adjacency matrix $A$; subgraph sampler $S$; encoder $E$; readout function $R$; corruption function $P$.
1: Precompute the importance score matrix $S$ according to Eq. 1.
2: while not converged do
   ...
   Update the parameters of $E$ and $R$ by applying gradient descent to maximize Eq. 2.
   end while
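Eq. 1 for the importance scores is not reproduced in this excerpt; given the paper's citation of personalized PageRank [26], a plausible sketch of the subgraph sampler (step 1 of Algorithm 1 plus the top-k extraction) is the following. The restart probability `alpha` and the power-iteration scheme are assumptions of this sketch, not the paper's exact procedure.

```python
def ppr_scores(A, v, alpha=0.15, iters=60):
    """Power-iteration approximation of the personalized PageRank vector
    of node v on an unweighted adjacency matrix A."""
    n = len(A)
    deg = [sum(row) or 1 for row in A]
    p = [0.0] * n
    p[v] = 1.0
    for _ in range(iters):
        nxt = [0.0] * n
        for i in range(n):
            if p[i]:
                share = (1 - alpha) * p[i] / deg[i]  # mass spread to neighbors
                for j in range(n):
                    if A[i][j]:
                        nxt[j] += share
        nxt[v] += alpha                              # restart at the seed
        p = nxt
    return p

def sample_subgraph(A, v, k):
    """Context subgraph of v: the central node itself plus its k - 1
    neighbors with the highest importance scores."""
    scores = ppr_scores(A, v)
    ranked = sorted((i for i in range(len(A)) if i != v),
                    key=lambda i: -scores[i])
    return [v] + ranked[:k - 1]
```

On a path graph 0-1-2, for example, the sampler for central node 0 ranks node 1 above node 2, so a size-2 context subgraph is {0, 1}.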
Table I. Dataset statistics.

Scale        Dataset   Type              Nodes    Edges       Degree  Features  Classes  Train / Val / Test
Small-scale  Cora      Citation network  2,708    5,429       4.0     1,433     7        0.05 / 0.18 / 0.37
             Citeseer  Citation network  3,327    4,732       2.8     3,703     6        0.04 / 0.15 / 0.30
             Pubmed    Citation network  19,717   44,338      4.5     500       3        0.003 / 0.03 / 0.05
Large-scale  PPI       Protein network   56,944   818,716     28.8    50        121      0.79 / 0.11 / 0.10
             Flickr    Social network    89,250   899,756     20.2    500       7        0.50 / 0.25 / 0.25
             Reddit    Social network    232,965  11,606,919  99.6    602       41       0.66 / 0.10 / 0.24
Table II. Performance comparison with different methods on node classification. The second column illustrates the data used by each algorithm in the training phase, where X, A, and Y denote features, adjacency matrix, and labels, respectively. OOM: out of memory.

Algorithm        Available data  Cora      Citeseer  Pubmed    PPI       Flickr    Reddit
Raw features     X               56.6±0.4  57.8±0.2  69.1±0.2  42.5±0.3  20.3±0.2  58.5±0.1
DeepWalk         A               67.2      43.2      65.3      52.9      27.9      32.4
Unsup-GraphSAGE  X, A            75.2±1.5  59.4±0.9  70.1±1.4  46.5±0.7  36.5±1.0  90.8±1.1
DGI              X, A            82
Table III. Studied objective functions. Here h is a node representation, s its context subgraph representation, s̃ a corrupted (negative) subgraph representation, and ε the margin.

Name           Objective function
Margin loss    −max(σ(h·s̃) − σ(h·s) + ε, 0)
Logistic loss  log σ(h·s) + log σ(−h·s̃)
BPR loss       log σ(h·s − h·s̃)
Table IV. Comparison with different graph neural network encoders.

Dataset   GCN   GCN+Skip  GAT   GIN
Cora      82.1  83.5      83.5  83.0
Citeseer  72.4  73.2      73.0  73.0
Pubmed    79.2  81.1      80.0  80.4
PPI       66.2  66.9      66.8  66.0
Flickr    48.8  48.2      48.7  48.3
Reddit    95.2  94.5      94.9  93.9
Table V. Comparison with models trained with different objective functions.

          Cora  Citeseer  Pubmed  PPI   Flickr  Reddit
Margin    83.5  73.2      81.0    66.9  48.8    95.2
Logistic  82.4  72.2      79.8    66.8  48.5    95.0
BPR       81.7  72.0      79.9    66.8  48.6    94.8
Table VI. Efficiency of SUBG-CON on three small-scale datasets. We train the encoder with 500 context subgraphs.

Dataset   Algorithm  Training time  Memory
Cora      DGI        27s            3597MB
          GMI        104s           3927MB
          SUBG-CON   14s            1586MB
Citeseer  DGI        48s            4867MB
          GMI        410s           7605MB
          SUBG-CON   12s            1163MB
Pubmed    DGI        104s           10911MB
          GMI        1012s          12115MB
          SUBG-CON   26s            975MB

Table VII. Efficiency of SUBG-CON on three large-scale datasets. We train the encoder with 50 context subgraphs.

Dataset  Algorithm  Training time  Memory
PPI      DGI        44s            10171MB
         GMI        561s           12101MB
         SUBG-CON   3s             1349MB
Flickr   DGI        518s           5028MB
         GMI        1247s          9768MB
         SUBG-CON   12s            1903MB
Reddit   DGI        4071s          8517MB
         GMI        9847s          12098MB
         SUBG-CON   25s            3805MB
• For context subgraph extraction, the subgraph sampler S serves as the proxy for data augmentation. It measures the importance scores of neighbors and samples a few closely related nodes to compose a context subgraph, which provides regional structure information for representation learning.
• For subgraph encoding, we target encoding the struc-
[1] W. L. Hamilton, R. Ying, and J. Leskovec, "Representation learning on graphs: Methods and applications," arXiv preprint arXiv:1709.05584, 2017.
[2] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
[3] A. Grover and J. Leskovec, "node2vec: Scalable feature learning for networks," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016, pp. 855-864.
[4] J. Lee, I. Lee, and J. Kang, "Self-attention graph pooling," arXiv preprint arXiv:1904.08082, 2019.
[5] Y. Jiao, Y. Xiong, J. Zhang, and Y. Zhu, "Collective link prediction oriented network embedding with hierarchical graph attention," in Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 419-428.
[6] J. Chen, T. Ma, and C. Xiao, "Fastgcn: Fast learning with graph convolutional networks via importance sampling," arXiv preprint arXiv:1801.10247, 2018.
[7] R. Liao, Z. Zhao, R. Urtasun, and R. S. Zemel, "Lanczosnet: Multi-scale deep graph convolutional networks," arXiv preprint arXiv:1901.01484, 2019.
[8] S. Wang, L. He, B. Cao, C.-T. Lu, P. S. Yu, and A. B. Ragin, "Structural deep brain network mining," in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 475-484.
[9] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, "Graph attention networks," CoRR, vol. abs/1710.10903, 2017.
[10] F. Wu, A. H. Souza Jr, T. Zhang, C. Fifty, T. Yu, and K. Q. Weinberger, "Simplifying graph convolutional networks," in ICML, 2019.
[11] M. Qu, Y. Bengio, and J. Tang, "Gmnn: Graph markov neural networks," in International Conference on Machine Learning, 2019, pp. 5241-5250.
[12] T. N. Kipf and M. Welling, "Variational graph auto-encoders," arXiv preprint arXiv:1611.07308, 2016.
[13] L. Meng, J. yang Bai, and J. Zhang, "Latte: Application oriented social network embedding," 2019 IEEE International Conference on Big Data (Big Data), pp. 1169-1174, 2019.
[14] L. Jing and Y. Tian, "Self-supervised visual feature learning with deep neural networks: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
[15] C. Shorten and T. M. Khoshgoftaar, "A survey on image data augmentation for deep learning," Journal of Big Data, vol. 6, no. 1, p. 60, 2019.
[16] P. Veličković, W. Fedus, W. L. Hamilton, P. Liò, Y. Bengio, and R. D. Hjelm, "Deep graph infomax," arXiv preprint arXiv:1809.10341, 2018.
[17] Z. Peng, W. Huang, M. Luo, Q. Zheng, Y. Rong, T. Xu, and J. Huang, "Graph representation learning via graphical mutual information maximization," in Proceedings of The Web Conference 2020, 2020, pp. 259-270.
[18] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, "How powerful are graph neural networks?" arXiv preprint arXiv:1810.00826, 2018.
[19] S. Abu-El-Haija, B. Perozzi, A. Kapoor, N. Alipourfard, K. Lerman, H. Harutyunyan, G. V. Steeg, and A. Galstyan, "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing," arXiv preprint arXiv:1905.00067, 2019.
[20] W. Hamilton, Z. Ying, and J. Leskovec, "Inductive representation learning on large graphs," in Advances in Neural Information Processing Systems, 2017, pp. 1024-1034.
[21] B. Perozzi, R. Al-Rfou, and S. Skiena, "Deepwalk: Online learning of social representations," in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014, pp. 701-710.
[22] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei, "Line: Large-scale information network embedding," in Proceedings of the 24th International Conference on World Wide Web, 2015, pp. 1067-1077.
[23] S. Gidaris, P. Singh, and N. Komodakis, "Unsupervised representation learning by predicting image rotations," arXiv preprint arXiv:1803.07728, 2018.
[24] Y. M. Asano, C. Rupprecht, and A. Vedaldi, "A critical analysis of self-supervision, or what we can learn from a single image," arXiv preprint arXiv:1904.13132, 2019.
[25] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," arXiv preprint arXiv:2002.05709, 2020.
[26] G. Jeh and J. Widom, "Scaling personalized web search," in Proceedings of the 12th International Conference on World Wide Web, 2003, pp. 271-279.
[27] J. Zhang, H. Zhang, L. Sun, and C. Xia, "Graph-bert: Only attention is needed for learning graph representations," arXiv preprint arXiv:2001.05140, 2020.
[28] F. Schroff, D. Kalenichenko, and J. Philbin, "Facenet: A unified embedding for face recognition and clustering," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 815-823.
[29] H. Zeng, H. Zhou, A. Srivastava, R. Kannan, and V. Prasanna, "Graphsaint: Graph sampling based inductive learning method," arXiv preprint arXiv:1907.04931, 2019.
[30] J. Zhang and L. Meng, "Gresnet: Graph residual network for reviving deep gnns from suspended animation," arXiv preprint arXiv:1909.05729, 2019.
[31] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026-1034.
[32] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
[33] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, "Bpr: Bayesian personalized ranking from implicit feedback," arXiv preprint arXiv:1205.2618, 2012.
[34] N. Ketkar, "Introduction to pytorch," in Deep Learning with Python. Springer, 2017, pp. 195-208.
[35] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 249-256.
[36] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
Title: On the Rotating Charged BTZ Metric
Author: Alberto García
Affiliations: Departamento de Física, CINVESTAV-IPN, Apartado Postal 14-740, C.P. 07000, México D.F., México; Departamento de Física, Facultad de Ciencia, Universidad de Santiago de Chile, Casilla 307, Avda. Ecuador 3493, Santiago, Chile
Date: 15 Sep 1999
arXiv: hep-th/9909111 (https://export.arxiv.org/pdf/hep-th/9909111v1.pdf)
Keywords: 2+1 dimensions, black hole. PACS numbers: 04.20.Jb
Abstract: It is shown that the charged non-diagonal BTZ (2+1)-spacetime is not a solution of the Einstein-Maxwell field equations with cosmological constant.
Bañados et al. [1] in their elegant paper presented black hole solutions to the Einstein-Maxwell field equations in (2+1) anti-de Sitter spacetime; from now on I refer to them as BTZ solutions. This work opened an area of intensive research [2][3][4]. The BTZ solutions possess features inherent to (3+1)-black holes, and it is believed that (2+1)-gravity will provide new insights toward a better understanding of the physically relevant (3+1)-gravity. Nevertheless, the reported charged generalization of their non-diagonal metric (2), p. 1850, turns out to be wrong: it does not fulfill the Einstein-Maxwell equations.
One constructs a (2+1)-Einstein theory coupled with Maxwell electrodynamics starting from the action
$$S = \frac{1}{2\pi}\int \sqrt{-g}\,\bigl(R - 2\Lambda - F_{ab}F^{ab}\bigr)\, d^3x. \qquad (1)$$
Varying this action with respect to the gravitational field, one obtains the Einstein equations
$$G_{ab} + \Lambda g_{ab} = 2\Bigl(F_{ac}F_{b}{}^{c} - \tfrac{1}{4}\, g_{ab} F_{cd}F^{cd}\Bigr), \qquad (2)$$
while the variation with respect to the electromagnetic potential $A_a$, entering in $F_{ab} = A_{b,a} - A_{a,b}$, yields the electromagnetic field equations
$$\nabla_a F^{ab} = 0. \qquad (3)$$
The non-diagonal BTZ (2+1)-metric has the form
$$ds^2 = -N^2\, dt^2 + \frac{dr^2}{N^2} + r^2\bigl(d\phi + N^\phi\, dt\bigr)^2, \qquad (4)$$
where $N^2 = N^2(r)$ and $N^\phi = N^\phi(r)$ are unknown functions of the variable $r$. One can read the contravariant metric components from $\partial^2_s = g^{ab}\partial_a\partial_b$:

$$\partial^2_s = -\frac{1}{N^2}\,\partial_t^2 + N^2\,\partial_r^2 + \Bigl(\frac{1}{r^2} - \frac{(N^\phi)^2}{N^2}\Bigr)\partial_\phi^2 + 2\,\frac{N^\phi}{N^2}\,\partial_t\partial_\phi. \qquad (5)$$
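For completeness, (5) follows by inverting the $2\times 2$ $t$-$\phi$ block of the metric (4); the short check below is an added derivation, not part of the original text:

```latex
% t-phi block of (4):
g_{tt} = -N^2 + r^2 (N^\phi)^2, \qquad g_{t\phi} = r^2 N^\phi, \qquad g_{\phi\phi} = r^2,
\qquad g_{tt}\,g_{\phi\phi} - g_{t\phi}^2 = -N^2 r^2 .
% Inverting the block:
g^{tt} = \frac{g_{\phi\phi}}{-N^2 r^2} = -\frac{1}{N^2}, \qquad
g^{t\phi} = \frac{-\,g_{t\phi}}{-N^2 r^2} = \frac{N^\phi}{N^2}, \qquad
g^{\phi\phi} = \frac{g_{tt}}{-N^2 r^2} = \frac{1}{r^2} - \frac{(N^\phi)^2}{N^2},
% which reproduces (5). Together with g_{rr} = 1/N^2 one finds
\det g = (-N^2 r^2)\cdot\frac{1}{N^2} = -r^2, \qquad \sqrt{-g} = r,
% which is used below when (3) reduces to d(r E)/dr = 0.
```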
I restrict the electric field, as in [1], to be $F_{rt} = -E(r) = A_{t,r}$, thus

$$F_{ab} = E(r)\bigl(\delta_a^t \delta_b^r - \delta_a^r \delta_b^t\bigr), \qquad (6)$$
hence the non-vanishing contravariant $F^{ab}$ components are

$$F^{rt} = -E, \qquad F^{r\phi} = -E\, N^\phi. \qquad (7)$$
The invariant $F$ is then given by $2F = -E^2(r)$, so $L(F) = E^2/8\pi$. The electromagnetic field equations (3) reduce, in the considered case, to $\frac{d}{dr}\bigl(\sqrt{-g}\, F^{rb}\bigr) = 0$, and yield

$$\frac{d}{dr}(r\,E) = 0 \;\longrightarrow\; r\,E(r) = Q = \text{constant}, \qquad (8)$$
where Q is a constant interpretable as the charge, together with
$$\frac{d}{dr}\bigl(r\,E\,N^\phi\bigr) = 0. \qquad (9)$$
Substituting $E(r) = Q/r$ from (8) into the second electromagnetic equation (9), one arrives at

$$\frac{d}{dr}N^\phi(r) = 0 \;\longrightarrow\; N^\phi(r) = N^\phi_0 = \text{constant}. \qquad (10)$$
This means that the metric, instead of being non-diagonal is simply static (diagonal); the metric reduces to the diagonal form by introducing a new Killingian coordinate dφ ′ = dφ + N φ 0 dt, thus without loss of generality N φ 0 can be equated to zero. A straightforward demonstration of the incorrectness of the Bañados et al charged solution can be established as follows. If one substitutes into the Maxwell equations above the functions presented in Bañados paper, pag. 1850 third paragraph after eq.(6), for the charged case, namely:
A_0 = -Q \ln\!\left(\frac{r}{r_0}\right), \qquad N^\phi = -\frac{J}{2r^2},    (11)

E = -\nabla A_0 \;\rightarrow\; E = -\frac{d}{dr} A_0 = \frac{Q}{r},    (12)
where J is the angular momentum constant, one concludes that the equation for E, eq. (8), is readily fulfilled, while eq. (9) yields the condition

\frac{QJ}{r^3} = 0,    (13)
therefore in the charged case, Q ≠ 0, the angular momentum J has to vanish, J = 0 !!, and hence N^φ = 0. One then arrives at a globally diagonal metric, a static charged metric for the whole spacetime. Although these approaches are sufficient to demonstrate the incorrectness of the charged BTZ result, I shall prove this fact also by using the Einstein-Maxwell equations:
\frac{1}{2r}\frac{d}{dr}\!\left(r \frac{d}{dr} N^2\right) - \frac{1}{2}\left(r \frac{d}{dr} N^\phi\right)^2 = -2\Lambda,    (14)

\frac{1}{r}\frac{d}{dr} N^2 + \frac{1}{2}\left(r \frac{d}{dr} N^\phi\right)^2 = -2E^2 - 2\Lambda,    (15)

3\frac{d}{dr} N^\phi + r \frac{d^2}{dr^2} N^\phi = 0.    (16)
The last equation above can be written as \left(r^3 N^\phi_{,r}\right)_{,r} = 0, thus it integrates as

N^\phi_{,r} = \frac{J}{r^3} \;\longrightarrow\; N^\phi(r) = -\frac{J}{2r^2} + N^\phi_0,    (17)
the shifting freedom of the Killingian φ coordinate can be used to set the integration constant N^\phi_0 equal to zero. Replacing the derivative of N^\phi(r) into the first-order equation (15) for N^2(r), one has

N^2_{,r} = -\frac{J^2}{2r^3} - 2rE^2 - 2\Lambda r,    (18)
which integrates as
N^2(r) = -M + \frac{J^2}{4r^2} - \Lambda r^2 - 2\int r E^2\, dr.    (19)
Let us now use the information contained in the Maxwell equations:
\frac{d}{dr}(r E) = 0 \;\longrightarrow\; r E(r) = Q = \text{constant},    (20)
which substituted into N 2 (r) yields
N^2(r) = -M + \frac{J^2}{4r^2} - \Lambda r^2 - 2Q^2 \ln r.    (21)
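Before proceeding, the integration leading to Eq. (21) can be cross-checked numerically. The following sketch (ours, not from the paper; M, J, Q and Λ are arbitrary test values) verifies by finite differences that N²(r) of Eq. (21) solves the first-order equation (18) once E = Q/r is inserted.

```python
import math

M, J, Q, Lam = 1.0, 0.5, 0.8, -1.0   # arbitrary test values; Lam = -1/l^2 < 0

def N2(r):
    # eq. (21)
    return -M + J**2 / (4 * r**2) - Lam * r**2 - 2 * Q**2 * math.log(r)

def rhs(r):
    # right-hand side of eq. (18) with E = Q/r inserted
    return -J**2 / (2 * r**3) - 2 * r * (Q / r)**2 - 2 * Lam * r

def dN2(r, h=1e-6):
    # centered finite difference of dN^2/dr
    return (N2(r + h) - N2(r - h)) / (2 * h)

rs = [i / 10.0 for i in range(5, 40)]            # r from 0.5 to 3.9
residual = max(abs(dN2(r) - rhs(r)) for r in rs)  # should be ~0
```

The residual vanishes to numerical accuracy over the whole sampled range.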
At this stage, with Λ = -1/l², one recognizes the structural functions presented for the charged case in [1]:

N^2 = N^2_{Q=0} + \frac{1}{2} Q A_0, \qquad N^\phi = -\frac{J}{2r^2},    (22)

A_0 = -Q \ln\!\left(\frac{r}{r_0}\right), \qquad E = -\frac{d}{dr} A_0.    (23)
Finally, substituting N φ (r) and E(r) into the Maxwell equation for N φ (r),
\frac{d}{dr}\left(r\,E\,N^\phi\right) = 0,    (24)
one arrives at the condition

\frac{QJ}{r^3} = 0.    (25)

Hence one has two possible cases: a) The electrovacuum plus lambda case with the charge different from zero, Q ≠ 0. In such a case J = 0 and thus N^φ(r) has to be a constant (reducible to zero); under these circumstances one obtains a correct static charged (2+1)-solution with structural functions N^2(r) = -M - Λr^2 - 2Q^2 ln r and N^φ(r) = 0. b) The vacuum plus lambda case, with Q = 0 and therefore E = 0. In this case the structural functions N^2(r) = -M + J^2/4r^2 - Λr^2 and N^φ(r) = -J/2r^2 give rise to the non-diagonal metric reported in the BTZ paper. Thus one arrives at the main result of this comment: within the BTZ metric ansatz (4) there are no charged non-diagonal solutions to the Einstein-Maxwell plus lambda equations in (2+1)-dimensions.

The equation (14) gives no further conditions. To treat correctly the rotating charged plus lambda case, one has to enlarge the class of metrics, namely to deal with a line element of the form

ds^2 = -f\,dt^2 + \frac{dr^2}{g} + R^2\left(d\phi + W\,dt\right)^2,    (26)

where f = f(r), g = g(r), R = R(r) and W = W(r) are unknown functions of the variable r. One can use the freedom in the choice of the r variable to fix one of the functions above. The electromagnetic tensor can be assumed to be of the form

F_{ab} = E(r)\left(\delta^t_a \delta^r_b - \delta^r_a \delta^t_b\right) + B(r)\left(\delta^\phi_a \delta^r_b - \delta^r_a \delta^\phi_b\right).    (27)

When this work was in the refereeing process at PRL, the author learnt from the referee that a footnote in this respect appeared in [5]. This work was supported in part by FONDECYT (Chile)-1980891 and CONACYT (México)-3692P-E9607.
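The obstruction (25) is easy to check numerically. The sketch below (ours, not part of the original comment; Q, J and r are arbitrary nonzero test values) differentiates r E N^φ with E = Q/r and N^φ = -J/2r² and confirms that the Maxwell equation d(r E N^φ)/dr = 0 fails unless QJ = 0.

```python
Q, J = 1.3, 0.7        # arbitrary nonzero charge and angular momentum

def E(r):
    return Q / r                   # eq. (8)/(12)

def N_phi(r):
    return -J / (2.0 * r**2)       # eq. (17) with N^phi_0 = 0

def d_rENphi(r, h=1e-6):
    # centered finite difference of d/dr (r E N^phi)
    f = lambda s: s * E(s) * N_phi(s)
    return (f(r + h) - f(r - h)) / (2.0 * h)

r = 2.0
lhs = d_rENphi(r)          # would have to vanish if Maxwell eq. (24) held
expected = Q * J / r**3    # the obstruction of eq. (25)
```

Numerically lhs equals QJ/r³, which is nonzero whenever both Q and J are nonzero, exactly as the comment argues.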
[1] M. Bañados, C. Teitelboim and J. Zanelli, Phys. Rev. Lett. 69 (1992) 1849.
[2] S. Carlip, "Lectures on (2+1)-dimensional Gravity", Davis preprint UCD-95-6, gr-qc/9503024.
[3] R. Mann, "Lower Dimensional Black Holes: Inside and out", gr-qc/9501038.
[4] V. Frolov, S. Hendy and A.L. Larsen, Nucl. Phys. B 468 (1996) 336.
[5] M. Kamata and T. Koikawa, Phys. Lett. B 353 (1995) 196.
| [] |
[
"Thermal Model Description of Strangeness Enhancement at Mid-Rapidity in Pb-Pb collisions at 158 GeV A/c",
"Thermal Model Description of Strangeness Enhancement at Mid-Rapidity in Pb-Pb collisions at 158 GeV A/c"
] | [
"Sahal Yacoob \nPhysics Department\nUniversity of Cape Town Private\n7700Bag, RondeboschSouth Africa\n",
"Jean Cleymans \nPhysics Department\nUniversity of Cape Town Private\n7700Bag, RondeboschSouth Africa\n"
] | [
"Physics Department\nUniversity of Cape Town Private\n7700Bag, RondeboschSouth Africa",
"Physics Department\nUniversity of Cape Town Private\n7700Bag, RondeboschSouth Africa"
] | [] | The results of the WA97 collaboration for strange particle production at mid-rapidity in Pb-Pb collisions at 158 GeV·A/c at CERN display a strong strangeness enhancement with system size at mid-rapidity which is dependent on the strangeness of the particle concerned, and saturates at values of participating nucleons greater than 120. These results are phenomenologically described by the mixed canonical ensemble, with canonical (exact) strangeness conservation involving all strange resonances, and grand canonical conservation of charge and baryon number. A detailed quantitative analysis shows that the data are well described by an equilibrium (γS ≡ 1) hadron gas.PACS numbers: | null | [
"https://export.arxiv.org/pdf/hep-ph/0208246v2.pdf"
] | 118,080,408 | hep-ph/0208246 | f900c84964a6239a522b696034603b7664fe1b3a |
Thermal Model Description of Strangeness Enhancement at Mid-Rapidity in Pb-Pb collisions at 158 GeV A/c
arXiv:hep-ph/0208246v2 3 Oct 2002
Sahal Yacoob
Physics Department
University of Cape Town Private
7700Bag, RondeboschSouth Africa
Jean Cleymans
Physics Department
University of Cape Town Private
7700Bag, RondeboschSouth Africa
Thermal Model Description of Strangeness Enhancement at Mid-Rapidity in Pb-Pb collisions at 158 GeV A/c
arXiv:hep-ph/0208246v2 3 Oct 2002(Dated: August 26,2002)PACS numbers:
The results of the WA97 collaboration for strange particle production at mid-rapidity in Pb-Pb collisions at 158 GeV·A/c at CERN display a strong strangeness enhancement with system size at mid-rapidity which is dependent on the strangeness of the particle concerned, and saturates at values of participating nucleons greater than 120. These results are phenomenologically described by the mixed canonical ensemble, with canonical (exact) strangeness conservation involving all strange resonances, and grand canonical conservation of charge and baryon number. A detailed quantitative analysis shows that the data are well described by an equilibrium (γS ≡ 1) hadron gas.PACS numbers:
I. INTRODUCTION
The WA97 collaboration [1] at CERN has shown that strange particle yields per wounded nucleon reach a saturation level for the most central Pb-Pb collisions (the Pb-Pb data is presented in 4 centrality bins) and show a pronounced increase when compared to a p-Be system. A mixed canonical description of these data has been published by Hamieh et al. [2,3], who showed that the density of strange particles derived from such a treatment depends on the size of the system, consistent with the WA97 observations [1]. In addition Hamieh et al. [2] have shown that if reasonable values are used for the thermal parameters of each system, the behavior of the yield (in the Pb-Pb systems) per wounded nucleon normalized to the p-Be yield is in agreement with the values obtained by the WA97 collaboration. The present work also uses the mixed canonical ensemble [4], and presents a more thorough analysis in that the thermal parameters for each system are obtained by minimizing χ², and a more explicit comparison between model and data is performed. The dependence of the parameters on the size of the system is presented, as well as the yields per wounded nucleon relative to p-Be.
II. MIXED CANONICAL FORMALISM
The mixed canonical partition function for a hadron gas conserving strangeness exactly is described by the * Also at Department of Physics and Astronomy, Northwestern University; Electronic address: [email protected] † Electronic address: [email protected] following partition function [5]:
Z^C_{S=0} = \frac{1}{2\pi} \int_{-\pi}^{\pi} d\phi\, \exp\!\left(\sum_{n=-3}^{3} S_n e^{in\phi}\right),    (1)
where S_n = V \sum_k Z^1_k, V is the volume, and the sum runs over all particles and resonances carrying strangeness n. For a particle of mass m_k, with spin-isospin degeneracy factor g_k, carrying baryon number B_k and charge Q_k, with baryon chemical potential µ_B and charge chemical potential µ_Q, the one-particle partition function is expressed in the Boltzmann approximation as:

Z^1_k \equiv \frac{g_k}{2\pi^2}\, m_k^2\, T\, K_2\!\left(\frac{m_k}{T}\right) \exp\!\left(\frac{B_k \mu_B + Q_k \mu_Q}{T}\right).    (2)
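For orientation, Eq. (2) is straightforward to evaluate. The sketch below is ours (natural units, ħ = c = k_B = 1, all energies in MeV, with the chemical potentials entering through the usual fugacity exp(µ/T)); K₂ is computed from its integral representation so that only the Python standard library is needed, and the one-particle function is evaluated for a kaon at an illustrative temperature.

```python
import math

def K2(x, steps=2000, tmax=20.0):
    # modified Bessel function of the second kind via the integral
    # representation K_n(x) = \int_0^inf exp(-x cosh t) cosh(n t) dt,
    # evaluated with the trapezoidal rule (integrand decays very fast)
    h = tmax / steps
    total = 0.5 * math.exp(-x)                       # t = 0 endpoint
    for i in range(1, steps):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(2.0 * t)
    total += 0.5 * math.exp(-x * math.cosh(tmax)) * math.cosh(2.0 * tmax)
    return total * h

def Z1(g, m, T, B=0, Qc=0, muB=0.0, muQ=0.0):
    # eq. (2): Z^1_k = g_k/(2 pi^2) m_k^2 T K_2(m_k/T) exp[(B muB + Q muQ)/T]
    return g / (2.0 * math.pi**2) * m**2 * T * K2(m / T) \
        * math.exp((B * muB + Qc * muQ) / T)

zK = Z1(g=1, m=493.7, T=160.0)   # K+ meson at an illustrative T (MeV)
```

Multiplying Z¹_k by the volume reproduces the terms S_n entering Eq. (1).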
As described in Ref. [4] the partition function for this ensemble may be rewritten in the following form which is well suited for numerical calculation:
Z^C_{S=0} = e^{S_0} \sum_{n=-\infty}^{\infty} \sum_{p=-\infty}^{\infty} a_3^{\,p}\, a_2^{\,n}\, a_1^{\,-2n-3p}\, I_n(x_2)\, I_p(x_3)\, I_{-2n-3p}(x_1),    (3)
where

a_i = \sqrt{S_i / S_{-i}},    (4)

x_i = 2\sqrt{S_i\, S_{-i}},    (5)
and I_i are modified Bessel functions. The expression for the particle density, n_i, may be obtained from the partition function Eq. (3) by following the standard method [5]. For a particle i having strangeness s the result is

n_i = \frac{Z^1_i}{Z^C_{S=0}}\, e^{S_0} \sum_{n=-\infty}^{\infty} \sum_{p=-\infty}^{\infty} a_3^{\,p}\, a_2^{\,n}\, a_1^{\,-2n-3p-s}\, I_n(x_2)\, I_p(x_3)\, I_{-2n-3p-s}(x_1).    (6)
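To illustrate how Eq. (6) produces the volume-dependent suppression, consider the simplest truncation (ours, not the full calculation): for a strangeness-neutral gas containing only |s| = 1 hadrons, S₁ = S₋₁, so a₁ = 1 and Eq. (6) reduces to the grand canonical density multiplied by the familiar factor I₁(x₁)/I₀(x₁), which is small at small volume (x₁ ∝ V) and tends to 1 in the grand canonical limit.

```python
import math

def In(n, x, steps=2000):
    # modified Bessel function of the first kind via the integral
    # representation I_n(x) = (1/pi) \int_0^pi exp(x cos t) cos(n t) dt
    h = math.pi / steps
    total = 0.5 * (math.exp(x) + math.exp(-x) * math.cos(n * math.pi))
    for i in range(1, steps):
        t = i * h
        total += math.exp(x * math.cos(t)) * math.cos(n * t)
    return total * h / math.pi

def suppression(x1):
    # canonical/grand-canonical density ratio for an s = 1 particle
    # in the one-strange-flavour truncation of eq. (6)
    return In(1, x1) / In(0, x1)

small, large = suppression(0.1), suppression(50.0)
# small volume: strong suppression (~x1/2); large volume: ratio -> 1
```

This is exactly the mechanism invoked below to explain the rise of the yields per wounded nucleon from p+Be to Pb+Pb.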
III. RESULTS
In this section we explore whether canonical strangeness suppression at small volumes (compared to grand canonical equilibrium particle yields) is able to explain the enhancement of strange particle yields per wounded nucleon from small to large systems, as measured by the WA97 collaboration at CERN [1]. The thermal model is well suited to 4π integrated data and to the central rapidity region if a boost-invariant plateau exists around mid-rapidity [6]. We assume the validity of the latter in order to analyze the particle yields. A thorough comparison of 4π particle numbers with the thermal model has been made in Ref. [7]. As has been the case in other calculations [8,9], our findings at mid-rapidity are found to be consistent with full strangeness equilibration (i.e. γ_s = 1). This is in contrast to 4π integrated particle yields, which are incompatible with full strangeness chemical equilibrium and deviate from it by several standard deviations. Experimentally, ratios of particle to anti-particle yields [15] at mid-rapidity [16] have been compared to 4π integrated yields and shown to be in agreement for S+S collisions at 200 A GeV by the CERN NA35 collaboration [10]. In addition, and perhaps of greater interest, the 4π integrated ratio Ξ̄/Ξ measured by CERN NA49 for Pb+Pb collisions at 158 A GeV has been shown to agree with the corresponding CERN WA97 mid-rapidity ratio [11].
As the mixed canonical formalism has been derived here for the case of Boltzmann statistics, it is worth noting that the second term in a series expansion of the correct quantum statistical distribution function for kaons gives a correction to the kaon numbers of the order of 3% at a temperature of 150 MeV. The corrections to the Boltzmann distribution functions due to quantum statistics are expected to affect the kaons more than any other strange particle, as they are the lightest of the strange particles. With errors of 3% and less, the use of Boltzmann statistics to describe the strange particles is justified. For all mixed canonical analyses the parameter µ_Q fitted to zero, and it has subsequently been removed as a free parameter. For a 4π integrated Pb+Pb system, µ_Q is expected to be small and negative.
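The size of the quantum-statistics correction quoted above can be estimated directly: in the Bose series n ∝ Σ_l (1/l) m² T K₂(l m/T) e^{l µ/T}, the l = 2 term measures the deviation from the Boltzmann approximation. The sketch below is ours (µ_S = 100 MeV is an illustrative strangeness chemical potential, and K₂ is computed by quadrature from its integral representation); it finds a correction at the few-per-cent level for kaons at T = 150 MeV, in line with the figure quoted in the text.

```python
import math

def K2(x, steps=2000, tmax=20.0):
    # K_2(x) = \int_0^inf exp(-x cosh t) cosh(2 t) dt  (trapezoidal rule)
    h = tmax / steps
    total = 0.5 * math.exp(-x)
    for i in range(1, steps + 1):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(2.0 * t)
    return total * h

mK, T, muS = 493.7, 150.0, 100.0   # MeV; muS is an illustrative value

def bose_term(l):
    # l-th term of the Bose series for the kaon density (common factors dropped)
    return (1.0 / l) * mK**2 * T * K2(l * mK / T) * math.exp(l * muS / T)

correction = bose_term(2) / bose_term(1)   # relative size of the l = 2 term
```

For heavier strange particles the ratio K₂(2m/T)/K₂(m/T) drops further, which is why kaons set the worst case.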
A. Analysis A
The main results of analysis A are based on fitting the ratios of the strange particle yields to the yield of negatives for each case in Table I. The negatives yield is used in each ratio because of its good statistics; in addition, it provides a measure of the entropy of the collision. For negative pions, the quantum mechanical correction to the Boltzmann distribution function at 150 MeV is of the order of 30%. It is unacceptable to ignore this correction when attempting to describe the data. Considering that the non-strange particle yields predicted by the model are independent of the exact conservation of strangeness, the full quantum mechanical grand canonical particle number expression is evaluated for all non-strange particles. The parameters obtained in this way are shown in Table I as analysis A [17]. The χ² for each system is good, and there are three (two) degrees of freedom for the p+Pb and Pb+Pb systems (p+Be system). The large uncertainties in the radius parameter for the Pb+Pb systems show that we have reached the grand canonical limit and have no volume dependence; this is discussed briefly in a later section.
B. Analysis B
Analysis B has only three data points (Λ̄/Λ, Ξ̄/Ξ and Ξ/Λ) for each fit. This analysis has been motivated by the reasons mentioned previously in the text. With three fitted parameters it has no remaining degrees of freedom, and it shows reasonable agreement with analysis A. The large uncertainties of the fitted parameters are unavoidable. Interestingly, fitting these ratios proves impossible for the p+Pb system. The hypothesis of [2] of a special 'interaction volume' for strange particles in this system cannot be tested, as the particle ratios would depend only on this interaction volume, the regular volume cancelling in the ratio of the particle multiplicities.
C. Analysis C

Analysis C determines the fits to the data using the grand canonical ensemble with quantum statistics. In this case the volume dependence cancels out in the particle ratios, and we have included µ_S and µ_Q. This procedure fails for the p+Be system; as may be expected for a system of this size, a canonical treatment [12] is required. In this analysis the parameters fitted for Bin 4 immediately catch the eye, with high uncertainty values even though χ² < 1. This is especially puzzling when one notices that this is not the case in analyses A or B. The T and µ_B parameters agree with those obtained by Becattini et al. [7] for a grand canonical description of the 4π-integrated data for central symmetric Pb+Pb collisions at 158 A GeV by the CERN NA49 collaboration. The parameters in analyses A, B, and C agree with each other within one standard deviation.
D. Thermal Model Parameters
In order to fully reproduce the data obtained by the WA97 collaboration, it is necessary to determine a relationship between the parameters µB, the radius R and the temperature T obtained from the fits of analysis A and the number of wounded nucleons. The fits used are shown in Figs. 1 (temperature) and 2 (µB), together with the parameters obtained from a full Boltzmann treatment [18]. The volume parameterisation is described by equation (7). The parameters obtained from the p+Pb data have not been considered because of their prediction of a volume smaller than that of the p+Be system.

Analysis C fit parameters (continuation of Table I; columns p+Be, p+Pb, Pb+Pb Bin1-Bin4):
 T    -    143±8      173±20    165±4    166±5    171±20
 µB   -    205±106    282±96    227±39   203±45   265±120
 µQ   -    -109±52    -54±68    0±50     0±60     -62±86
 µS   -    105±63     110±73    54±25    39±28    94±97
Temperature
As may be seen in Fig. 1, it appears initially that the temperature increases with centrality. However, the last data point does not fit this hypothesis, and the variation of the chemical freeze-out temperature with the number of participants has been assumed to be constant (T = 163 MeV). This is in agreement with the fit by Becattini et al. [7] to the central Pb+Pb NA49 data. This temperature value is slightly less than the one obtained by Hamieh et al. [2].
Radius
In order to reproduce the WA97 data, knowledge of the variation of system size with average number of wounded nucleons is required. We have chosen a function of the form:
\langle N_{\mathrm{wound}} \rangle = R^3 + b    (7)
with b < 0.5 adjusted to reproduce the data of the WA97 collaboration (Figs. 3 and 4). This differs from the dependence of the radius on A_part assumed by Hamieh et al. [2], where A_part ∼ 1.3-1.7 R³. The large uncertainty in the radius for the larger systems is not surprising, as the ratios being fitted are expected to show a small volume dependence, due only to canonical strangeness suppression. The effects of canonical strangeness suppression decrease with system size as one approaches the grand canonical limit, where ratios of particle multiplicities have no volume dependence.

Baryon Chemical Potential

Figure 2 shows the variation of the baryon chemical potential with system size. This function increases rapidly before saturating. All the Pb+Pb bins are described by the saturation value of approximately 210 MeV. The values of µB obtained in the central Pb+Pb bin are in agreement with that of Becattini et al. [7]. The µB used for the Pb+Pb bins by Hamieh et al. [2] is just outside one standard deviation of the values obtained in this analysis. For the p+Be system the value of µB used in Ref. [2] (150 MeV) is much larger than that obtained in this analysis (111±7 MeV). The exact variation of µB, T and R in the range between the p+Be and Pb+Pb systems may differ from what is shown here, and new data from CERN NA57 in this range are eagerly awaited. The figures mentioned above (1 and 2) also show the variation of the parameters T and µB with centrality in the case where all particle multiplicities are assumed to be Boltzmann. These parameters are seen to be in agreement with the parameters from Analysis A. The values of the χ² parameter obtained for the purely Boltzmann fits were of order 1 for all systems considered.
E. Predicted Evolution of Particle Yields
Figs. 3 and 4 show the ability of the model to reproduce the data, using the functions in Figs. 1 and 2 and equation (7) to predict the variation of the chemical freeze-out temperature T, baryon chemical potential µB, and radius R with the average number of wounded nucleons. As is clear, the agreement of the model with the experimental data is good. The exact shape of the thermal model predictions shown in Figs. 3 and 4 depends strongly on the exact relationship between the number of wounded nucleons and the system size, and more weakly on the functional fits to the other two parameters. This is rather unfortunate, as the fitted R values have uncertainties comparable to their magnitude, so this relationship is not constrained by the model.
IV. CONCLUSIONS
A thermal model conserving baryon number and charge on average, and strangeness exactly, has been used to describe particle yields from heavy ion collisions. The formalism includes all strange particles. The model has been applied to the CERN WA97 data and is shown to be able to reproduce the data for all centrality classes (Figs. 3 and 4). The accuracy of the model parameters (temperature T, radius R, and baryon chemical potential µB) is restricted by the lack of 4π integrated particle yields for multi-strange particles. In order to reproduce the WA97 graphical presentation of their data accurately, some knowledge of the variation of T, R, and µB with centrality was required. A functional form of the variation of these parameters with the number of wounded nucleons has been determined and is presented in Figs. 1 and 2 and equation (7).
The parameters obtained in this model for the central Pb+Pb bins are in agreement with the 4π thermal model application of Becattini et al. [7]. This lends some weight to the accuracy of the model when fitting particle ratios in a limited kinematic region. That grand canonical model includes a strangeness saturation factor γ_S ≠ 1 to predict the yields of strange particles. To differentiate between canonical strangeness suppression and suppression of strange particles by anomalous phase space occupancy [19], the yield of the φ particle could be used. Yields of this particle are not sensitive to canonical strangeness suppression, but as it contains an s and an s̄ quark, these yields will be sensitive to γ_S ≠ 1.
The enhancement of strange particles measured by CERN WA97 has been considered a signal for QGP formation. At first glance, the ability of a full equilibrium thermal model to reproduce the data suggests the existence of a deconfined state, since equilibrium strange particle yields have been proposed as a possible signal for deconfinement [13]. A deconfined phase is not, however, expected to be formed in the p+Be system, as it is only expected in large dense systems. The ability of the model to reproduce the p+Be data suggests that it is possible for strange particles to reach equilibrium yields by hadronic interactions alone.
FIG. 1: Variation of the chemical freeze-out temperature (T) of the system with the number of wounded nucleons. (Curves: T assuming full Boltzmann statistics; T assuming quantum statistics for non-strange particles; the T function used to reproduce the WA97 data.)

FIG. 2: Variation of the baryon chemical potential (µB) with radius. (Curves: µB assuming full Boltzmann statistics; µB assuming quantum statistics for non-strange particles; the µB function used to reproduce the WA97 data.)

FIG. 3: Comparison of the hadron gas model with exact strangeness conservation and CERN WA97 data for negatives and strange particles.

FIG. 4: Comparison of the hadron gas model with exact strangeness conservation and CERN WA97 data for the Ω and strange anti-particles.
TABLE I: Parameters obtained for various analyses as described in the text. The temperature (T) and chemical potential (µB) are in MeV. The radii (R) are given in fm. If χ² is greater than 50, the fit parameters are not shown.

        p+Be        p+Pb        Bin1        Bin2        Bin3        Bin4   (Pb+Pb bins)

Analysis A: Ratios of Strange Particle to Negatives Multiplicities
 χ²     0.756       1.85        2.07        0.427       1.58        1.45
 T      162±4       172±2       161±4       164±4       166±4       159±5
 µB     111±7       157±39      201±25      230±38      228±28      204±30
 R      1.39±0.14   1.16±0.39   6.81±8.73   9.88±10.9   6.78±9.39   10.4±6.8

Analysis B: Ratios of Particle Yields to that of their Anti-Particles
 χ²     ∼10^-6      60287       ∼10^-7      ∼10^-4      ∼10^-3      ∼10^-2
 T      157±25      -           171±17      161±15      153±35      163±45
 µB     106±31      -           229±55      222±46      190±61      220±30
 R      1.50±0.83   -           2.17±2.03   6.06±8.92   9.38±7.12   8.08±6.83

Analysis C: Grand Canonical Fit
 χ²     78          17.5        1.53        0.15        0.70        0.97
Acknowledgments

Sahal Yacoob wishes to thank the National Research Foundation (NRF) of South Africa for supporting this work.
[1] F. Antinori et al., Nucl. Phys. A 661, 130c (1999).
[2] S. Hamieh, K. Redlich, and A. Tounsi, Phys. Lett. B 486, 61 (2000).
[3] K. Redlich, Nucl. Phys. A 698, 94 (2002).
[4] P. Braun-Munzinger, J. Cleymans, H. Oeschler, and K. Redlich, Nucl. Phys. A 697, 902 (2002).
[5] J. Cleymans, K. Redlich, and E. Suhonen, Z. Phys. C 76, 269 (1997).
[6] J. Cleymans and K. Redlich, Phys. Rev. C 60, 054908 (1999).
[7] F. Becattini, J. Cleymans, A. Keranen, K. Redlich, and E. Suhonen, Phys. Rev. C 63, 649 (2001).
[8] P. Braun-Munzinger and J. Stachel, Nucl. Phys. A 606, 320 (1996).
[9] P. Braun-Munzinger and J. Stachel, Nucl. Phys. A 654, 119c (1999).
[10] J. Sollfrank et al., Z. Phys. C 61, 659 (1994).
[11] R. Barton et al., J. Phys. G 27, 367 (2001).
[12] J. Cleymans, A. Keranen, M. Marais, and E. Suhonen, Phys. Rev. C 56, 2747 (1997).
[13] J. Rafelski and B. Muller, Phys. Rev. Lett. 48, 1066 (1982).
[14] G. D. Yen and M. I. Gorenstein, Phys. Rev. C 59, 2788 (1999).
[15] The reason for checking particle to anti-particle ratios is that the masses of the particles in the ratio are equal; this leads to these particles being equally affected by flow, which in turn minimizes the errors introduced by considering a limited kinematic region [14].
[16] In symmetric collisions the majority of new particles are produced at mid-rapidity.
[17] The table also includes the χ² and fit parameters for two other analyses described in detail below.
[18] Using Boltzmann statistics for strange and non-strange particles.
| [] |
[
"Acoustic Frequency Multiplication and Pure Second Harmonic Generation of Phonons by Magnetic Transducers",
"Acoustic Frequency Multiplication and Pure Second Harmonic Generation of Phonons by Magnetic Transducers",
"Acoustic Frequency Multiplication and Pure Second Harmonic Generation of Phonons by Magnetic Transducers",
"Acoustic Frequency Multiplication and Pure Second Harmonic Generation of Phonons by Magnetic Transducers"
] | [
"Chengyuan Cai \nSchool of Physics\nHuazhong University of Science and Technology\n430074WuhanChina\n",
"Xi-Han Zhou \nSchool of Physics\nHuazhong University of Science and Technology\n430074WuhanChina\n",
"Weichao Yu \nState Key Laboratory of Surface Physics and Institute for Nanoelectronic Devices and Quantum Computing\nFudan University\n200433ShanghaiChina\n\nZhangjiang Fudan International Innovation Center\nFudan University\n201210ShanghaiChina\n",
"Tao Yu \nSchool of Physics\nHuazhong University of Science and Technology\n430074WuhanChina\n",
"Chengyuan Cai \nSchool of Physics\nHuazhong University of Science and Technology\n430074WuhanChina\n",
"Xi-Han Zhou \nSchool of Physics\nHuazhong University of Science and Technology\n430074WuhanChina\n",
"Weichao Yu \nState Key Laboratory of Surface Physics and Institute for Nanoelectronic Devices and Quantum Computing\nFudan University\n200433ShanghaiChina\n\nZhangjiang Fudan International Innovation Center\nFudan University\n201210ShanghaiChina\n",
"Tao Yu \nSchool of Physics\nHuazhong University of Science and Technology\n430074WuhanChina\n"
] | [
"School of Physics\nHuazhong University of Science and Technology\n430074WuhanChina",
"School of Physics\nHuazhong University of Science and Technology\n430074WuhanChina",
"State Key Laboratory of Surface Physics and Institute for Nanoelectronic Devices and Quantum Computing\nFudan University\n200433ShanghaiChina",
"Zhangjiang Fudan International Innovation Center\nFudan University\n201210ShanghaiChina",
"School of Physics\nHuazhong University of Science and Technology\n430074WuhanChina",
"School of Physics\nHuazhong University of Science and Technology\n430074WuhanChina",
"School of Physics\nHuazhong University of Science and Technology\n430074WuhanChina",
"State Key Laboratory of Surface Physics and Institute for Nanoelectronic Devices and Quantum Computing\nFudan University\n200433ShanghaiChina",
"Zhangjiang Fudan International Innovation Center\nFudan University\n201210ShanghaiChina",
"School of Physics\nHuazhong University of Science and Technology\n430074WuhanChina"
] | [] | We predict frequency multiplication of surface acoustic waves in dielectric substrates via the ferromagnetic resonance of adjacent magnetic transducers when driven by microwaves. We find pure second harmonic generation (SHG) without any linear and third harmonic components by a magnetic nanowire. The SHG and linear phonon pumping are switched by varying the saturated magnetization direction of the wire, or resolved directionally when pumped by magnetic nano-disc. We address the high efficiency of SHG with comparable magnitude to that of linear response, as well as unique non-reciprocal phonon transport that is remarkably distinct in different phonon harmonics. Such acoustic frequency comb driven by microwaves should bring unprecedented tunability for the miniaturized phononic and spintronic devices. | 10.1103/physrevb.107.l100410 | [
"https://export.arxiv.org/pdf/2212.03451v1.pdf"
] | 254,366,330 | 2212.03451 | 1941308df199eadbd4cdf01501f842fe40e8eb62 |
Acoustic Frequency Multiplication and Pure Second Harmonic Generation of Phonons by Magnetic Transducers
(Dated: December 8, 2022)
Chengyuan Cai
School of Physics
Huazhong University of Science and Technology
430074WuhanChina
Xi-Han Zhou
School of Physics
Huazhong University of Science and Technology
430074WuhanChina
Weichao Yu
State Key Laboratory of Surface Physics and Institute for Nanoelectronic Devices and Quantum Computing
Fudan University
200433ShanghaiChina
Zhangjiang Fudan International Innovation Center
Fudan University
201210ShanghaiChina
Tao Yu
School of Physics
Huazhong University of Science and Technology
430074WuhanChina
Acoustic Frequency Multiplication and Pure Second Harmonic Generation of Phonons by Magnetic Transducers
(Dated: December 8, 2022)
We predict frequency multiplication of surface acoustic waves in dielectric substrates via the ferromagnetic resonance of adjacent magnetic transducers when driven by microwaves. We find pure second harmonic generation (SHG) without any linear and third harmonic components by a magnetic nanowire. The SHG and linear phonon pumping are switched by varying the saturated magnetization direction of the wire, or resolved directionally when pumped by magnetic nano-disc. We address the high efficiency of SHG with comparable magnitude to that of linear response, as well as unique non-reciprocal phonon transport that is remarkably distinct in different phonon harmonics. Such acoustic frequency comb driven by microwaves should bring unprecedented tunability for the miniaturized phononic and spintronic devices.
Introduction.-Surface acoustic waves (SAWs) are important information carriers in phononic and electronic devices [1,2], and also act as excellent information mediators for quantum communication in high-quality piezoelectric substrates [3][4][5]. Downscaled phononic devices rely on the generation of coherent phonons of above-GHz frequency and sub-micrometer wavelength, which arguably remains a challenge for the conventional electric approach, since its excitation efficiency is low and its energy consumption is high [6][7][8]. In contrast, the ferromagnetic resonance (FMR) of magnetic nanostructures can pump such phonons efficiently via magnetostriction in conventional dielectric substrates [9][10][11][12][13][14][15][16][17][18][19], which achieves efficient communication of spin information over millimeter distances. The inverse process, i.e., the modulated transmission of SAWs via magnetostriction, was verified decades ago in a magnetoelastic bilayer towards SAW-isolator functionality [20], but has recently attracted tremendous attention due to the remarkable nonreciprocity, or diode effect, with large on-off ratios observed in many ferromagnet|piezoelectric insulator heterostructures [21][22][23][24][25][26][27]. Most of these studies focus on the linear response, however, a regime that limits the tunability and maximal frequency of the resonant phonons.
High phonon harmonics in acoustic frequency multiplication, or the acoustic frequency comb, operate at a higher frequency and shorter wavelength than their linear component [28][29][30][31]. In crystals their coherent generation relies on the anharmonic interaction of the lattice and thus needs to exploit strong laser fields to achieve the demanding nonlinearity, which may cause unavoidable parasitic effects such as heating and dephasing. Second harmonic generation (SHG) of phonons in the terahertz frequency range was excited on ultrashort time scales [28,29], where strong laser pulses are exerted, as well as in the megahertz frequency range for ultrasonic waves [30]. Without piezoelectricity [31], achieving such nonlinearity for GHz phonons appears to be a formidable task. Different from the electric approach, nonlinear magnetization dynamics for frequency multiplication is easily accessible, energy-saving, and well controlled by microwaves [32][33][34][35][36][37].

Figure 1. Pure SHG 2ω of SAWs in conventional dielectric substrates via phonon pumping by the FMR of a magnetic nanowire of thickness d and width w that is launched by microwaves of frequency ω. The saturated magnetization Ms is biased by an external magnetic field, allowing the pure second harmonics of SAWs to mix with other components when it points away from the wire direction.
In this Letter, we predict acoustic frequency multiplication as well as pure SHG of SAWs of ∼ 10 GHz frequency in conventional non-piezoelectric dielectric substrates, in which the linear and third harmonics completely vanish, via the phonon pumping of adjacent magnetic transducers that are launched by microwaves, as sketched in Fig. 1 for the magnetic nanowire configuration (the wire is later replaced by a magnetic nano-disc). We can switch the pure SHG, when the saturated magnetization is along the wire direction, to the dominant linear phonon excitation, when the magnetization is biased along the wire normal, or realize their mixing flexibly with other, arbitrary magnetization directions. All such phenomena can be exhibited conveniently with an in-plane magnetized nano-disc, where the pure SHG and the linear response are resolved directionally. We find that the efficiency of such SHG is high since, with accessible magnetization nonlinearity, its magnitude is not smaller than that of the linear response, but the nonreciprocity appearing in the linear phonon pumping is strongly altered in the SHG due to the distinct dynamic magnetoelastic boundary conditions.

arXiv:2212.03451v1 [cond-mat.mes-hall] 7 Dec 2022
Model and acoustic frequency multiplication.-The magnetoelastic heterostructure that we consider contains a nano-magnet "M" of thickness d with an in-plane equilibrium magnetization M_s, such as a magnetic nanowire of width w or a magnetic nano-disc of radius r, and an adjacent thick dielectric substrate "N", which couple via the magnetostriction [11,15,17,25,38,39]

$$F_{\rm me}=\frac{1}{M_s^2}\int d\mathbf{r}\Big(B_{\|}\sum_i\varepsilon_{ii}M_i^2+B_{\perp}\sum_{i\neq j}\varepsilon_{ij}M_iM_j\Big),$$
where B_∥ and B_⊥ are the magnetoelastic coupling constants, {i, j} = {x, y, z} denote the spatial indices, and $\varepsilon_{ij}\equiv\frac{1}{2}\left(\partial u_i/\partial r_j+\partial u_j/\partial r_i\right)$ is the strain tensor defined via the displacement field u. Here we focus on the FMR of the nano-magnet [13,18], such that the precessing magnetization M(t) can be treated as a macrospin, which is governed by the Landau-Lifshitz-Gilbert (LLG) equation [40,41]:
$$\partial\mathbf{M}/\partial t=-\mu_0\gamma\,\mathbf{M}\times\mathbf{H}_{\rm eff}+\alpha\,(\mathbf{M}/M_s)\times\partial\mathbf{M}/\partial t,\qquad(1)$$
where μ_0 is the vacuum permeability, −γ is the electron gyromagnetic ratio, and α is the phenomenological damping constant [40]. The magnetization precesses around an effective magnetic field H_eff = H_app + H_d + H_ex + H_e that contains the external field H_app, including the static H_0 and dynamic h(t) contributions, the demagnetizing field H_d = (−N_xx M_x, −N_yy M_y, −N_zz M_z), where N_xx ≈ d/(d + w), N_yy ≈ 0, and N_zz ≈ w/(d + w) parameterize the demagnetization factors of the wire [42], the exchange field H_ex = A_ex ∇²M with the exchange stiffness A_ex, as well as the effective field due to the magnetostriction
$$H_{{\rm e},i}\equiv-\frac{1}{\mu_0}\frac{\delta F_{\rm me}}{\delta M_i(\mathbf{r})}=-\frac{2}{\mu_0 M_s^2}\sum_j\varepsilon_{ij}M_j\big[\delta_{ij}B_{\|}+(1-\delta_{ij})B_{\perp}\big].\qquad(2)$$
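To make the macrospin dynamics of Eq. (1) concrete, the following sketch integrates the LLG equation for the wire geometry with only the Zeeman and demagnetizing fields (exchange and magnetoelastic terms omitted); it is an illustrative toy, not the paper's full COMSOL model, with the parameter values taken from the text. The estimated precession frequency lands near the few-GHz FMR values quoted below.

```python
import numpy as np

# Minimal macrospin integration of the LLG equation (1) with Zeeman plus
# demagnetizing fields only; magnetostriction and exchange are omitted.
MU0 = 4e-7 * np.pi            # vacuum permeability (T*m/A)
GAMMA = 1.76e11               # electron gyromagnetic ratio (rad/(s*T))
MS = 0.177 / MU0              # saturation magnetization of YIG, mu0*Ms = 0.177 T
ALPHA = 1e-4                  # Gilbert damping
d, w = 80e-9, 150e-9          # wire thickness and width
NXX, NYY, NZZ = d/(d+w), 0.0, w/(d+w)   # approximate demagnetization factors
H0 = 0.1 / MU0                # static bias field along the wire (y) direction (A/m)

def h_eff(m):
    """Effective field: applied field plus demagnetizing field (A/m)."""
    return np.array([-NXX*m[0], H0 - NYY*m[1], -NZZ*m[2]])

def step(m, dt):
    """One explicit Euler step of the LLG equation, leading-order damping."""
    torque = -MU0*GAMMA*np.cross(m, h_eff(m))
    damping = (ALPHA/MS)*np.cross(m, torque)
    m = m + dt*(torque + damping)
    return m * MS/np.linalg.norm(m)     # keep |M| = Ms

m = np.array([0.05, 1.0, 0.0]); m *= MS/np.linalg.norm(m)
dt, steps = 1e-13, 20000
mx = np.empty(steps)
for i in range(steps):
    m = step(m, dt)
    mx[i] = m[0]

# Estimate the precession frequency from zero crossings of M_x; it should lie
# near the Kittel value mu0*gamma/(2*pi)*sqrt((H0+Nxx*Ms)(H0+Nzz*Ms)) ~ 5 GHz.
crossings = np.count_nonzero(np.diff(np.sign(mx)))
f_est = crossings / (2 * steps * dt)
```

The small transverse seed in the initial condition plays the role of the microwave kick; with the bias along the wire the estimated frequency is close to the 5.43 GHz quoted in the simulations.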
As a backaction, the magnetization also affects the static and dynamic strains of the elastic heterostructure. We have to distinguish the displacement fields in the nano-magnet u_M(r, t) and in the dielectric substrate u_N(r, t), as well as their different material densities ρ_M and ρ_N, in the elastic equations of motion [16,43-45]:
$$\rho_M\ddot{\mathbf{u}}_M=\nabla\cdot(\overleftrightarrow{\boldsymbol\sigma}_M+\overleftrightarrow{\boldsymbol\eta}),\qquad \rho_N\ddot{\mathbf{u}}_N=\nabla\cdot\overleftrightarrow{\boldsymbol\sigma}_N,\qquad(3)$$
which are governed by the mechanical stress tensor
$$\sigma^{N,M}_{ij}=\delta_{ij}\lambda^{N,M}\sum_l\varepsilon^{N,M}_{ll}+2\mu^{N,M}\varepsilon^{N,M}_{ij},\qquad(4)$$
where λ^{N,M} and μ^{N,M} are the associated Lamé constants, and the magnetization stress tensor inside the magnet
$$\eta_{ij}\equiv\frac{\partial F_{\rm me}}{\partial(\partial u_i/\partial r_j)}=\frac{M_iM_j}{M_s^2}\big[\delta_{ij}B_{\|}+(1-\delta_{ij})B_{\perp}\big].\qquad(5)$$
However, with uniform magnetization there is no net effect of $\overleftrightarrow{\boldsymbol\eta}$ inside the magnet since $\nabla\cdot\overleftrightarrow{\boldsymbol\eta}=0$. The phonon pumping effect thereby comes entirely from the static and dynamic magnetization stress at the boundary of the magnet, which enters the boundary conditions defined by the continuity of the force per unit area, or the stress vector, at the surfaces and interfaces [16,17,43-45]:
$$\overleftrightarrow{\boldsymbol\sigma}_N\cdot\mathbf{n}|_A=0,\quad(\overleftrightarrow{\boldsymbol\sigma}_M+\overleftrightarrow{\boldsymbol\eta})\cdot\mathbf{n}|_B=0,\quad(\overleftrightarrow{\boldsymbol\sigma}_M+\overleftrightarrow{\boldsymbol\eta})\cdot\mathbf{n}|_C=\overleftrightarrow{\boldsymbol\sigma}_N\cdot\mathbf{n}|_C.\qquad(6)$$
Here we denote the interface between the dielectric substrate and vacuum as "A", between the nano-magnet and vacuum as "B", and between the dielectric substrate and the nano-magnet as "C", such that n is the normal unit vector of each interface. When M_s is aligned to the wire ŷ-direction, the dynamic boundary magnetization stress to linear order in the fluctuating magnetization, $\overleftrightarrow{\boldsymbol\eta}\cdot\mathbf{n}|_C=(M_zB_\perp/M_s)\,\hat{\mathbf{y}}$, is along the wire direction, as is $\overleftrightarrow{\boldsymbol\eta}\cdot\mathbf{n}|_B$. It thereby excites no SAW propagating normal to the wire, since such a SAW carries no mechanical stress vector along the wire direction to match it. We thereby expect the absence of linear harmonics in the pumped SAWs propagating normally to the wire in this magnetic configuration.
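The claim that the linear-order boundary stress points along the wire can be checked directly from Eq. (5): for M = (0, M_y, M_z) with a small transverse M_z, the traction η·ẑ on the interface is (0, M_z B_⊥/M_s, O(M_z²)). A small numerical check, with the interface normal taken as ẑ for illustration:

```python
import numpy as np

# Traction eta.n from the magnetization stress tensor of Eq. (5), evaluated for
# a magnetization saturated along the wire (y) with a small transverse M_z.
B_par, B_perp = 3.48e5, 6.96e5      # magnetoelastic constants of YIG (J/m^3)
Ms = 1.0                            # normalized saturation magnetization

def eta(M):
    """eta_ij = M_i M_j [delta_ij B_par + (1 - delta_ij) B_perp] / Ms^2."""
    t = np.outer(M, M) * B_perp / Ms**2
    np.fill_diagonal(t, M**2 * B_par / Ms**2)   # diagonal uses B_par instead
    return t

Mz = 1e-4
M = np.array([0.0, np.sqrt(Ms**2 - Mz**2), Mz])
traction = eta(M) @ np.array([0.0, 0.0, 1.0])   # stress vector on the z-normal face
# traction ~ (0, Mz*B_perp/Ms, O(Mz^2)): along the wire direction to linear order
```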
We substantiate this expectation by numerical simulations, but allow an arbitrary in-plane saturated magnetization at an angle θ with respect to the wire normal x̂-direction (Fig. 1), which we find is nearly parallel to the external static field H_0 [46]. We combine the LLG equation (1) with the elastic equations of motion (3) under the boundary conditions (6) in COMSOL Multiphysics [47,48]. We choose the nano-magnet to be a yttrium iron garnet (YIG) nanowire [49] of thickness d = 80 nm, width w = 150 nm, and saturated magnetization μ_0 M_s = 0.177 T [13,18], biased by μ_0 H_0 = 0.1 T, which has high magnetic quality with α = 10⁻⁴. It is adjacent to a thick gadolinium gallium garnet (GGG) substrate of high acoustic quality [13,18]. Their elastic properties are close but not identical: for YIG, ρ_M = 5170 kg/m³, λ_M = 1.16 × 10¹¹ N/m², μ_M = 7.64 × 10¹⁰ N/m² [11], while for GGG, ρ_N = 7080 kg/m³, λ_N = 1.27 × 10¹¹ N/m², μ_N = 8.83 × 10¹⁰ N/m² [50]. They are coupled via magnetostriction with the coupling constants B_∥ = 3.48 × 10⁵ J/m³ and B_⊥ = 6.96 × 10⁵ J/m³ [11]. The sound velocity of the SAWs is c_r = 3271.8 m/s [51].
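For the quoted sound velocity, the pumped wavelengths are indeed sub-micrometer: with the linear dispersion ω_k = c_r|k|, the fundamental and second-harmonic wavelengths at ω_F/(2π) = 5.43 GHz follow directly (a quick numerical check):

```python
# SAW wavelengths from the linear dispersion omega_k = c_r * |k|, using the
# sound velocity and the highest FMR frequency quoted in the text.
c_r = 3271.8            # SAW velocity on GGG (m/s)
f_F = 5.43e9            # FMR drive frequency (Hz)

lam_1 = c_r / f_F       # fundamental wavelength at omega_F
lam_2 = c_r / (2*f_F)   # second-harmonic wavelength at 2*omega_F
# lam_1 ~ 0.60 um, lam_2 ~ 0.30 um: the SHG halves the wavelength
```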
We apply an in-plane broadband magnetic field transverse to the saturated magnetization, h(t) = h_0 sin(ω_F t) x̂′ with a short duration 0 ≤ t ≤ 2π/ω_F, where x̂′ ⊥ M_s ⊥ ẑ; we adjust the FMR frequency ω_F/(2π) = {5.43, 3.71, 2.29} GHz and the field strength μ_0 h_0 = {9.05, 6.20, 3.83} mT to ensure the pumped transverse magnetization M_z ≈ 0.15 M_s, i.e., a precession angle of ∼ 8.5°. Figure 2 plots the frequency multiplication of the SAWs up to the third harmonic generation (THG) by the FMR of the YIG nanowire in different magnetic configurations, characterized by the pumped displacement field u_z at the surface z = 0 (a-c), its Fourier components u_k (d-f), and the oscillation frequency and wave number resolved from the peaks in u_k in comparison to the SAW dispersion (g-i). The excellent agreement of the oscillation frequency and wave vector with the SAW dispersion ω_k = c_r|k| implies that the pumped elastic strain at the surface is dominated by SAWs. Static strains exist only near the nano-magnet and vanish when M_s is along the wire direction because of the absence of a static $\overleftrightarrow{\boldsymbol\eta}$.
One remarkable feature in Fig. 2 is that when the saturated magnetization is along the wire direction (θ = π/2), there is only the SHG of the SAWs propagating normally to the wire, without any linear and third harmonics, while when M_s is normal to the wire direction (θ = 0), the linear response (LR) dominates. The SHG comes entirely from the nonlinearity of the magnetic stress at the boundary and scales as h_0² in amplitude, and is thus distinguished from the anharmonicity of the lattice [28,29]. Mixing of the SHG with the other harmonics is realized when M_s is rotated away from the parallel configuration, e.g., θ = π/4 in Fig. 2(b,e,h). The THG is unique in that it comes from the interaction between magnons rather than from the nonlinear magnetic stress ∝ exp(2iω_F t) [46]. These features provide flexible tunability of the demanded phonon frequency via the direction and magnitude of the static magnetic field.
Pronounced nonreciprocity exists in the linear phonon pumping, as shown in Fig. 2(e) for θ = π/4. This nonreciprocity vanishes when the magnetization is normal to the wire direction, as in Fig. 2(f). These numerical results agree with the theoretical expectations from previous analytical solutions [15,16]. In the SHG, however, the nonreciprocity is generally suppressed in almost all magnetic configurations, as shown by the Fourier components with opposite momenta in Fig. 2(d,e,f).
Quantum formalism.-The phonon high harmonic generation simulated above can best be formulated in a quantum language. The magnetization operators
$$\hat M_{x,z}\simeq-\sqrt{2\gamma\hbar M_s}\,\big(M_{x,z}\hat m+M^*_{x,z}\hat m^\dagger\big)+O(\hat m^3)$$
and
$$\hat M_y\simeq M_s-\gamma\hbar\big(|M_z(\mathbf{r})|^2+|M_x(\mathbf{r})|^2\big)\hat m^\dagger\hat m-\gamma\hbar\big(M^{*2}_z(\mathbf{r})+M^{*2}_x(\mathbf{r})\big)\hat m^\dagger\hat m^\dagger+O(\hat m^3)$$
contain the linear Holstein-Primakoff expansion [52-57] of the Kittel magnon $\hat m$ and its dominant interactions, with strengths governed by the ellipticity of the eigenmodes $M_x=i\xi_m^2M_z$ [46], which are not circularly polarized when the form factor $\xi_m\neq1$. The eigenmodes $\mathbf{U}(x,z,k)$ of the SAWs in the elastic heterostructure contain both a near-field solution close to the magnet and a far-field solution for $|x|\gg w/2$ that converges asymptotically to that of SAWs [51]. In terms of them and the SAW operator $\hat p_k$, we quantize the displacement field
$$\hat{\mathbf{u}}(x,z,t)=\sum_k\Big[\mathbf{U}(x,z,k)\hat p_k+\mathbf{U}^*(x,z,k)\hat p^\dagger_k\Big].\qquad(7)$$
Substituting into the magnetostriction energy, we obtain the magnon-phonon coupling Hamiltonian
$$\hat H_c=\sum_{n\geq1}\sum_k g^{(n)}_k(\hat m^\dagger)^n\hat p_k+{\rm H.c.}$$
(refer to the Supplementary Material [46] for details), where the n-th order coupling constants $g^{(n)}_k$ rely on the near-field solution of the SAWs. $g^{(3)}_k$ vanishes for the circular precession $\xi_m=1$, and when M_s is parallel to the wire direction $g^{(1)}_k$ and $g^{(3)}_k$ vanish, leading to the pure SHG [46]. The linear fluctuation of $\hat m$ is thus responsible for the LR and SHG of the SAWs, while the double frequency in $\hat m^2$, present for elliptical precessions $\xi_m\neq1$, causes the THG. The interaction is "non-reciprocal" when $|g^{(n)}_k|\neq|g^{(n)}_{-k}|$.
Including the broadband microwave field h(t) = h_x(t)x̂ and the damping rates δ_m and δ_p of the magnons and phonons, the magnon and surface-phonon operators obey the Langevin equations [58,59]
$$\frac{d\hat m}{dt}=-i(\omega_F-i\delta_m)\hat m-i\sum_{n\geq1}\sum_k ng^{(n)}_k(\hat m^\dagger)^{n-1}\hat p_k-\sqrt{\frac{\mu_0\gamma M_sV}{2\hbar}}\,\xi_m h_x(t),$$
$$\frac{d\hat p_k}{dt}=-i(\omega_k-i\delta_p)\hat p_k-i\sum_{n\geq1}g^{(n)*}_k\hat m^n,\qquad(8)$$
where V = wdl is the wire's volume with length l. Here we focus on a large coherent pumping such that the magnon's thermal population is much smaller than that driven by the microwaves. To solve the nonlinear Eq. (8), we apply the mean-field approximation $\hat A\hat B\approx\langle\hat A\rangle\hat B+\hat A\langle\hat B\rangle$ for operators. Below we denote the ensemble average $\langle\hat A\rangle=A$. Disregarding the far-off-resonant excitation, we find in the frequency domain the coherent amplitudes of the SAWs
$$p_k(\omega)=G_k(\omega)\sum_{n\geq1}\int dt_1\,e^{i\omega t_1}g^{(n)*}_k m^n(t_1)\qquad(9)$$
contain all the harmonics of the coherent magnon amplitude, where the phonon Green function $G_k(\omega)=1/(\omega-\omega_k+i\delta_p)$. The magnon amplitude, on the other hand, should be self-consistently solved from the nonlinear equation, to the leading two orders in the coupling constants,
$$m(\omega)=\frac{1}{\omega-\omega_F+i\delta_m}\Big[\sum_kG_k(\omega)|g^{(1)}_k|^2m(\omega)+6\sum_k\int d\omega_1d\omega_2\,G_k(\omega+\omega_1)|g^{(2)}_k|^2m^*(\omega_2)m(\omega_1)m(\omega+\omega_2-\omega_1)-i\sqrt{\frac{\mu_0\gamma M_sV}{2\hbar}}\,\xi_m h_x(\omega)\Big].\qquad(10)$$
Treating the phonon's back action on the FMR as a perturbation, we pursue an iterative solution of Eq. (10). Substituting the unperturbed solution $m^{(0)}(\omega)\approx-i\sqrt{\mu_0\gamma M_sV/(2\hbar)}\,\xi_m h_x(\omega)/(\omega-\omega_F+i\delta_m)$ into (10), we arrive at
$$m(\omega)\approx\frac{-i\sqrt{\mu_0\gamma M_sV/(2\hbar)}\,\xi_m h_x(\omega)}{\omega-\omega_F+i\delta_m+\Sigma_L(\omega)+\Sigma_{NL}(\omega)},\qquad(11)$$
where $\Sigma_L(\omega)=-\sum_kG_k(\omega)|g^{(1)}_k|^2$ and $\Sigma_{NL}(\omega)=-6ln_m\sum_kG_k(\omega+\omega_F)|g^{(2)}_k|^2$ are self-energies contributed, respectively, by the linear and nonlinear phonon pumping, and $n_m\equiv\langle\hat m^\dagger(t)\hat m(t)\rangle/l=\big[\sqrt{\mu_0\gamma M_swd/(2\hbar)}\,\xi_m h_x(\omega_F)\big]^2$ is the pumped magnon number per unit wire length. Around the FMR, the imaginary parts of Σ, i.e., ${\rm Im}\,\Sigma_L(\omega_F)=\frac{L}{2c_r}\big(|g^{(1)}_{-k_r}|^2+|g^{(1)}_{k_r}|^2\big)$ and ${\rm Im}\,\Sigma_{NL}(\omega_F)=\frac{3Lln_m}{c_r}\big(|g^{(2)}_{-2k_r}|^2+|g^{(2)}_{2k_r}|^2\big)$, contribute the linear and nonlinear magnon dampings, where L is the substrate's length and $k_r=\omega_F/c_r$.
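The mean-field structure of Eq. (8) can be checked with a toy two-mode integration: a magnon amplitude driven at ω_F, coupled only through the second-order term $g^{(2)}(\hat m^\dagger)^2\hat p_k$ to one phonon mode at ω_k = 2ω_F. All numbers below are illustrative weak-coupling values, not fitted to the YIG|GGG device; the point is that the phonon amplitude then oscillates purely at 2ω_F.

```python
import numpy as np

# Toy mean-field version of Eq. (8): one magnon amplitude m (driven at omega_F)
# and one phonon amplitude p at omega_k = 2*omega_F, coupled only by the
# second-order term g2*(m*)^2 p + H.c.  Frequencies are in rad/ns.
wF = 2*np.pi*5.43            # magnon (FMR) frequency
wk = 2*wF                    # phonon mode resonant with the second harmonic
dm, dp = 0.1, 0.01           # magnon / phonon damping rates (illustrative)
g2, h = 1e-4, 1.0            # SHG coupling and drive strength (illustrative)

def rhs(t, y):
    m, p = y
    dmdt = -1j*(wF - 1j*dm)*m - 2j*g2*np.conj(m)*p - 1j*h*np.exp(-1j*wF*t)
    dpdt = -1j*(wk - 1j*dp)*p - 1j*np.conj(g2)*m*m
    return np.array([dmdt, dpdt])

dt, T = 1e-3, 40.0           # time step and total time (ns)
ts = np.arange(0.0, T, dt)
y = np.array([0j, 0j])
p_t = np.empty(len(ts), dtype=complex)
for i, t in enumerate(ts):   # classical RK4 on the coherent amplitudes
    k1 = rhs(t, y); k2 = rhs(t + dt/2, y + dt/2*k1)
    k3 = rhs(t + dt/2, y + dt/2*k2); k4 = rhs(t + dt, y + dt*k3)
    y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    p_t[i] = y[1]

spec = np.abs(np.fft.fft(p_t))
freq = 2*np.pi*np.fft.fftfreq(len(p_t), dt)   # rad/ns
f_peak = abs(freq[np.argmax(spec)])           # sits at 2*omega_F (pure SHG tone)
```

On resonance the steady magnon amplitude approaches |m| = h/δ_m, and the phonon is fed only by m², so its spectrum carries a single line at the second harmonic.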
Substituting the solutions $p_k(t)$ (9) and $m(t)$ (11) into the displacement field (7), we close the momentum integral in the upper (lower) half complex plane for x > 0 (x < 0). For x > 0, the displacement field
$$u_R=\frac{2L}{c_r}\sum_{n\geq1}{\rm Im}\Big[\mathbf{U}(z,nk_r)e^{ink_rx}g^{(n)*}_{nk_r}m^n(t)\Big]\qquad(12)$$
only appears on the right-hand side of the nanowire, while on its left-hand side
$$u_L=\frac{2L}{c_r}\sum_{n\geq1}{\rm Im}\Big[\mathbf{U}(z,-nk_r)e^{-ink_rx}g^{(n)*}_{-nk_r}m^n(t)\Big].\qquad(13)$$
Solutions (12) and (13) contain both the linear and nonlinear phonon pumping effects; $|u_L|\neq|u_R|$ for non-reciprocal couplings.
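From Eqs. (12) and (13), the left/right asymmetry of each harmonic is set by the corresponding pair of coupling constants. For the SHG, taking the coupling values quoted for Fig. 3 (attributed here to the ±2k_r second-harmonic channel) and assuming equal mode-profile magnitudes |U(z, ±2k_r)|, the amplitude ratio follows directly:

```python
# Left/right SHG amplitude asymmetry from Eqs. (12)-(13):
# |u_L|/|u_R| = |g(-2kr)|/|g(2kr)|, assuming equal mode-profile magnitudes
# |U(z, +-2k_r)|.  Coupling values are those quoted for Fig. 3.
g_plus, g_minus = 0.76e-3, 0.98e-3   # Hz, i.e. 0.76 mHz and 0.98 mHz

ratio = g_minus / g_plus                     # |u_L|/|u_R| for the second harmonic
contrast = (ratio**2 - 1) / (ratio**2 + 1)   # intensity nonreciprocity contrast
# ratio ~ 1.29: the SHG is only mildly nonreciprocal, consistent with Fig. 2(d-f)
```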
With the parameters of the simulation, the solutions (12) and (13) reproduce the numerical results well with
$$m(t)\rightarrow\sqrt{\frac{ldw}{2\gamma\hbar M_s}}\,\big[iM_x(t)/\xi_m-\xi_mM_z(t)\big],$$
calculated from Eq. (11) by disregarding the small damping, and proper coupling constants, as shown in Fig. 3. Our quantum formalism is thereby established for future studies of quantum communication with on-chip magnons [60-62] mediated by high-quality acoustic oscillations.
Directional SHG by magnetic nano-disc.-It is convenient to resolve the above direction-dependent phonon pumping with a magnetic nano-disc. We consider a YIG disc of thickness d = 80 nm and radius r = 150 nm on the GGG substrate, with the magnetization biased along the ŷ-direction by a static magnetic field μ_0 H_0 = 0.1 T. The demagnetization factors are N_xx = N_yy ≈ d/(2d + √π r) and N_zz ≈ √π r/(2d + √π r) [63]. We apply a magnetic-field pulse similar to the wire case, centered at the frequency ω_F = 4 GHz, such that the excited transverse magnetization M_z = 0.15 M_s. Figure 4 shows the pumped displacement fields u_z and u_x at the surface z = 0 of the substrate. There exists a special direction, indicated by the dashed line, that exhibits pure SHG without any linear and third harmonics when the pumped SAWs propagate normally to the M_s direction, similar to the case of the magnetic wires. The SHG mixes with the linear phonon pumping, however, when the SAWs propagate along the other directions.

Conclusion.-In conclusion, we predict an acoustic frequency comb with frequency multiplication of SAWs by magnetic transducers driven by microwaves. We further predict the conditions to realize pure acoustic SHG without any linear and third harmonics, a functionality beyond those achieved by the anharmonic interaction of the lattice [28-31]. Such a magnetic approach may overcome the difficulties of the electric technique in coherent phonon generation, since it allows high-frequency (> 10 GHz) excitation of phonons by microwaves with ultra-low energy consumption and unprecedented tunability via different magnetic configurations and material choices, and is thus particularly useful in miniaturized phononic, magnonic, and spintronic devices.
Figure 2. Acoustic frequency multiplication in GGG substrates by the FMR of an adjacent YIG nanowire in different magnetic configurations, plotted about 10 ns after the end of the magnetic-field pulse. The blue dashed lines in (a-c) indicate the static strain, which is pronounced only near the nano-magnet. When M_s is aligned to the wire direction with θ = π/2, there is only SHG in the pumped SAWs, as shown by the displacement field u_z at the surface z = 0 in (a), the resolved Fourier component u_k in (d), and the oscillation frequency and wave vector of u_z (crosses) compared with the SAW dispersion in (g). In the other magnetic configurations, with θ = π/4 [(b), (e), (h)] and θ = 0 [(c), (f), (i)], the LR, SHG, and THG coexist with flexible tunability by the magnetization direction.
Figure 3. Calculated SHG and linear phonon pumping from the analytical solutions (12) and (13) with the simulation parameters. We use $g^{(2)}_{2k_r}$ = 0.76 mHz and $g^{(2)}_{-2k_r}$ = 0.98 mHz in (a), and g …
Figure 4. Pumped displacement fields u_z (a) and u_x (b) at the surface z = 0 of the GGG substrate by the FMR of a YIG disc of thickness d = 80 nm and radius r = 150 nm, which is saturated along the ŷ-direction. The dashed line indicates the direction with pure SHG.
This work is financially supported by the National Natural Science Foundation of China under Grant No. 0214012051, and the startup grant of Huazhong University of Science and Technology (Grants No. 3004012185 and No. 3004012198). W.Y. is supported by National Natural Science Foundation of China under Grant No. 12204107, and Shanghai Science and Technology Committee (Grants No. 21PJ1401500 and No. 21JC1406200).
[1] E. A. Ash, A. A. Oliner, G. W. Farnell, H. M. Gerard, A. J. Slobodnik, and H. I. Smith, Acoustic Surface Waves, Topics in Applied Physics (Springer, Berlin, 2014).
[2] G. S. Kino, Acoustic Waves: Devices, Imaging, and Analog Signal Processing (Prentice-Hall, New Jersey, 1987).
[3] M. V. Gustafsson, T. Aref, A. F. Kockum, M. K. Ekstrom, G. Johansson, and P. Delsing, Propagating phonons coupled to an artificial atom, Science 346, 207 (2014).
[4] M. J. A. Schuetz, E. M. Kessler, G. Giedke, L. M. K. Vandersypen, M. D. Lukin, and J. I. Cirac, Universal quantum transducers based on surface acoustic waves, Phys. Rev. X 5, 031031 (2015).
[5] K. J. Satzinger, Y. P. Zhong, H.-S. Chang, G. A. Peairs, A. Bienfait, M.-H. Chou et al., Quantum control of surface acoustic-wave phonons, Nature (London) 563, 661 (2018).
[6] E. Dieulesaint and D. Royer, Elastic Waves in Solids II: Generation, Acousto-Optic Interaction, Applications (Springer, New York, 2000).
[7] P. Delsing et al., The 2019 surface acoustic waves roadmap, J. Phys. D: Appl. Phys. 52, 353001 (2019).
[8] Y. Cang, Y. Jin, B. Djafari-Rouhani, and G. Fytas, Fundamentals, progress and perspectives on high-frequency phononic crystals, J. Phys. D: Appl. Phys. 55, 193002 (2022).
[9] E. G. Spencer, R. T. Denton, and R. P. Chambers, Temperature dependence of microwave acoustic losses in yttrium iron garnet, Phys. Rev. 125, 1950 (1962).
[10] M. Dutoit, Microwave phonon attenuation in yttrium aluminum garnet and gadolinium gallium garnet, J. Appl. Phys. 45, 2836 (1974).
[11] S. Streib, H. Keshtgar, and G. E. W. Bauer, Damping of magnetization dynamics by phonon pumping, Phys. Rev. Lett. 121, 027202 (2018).
[12] O. S. Latcham, Y. I. Gusieva, A. V. Shytov, O. Y. Gorobets, and V. V. Kruglyak, Controlling acoustic waves using magneto-elastic Fano resonances, Appl. Phys. Lett. 115, 082403 (2019).
[13] K. An, A. N. Litvinenko, R. Kohno, A. A. Fuad, V. V. Naletov, L. Vila, U. Ebels, G. de Loubens, H. Hurdequint, N. Beaulieu, J. B. Youssef, N. Vukadinovic, G. E. W. Bauer, A. N. Slavin, V. S. Tiberkevich, and O. Klein, Coherent long-range transfer of angular momentum between magnon Kittel modes by phonons, Phys. Rev. B 101, 060407 (2020).
[14] A. Rückriegel and R. A. Duine, Long-range phonon spin transport in ferromagnet-nonmagnetic insulator heterostructures, Phys. Rev. Lett. 124, 117201 (2020).
[15] X. Zhang, G. E. W. Bauer, and T. Yu, Unidirectional pumping of phonons by magnetization dynamics, Phys. Rev. Lett. 125, 077203 (2020).
[16] T. Yu, Nonreciprocal surface magnetoelastic dynamics, Phys. Rev. B 102, 134417 (2020).
[17] K. Yamamoto, W. Yu, T. Yu, J. Puebla, M. Xu, S. Maekawa, and G. E. W. Bauer, Non-reciprocal pumping of surface acoustic waves by spin wave resonance, J. Phys. Soc. Jpn. 89, 113702 (2020).
[18] K. An, R. Kohno, A. N. Litvinenko, R. L. Seeger, V. V. Naletov, L. Vila, G. de Loubens, J. Ben Youssef, N. Vukadinovic, G. E. W. Bauer, A. N. Slavin, V. S. Tiberkevich, and O. Klein, Bright and dark states of two distant macrospins strongly coupled by phonons, Phys. Rev. X 12, 011060 (2022).
[19] T. Yu, Z. C. Luo, and G. E. W. Bauer, Chirality as generalized spin-orbit interaction in spintronics, arXiv:2206.05535.
[20] M. F. Lewis and E. Patterson, Acoustic-surface-wave isolator, Appl. Phys. Lett. 20, 276 (1972).
[21] M. Weiler, L. Dreher, C. Heeg, H. Huebl, R. Gross, M. S. Brandt, and S. T. B. Goennenwein, Elastically driven ferromagnetic resonance in nickel thin films, Phys. Rev. Lett. 106, 117601 (2011).
[22] M. Weiler, H. Huebl, F. S. Goerg, F. D. Czeschka, R. Gross, and S. T. B. Goennenwein, Spin pumping with coherent elastic waves, Phys. Rev. Lett. 108, 176601 (2012).
[23] R. Sasaki, Y. Nii, Y. Iguchi, and Y. Onose, Nonreciprocal propagation of surface acoustic wave in Ni/LiNbO3, Phys. Rev. B 95, 020407(R) (2017).
[24] M. R. Xu, K. Yamamoto, J. Puebla, K. Baumgaertl, B. Rana, K. Miura, H. Takahashi, D. Grundler, S. Maekawa, and Y. Otani, Nonreciprocal surface acoustic wave propagation via magneto-rotation coupling, Sci. Adv. 6, eabb1724 (2020).
[25] M. Küß, M. Heigl, L. Flacke, A. Hörner, M. Weiler, M. Albrecht, and A. Wixforth, Nonreciprocal Dzyaloshinskii-Moriya magnetoacoustic waves, Phys. Rev. Lett. 125, 217203 (2020).
[26] P. J. Shah, D. A. Bas, I. Lisenkov, A. Matyushov, N. Sun, and M. R. Page, Giant nonreciprocity of surface acoustic waves enabled by the magnetoelastic interaction, Sci. Adv. 6, eabc5648 (2020).
[27] R. Sasaki, Y. Nii, and Y. Onose, Magnetization control by angular momentum transfer from surface acoustic wave to ferromagnetic spin moments, Nat. Commun. 12, 2599 (2021).
[28] M. Först, C. Manzoni, S. Kaiser, Y. Tomioka, Y. Tokura, R. Merlin, and A. Cavalleri, Nonlinear phononics as an ultrafast route to lattice control, Nat. Phys. 7, 854 (2011).
[29] A. Bojahr, M. Gohlke, W. Leitenberger, J. Pudell, M. Reinhardt, A. von Reppert, M. Roessle, M. Sander, P. Gaal, and M. Bargheer, Second harmonic generation of nanoscale phonon wave packets, Phys. Rev. Lett. 115, 195502 (2015).
[30] A. Ganesan, C. Do, and A. Seshia, Phononic frequency comb via intrinsic three-wave mixing, Phys. Rev. Lett. 118, 033903 (2017).
[31] L. Shao, D. Zhu, M. Colangelo, D. Lee, N. Sinclair, Y. Hu, P. T. Rakich, K. Lai, K. K. Berggren, and M. Lončar, Electrical control of surface acoustic waves, Nat. Electron. 5, 348 (2022).
[32] A. Barman et al., The 2021 magnonics roadmap, J. Phys.: Condens. Matter 33, 413001 (2021).
[33] A. Brataas, B. van Wees, O. Klein, G. de Loubens, and M. Viret, Spin insulatronics, Phys. Rep. 885, 1 (2020).
[34] C. Koerner, R. Dreyer, M. Wagener, N. Liebing, H. G. Bauer, and G. Woltersdorf, Frequency multiplication by collective nanoscale spin-wave dynamics, Science 375, 1165 (2022).
[35] T. Hula, K. Schultheiss, F. J. T. Goncalves, L. Körber, M. Bejarano, M. Copus, L. Flacke, L. Liensberger, A. Buzdakov, A. Kákay, M. Weiler, R. Camley, J. Fassbender, and H. Schultheiss, Spin-wave frequency combs, Appl. Phys. Lett. 121, 112404 (2022).
[36] J. W. Rao, B. M. Yao, C. Y. Wang, C. Zhang, T. Yu, and W. Lu, Unveiling pump induced magnon mode via its strong interaction with Walker modes, arXiv:2204.04590.
[37] J. J. Carmiggelt, I. Bertelli, R. W. Mulder, A. Teepe, M. Elyasi, B. G. Simon, G. E. W. Bauer, Y. M. Blanter, and T. van der Sar, Broadband microwave detection using electron spins in a hybrid diamond-magnet sensor chip, arXiv:2206.07013.
[38] C. Kittel, Interaction of spin waves and ultrasonic waves in ferromagnetic crystals, Phys. Rev. 110, 836 (1958).
[39] Z. Tian, D. Sander, and J. Kirschner, Nonlinear magnetoelastic coupling of epitaxial layers of Fe, Co, and Ni on Ir(100), Phys. Rev. B 79, 024432 (2009).
[40] T. L. Gilbert, A phenomenological theory of damping in ferromagnetic materials, IEEE Trans. Magn. 40, 3443 (2004).
[41] L. D. Landau and E. M. Lifshitz, Electrodynamics of Continuous Media, 2nd ed. (Butterworth-Heinemann, Oxford, 1984).
[42] T. Yu, H. C. Wang, M. A. Sentef, H. M. Yu, and G. E. W. Bauer, Magnon trap by chiral spin pumping, Phys. Rev. B 102, 054429 (2020).
[43] T. Sato, W. C. Yu, S. Streib, and G. E. W. Bauer, Dynamic magnetoelastic boundary conditions and the pumping of phonons, Phys. Rev. B 104, 014403 (2021).
[44] S. P. Timoshenko and J. N. Goodier, Theory of Elasticity (McGraw-Hill, New York, 1970).
[45] L. D. Landau and E. M. Lifshitz, Theory of Elasticity (Pergamon Press, Oxford, 1970).
[46] See Supplementary Material at [...] for the detailed derivations of the magnetic equilibrium configurations, magnon Hamiltonian, magnon-phonon coupling Hamiltonian, and nonlinear phonon pumping.
[47] COMSOL Multiphysics, http://www.comsol.com.
[48] W. C. Yu, Micromagnetic Simulation with COMSOL Multiphysics, https://www.comsol.com/blogs/micromagnetic-simulation-with-comsol-multiphysics/.
[49] Q. Wang, B. Heinz, R. Verba, M. Kewenig, P. Pirro, M. Schneider, T. Meyer, B. Lägel, C. Dubs, T. Brächer, and A. V. Chumak, Spin pinning and spin-wave dispersion in nanoscopic ferromagnetic waveguides, Phys. Rev. Lett. 122, 247202 (2019).
[50] M. Schreier, A. Kamra, M. Weiler, J. Xiao, G. E. W. Bauer, R. Gross, and S. T. B. Goennenwein, Magnon, phonon, and electron temperature profiles and the spin Seebeck effect in magnetic insulator/normal metal hybrid structures, Phys. Rev. B 88, 094410 (2013).
[51] I. A. Viktorov, Rayleigh and Lamb Waves: Physical Theory and Applications (Plenum Press, New York, 1967).
[52] C. Kittel, Quantum Theory of Solids (Wiley, New York, 1963).
[53] T. Holstein and H. Primakoff, Field dependence of the intrinsic domain magnetization of a ferromagnet, Phys. Rev. 58, 1098 (1940).
[54] L. R. Walker, Magnetostatic modes in ferromagnetic resonance, Phys. Rev. 105, 390 (1957).
[55] R. Verba, G. Melkov, V. Tiberkevich, and A. Slavin, Collective spin-wave excitations in a two-dimensional array of coupled magnetic nanodots, Phys. Rev. B 85, 014427 (2012).
[56] S. Sharma, Y. M. Blanter, and G. E. W. Bauer, Light scattering by magnons in whispering gallery mode cavities, Phys. Rev. B 96, 094412 (2017).
[57] T. Yu, S. Sharma, Y. M. Blanter, and G. E. W. Bauer, Surface dynamics of rough magnetic films, Phys. Rev. B 99, 174402 (2019).
[58] C. W. Gardiner and M. J. Collett, Input and output in damped quantum systems: Quantum stochastic differential equations and the master equation, Phys. Rev. A 31, 3761 (1985).
[59] A. A. Clerk, M. H. Devoret, S. M. Girvin, F. Marquardt, and R. J. Schoelkopf, Introduction to quantum noise, measurement, and amplification, Rev. Mod. Phys. 82, 1155 (2010).
[60] J. Zou, S. Zhang, and Y. Tserkovnyak, Bell-state generation for spin qubits via dissipative coupling, Phys. Rev. B 106, L180406 (2022).
Opportunities for Long-Range Magnon-Mediated Entanglement of Spin Qubits via On-and Off-Resonant Coupling. M Fukami, D R Candido, D D Awschalom, M E Flatté, PRX Quantum. 240314M. Fukami, D. R. Candido, D. D. Awschalom, and M. E. Flatté, Opportunities for Long-Range Magnon-Mediated Entanglement of Spin Qubits via On-and Off-Resonant Coupling, PRX Quantum 2, 040314 (2021).
B W Zeng, T Yu, arXiv:2209.13386Radiation-free and non-Hermitian topology inertial defect states of on-chip magnons. B. W. Zeng and T. Yu, Radiation-free and non-Hermitian topology inertial defect states of on-chip magnons, arXiv:2209.13386.
Simple and approximate expressions of demagnetizing factors of uniformly magnetized rectangular rod and cylinder. M Sato, J. Appl. Phys. 66983M. Sato, Simple and approximate expressions of demag- netizing factors of uniformly magnetized rectangular rod and cylinder, J. Appl. Phys. 66, 983 (1989).
| [] |
[
"Heavy-flavor impact on CTEQ-TEA global QCD analyses",
"Heavy-flavor impact on CTEQ-TEA global QCD analyses"
] | [
"Marco Guzzi \nDepartment of Physics\nKennesaw State University\n370 Paulding Ave30144KennesawGAU.S.A\n",
"Alim Ablat \nSchool of Physics Science and Technology\nXinjiang University\n830046UrumqiXinjiangChina\n",
"Sayipjamal Dulat \nSchool of Physics Science and Technology\nXinjiang University\n830046UrumqiXinjiangChina\n",
"Tie-Jiun Hou \nSchool of Nuclear Science and Technology\nUniv. of South China\n421001HengyangHunanChina\n",
"Pavel Nadolsky \nDepartment of Physics\nSouthern Methodist University\n75275-0181DallasTXU.S.A\n",
"Ibrahim Sitiwaldi \nSchool of Physics Science and Technology\nXinjiang University\n830046UrumqiXinjiangChina\n",
"Keping Xie \nPittsburgh Particle Physics, Astrophysics, and Cosmology Center\nDepartment of Physics and Astron-omy\nUniversity of Pittsburgh\n15260PittsburghPAU.S.A\n",
"C.-P Yuan \nDepartment of Physics and Astronomy\nMichigan State University\n48824East LansingMIU.S.A\n"
] | [
"Department of Physics\nKennesaw State University\n370 Paulding Ave30144KennesawGAU.S.A",
"School of Physics Science and Technology\nXinjiang University\n830046UrumqiXinjiangChina",
"School of Physics Science and Technology\nXinjiang University\n830046UrumqiXinjiangChina",
"School of Nuclear Science and Technology\nUniv. of South China\n421001HengyangHunanChina",
"Department of Physics\nSouthern Methodist University\n75275-0181DallasTXU.S.A",
"School of Physics Science and Technology\nXinjiang University\n830046UrumqiXinjiangChina",
"Pittsburgh Particle Physics, Astrophysics, and Cosmology Center\nDepartment of Physics and Astron-omy\nUniversity of Pittsburgh\n15260PittsburghPAU.S.A",
"Department of Physics and Astronomy\nMichigan State University\n48824East LansingMIU.S.A"
] | [] | We discuss heavy-flavor production at hadron colliders in recent global QCD analyses to determine parton distribution functions (PDFs) in the proton. We discuss heavy-flavor treatments in precision theory predictions at the LHC. In particular, we discuss factorization schemes in presence of heavy flavors in proton-proton collisions, as well as the impact of heavy-flavor production at the LHC on PDFs. We show results of recent updates beyond CT18, the latest global QCD analysis from the CTEQ-TEA group. * | 10.1051/epjconf/202227000004 | [
"https://export.arxiv.org/pdf/2209.11143v1.pdf"
] | 252,438,770 | 2209.11143 | f85f04876de521da605aada2c6783edcf9827f12 |
Heavy-flavor impact on CTEQ-TEA global QCD analyses
Marco Guzzi
Department of Physics
Kennesaw State University
370 Paulding Ave30144KennesawGAU.S.A
Alim Ablat
School of Physics Science and Technology
Xinjiang University
830046UrumqiXinjiangChina
Sayipjamal Dulat
School of Physics Science and Technology
Xinjiang University
830046UrumqiXinjiangChina
Tie-Jiun Hou
School of Nuclear Science and Technology
Univ. of South China
421001HengyangHunanChina
Pavel Nadolsky
Department of Physics
Southern Methodist University
75275-0181DallasTXU.S.A
Ibrahim Sitiwaldi
School of Physics Science and Technology
Xinjiang University
830046UrumqiXinjiangChina
Keping Xie
Pittsburgh Particle Physics, Astrophysics, and Cosmology Center
Department of Physics and Astron-omy
University of Pittsburgh
15260PittsburghPAU.S.A
C.-P Yuan
Department of Physics and Astronomy
Michigan State University
48824East LansingMIU.S.A
Heavy-flavor impact on CTEQ-TEA global QCD analyses
We discuss heavy-flavor production at hadron colliders in recent global QCD analyses to determine parton distribution functions (PDFs) in the proton. We discuss heavy-flavor treatments in precision theory predictions at the LHC. In particular, we discuss factorization schemes in presence of heavy flavors in proton-proton collisions, as well as the impact of heavy-flavor production at the LHC on PDFs. We show results of recent updates beyond CT18, the latest global QCD analysis from the CTEQ-TEA group. *
Introduction
Precise determination of parton distribution functions (PDFs) in the proton is critical for all current and future precision programs at the LHC. Proton PDFs are an essential ingredient of QCD factorization theorems and are therefore ubiquitous in theory predictions for standard candle observables in hadronic collisions at high perturbative order in QCD. Precise and accurate theory predictions for such observables are necessary to investigate properties of the Higgs boson and to explore the Electroweak (EW) sector of the Standard Model (SM). In addition, they are important to scrutinize and validate SM extensions and search for signals of new physics interactions. PDFs are obtained through global QCD analyses of experimental hadronic cross section measurements by using a variety of analytical and statistical methods [1-4]. They are "data-driven" quantities and currently challenge precision in theory predictions at the LHC. In particular, precision measurements of heavy-quark (HQ) production observables in hadronic reactions play a significant role in PDF determinations because they can constrain PDFs in kinematic regions at intermediate and large parton momentum fraction x that are currently poorly constrained by data. In addition, they provide complementary information besides that obtained from jet production in global QCD analyses. HQ production at the LHC at small transverse momentum p_T and large rapidity y of the HQ is sensitive to PDFs at both small and large x, where x ≈ √(p_T² + m_Q²) e^(±y)/√S. This allows us to probe QCD dynamics simultaneously in these two regimes. This is especially true for charm and bottom (c/b) quark production. In fact, in the 4 < |y| < 4.5 rapidity range at the LHC with a proton-proton collision energy of 13 TeV, it can probe x ≤ 10⁻⁵. When p_T ≥ 40 GeV, it can probe x ≥ 0.2.
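As a quick numerical check of these kinematic estimates, the momentum fractions probed by the two incoming partons can be evaluated from the heavy-quark transverse mass. This is a sketch, not part of the fitting machinery; the charm mass value (1.3 GeV) and the specific kinematic points are chosen for illustration:

```python
import math

def x_probed(pT, y, sqrt_s, mQ):
    """Approximate parton momentum fractions probed in heavy-quark
    production: x ~ sqrt(pT^2 + mQ^2) * exp(+-y) / sqrt(s)."""
    mT = math.sqrt(pT**2 + mQ**2)  # heavy-quark transverse mass
    return (mT * math.exp(-abs(y)) / sqrt_s,  # small-x side
            mT * math.exp(abs(y)) / sqrt_s)   # large-x side

# Charm at low pT and forward rapidity, LHC at sqrt(s) = 13 TeV:
x_lo, x_hi = x_probed(pT=2.0, y=4.5, sqrt_s=13000.0, mQ=1.3)
# x_lo ~ 2e-6, i.e. below 1e-5 as quoted in the text.
# At pT = 40 GeV and |y| = 4.5 the large-x side exceeds x = 0.2.
```

Evaluating these two points reproduces the x ≤ 10⁻⁵ and x ≥ 0.2 regimes quoted above for forward c/b production at 13 TeV.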
On the other hand, top-quark pair production at the LHC, at the same collision energy, can probe the gluon PDF already at x ≳ 0.01, while c/b production at HERA is sensitive to intermediate and small x, down to x ∼ 10⁻⁵. Probing these regimes (and beyond, at future facilities like the Electron Ion Collider (EIC) [5-7], the Future Circular Collider (FCC) [8-10], the Forward Physics Facility at the High-Luminosity (HL) LHC [11, 12], and the Super proton-proton Collider (SppC) [13]) will allow us to explore QCD factorization with an unprecedented level of accuracy. This will shed light on open questions like the intrinsic heavy-flavor content of the proton and small-x dynamics. In this conference proceeding contribution, we report preliminary results of global PDF analyses which use recent high-precision HQ measurements from the LHC and HERA, added on top of the CT18 [1] baseline. Moreover, treatments of QCD factorization in the presence of HQ multiscale processes are also discussed.
HQ treatments in QCD factorization and the S-ACOT-MPS scheme
HQ production dynamics is nontrivial due to the interplay of massless and massive schemes, which essentially are different ways of organizing the perturbation series. In general, one can calculate cross sections by using massive schemes, which include HQs only in the final state and apply when p_T ≲ m_Q. In this case, the HQ p_T-spectrum can be obtained by keeping the number of flavors fixed, i.e., using a fixed-flavor-number (FFN) scheme. That is, there is no HQ PDF in the proton; HQs are generated as massive final states, and m_Q acts as an infrared cutoff. Moreover, all power terms of the type (p_T²/m_Q²)^p, where p is a positive integer, are correctly accounted for in the perturbative series. Cross sections can also be computed by using massless schemes when p_T ≫ m_Q ≫ m_P, where m_P is the mass of the proton. In this case, large logarithmic terms of the type logⁿ(p_T²/m_Q²) spoil the convergence of the fixed-order expansion. Massless schemes are also known as zero-mass (ZM) schemes; the HQ is considered essentially massless everywhere and also enters the running of the strong coupling α_s. These large logarithms need to be resummed by using DGLAP evolution: initial-state logarithms are resummed into a HQ PDF, while final-state logarithms are resummed into a fragmentation function (FF). Modern global QCD analyses determine proton PDFs using a variety of experimental data which span wide areas of the Q-x kinematic plane. Therefore, amendments to the factorization formula are necessary to calculate key cross sections across wide ranges of energy and momentum transfer. Interpolating schemes, such as General-Mass Variable Flavor Number (GMVFN) schemes, introduce modifications in QCD collinear factorization so that they appear as composite schemes which retain the key mass dependence and efficiently resum collinear logarithms, in order to combine the FFN and ZM schemes together.
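To make the need for resummation concrete, one can estimate the size of the collinear logarithm that multiplies each power of α_s in the massless fixed-order expansion. This is an illustrative back-of-the-envelope sketch; the α_s value used is a typical magnitude at a ~100 GeV scale, not a fit output:

```python
import math

def collinear_log(pT, mQ):
    """Collinear logarithm log(pT^2 / mQ^2) appearing at each order
    of the massless (ZM) fixed-order expansion."""
    return math.log(pT**2 / mQ**2)

# For charm (mQ ~ 1.3 GeV) at pT = 100 GeV the log is large:
L = collinear_log(100.0, 1.3)   # ~ 8.7
alpha_s = 0.11                  # illustrative value at a ~100 GeV scale
# alpha_s * L is O(1), so the series in (alpha_s * L)^n does not
# converge order by order and must be resummed via DGLAP evolution
# into a HQ PDF (initial state) or a fragmentation function (final state).
product = alpha_s * L
```

When p_T is instead comparable to m_Q, the same logarithm is O(1) or smaller and the FFN fixed-order treatment is adequate, which is the regime the GMVFN schemes interpolate between.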
They are crucial for a correct treatment of HQs in deep inelastic scattering (DIS) processes [14-22] and proton-proton (pp) collisions, and for accurate predictions of key scattering rates at the LHC in global QCD analyses to determine proton PDFs. Moreover, they provide the possibility of directly accessing HQ PDFs parametrized at the initial scale. This motivated the development of a general-mass (GM) factorization scheme for pp collisions [23, 24], the Simplified ACOT scheme with Massive Phase Space (S-ACOT-MPS), which is based on an amended version of the S-ACOT scheme developed for DIS [14, 15, 18, 19, 22]. Similar GMVFN schemes are studied in [25-27], and the differences are related to the treatment of the phase space. The main idea behind a GMVFN scheme such as S-ACOT-MPS is the introduction of subtraction terms order by order in perturbation theory, which avoid double counting and cancel enhanced collinear contributions from flavor-creation (FC) terms when ŝ ≫ m_Q², or p_T ≫ m_Q. In S-ACOT-MPS, the subtraction and flavor-excitation (FE) contributions are evaluated in one single step. This improves stability in calculating differential cross sections and facilitates the extension of this scheme to higher orders. We applied S-ACOT-MPS to prompt c and b production at NLO in QCD at the LHC, at √S = 13 TeV. In Figure 1 (left), we illustrate the c-rapidity distribution determined with the CT18 and CT18X PDFs at NLO. We explore the sensitivity of prompt c production to nonperturbative charm contributions in the proton by considering theory predictions in very forward regions with y_c > 8. This is relevant for future applications at the Forward Physics Facility at CERN [11], which will be able to clarify multiple aspects of QCD in such an extended forward region, in coordination with the HL-LHC and EIC.
In Figure 1 (right), we compare theory predictions for the rapidity distribution of B± meson production at NLO in QCD with measurements [28] from the LHCb collaboration at √S = 13 TeV. Theory uncertainties at NLO are large (O(50%)) and mainly ascribed to scale variation. This can be improved by including higher-order corrections, which require an extension of the S-ACOT-MPS scheme to NNLO.
Impact on PDFs from high-precision tt production measurements
Top-quark pair production plays a significant role in setting constraints on PDFs, in particular the gluon, at intermediate to large x. In global PDF analyses, the impact on PDFs from differential cross section measurements of tt production at the LHC complements that from jet production measurements: tt and jet production overlap in the Q-x plane, but the matrix elements and phase-space suppression are different. Therefore, constraints on the gluon may be placed at different values of x. On the other hand, challenges in estimating the full impact of tt production on PDFs arise in the presence of incompatibility of the tt measurements with other data sets in the baseline (see the related discussion in [1]). More realistic estimates account for multiple PDF functional forms and some disagreements between the measurements. In this section, we explore the impact of high-precision tt measurements on proton PDFs and illustrate preliminary results of a global QCD analysis in which tt single differential cross section measurements at √S = 13 TeV from the ATLAS [29] and CMS [30] collaborations are added on top of the CT18 data baseline. The measurements from ATLAS are obtained using events in the all-hadronic channel with 36.1 fb⁻¹ of integrated luminosity (IL), while those from CMS are obtained with events containing two leptons with 35.9 fb⁻¹ of IL. We consider the full phase-space absolute measurements reported in Table 1, in which we specify the type of correlated systematic uncertainties published in the experimental papers. We added these measurements one by one to the baseline because statistical correlations were not published for all data sets. The measurements from ATLAS and CMS are delivered using a different binning for the same distribution. The theory predictions for the CMS distributions are obtained by using the FastNNLO grids [31], which are based on the calculation published in [32].
The theory predictions for the ATLAS distributions have been obtained in two steps. We first generated QCD theory predictions at NLO by using in-house APPLGrid fast tables [33] from the public MCFM computer code [34, 35]. Then, we calculated bin-by-bin NNLO/NLO K-factors using the computer code MATRIX [36], which is based on the theory predictions calculated in [37]. The mass of the top quark, in the pole-mass approximation, has been set to m_t^(pole) = 172.5 GeV. The factorization and renormalization scales, µ_F and µ_R respectively, have been chosen according to [38]: for the m_tt, p_T,tt, y_tt, and y_t distributions one uses µ_F = µ_R = H_T/4 with H_T = √(m_t² + p²_T,t) + √(m_t² + p²_T,t̄), while for the p_T,t distribution one uses µ_F = µ_R = M_T/2 with M_T = √(m_t² + p²_T,t), and similarly for the p_T,t,avg distribution. EW corrections [39] are also included in this analysis, but their impact on the global fit is negligible. In Figure 2, we illustrate the pulls on the gluon PDF resulting from a global fit in which the y_tt distributions from ATLAS and CMS are added on top of the CT18 baseline. The pulls from these distributions from both experiments are in the same direction, that is, each of them prefers a softer gluon at large x (x ≳ 0.2). The individual impact of these measurements on the CT18 gluon is approximately the same, as χ²/N_pt for both distributions is of order 1. A more extensive analysis with complete results will be presented elsewhere [40]. To finalize our discussion, in this section we discuss the impact of the most recent c- and b-quark production combined measurements of semi-inclusive DIS at HERA from the H1 and ZEUS collaborations [41] on the CT18 gluon PDF. In particular, we illustrate the results of various fits in which these measurements replace their previous version [42, 43].
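The dynamical scale choices quoted above can be evaluated explicitly. The sketch below encodes the H_T/4 and M_T/2 prescriptions of [38] with the m_t = 172.5 GeV value used in the fits; the transverse momenta are arbitrary illustration points, not data:

```python
import math

MT_TOP = 172.5  # top pole mass used in the fits, GeV

def HT(pT_t, pT_tbar, mt=MT_TOP):
    """H_T = sqrt(mt^2 + pT_t^2) + sqrt(mt^2 + pT_tbar^2)."""
    return math.sqrt(mt**2 + pT_t**2) + math.sqrt(mt**2 + pT_tbar**2)

def MT(pT_t, mt=MT_TOP):
    """Top-quark transverse mass."""
    return math.sqrt(mt**2 + pT_t**2)

# Scale for the m_tt, p_T,tt, y_tt and y_t distributions:
mu_HT4 = HT(50.0, 60.0) / 4.0   # ~ 90.6 GeV for these sample momenta
# Scale for the p_T,t distribution:
mu_MT2 = MT(50.0) / 2.0         # ~ 89.8 GeV
# At pT -> 0 both prescriptions reduce to mt/2 = 86.25 GeV.
```

Both prescriptions coincide at threshold and differ only through the amount of transverse activity entering the event, which is why they give numerically similar scales in the bulk of the phase space.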
The new measurements exhibit an extended kinematic range of photon virtuality 2.5 GeV² ≤ Q² ≤ 2000 GeV² and Bjorken scaling variable 3·10⁻⁵ ≤ x_Bj ≤ 5·10⁻², and have reduced uncertainties due to a simultaneous combination of c and b cross-section measurements with reduced correlations between them. We find that the new c and b combination at HERA [41] cannot be well described in the CT18 global analysis, and χ²/N_pt is never below 1.7. For the CT18NNLO fit, we obtained χ²/N_pt = 1.98 for c production (N_pt = 47) and χ²/N_pt = 1.25 for b production (N_pt = 26). For the CT18XNNLO fit, we instead obtained χ²/N_pt = 1.71 for c and 1.26 for b production. We observed tensions between these new combined data and several CT18 datasets, such as the LHCb 7 and 8 TeV W/Z production data [44, 45], the Z-rapidity data [46] at CDF Run II, CMS 8 TeV single-inclusive jet production [47], and tt double differential p_T and y cross sections [48]. For this reason, these data were not included in the CT18 global analysis [1]. However, these measurements are very important in PDF determinations because they impose direct constraints on the gluon PDF at intermediate and small
x. Therefore, we have re-analyzed these data by performing a series of fits in which we varied several parameters and settings. For example, we performed fits in which we varied the weight assigned to these data. We explored the impact of consistent initial-scale Q_0 variations, and tried different parametrizations for the gluon. We varied the value of the c-quark mass m_c in both the pole and MS definitions, and we also analyzed the impact of scale variations in which the factorization scale is defined as µ_DIS(x) = A√(m_Q² + B²/x^C), according to saturation models [49, 50]. This is expected to have a similar impact to low-x resummation [51]. In addition, we explored phase-space suppression in fits in which we varied the S-ACOT-χ rescaling parameter χ = ζ(1 + ζ^λ m_Q²/Q²). A more detailed description of this study can be found in [52]. Here we only provide the results relative to two of the exercises described above. In Figure 3, we illustrate the impact on the CT18XNNLO gluon when a scan over the MS mass m_c(m_c) is performed (left panel), and the impact of scale variations in which the parameter B is varied in the µ_DIS(x) scale definition, while A = 0.5 and C = 0.33 (right panel). The CT18X fit is a variant of CT18 where µ_F,DIS = µ_DIS(x). Our preliminary findings indicate that the new c/b production measurements at HERA seem to prefer a harder gluon at intermediate and small x.
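The saturation-inspired, x-dependent factorization scale above can be sketched numerically. A and C are fixed as in the scan; the value B = 2.0 is purely illustrative, since B is exactly the parameter varied in the fits:

```python
import math

def mu_DIS(x, mQ, A=0.5, B=2.0, C=0.33):
    """x-dependent DIS factorization scale
    mu_DIS(x) = A * sqrt(mQ^2 + B^2 / x^C),
    inspired by saturation models. B = 2.0 is an illustrative choice."""
    return A * math.sqrt(mQ**2 + B**2 / x**C)

# The scale grows toward small x, mimicking the effect of
# low-x resummation on the DIS cross section (charm, mQ = 1.3 GeV):
mu_small = mu_DIS(1e-5, mQ=1.3)   # ~ 6.7 GeV
mu_large = mu_DIS(1e-2, mQ=1.3)   # ~ 2.2 GeV
```

Because µ_DIS increases monotonically as x decreases, the scan over B effectively tests how strongly the small-x DIS data pull the gluon when evolved from a higher effective scale.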
Conclusions
In this work, we have studied HQ production at hadron colliders in recent global QCD analyses and analyzed its impact on collinear proton PDF determinations. We discussed S-ACOT-MPS, a new GMVFN scheme which we applied to pp collisions to describe c/b production at central and forward rapidities. In the near future, it will be technically possible to generate predictions within the S-ACOT-MPS scheme at NNLO with suitable NNLO/NLO K-factors at hand. Moreover, it is easy to extend S-ACOT-MPS to other HQ production processes. We explored the impact of tt single differential cross section measurements at the LHC at √S = 13 TeV on the CT18 PDFs. The impact of tt production at the LHC at √S = 13 TeV complements that of jet data on the gluon PDF, in particular in the large-x region: tt and jets overlap in the Q-x plane, but since the matrix elements and phase-space suppression are different, constraints on the gluon PDF may be placed at different values of x. Overall, the impact is found to be mild, with a preference for a softer gluon at large x. However, this may change when tt production in the lepton + jet channel at 13 TeV is included. We also analyzed the most recent c/b combination at HERA. We find that these measurements are challenging and deserve more attention, as they are very important for small-x dynamics. HQ production remains a critical process to constrain the correlations between m_Q, α_s, and the gluon PDF.
Figure 1. Left: Rapidity distributions of prompt charm at the LHC 13 TeV in the very forward region (y_c > 8) [11]. The error band represents the CT18NLO induced PDF uncertainty at 68% CL. Right: NLO theory predictions for the rapidity distributions obtained with CT18NLO and CT18XNLO PDFs compared to B± production data [28] from LHCb 13 TeV.
Figure 2. Left: PDF ratio to CT18NNLO at Q = 2 GeV for the gluon PDF. Right: same as left, but at Q = 100 GeV. The blue band represents the CT18NNLO PDF uncertainty at 90% CL.
Figure 3. Ratio to the CT18XNNLO gluon PDF as a function of x at Q = 2 GeV. Left: scan over the c-quark mass m_c(m_c) while m_b(m_b) = 4.18 GeV, and with Q_0 = 1 GeV. Right: scan over the B parameter in µ_DIS(x) with A = 0.5, C = 0.33, and Q_0 = 1 GeV. Error bands are shown at 90% confidence level for CT18NNLO (red) and CT18XNNLO (blue).
Table 1. Details of the single differential distributions for tt production considered in our analysis.

  data type      N_pt  Exp     Corr. Sys.
  dσ/dm_tt       9     ATLAS   nuisance par.
  dσ/dy_tt       12    ATLAS   nuisance par.
  dσ/dH_T,tt     11    ATLAS   nuisance par.
  dσ/dp_T,t1     10    ATLAS   nuisance par.
  dσ/dp_T,t2     8     ATLAS   nuisance par.
  dσ/dm_tt       7     CMS     Covariance matr.
  dσ/dp_T,t      6     CMS     Covariance matr.
  dσ/dy_tt       10    CMS     Covariance matr.
  dσ/dy_t        10    CMS     Covariance matr.
[1] T.J. Hou et al., Phys. Rev. D 103, 014013 (2021), 1912.10053
[2] S. Bailey, T. Cridge, L.A. Harland-Lang, A.D. Martin, R.S. Thorne, Eur. Phys. J. C 81, 341 (2021), 2012.04684
[3] R.D. Ball et al. (NNPDF), Eur. Phys. J. C 82, 428 (2022), 2109.02653
[4] S. Alekhin, J. Blümlein, S. Moch, R. Placakyte, Phys. Rev. D 96, 014011 (2017), 1701.05838
[5] A. Accardi et al., Eur. Phys. J. A 52, 268 (2016), 1212.1701
[6] E.C. Aschenauer, S. Fazio, J.H. Lee, H. Mantysaari, B.S. Page, B. Schenke, T. Ullrich, R. Venugopalan, P. Zurita, Rept. Prog. Phys. 82, 024301 (2019), 1708.01527
[7] R. Abdul Khalek et al., Nucl. Phys. A 1026, 122447 (2022), 2103.05419
[8] A. Abada et al. (FCC), Eur. Phys. J. C 79, 474 (2019)
[9] A. Abada et al. (FCC), Eur. Phys. J. ST 228, 261 (2019)
[10] A. Abada et al. (FCC), Eur. Phys. J. ST 228, 755 (2019)
[11] L.A. Anchordoqui et al., Phys. Rept. 968, 1 (2022), 2109.10905
[12] J.L. Feng et al. (2022), 2203.05090
[13] J. Tang et al. (2015), 1507.03224
[14] M.A.G. Aivazis, F.I. Olness, W.K. Tung, Phys. Rev. D 50, 3085 (1994), hep-ph/9312318
[15] M.A.G. Aivazis, J.C. Collins, F.I. Olness, W.K. Tung, Phys. Rev. D 50, 3102 (1994), hep-ph/9312319
[16] M. Buza, Y. Matiounine, J. Smith, W.L. van Neerven, Eur. Phys. J. C 1, 301 (1998), hep-ph/9612398
[17] R.S. Thorne, R.G. Roberts, Phys. Rev. D 57, 6871 (1998), hep-ph/9709442
[18] M. Krämer, F.I. Olness, D.E. Soper, Phys. Rev. D 62, 096007 (2000), hep-ph/0003035
[19] W.K. Tung, S. Kretzer, C. Schmidt, J. Phys. G 28, 983 (2002), hep-ph/0110247
[20] S. Alekhin, J. Blumlein, S. Klein, S. Moch, Phys. Rev. D 81, 014032 (2010), 0908.2766
[21] S. Forte, E. Laenen, P. Nason, J. Rojo, Nucl. Phys. B 834, 116 (2010), 1001.2312
[22] M. Guzzi, P. Nadolsky, H.L. Lai, C.P. Yuan, Phys. Rev. D 86, 053005 (2012), 1108.5112
[23] K. Xie, Ph.D. thesis, Southern Methodist U. (2019)
[24] K. Xie, J.M. Campbell, P.M. Nadolsky, SciPost Phys. Proc. 8, 084 (2022), 2108.03741
[25] I. Helenius, H. Paukkunen, JHEP 05, 196 (2018), 1804.03557
[26] B.A. Kniehl, G. Kramer, I. Schienbein, H. Spiesberger, Phys. Rev. D 71, 014018 (2005), hep-ph/0410289
[27] B.A. Kniehl, G. Kramer, I. Schienbein, H. Spiesberger, Eur. Phys. J. C 41, 199 (2005), hep-ph/0502194
[28] R. Aaij et al. (LHCb), JHEP 12, 026 (2017), 1710.04921
[29] G. Aad et al. (ATLAS), JHEP 01, 033 (2021), 2006.09274
[30] A.M. Sirunyan et al. (CMS), JHEP 02, 149 (2019), 1811.06625
[31] M. Czakon, D. Heymes, A. Mitov (2017), 1704.08551
[32] M. Czakon, D. Heymes, A. Mitov, Phys. Rev. Lett. 116, 082003 (2016), 1511.00549
[33] T. Carli, D. Clements, A. Cooper-Sarkar, C. Gwenlan, G.P. Salam, F. Siegert, P. Starovoitov, M. Sutton, Eur. Phys. J. C 66, 503 (2010), 0911.2985
[34] J.M. Campbell, R.K. Ellis, J. Phys. G 42, 015005 (2015), 1204.1513
[35] J.M. Campbell, R.K. Ellis, W.T. Giele, Eur. Phys. J. C 75, 246 (2015), 1503.06182
[36] M. Grazzini, S. Kallweit, M. Wiesemann, Eur. Phys. J. C 78, 537 (2018), 1711.06631
[37] S. Catani, S. Devoto, M. Grazzini, S. Kallweit, J. Mazzitelli, JHEP 07, 100 (2019), 1906.06535
[38] M. Czakon, D. Heymes, A. Mitov, JHEP 04, 071 (2017), 1606.03350
[39] M. Czakon, A. Ferroglia, D. Heymes, A. Mitov, B.D. Pecjak, D.J. Scott, X. Wang, L.L. Yang, JHEP 05, 149 (2018), 1803.07623
[40] A. Ablat et al., in preparation (2022)
[41] H. Abramowicz et al. (H1, ZEUS), Eur. Phys. J. C 78, 473 (2018), 1804.01019
[42] A. Aktas et al. (H1), Eur. Phys. J. C 40, 349 (2005), hep-ex/0411046
[43] H. Abramowicz et al. (H1, ZEUS), Eur. Phys. J. C 73, 2311 (2013), 1211.1182
[44] R. Aaij et al. (LHCb), JHEP 08, 039 (2015), 1505.07024
[45] R. Aaij et al. (LHCb), JHEP 05, 109 (2015), 1503.00963
[46] T.A. Aaltonen et al. (CDF), Phys. Lett. B 692, 232 (2010), 0908.3914
[47] V. Khachatryan et al. (CMS), JHEP 03, 156 (2017), 1609.05331
[48] A.M. Sirunyan et al. (CMS), Eur. Phys. J. C 77, 459 (2017), 1703.01630
[49] K.J. Golec-Biernat, M. Wusthoff, Phys. Rev. D 59, 014017 (1998), hep-ph/9807513
[50] F. Caola, S. Forte, J. Rojo, Phys. Lett. B 686, 127 (2010), 0910.3143
[51] R.D. Ball, V. Bertone, M. Bonvini, S. Marzani, J. Rojo, L. Rottoli, Eur. Phys. J. C 78, 321 (2018), 1710.05935
[52] M. Guzzi, P. Nadolsky, K. Xie, SciPost Phys. Proc. 8, 164 (2022), 2108.01791
| [] |
[
"Supporting SPARQL Update Queries in RDF-XML Integration *",
"Supporting SPARQL Update Queries in RDF-XML Integration *"
] | [
"Nikos Bikakis \nNTU Athens & R.C. ATHENA\nGreece\n",
"Chrisa Tsinaraki \nEU Joint Research Center\nItaly\n",
"Ioannis Stavrakantonakis \nSTI\nUniversity of Innsbruck\nAustria\n",
"Stavros Christodoulakis \nTechnical University of Crete\nGreece\n"
] | [
"NTU Athens & R.C. ATHENA\nGreece",
"EU Joint Research Center\nItaly",
"STI\nUniversity of Innsbruck\nAustria",
"Technical University of Crete\nGreece"
] | [] | The Web of Data encourages organizations and companies to publish their data according to the Linked Data practices and offer SPARQL endpoints. On the other hand, the dominant standard for information exchange is XML. The SPARQL2XQuery Framework focuses on the automatic translation of SPARQL queries in XQuery expressions in order to access XML data across the Web. In this paper, we outline our ongoing work on supporting update queries in the RDF-XML integration scenario. | null | [
"https://arxiv.org/pdf/1408.2800v2.pdf"
] | 8,656,432 | 1408.2800 | 10ee7b46ccbb39481b5bdae2d3d7ff9b84c460b0 |
Supporting SPARQL Update Queries in RDF-XML Integration *
Nikos Bikakis
NTU Athens & R.C. ATHENA
Greece
Chrisa Tsinaraki
EU Joint Research Center
Italy
Ioannis Stavrakantonakis
STI
University of Innsbruck
Austria
Stavros Christodoulakis
Technical University of Crete
Greece
Supporting SPARQL Update Queries in RDF-XML Integration *
SPARQL2XQuerySPARQL to XQueryXML Schema to OWLSPARQL updateXQuery UpdateSPARQL 11
The Web of Data encourages organizations and companies to publish their data according to the Linked Data practices and offer SPARQL endpoints. On the other hand, the dominant standard for information exchange is XML. The SPARQL2XQuery Framework focuses on the automatic translation of SPARQL queries in XQuery expressions in order to access XML data across the Web. In this paper, we outline our ongoing work on supporting update queries in the RDF-XML integration scenario.
Introduction
The SPARQL2XQuery Framework, that we have previously developed [6], aims to bridge the heterogeneity issues that arise in the consumption of XML-based sources within Semantic Web. In our working scenario, mappings between RDF/S-OWL and XML sources are automatically derived or manually specified. Using these mappings, the SPARQL queries are translated on the fly into XQuery expressions, which access the XML data. Therefore, the current version of SPARQL2XQuery provides read-only access to XML data. In this paper, we outline our ongoing work on extending the SPARQL2XQuery Framework towards supporting SPARQL update queries.
Both SPARQL and XQuery have recently standardized their update operation semantics in the SPARQL 1.1 and XQuery Update Facility, respectively. We have studied the correspondences between the update operations of these query languages, and we describe the extension of our mapping model and the SPARQL-to-XQuery translation algorithm towards supporting SPARQL update queries.
Similarly to the motivation of our work, in the RDB-RDF interoperability scenario, D2R/Update [1] (a D2R extension) and OntoAccess [2] enable SPARQL update queries over relational databases. Regarding the XML-RDB-RDF interoperability scenario [5], the work presented in [3] extends the XSPARQL language [4] in order to support update queries.
Translating SPARQL Update Queries to XQuery
This section describes the translation of SPARQL update operations into XQuery expressions using the XQuery Update Facility. We present how similar methods and algorithms previously developed in the SPARQL2XQuery Framework can be adopted for the update operation translation. For instance, graph pattern and triple pattern translation are also used in the update operation translation. Note that, due to space limitations, some issues are presented in a simplified way in the rest of this section and several details are omitted. Table 1 presents the SPARQL update operations and summarizes their translation in XQuery. In particular, there are three main categories of SPARQL update operations a) Delete Data; b) Insert Data; and c) Delete/Insert. For each update operation, a simplified SPARQL syntax template is presented, as well as the corresponding XQuery expressions. In SPARQL context, we assume the following sets, let tr be an RDF triple set, tp a triple pattern set, trp a set of triples and/or triple patterns, and gp a graph pattern. Additionally, in XQuery, we denote as xEW, xEI and xED the sets of XQuery expressions (i.e., FLOWR expressions) that have resulted from the translation of the graph pattern included in the Where, Insert and Delete SPARQL clauses, respectively. Let xE be a set of XQuery expressions, xE($v1, $v2,… $vn) denote that xE are using (as input) the values assigned to XQuery variables $v1, $v2,… $vn. Finally, xn denotes an XML fragment, i.e., a set of XML nodes, and xp denotes an XPath expression. In the following examples, we assume that an RDF source has been mapped to an XML source. In particular, we assume the example presented in [6], where an RDF and an XML source describing persons and students have been mapped. Here, due to space limitation, we just outline the RDF and XML concepts, as well as the mappings that are involved in the following examples. 
In RDF, we have a class Student having several datatype properties, i.e., FName, E-mail, Department, GivenName, etc. In XML, we have an XML complex type Student_type, having an attribute SSN and several simple elements, i.e., FirstName, Email, Dept, GivenName etc. Based on the XML structure, the students' elements appear in the \Persons\Student path. We assume that the Student class has been mapped to the Student_type and the RDF datatype properties to the similar XML elements.
Delete Data. The Delete Data SPARQL operation removes a set of triples from RDF graphs. This SPARQL operation can be translated in XQuery using the Delete Nodes XQuery operation. Specifically, using the predefined mappings, the set of triples tr defined in the SPARQL Delete Data clause is transformed (using a similar approach such as the BGP2XQuery algorithm [6]) in a set of XPath expressions XP. For each xpi ∊ XP an XQuery Delete Nodes operation is defined.
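The path-translation step just described can be sketched as follows. This is an illustrative Python sketch, not the Framework's actual implementation: the mapping tables (`SUBJECT_MAP`, `PREDICATE_MAP`) and the function name are hypothetical, mirroring the running student example.

```python
# Illustrative sketch (not the SPARQL2XQuery API): translate the triples
# of a SPARQL "Delete Data" clause into XQuery "delete nodes" operations,
# using a predefined subject/predicate-to-XPath mapping.

# Hypothetical mappings, mirroring the running student example.
SUBJECT_MAP = {
    "http://rdf.gr/person1209": "/Persons/Student[.@SSN=1209]",
}
PREDICATE_MAP = {
    "ns:FName": "FirstName",
    "ns:E-mail": "Email",
}

def delete_data_to_xquery(triples, collection):
    """Return one XQuery 'delete nodes' statement per deleted triple."""
    statements = []
    for subj, pred, obj in triples:
        xpath = '%s/%s[.= "%s"]' % (SUBJECT_MAP[subj], PREDICATE_MAP[pred], obj)
        statements.append('delete nodes collection("%s")%s' % (collection, xpath))
    return statements
```

For the two triples of the running example this produces exactly the two delete nodes statements shown in the Delete Data example.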
In this example, two RDF triples are deleted from an RDF graph. In addition to the mappings described above, we assume that the person "http://rdf.gr/person1209" in RDF data has been mapped to the person "/Persons/Student[.@SSN=1209]" in XML data.
Insert Data. The Insert Data SPARQL operation, adds a set of new triples in RDF graphs. This SPARQL operation can be translated in XQuery using the Insert Nodes XQuery operation. In the Insert Data translation, the set of triples tr defined in SPARQL are transformed into XML node sets xni, using the predefined mappings. In particular, a set of Let XQuery clauses is used to build the XML nodes and define the appropriate node nesting and grouping. Then, the location of the XML node insertion can be easily determined considering the triples and the mappings. Finally, the constructed nodes are inserted in their location of insertion using the XQuery Insert nodes clause.
In this example, the RDF triples deleted in the previous example are reinserted in the RDF graph.
SPARQL Insert Data query:
  Insert data{
    <http://rdf.gr/person1209> ns:FName "John" .
    <http://rdf.gr/person1209> ns:E-mail "[email protected]" .
  }
Translated XQuery query:
  let $n1 := <FirstName>John</FirstName>
  let $n2 := <Email>[email protected]</Email>
  let $data1 := ($n1, $n2)
  let $insert_location1 := collection("http://xml.gr")/Persons/Student[.@SSN=1209]
  return insert nodes $data1 into $insert_location1

Insert / Delete. The Delete/Insert SPARQL operations remove and/or add a set of triples from/to RDF graphs, using the bindings produced by evaluating the graph pattern in the Where clause. According to the SPARQL 1.1 semantics, the Where clause is evaluated first and the Delete/Insert clauses are then applied over the produced results. In particular, when both a Delete and an Insert operation are present, the deletion is performed before the insertion and the Where clause is evaluated only once. The Delete and Insert SPARQL operations can be translated to XQuery using the Delete Nodes and Insert Nodes operations, respectively. In brief, the graph pattern in the Where clause is first translated to XQuery expressions xEW (as in the GP2XQuery algorithm [6]); the graph pattern in the Delete/Insert clause is then translated to XQuery expressions xED/xEI (as in the BGP2XQuery algorithm [6]), using the bindings that result from the evaluation of xEW.
In this example, the Where clause selects all the students studying in a computer science (CS) department. Then, the Delete clause deletes all the triples that match with its triple patterns, using the ?student bindings determined from the Where clause. In particular, from all the retrieved students (i.e., CS students), the students which have as first name the name "John" should be deleted.
In this example, the Where clause selects all the students studying in a CS department, as well as their first names. Then, the Insert clause creates new triples according to its triple patterns, using the ?student and ?name bindings determined from the Where clause. In particular, a new triple having as predicate "ns:GivenName" and as object the first name of the ?student, is inserted for each ?student.
SPARQL Delete Data query:
  Delete data{
    <http://rdf.gr/person1209> ns:FName "John" .
    <http://rdf.gr/person1209> ns:E-mail "[email protected]" .
  }
Translated XQuery query:
  delete nodes collection("http://xml.gr")/Persons/Student[.@SSN=1209]/FirstName[.= "John"]
  delete nodes collection("http://xml.gr")/Persons/Student[.@SSN=1209]/Email[.= "[email protected]"]
Table 1. Translation of the SPARQL Update Operations in XQuery

SPARQL Update Operation: DELETE DATA
  SPARQL syntax template¹:
    Delete data{ tr }
  Translated XQuery expressions:
    delete nodes collection("http://dataset...")/xp1
    ...
    delete nodes collection("http://dataset...")/xpn

SPARQL Update Operation: INSERT DATA
  SPARQL syntax template:
    Insert data{ tr }
  Translated XQuery expressions:
    let $n1 := xn1
    ...
    let $nn := xnn
    let $data1 := ($nk, $nm, ...)   // k, m, ... ∈ [1,n]
    ...
    let $datap := ($nj, $nv, ...)   // j, v, ... ∈ [1,n]
    let $insert_location1 := collection("http://xmldataset...")/xp1
    ...
    let $insert_locationp := collection("http://xmldataset...")/xpp
    return ( insert nodes $data1 into $insert_location1 ,
             ... ,
             insert nodes $datap into $insert_locationp )

SPARQL Update Operation: DELETE / INSERT
  SPARQL syntax templates:
    (a) Delete{ trp } Where{ gp }
    (b) Insert{ trp } Where{ gp }
    (c) Delete{ trp } Insert{ trp } Where{ gp }
  Translated XQuery expressions:
    (a) let $where_gp := xEW
        let $delete_gp := xED ($where_gp)
        return delete nodes $delete_gp
    (b) let $where_gp := xEW
        let $insert_location1 := xp1
        for $it1 in $insert_location1
        xEI ($where_gp, $it1)
        return insert nodes into $it1
        ...
        let $where_gp := xEW
        let $insert_locationn := xpn
        for $itn in $insert_locationn
        xEI ($where_gp, $itn)
        return insert nodes into $itn
    (c) Translate the Delete and Where clauses as in (a), then the Insert and Where clauses as in (b).

¹ For simplicity, the WITH, GRAPH and USING clauses are omitted.

* This paper appears in the 13th International Semantic Web Conference (ISWC '14). † This work is partially supported by the EU/Greece funded KRIPIS: MEDA Project.
SPARQL Delete query:
  Delete{ ?student ns:FName "John" . }
  Where{ ?student ns:Department "CS" . }
Translated XQuery query:
References

[1] Eisenberg, V., Kanza, Y.: "D2RQ/update: updating relational data via virtual RDF". In WWW 2012.
[2] Hert, M., Reif, G., Gall, H.C.: "Updating relational data via SPARQL/update". In EDBT/ICDT Workshops 2010.
[3] Ali, M.I., Lopes, N., Friel, O., Mileo, A.: "Update Semantics for Interoperability among XML, RDF and RDB". In APWeb 2013.
[4] Bischof, S., Decker, S., Krennwallner, T., Lopes, N., Polleres, A.: "Mapping between RDF and XML with XSPARQL". J. Data Semantics 1(3), 2012.
[5] Bikakis, N., Tsinaraki, C., Gioldasis, N., Stavrakantonakis, I., Christodoulakis, S.: "The XML and Semantic Web Worlds: Technologies, Interoperability and Integration. A survey of the State of the Art". In Semantic Hyper/Multi-media Adaptation: Schemes and Applications, Springer 2013.
[6] Bikakis, N., Tsinaraki, C., Stavrakantonakis, I., Gioldasis, N., Christodoulakis, S.: "The SPARQL2XQuery Interoperability Framework". World Wide Web Journal (WWWJ), 2014.
| [] |
[
"Experience with Heuristics, Benchmarks & Standards for Cylindrical Algebraic Decomposition",
"Experience with Heuristics, Benchmarks & Standards for Cylindrical Algebraic Decomposition"
] | [
"Matthew England [email protected] \nFaculty of Engineering, Environment and Computing\nCoventry University\nCV1 2JHCoventryU.K\n\nDepartment of Computer Science\nUniversity of Bath\nBA2 7AYBathU.K\n",
"James H Davenport [email protected] "
] | [
"Faculty of Engineering, Environment and Computing\nCoventry University\nCV1 2JHCoventryU.K",
"Department of Computer Science\nUniversity of Bath\nBA2 7AYBathU.K"
] | [] | In the paper which inspired the SC 2 project, [E.Ábráham, Building Bridges between Symbolic Computation and Satisfiability Checking, Proc. ISSAC '15, pp. | null | [
"https://arxiv.org/pdf/1609.09269v1.pdf"
] | 9,172,692 | 1609.09269 | 68ddb6ff8f8db36a4610f2e0ab7c6108d1d4b6ce |
Experience with Heuristics, Benchmarks & Standards for Cylindrical Algebraic Decomposition
Matthew England [email protected]
Faculty of Engineering, Environment and Computing
Coventry University
CV1 2JHCoventryU.K
Department of Computer Science
University of Bath
BA2 7AYBathU.K
James H Davenport [email protected]
Experience with Heuristics, Benchmarks & Standards for Cylindrical Algebraic Decomposition
In the paper which inspired the SC 2 project, [E.Ábráham, Building Bridges between Symbolic Computation and Satisfiability Checking, Proc. ISSAC '15, pp.
I. INTRODUCTION
This article is inspired by the SC 2 project 1 , a new initiative to forge a joint community from the existing fields of Symbolic Computation and Satisfiability Checking. For further details on the project we refer the reader to:
• [2] which introduced the two fields, describes some of the challenges and opportunities from working together, and outlines planned project actions; and
• [1] the accompanying paper to an invited talk at ISSAC 2015 which inspired the creation of the new project and community.
Within [1] the author outlines the strengths and weaknesses of the two communities, writing in the introduction:
"Symbolic Computation is strong in providing powerful procedures for sets (conjunctions) of arithmetic constraints, but it does not exploit the achievements in SMT solving for efficiently handling logical fragments, using heuristics and learning to speed-up the search for satisfying solutions."
By heuristic the we mean a practical method to make a choice which is not guaranteed to be optimal. Although Computer Algebra Systems prize correctness and exact solutions there is still much scope for the use of heuristics and statistical methods in symbolic computation algorithms: both for tuning how individual algorithm are run and for selecting a particular algorithm to use in the first place. In regards to the latter point, 1 http://www.sc-square.org/ we note that the solve procedures in Computer Algebra Systems are really meta-algorithms: algorithms to select specific procedures to use based on problem parameters. Although the individual procedures are usually well documented within the scientific literature we are not aware of any publications describing these meta-algorithms.
Another topic where Symbolic Computation might benefit from experience in Satisfiability Checking is standards and benchmarks. Competitions based on these form an integral part of the Satisfiability Checking community, and may have contributed to the remarkable progress made in practical algorithms. The lack of a comparable focus for the Symbolic Computation community was acknowledged in [2]. However, recent experiments have suggested the benchmarks for nonlinear real arithmetic are insufficient and the development of new standards and benchmarks for the joint community has been identified as a key SC 2 project action in [2,Section 3.3].
In the present paper we outline our experience of these issues for a single Symbolic Computation algorithm, Cylindrical Algebraic Decomposition (CAD). The aim of the paper is to instigate the learning process from the Satisfiability Checking community by illustrating the current use of heuristics, benchmarks and standards in (at least one area of) Symbolic Computation and posing some questions. We start with a summary of the necessary background on CAD in Section II, then survey work with heuristics in CAD in Section III and our experience with standards and benchmarks in Section IV. We finish with conclusions and questions in Section V.
II. CYLINDRICAL ALGEBRAIC DECOMPOSITION
A. Definition
A Cylindrical Algebraic Decomposition (CAD) is a decomposition of R n into cells (connected subsets). By algebraic we mean semi-algebraic: i.e. each cell can be described with a finite sequence of polynomial constraints. Finally, the cells are arranged cylindrically, meaning the projections of any pair, with respect to the variable ordering in which the CAD was created, are either equal or disjoint. We assume variables labelled according to their ordering (so the projections considered are (x 1 , . . . , x n ) → (x 1 , . . . , x k ) for k < n) with the highest ordered variable present said to be the main variable. Hence CADs can be represented in a tree-like format branching on the semi-algebraic conditions involving increasing main variable, as in the example below (with all √ indicating the positive root, and the tuples giving a sample point of each cell):

  x < −1:      sample (−2, 0)
  x = −1:      y < 0, sample (−1, −1);  y = 0, sample (−1, 0);  0 < y, sample (−1, 1)
  −1 < x < 1:  y < −√(−x^2 + 1), sample (0, −2);  y = −√(−x^2 + 1), sample (0, −1);
               −√(−x^2 + 1) < y < √(−x^2 + 1), sample (0, 0);  y = +√(−x^2 + 1), sample (0, 1);
               √(−x^2 + 1) < y, sample (0, 2)
  x = 1:       y < 0, sample (1, −1);  y = 0, sample (1, 0);  0 < y, sample (1, 1)
  1 < x:       sample (2, 0)
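Sign-invariance can be checked directly at the sample points. The short illustrative script below (with the cell data transcribed from the example above) confirms that f = x^2 + y^2 − 1 is zero exactly on the four section cells lying on the circle and non-zero elsewhere.

```python
# Evaluate f(x, y) = x^2 + y^2 - 1 at the sample point of each cell of
# the CAD above; the sign at the sample is the sign on the whole cell.
def sign(v):
    return (v > 0) - (v < 0)

f = lambda x, y: x * x + y * y - 1

samples = [(-2, 0),
           (-1, -1), (-1, 0), (-1, 1),
           (0, -2), (0, -1), (0, 0), (0, 1), (0, 2),
           (1, -1), (1, 0), (1, 1),
           (2, 0)]
signs = [sign(f(x, y)) for (x, y) in samples]
# -> [1, 1, 0, 1, 1, 0, -1, 0, 1, 1, 0, 1, 1]: exactly four samples lie
#    on the circle itself (the section cells), and only (0, 0) is inside.
```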
A CAD is produced to be invariant for the input; originally sign-invariant for a set of input polynomials (so on each cell each polynomial is positive, zero or negative). The example above is a sign-invariant CAD for the polynomial x^2 + y^2 − 1 defining the unit circle. More recently CADs have been produced truth-invariant for input Boolean-valued polynomial formulae. A sign-invariant CAD for the polynomials in a formula is also truth-invariant for the formula; but we can often achieve truth-invariance with far fewer cells. For example, suppose we need a CAD truth-invariant for the formula

x^2 + y^2 − 1 = 0 ∧ (x − 1)^2 + y^2 − 1 = 0.

A sign-invariant CAD would require 55 cells (with the full-dimensional ones shown on the left of Figure 1). However, a truth-invariant CAD needs only 7 cells: 2 of which are full-dimensional (as on the right of Figure 1) and 5 more to decompose the line x = 1/2 at the points of intersection.
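The location of those extra cells can be found by hand: subtracting the two circle equations eliminates y and leaves a linear condition on x. A quick exact-arithmetic check (illustrative only):

```python
from fractions import Fraction

# Subtracting (x-1)^2 + y^2 - 1 = 0 from x^2 + y^2 - 1 = 0 leaves
# x^2 - (x-1)^2 = 2x - 1 = 0, so any intersection has x = 1/2.
x = Fraction(1, 2)
y2 = 1 - x * x                     # then y^2 = 3/4, i.e. y = ±sqrt(3)/2

# Both circle equations are satisfied at (1/2, ±sqrt(3)/2):
on_first = x * x + y2 - 1          # -> 0
on_second = (x - 1) ** 2 + y2 - 1  # -> 0
```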
B. Computation
CAD construction usually involves two phases. The first, projection, applies operators recursively on polynomials, starting with the input. Each application of the operator produces a set with one less variable; together these sets define the projection polynomials. These are used in the second phase, lifting, to build CADs of increasing dimension. First a CAD of R 1 is built by splitting on the real roots of the univariate polynomials (those in x 1 only). Next, a CAD of R 2 is built by repeating the process over each cell in R 1 , using the bivariate polynomials in (x 1 , x 2 ) evaluated at a sample point of the cell in R 1 ; and the process is repeated until a CAD of R n is produced. We call the cells where a polynomial vanishes sections and those regions in-between sectors, which together form the stack over the cell. In each lift we extrapolate the conclusions drawn from working at a sample point to the whole cell, which requires validity theorems for the projection operator used. CAD cells are represented by at least a sample point (as on the left of the example above) and an index: a list of positive integers, with each integer indicating the section or sector of the stack each variable lies within (in reference to the ordered roots of the projection polynomials). Some implementations will also encode the full algebraic description within each cell. CAD was originally introduced by Collins for quantifier elimination (QE) in real closed fields [4]. Although CAD construction has complexity doubly exponential in the number of variables [26], applications range from parametric optimisation [34] and epidemic modelling [16], to reasoning with multi-valued functions [24] and the derivation of optimal numerical schemes [33]. There have been many improvements to Collins' original approach, most notably refinements to the projection operators [50], [10], [53]; early termination of lifting [22], [62]; and symbolic-numeric schemes [58], [42].
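The base step of lifting (the CAD of R 1) can be sketched concretely. The routine below is an illustrative Python sketch under an assumed representation (the distinct real roots of the univariate projection polynomials are given explicitly as rationals); it is not any system's actual implementation. It splits the real line at the roots into alternating sectors and sections, attaching a sample point to each cell.

```python
from fractions import Fraction

def stack_over_line(roots):
    """Decompose R^1 at the given distinct real roots into alternating
    sectors and sections, attaching a sample point to each cell.
    Returns (description, sample) pairs, ordered left to right."""
    roots = sorted(roots)
    if not roots:
        return [("true", Fraction(0))]           # no roots: one cell, all of R
    cells = [("x < %s" % roots[0], roots[0] - 1)]
    for i, r in enumerate(roots):
        cells.append(("x = %s" % r, r))          # section: a root itself
        if i + 1 < len(roots):                   # sector between two roots
            cells.append(("%s < x < %s" % (r, roots[i + 1]),
                          (r + roots[i + 1]) / 2))
    cells.append(("%s < x" % roots[-1], roots[-1] + 1))
    return cells

# Base cells for the unit-circle example: the univariate projection
# polynomial has real roots -1 and 1, giving five cells with samples
# -2, -1, 0, 1, 2 (matching the example above).
base_cells = stack_over_line([Fraction(-1), Fraction(1)])
```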
Some recent advances include dealing with multiple formulae [7], [8]; local projection [14], [60]; and decompositions via complex space [21], [6]. For a more detailed introduction to CAD see e.g. [8].
III. HEURISTIC USE FOR CAD
A. Choosing the variable ordering
The most well known choice required for CAD is that of the variable ordering, which the cylindricity is defined with respect to. This determines the order in which variables are eliminated during projection and the subspaces through which CADs are built incrementally during lifting. When using CAD for QE we must project variables in the order they are quantified, but we are free to project the other variables in any order (and to change the order within quantifier blocks).
The variable ordering used can have a great effect on the output produced. For example, let f := (x − 1)(y^2 + 1) − 1 and consider the minimal sign-invariant CAD in each variable ordering, as visualised in Figure 2. In each case we project down, with the left figure projecting x first and the right y. In this toy example the "wrong" choice more than doubles the number of cells, while numerous experiments have shown that for larger examples the choice can determine whether a problem is tractable (see for example the experimental results in [8]). At the extreme end of this observation, [15] presented a class of examples for which one variable ordering gives a CAD with a number of cells constant in the number of variables, while another gives a number doubly exponential. A number of heuristics have been developed to help make this choice:
Brown: Eliminate first the variable with lowest overall degree in the input; break ties first by the lowest (maximum) total degree of the terms in which the variable occurs, and then by the smallest number of terms containing it. Cheap, as it uses only simple features of the input polynomials.
sotd: For all admissible orderings, calculate the projection set and choose the one with smallest sum of total degrees for each of the monomials in each of the polynomials [27]. Performs well but more costly than Brown. A greedy alternative is to allocate one variable of the ordering at a time by projecting each unallocated variable and choosing the one which increases the sotd least.
ndrr: As with sotd, construct the full projection set and choose the ordering whose set has the least number of distinct real roots of the univariate polynomials within [9]. Even more costly than sotd, but sensitive to the real geometry and shown to assist with examples where the sotd heuristic failed.
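As a concrete illustration of the sotd measure, consider the following sketch. The encoding is an assumption made for the example (each polynomial is a list of monomials, each monomial a tuple of exponents); it is not how any CAD implementation actually represents polynomials.

```python
# sotd of a polynomial set: the sum of the total degrees of every
# monomial in every polynomial.  Polynomials are encoded as lists of
# exponent tuples (one tuple per monomial) -- an assumed toy encoding.
def sotd(polys):
    return sum(sum(mono) for p in polys for mono in p)

# f = (x - 1)(y^2 + 1) - 1 expands to x*y^2 + x - y^2 - 2, so its
# monomials have exponent vectors (1,2), (1,0), (0,2), (0,0):
f = [(1, 2), (1, 0), (0, 2), (0, 0)]
```

Here sotd([f]) is 3 + 1 + 2 + 0 = 6; the heuristic computes this measure for the full projection set under each admissible ordering and picks the smallest.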
The papers cited each contain experimental results demonstrating their worth. In [39] the results of an experiment comparing the 3 on a data set of over 7000 examples were reported. The experiments showed Brown selecting the best ordering (as measured by lowest cell count) more often than the others. However a key finding was that there were substantial subsets of examples for which each heuristic did best. Further, when calculating the saving made by the heuristics (compared to the average cell count of the different orderings) the authors of [39] found that sotd actually made a greater saving for quantified problems on average (i.e. while Brown was superior on more examples, sotd was superior on examples with greater savings on offer). Together, this meant that recommending one heuristic at the expense of all others was not possible.
If the minimal cell count is a priority then one further approach, suggested in [64] is to compute the full dimensional cells for each possible ordering and pick the ordering with the minimum to derive a full CAD. Computing full dimensional cells avoids any work with algebraic numbers and so is not as costly as may be thought, although it does require more computation than ndrr. It was noted in [64] that the fulldimensional cell calculations could be done in parallel with the first to finish extended to a full CAD and the rest discarded.
B. Choices with equational constraints 1) Equational constraints: There are several ways in which we can modify CAD constriction to achieve truth-invariance (rather than sign-invariance) including refining sign-invariant CAD and truncating lifting once the truth-value is determined. A particularly fruitful approach is to take advantage of the Booloean structure of a formula through the identification of Equational Constraints (ECs): polynomial equations logically implied by a formula. Reduced projection operators (using a subset of the usual polynomials) have been proven valid for use when an EC is present with corresponding main variable [51], [52]. Results in [29], [31] suggest that the double exponent in the complexity bound decreases by 1 for each EC used, although this is restricted to primitive ECs meaning the classic lower-bound examples of [26], [15] are not violated [25].
These reduced operators can only use one EC for each projection, so when there are multiple we must make a designation. Note that ECs need not appear explicitly as atoms (formulae with no logical connectives) in the input formula, but could instead be implicit. For example, the resultant of any two ECs in the same main variable is itself an EC (not containing that variable) [52]. Propagating ECs in this way allows for the maximal use of the reduced operators. However, it can require multiple choices of which EC to designate at each projection. Section 4 of [29] described such an example where the wrong designation could add tens of thousands of cells to the final output, making it more than 15 times bigger.
2) Making the designation: In [9] the authors experimented in using the sotd and ndrr measures on this question (the Brown heuristic was not applicable since it acted only on the input polynomials). In general they were useful in identifying the optimal designation, although both could be misled. As described above these heuristics essentially complete the projection stage of the algorithm for each ordering, which although minimal in comparison to the lifting stage, is likely far more computation than would normally be undertaken by a heuristic. This becomes an issue when the number of choices grows. Further, for these experiments a fixed variable ordering was used, and the question of addressing the two choices together (when the number of possibilities multiplies) has not been addressed.
3) Designation in TTICAD:
In [7], [8] a truth-table invariant CAD (TTICAD) is defined as a CAD on whose cells the truth-table for a set of formulae is invariant. A new operator was presented which takes advantage of ECs present in the separate formulae (with [7] developing the theory in the case where all had an EC and [8] extending to the general case). The operator essentially recognises when to consider the interaction of polynomials from different formulae. If an individual formula has multiple ECs then, as above, we must choose just one to designate for each projection.
C. Choices for CAD by Regular Chains
Recently an alternative CAD computational scheme has been proposed where, instead of projection and lifting, we: first cylindrically decompose complex space according to whether polynomials are zero or not using the theory of triangular decomposition by regular chains; and then refine to a CAD of real space. This was first proposed in [21] with an incremental version described in [20] and an extension to TTICAD in [6]. All versions are implemented within the RegularChains Library 2 with a summary in [19].
Most of the heuristics outlined earlier in this section are not directly applicable to the CAD by Regular Chains computation scheme (as there is no "cheap" projection phase to derive information from). We outline some of the new heuristics developed for the choices this scheme involves.
1) Variable order in TTICAD by Regular Chains:
This problem was considered in [30]. Two existing heuristics were compared: that of Brown introduced in Section III-A and another, denoted Triangular, already in use for other algorithms in the RegularChains Library. Triangular chooses first the variable with lowest degree occurring in the input; then breaks ties by choosing variables for which leading coefficients have lowest total degree; and finally sum of degrees in input. In addition the heuristics sotd and ndrr discussed above were used (even though the sets of projection polynomials built were not explicitly used later). The experiments found sotd to make the best choices, but due to its higher costs the Triangular heuristic was the most efficient choice overall. However, as with the experiments discussed in Section III-A, the example set could be subdivided into groups where different heuristics were dominant. Further experimentation and illustrative examples in [30] led to the development of a new heuristic (composed from parts of the others) tailored to the variable ordering choice for TTICAD by Regular Chains [6].
2) Constraint order in TTICAD by Regular Chains:
The latest CAD algorithm within the RegularChains Library [20] processes constraints incrementally when building the complex cylindrical decomposition and thus are sensitive to the order in which constraints are considered. Further, in the case of TTICAD we have the extra question of what order to consider the formulae in. These issues were studied in [28] which considered the following example.
Assume the ordering x ≺ y and consider
f1 := x^2 + y^2 − 1,   f2 := 2y^2 − x,   f3 := (x − 5)^2 + (y − 1)^2 − 1,
φ1 := f1 = 0 ∧ f2 = 0,   φ2 := f3 = 0.
The polynomials are graphed within the plots of Figure 3. If we want to study the truth of φ 1 and φ 2 (or say a parent formula φ 1 ∨ φ 2 ) we need a TTICAD to take advantage of the ECs. There are two possible orders for the formulae and two possible to consider the constraints within φ 1 . Hence 4 possible ways we could calculate a TTICAD by Regular Chains. Below we show how many cells are produced by proceeding in the orders indicated, with the two dimensional cells shown in Figure 3.
• φ 1 → φ 2 and f 1 → f 2 : 37 cells.
• φ 1 → φ 2 and f 2 → f 1 : 81 cells.
• φ 2 → φ 1 and f 1 → f 2 : 25 cells.
• φ 2 → φ 1 and f 2 → f 1 : 43 cells.
No previously discussed heuristic was applicable to this problem. For choosing which EC to process first in a given formula an argument could be made for measuring a set of polynomials shown to be rendered sign-invariant by the algorithm (leading to Heuristic 1 in [28]). The only heuristic derived for the other choices was to measure the sum of degrees of the polynomials in the complex cylindrical decomposition created. As with the ndrr and full dimensional cells heuristics above; this requires more computation than is ideal (although it is the real root refinement that makes up the bulk of the CAD by Regular Chains computation time).
D. Gröbner Basis preconditioning
A Gröbner Basis G is a particular generating set of an ideal I defined with respect to a monomial ordering [17]. One definition is that the ideal generated by the leading terms of I is generated by the leading terms of G. Gröbner Bases (GB) are used extensively to study ideals and the polynomials that define them as they allow properties such as dimension and number of zeros to be easily deduced. Although like CAD the calculation of GB is doubly exponential in the worst case [48], GB computation is now mostly trivial for any problem on which CAD construction is tractable.
It was first observed in [18] that replacing a conjunction of polynomial equalities in a CAD problem by their GB (logically equivalent) could be useful for the CAD computation. Of the ten test problems studied: 6 were improved by the GB preconditioning (speed-up varying from 2- to 1700-fold); 1 problem resulted in a 10-fold slow-down; 1 timed out when GB preconditioning was applied, but would complete without; and 2 were intractable both for CAD construction alone and the GB preconditioning step. The problem was revisited in [66]. As expected, there had been a big decrease in the computation timings, especially for the GB. However, it was still the case that 2 of the problems were hindered by GB preconditioning.
The key conclusion is that GB preconditioning will on average benefit CAD (sometimes very significantly) but could on occasion hinder it (to the point of making a tractable CAD problem intractable). We are yet to understand why this occurs, but the authors of [66] did develop a metric to predict when it will. They defined the Total Number of Indeterminates (TNoI) of a set of polynomials A as
TNoI(A) = Σ_{a∈A} NoI(a)
where NoI(a) is the number of indeterminates in a polynomial a. The heuristic is to build a CAD for the preconditioned polynomials only if the TNoI decreases. For most of their test problems the heuristic made the correct choice, but there were examples to the contrary.
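A minimal sketch of the TNoI measure and the resulting decision rule, using SymPy to count indeterminates (the function names are ours, not from [66], and the second polynomial set below is a hypothetical GB used only to exercise the heuristic):

```python
from sympy import symbols

x, y, z = symbols('x y z')

def noi(poly):
    """NoI(a): the number of indeterminates appearing in polynomial a."""
    return len(poly.free_symbols)

def tnoi(polys):
    """TNoI(A): the sum of NoI over the set of polynomials A."""
    return sum(noi(p) for p in polys)

def prefer_gb(original, gb):
    """Heuristic from [66]: use the GB-preconditioned input only if TNoI decreases."""
    return tnoi(gb) < tnoi(original)

A = [x*y + z, x**2 - 1]   # TNoI = 3 + 1 = 4
B = [x - 1, y + z]        # hypothetical preconditioned set, TNoI = 1 + 2 = 3
print(prefer_gb(A, B))    # True: the heuristic recommends using the preconditioned set
```

This captures why the heuristic is cheap: it inspects only which variables occur in each polynomial, not degrees or coefficients.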
E. Use of machine learning
Finally, we note recent experiments using machine learning, specifically support vector machines (see for example [56]), to make choices for CAD construction:
• In [40] the authors used an SVM to choose between the three heuristics for CAD variable ordering outlined in Section III-A. Simple problem features were selected (e.g. degrees, proportion of monomials containing each variable) and parameter optimisation was applied to maximise Matthews' Correlation Coefficient [47]. Over 7000 examples were studied, and over the 1721 reserved for the test set the machine learned choice was found to outperform each heuristic individually on average.
• In [38] the authors used an SVM to predict when it will be useful to precondition a CAD problem with GB (see Section III-D). The features used were from both the original input and the GB: so the study was answering the question "should we use this GB" rather than "should we compute it" (relevant since the GB computation was trivial for the problem set involved). The machine-learned choice outperformed both using GB universally and the human-defined TNoI heuristic.
We also note that a recent paper [45] applied a support vector machine (seeded with the problem features from [40]) to suggest the order in which QE should be performed on subformulae of a non-prenex formula. Experimental results on more than 2,000 non-trivial examples showed that machine learning performed better than the human-derived heuristics, following appropriate parameter optimisation.
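A sketch of the kind of feature extraction such experiments rely on (the actual feature set of [40] is larger; the two computed here, overall maximum degree and the proportion of monomials containing each variable, are among those mentioned above, and the helper name is ours):

```python
from sympy import Poly, symbols

x, y = symbols('x y')

def features(polys, variables):
    """Simple problem features of the kind fed to an SVM:
    the overall maximum total degree, followed by, for each variable,
    the proportion of monomials that contain it."""
    ps = [Poly(p, *variables) for p in polys]
    monomials = [m for p in ps for m in p.monoms()]
    max_deg = max(sum(m) for m in monomials)
    props = [sum(1 for m in monomials if m[i] > 0) / len(monomials)
             for i in range(len(variables))]
    return [max_deg] + props

F = features([x**2*y + 1, x + y**3], [x, y])
print(F)  # [3, 0.5, 0.5]
```

Feature vectors of this shape (one per problem) would then be used to train and test a classifier such as a support vector machine.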
IV. STANDARDS AND BENCHMARKS
A. History of benchmarking in computer algebra
The Computer Algebra community has occasionally recognised the importance of benchmarks. The PoSSo and FRISCO projects aimed to do this for polynomial systems and symbolic-numeric problems respectively in the 1990s. PoSSo, with which the second author was involved, collected a series of benchmark examples for GB, and a descendant of these can still be found online 3 . However, this does not appear to be maintained, and the polynomials are not stored in a machine-readable form. Polynomials from this list still crop up in various papers, but there is no systematic reference, and it is not clear whether people are really referring to the same example. Several of the examples are families, which is good, but means that a benchmark has to contain specific instances. The PoSSo project did its best to do "level playing field" comparisons, but at the time different implementations ran on different hardware / operating systems, meaning this was not really achievable. The environment is much simpler these days, and it would be feasible to organise true contests.
The topic of benchmarking in computer algebra has most recently been taken up by the SymbolicData Project 4 [35], which is beginning to build a database of examples in XML format (although currently without any examples suitable for CAD). The software described in [36] was built to translate problems in that database into executable code for various computer algebra systems. The authors of [36] discuss the peculiarities of computer algebra that make benchmarking particularly difficult, including the fact that results of computations need not be unique and that the evaluation of the correctness of an output may not be trivial (or may be the subject of research itself).
A final point to note is that while SAT / SMT-solvers have only a few clear possible answers (e.g. sat, unsat, unknown), in computer algebra there is also the quality of the result to consider (e.g., the size of the quantifier-free formula produced). With CAD the output size is usually correlated to computation time, but this is not always the case with other algorithms.
B. The present authors' recent experience with CAD
In our work we have taken various approaches to experimenting with CAD:
(a) Use examples from the existing literature.
Examples include [44] and [23]. The latter were derived from applications of CAD while the former seem to be a collection of geometric problems invented by the authors. Other papers that have contributed such test problems include [5], [49], [11], [27]. We wonder whether historic repetition within the literature is alone a strong enough reason to be a benchmark?
In [65], [61] an attempt was made to gather together all those test examples in the literature for CAD, along with references for their first appearance in the literature and encodings for some computer algebra systems.
(b) Supplement existing examples with modified versions suitable for demonstrating the feature in question.
In [7], [8] the new TTICAD algorithm that was the subject of the paper offered an improvement on the state of the art for examples consisting of multiple formulae, or a single formula in disjunctive normal form. Such examples had not been the topic of any CAD papers before and no existing examples were capable of demonstrating the savings on offer. The experiments produced in these papers were made up of two sets:
• Formulae produced to describe the branch cuts of multivalued functions in a proposed simplification formula [30], with CAD to be applied so that the complex domain could be decomposed into regions where the functions were univariate, and thus the formula applicable or not.
• Formulae produced by adapting the logical connectors in previous examples from the literature in [65] so that conjunctions became disjunctions.
Clearly the former set is of great interest as it represents a real application of the algorithms; but its problems all conform to a single structure and so are arguably too uniform to draw broad conclusions from alone. The second set was produced to be somehow close to the accepted test examples of the literature, but whether this is any better than inventing new examples from scratch is debatable.
(c) Derive new sets of random examples.
A recent experiment using machine learning [40], [38] (see Section III-E) exposed a shortcoming in the above techniques. To train the SVMs hundreds of examples are required (with hundreds more then needed for validation and testing). The dataset from the literature in [65] did not contain nearly enough examples, and while the datasets discussed in the next section were sufficient for the first experiment in [40] they proved too uniform for [38]. We were left with no choice but to generate large quantities of new examples, which we did using the random polynomial generator in MAPLE. We had applied this technique also in [28], [30], receiving positive feedback from reviewers for the technique; but the initial reviews of [38] were all negative on the use of random data. It seems the appropriateness of this technique varies with the community (conference) in question. We opine that had we used data from the bank of MetiTarski examples discussed in the next section, then reviewers may have praised the focus on examples from a real application, even though MetiTarski itself derives examples for benchmarks using random polynomials.
C. Sources of large benchmarks sets
We note some other sources of large sets of benchmark problems that represent real applications of CAD:
• MetiTarski 5 [3], [54] is an automatic theorem prover designed to prove theorems involving real-valued special functions (such as log, exp, sin, cos and sqrt). In general this theory is undecidable, but MetiTarski is able to solve many problems by applying real polynomial bounds and then using real quantifier elimination tools like CAD. Applications of MetiTarski in turn derive examples for CAD.
• The NRA (non-linear real arithmetic) category of the SMT-LIB library 6 which according to [43] consists mostly of problems originating from attempts to prove termination of term-rewrite systems.
These two data sets were included in the nlsat Benchmark Set 7 produced to evaluate the work in [43]. This also included verification conditions from KeYmaera [55] and parametrized generalizations of the problem from [37]. Together this gave a dataset of many thousands of problems. However, we note that the problems come from a small number of classes and may have some hidden uniformity.
As mentioned above, the nlsat dataset was unsuitable for our machine learning experiment in [38]. Every single problem within it that had more than one equality was aided by GB preconditioning; in fact a great many simply had a GB of {1}, indicating the problem had no solution. Previous experiments on small example sets suggested GB preconditioning sometimes harms CAD computation, and this was verified by analysis of a large randomly generated dataset in [38]. Thus while the nlsat dataset is an excellent starting point, it needs to be expanded to be less uniform.
5 https://www.cl.cam.ac.uk/~lp15/papers/Arith
6 http://smtlib.cs.uiowa.edu/
7 http://cs.nyu.edu/~dejan/nonlinear/
We finish this section by noting one possible source of examples for the future.
• The Todai Robot Project 8 [46] is a Japanese AI project that aims to have an artificial intelligence pass the entrance examination for the University of Tokyo by 2021. A majority of questions on the Mathematics exam can be resolved by real quantifier elimination, with a variety of techniques employed [41]. A key difficulty is that the natural language processing of a question derives a formula of far greater complexity than the human-derived equivalent. This process yields a large bank of CAD problems, as discussed in [45] for example. The authors of [45] told us there are plans to make this data set public.
V. CONCLUSIONS AND QUESTIONS
After surveying the work in Section III we see that several approaches to the creation of heuristics have been taken: ranging from human-identified algebraic features, justified by mathematical arguments and observations to different extents, to machine-learned choices using a support vector machine. We are interested to hear how these compare with the heuristics used in SAT-solvers and what lessons can be learned.
We can identify at least two areas where CAD is in need of further heuristic development.
• How best to take decisions in tandem: The work surveyed all considered choices to be made for CAD in isolation (assuming the other choices had already been fixed). Of course, in reality this may not be the case. We must decide which decisions to prioritise; how heuristics can be combined; and how the combinatorial blow-up of decisions can be contained. Does the SAT-solving community have experience of similar issues?
There are many implementations of CAD including: the dedicated command line program Qepcad [12]; Mathematica [58], [59], where CAD is not available directly but is used as a subroutine for quantifier elimination; the Redlog package for REDUCE [57]; and 3 different MAPLE libraries: the RegularChains Library (see Section III-C); SyNRAC [67] (now part of the Todai Robot project); and our own ProjectionCAD module [32].
• How to choose between different implementations? Each implementation includes unique pieces of theory and features and excels on different examples. Ideally, we would have a single implementation which encompasses all recent advances. A more manageable step may be an overarching MAPLE command to choose between the 3 packages there. SMT-solvers are designed to use a variety of different theory solvers and how they choose between these may offer valuable lessons here.
Surveying Section IV raises a number of questions about how benchmark sets should be produced:
• How best to generate large numbers of examples which are not internally uniform?
• How important is it that the benchmarks come from current applications?
• How important is it that the benchmarks have historically been used in the literature?
• Are randomly generated examples a fairer way to evaluate the software, or irrelevant as too far removed from applications?
The SAT / SMT community possesses a unified, large and growing set of benchmarks in the SMT-LIB library, and so we may be able to extrapolate lessons from it. However, as noted above, this library may be too uniform [38], and comments from the anonymous referees suggest that this and other criticisms are already a topic of discussion in the SMT community.
Fig. 1. Example visualising sign and truth-invariant CADs
Fig. 2. CADs under different variable orderings
Fig. 3. Visualisations of the four TTICADs which can be built using the Regular Chains Library for the example in this section. The figures on the top have φ1 → φ2 and those on the bottom φ2 → φ1. The figures on the left have f1 → f2 and those on the right f2 → f1.
[…] defined a class of examples where changing variable ordering would change the number of cells required from constant to doubly exponential in the number of variables. Several heuristics have been developed to choose the variable ordering, including:
Brown: Use the following criteria, starting with the first and breaking ties with successive ones:
(1) Eliminate variable if lowest overall degree.
(2) Eliminate variable if lowest (maximum) total degree in terms in which it occurs.
(3) Eliminate variable if smallest number of terms contains it.
Suggested by Brown in [13, Section 5.2].
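Brown's criteria amount to sorting the variables by a triple of cheap measures, compared lexicographically. A SymPy sketch under our own naming (the function names and the convention that the first variable returned is eliminated first are our assumptions, not from [13]):

```python
from sympy import Poly, symbols

x, y, z = symbols('x y z')

def brown_key(v, polys, variables):
    """Triple of Brown's criteria for variable v over the polynomial set."""
    i = variables.index(v)
    terms = [m for p in polys for m in Poly(p, *variables).monoms()]
    overall_deg = max(m[i] for m in terms)                # criterion (1)
    with_v = [m for m in terms if m[i] > 0]
    max_tdeg = max((sum(m) for m in with_v), default=0)   # criterion (2)
    num_terms = len(with_v)                               # criterion (3)
    return (overall_deg, max_tdeg, num_terms)

def brown_order(polys, variables):
    """Eliminate first the variable minimising the criteria triple;
    ties on (1) are broken by (2), then (3), via tuple comparison."""
    return sorted(variables, key=lambda v: brown_key(v, polys, variables))

order = brown_order([x**2*y + 1, y**3 + z], [x, y, z])
print(order)  # [z, x, y]
```

Here z has the lowest overall degree (1) so it is eliminated first, then x (degree 2), then y (degree 3).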
2 www.regularchains.org
3 http://www-sop.inria.fr/saga/POL/
4 www.symbolicdata.org
8 http://21robot.org
Acknowledgements
Thanks to our collaborators on the work surveyed here: Russell Bradford, James Bridge, Changbo Chen, Zongyan Huang, Scott McCallum, Marc Moreno Maza, Lawrence Paulson, and David Wilson. Thanks also to the anonymous referees for their comments which improved the paper. Most of the work surveyed here was supported by EPSRC grant EP/J003247/1. The authors are now supported by EU H2020-FETOPEN-2016-2017-CSA project SC 2 (712689).
References
[1] E. Ábráham. Building bridges between symbolic computation and satisfiability checking. In Proceedings of the 2015 International Symposium on Symbolic and Algebraic Computation, ISSAC '15, pages 1-6. ACM, 2015.
[2] E. Ábrahám, J. Abbott, B. Becker, A.M. Bigatti, M. Brain, B. Buchberger, A. Cimatti, J.H. Davenport, M. England, P. Fontaine, S. Forrest, A. Griggio, D. Kroening, W.M. Seiler, and T. Sturm. SC 2 : Satisfiability checking meets symbolic computation. In M. Kohlhase, M. Johansson, B. Miller, L. de Moura, and F. Tompa, editors, Intelligent Computer Mathematics: Proceedings CICM 2016, LNCS 9791, pages 28-43. Springer, 2016.
[3] B. Akbarpour and L.C. Paulson. MetiTarski: An automatic theorem prover for real-valued special functions. Journal of Automated Reasoning, 44(3):175-205, 2010.
[4] D. Arnon, G.E. Collins, and S. McCallum. Cylindrical algebraic decomposition I: The basic algorithm. SIAM Journal of Computing, 13:865-877, 1984.
[5] D.S. Arnon. A cluster-based cylindrical algebraic decomposition algorithm. Journal of Symbolic Computation, 5(1-2):189-212, 1988.
[6] R. Bradford, C. Chen, J.H. Davenport, M. England, M. Moreno Maza, and D. Wilson. Truth table invariant cylindrical algebraic decomposition by regular chains. In V.P. Gerdt, W. Koepf, W.M. Seiler, and E.V. Vorozhtsov, editors, Computer Algebra in Scientific Computing, LNCS 8660, pages 44-58. Springer International Publishing, 2014.
[7] R. Bradford, J.H. Davenport, M. England, S. McCallum, and D. Wilson. Cylindrical algebraic decompositions for boolean combinations. In Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, ISSAC '13, pages 125-132. ACM, 2013.
[8] R. Bradford, J.H. Davenport, M. England, S. McCallum, and D. Wilson. Truth table invariant cylindrical algebraic decomposition. Journal of Symbolic Computation, 76:1-35, 2015.
[9] R. Bradford, J.H. Davenport, M. England, and D. Wilson. Optimising problem formulations for cylindrical algebraic decomposition. In J. Carette, D. Aspinall, C. Lange, P. Sojka, and W. Windsteiger, editors, Intelligent Computer Mathematics, LNCS 7961, pages 19-34. Springer Berlin Heidelberg, 2013.
[10] C.W. Brown. Improved projection for cylindrical algebraic decomposition. Journal of Symbolic Computation, 32(5):447-465, 2001.
[11] C.W. Brown. Simple CAD construction and its applications. Journal of Symbolic Computation, 31(5):521-547, 2001.
[12] C.W. Brown. An overview of QEPCAD B: a tool for real quantifier elimination and formula simplification. Journal of Japan Society for Symbolic and Algebraic Computation, 10(1):13-22, 2003.
[13] C.W. Brown. Companion to the tutorial: Cylindrical algebraic decomposition, presented at ISSAC '04. http://www.usna.edu/Users/cs/wcbrown/research/ISSAC04/handout.pdf, 2004.
[14] C.W. Brown. Constructing a single open cell in a cylindrical algebraic decomposition. In Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, ISSAC '13, pages 133-140. ACM, 2013.
[15] C.W. Brown and J.H. Davenport. The complexity of quantifier elimination and cylindrical algebraic decomposition. In Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation, ISSAC '07, pages 54-60. ACM, 2007.
[16] C.W. Brown, M. El Kahoui, D. Novotni, and A. Weber. Algorithmic methods for investigating equilibria in epidemic modelling. Journal of Symbolic Computation, 41:1157-1173, 2006.
[17] B. Buchberger. Bruno Buchberger's PhD thesis (1965): An algorithm for finding the basis elements of the residue class ring of a zero dimensional polynomial ideal. Journal of Symbolic Computation, 41(3-4):475-511, 2006.
[18] B. Buchberger and H. Hong. Speeding up quantifier elimination by Gröbner bases. Technical report 91-06, RISC, Johannes Kepler University, 1991.
[19] C. Chen and M. Moreno Maza. Cylindrical algebraic decomposition in the RegularChains library. In H. Hong and C. Yap, editors, Mathematical Software - ICMS 2014, LNCS 8592, pages 425-433. Springer Heidelberg, 2014.
[20] C. Chen and M. Moreno Maza. An incremental algorithm for computing cylindrical algebraic decompositions. In R. Feng, W. Lee, and Y. Sato, editors, Computer Mathematics, pages 199-221. Springer Berlin Heidelberg, 2014.
[21] C. Chen, M. Moreno Maza, B. Xia, and L. Yang. Computing cylindrical algebraic decomposition via triangular decomposition. In Proceedings of the 2009 International Symposium on Symbolic and Algebraic Computation, ISSAC '09, pages 95-102. ACM, 2009.
[22] G.E. Collins and H. Hong. Partial cylindrical algebraic decomposition for quantifier elimination. Journal of Symbolic Computation, 12:299-328, 1991.
[23] J.H. Davenport. A "Piano-Movers" Problem. SIGSAM Bull., 20(1-2):15-17, 1986.
[24] J.H. Davenport, R. Bradford, M. England, and D. Wilson. Program verification in the presence of complex numbers, functions with branch cuts etc. In 14th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC '12, pages 83-88. IEEE, 2012.
[25] J.H. Davenport and M. England. Need polynomial systems be doubly exponential? In G.-M. Greuel, T. Koch, P. Paule, and A. Sommese, editors, Mathematical Software - Proceedings of ICMS 2016, LNCS 9725, pages 157-164. Springer International Publishing, 2016.
[26] J.H. Davenport and J. Heintz. Real quantifier elimination is doubly exponential. Journal of Symbolic Computation, 5(1-2):29-35, 1988.
[27] A. Dolzmann, A. Seidl, and T. Sturm. Efficient projection orders for CAD. In Proceedings of the 2004 International Symposium on Symbolic and Algebraic Computation, ISSAC '04, pages 111-118. ACM, 2004.
[28] M. England, R. Bradford, C. Chen, J.H. Davenport, M. Moreno Maza, and D. Wilson. Problem formulation for truth-table invariant cylindrical algebraic decomposition by incremental triangular decomposition. In S.M. Watt, J.H. Davenport, A.P. Sexton, P. Sojka, and J. Urban, editors, Intelligent Computer Mathematics, LNCS 8543, pages 45-60. Springer International, 2014.
[29] M. England, R. Bradford, and J.H. Davenport. Improving the use of equational constraints in cylindrical algebraic decomposition. In Proceedings of the 2015 International Symposium on Symbolic and Algebraic Computation, ISSAC '15, pages 165-172. ACM, 2015.
[30] M. England, R. Bradford, J.H. Davenport, and D. Wilson. Choosing a variable ordering for truth-table invariant cylindrical algebraic decomposition by incremental triangular decomposition. In H. Hong and C. Yap, editors, Mathematical Software - ICMS 2014, LNCS 8592, pages 450-457. Springer Heidelberg, 2014.
[31] M. England and J.H. Davenport. The complexity of cylindrical algebraic decomposition with respect to polynomial degree. To appear in: Proceedings CASC 2016 (Springer LNCS), 2016.
[32] M. England, D. Wilson, R. Bradford, and J.H. Davenport. Using the Regular Chains Library to build cylindrical algebraic decompositions by projecting and lifting. In H. Hong and C. Yap, editors, Mathematical Software - ICMS 2014, LNCS 8592, pages 458-465. Springer Heidelberg, 2014.
[33] M. Erascu and H. Hong. Synthesis of optimal numerical algorithms using real quantifier elimination (Case Study: Square root computation). In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, ISSAC '14, pages 162-169. ACM, 2014.
[34] I.A. Fotiou, P.A. Parrilo, and M. Morari. Nonlinear parametric optimization using cylindrical algebraic decomposition. In Decision and Control, 2005 European Control Conference, CDC-ECC '05, pages 3735-3740, 2005.
[35] H.G. Graebe, A. Nareike, and S. Johanning. The SymbolicData project: Towards a computer algebra social network. In M. England, J.H. Davenport, A. Kohlhase, M. Kohlhase, P. Libbrecht, W. Neuper, P. Quaresma, A.P. Sexton, P. Sojka, J. Urban, and S.M. Watt, editors, Joint Proceedings of the MathUI, OpenMath and ThEdu Workshops and Work in Progress track at CICM, number 1186 in CEUR Workshop Proceedings, 2014.
[36] A. Heinle and V. Levandovskyy. The SDEval benchmarking toolkit. ACM Communications in Computer Algebra, 49(1):1-9, 2015.
[37] H. Hong. Comparison of several decision algorithms for the existential theory of the reals. Technical report, RISC, Linz, 1991.
[38] Z. Huang, M. England, J.H. Davenport, and L. Paulson. Using machine learning to decide when to precondition cylindrical algebraic decomposition with Groebner bases. In 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC '16. IEEE, 2016. Preprint: arXiv 1608.04219.
[39] Z. Huang, M. England, D. Wilson, J.H. Davenport, and L. Paulson. A comparison of three heuristics to choose the variable ordering for CAD. ACM Communications in Computer Algebra, 48(3):121-123, 2014.
[40] Z. Huang, M. England, D. Wilson, J.H. Davenport, L. Paulson, and J. Bridge. Applying machine learning to the problem of choosing a heuristic to select the variable ordering for cylindrical algebraic decomposition. In S.M. Watt, J.H. Davenport, A.P. Sexton, P. Sojka, and J. Urban, editors, Intelligent Computer Mathematics, LNAI 8543, pages 92-107. Springer International, 2014.
[41] H. Iwane, T. Matsuzaki, N.H. Arai, and H. Anai. Automated natural language geometry math problem solving by real quantifier elimination. In Proceedings of the 10th International Workshop on Automated Deduction in Geometry, ADG '14, pages 75-84, 2014.
[42] H. Iwane, H. Yanami, H. Anai, and K. Yokoyama. An effective implementation of a symbolic-numeric cylindrical algebraic decomposition for quantifier elimination. In Proceedings of the 2009 Conference on Symbolic Numeric Computation, SNC '09, pages 55-64, 2009.
[43] D. Jovanovic and L. de Moura. Solving non-linear arithmetic. In B. Gramlich, D. Miller, and U. Sattler, editors, Automated Reasoning: 6th International Joint Conference (IJCAR), LNCS 7364, pages 339-354. Springer, 2012.
[44] W. Kahan. Branch cuts for complex elementary functions. In A. Iserles and M.J.D. Powell, editors, Proceedings The State of the Art in Numerical Analysis, pages 165-211. Clarendon Press, 1987.
[45] M. Kobayashi, H. Iwane, T. Matsuzaki, and H. Anai. Efficient subformula orders for real quantifier elimination of non-prenex formulas. In S.I. Kotsireas, M.S. Rump, and K.C. Yap, editors, Mathematical Aspects of Computer and Information Sciences (MACIS '15), LNCS 9582, pages 236-251. Springer International Publishing, 2016.
[46] T. Matsuzaki, H. Iwane, H. Anai, and N. Arai. The most uncreative examinee: A first step toward wide coverage natural language math problem solving. In C.E. Brodley and P. Stone, editors, Proceedings of the 28th Conference on Artificial Intelligence, AAAI '14, pages 1098-1104. AAAI Press, 2014.
[47] B.W. Matthews. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochimica et Biophysica Acta (BBA) - Protein Structure, 405(2):442-451, 1975.
The complexity of the word problems for commutative semigroups and polynomial ideals. E W Mayr, A R Meyer, Advances in Mathematics. 463E.W. Mayr and A.R. Meyer. The complexity of the word problems for commutative semigroups and polynomial ideals. Advances in Mathematics, 46(3):305-329, 1982.
An improved projection operation for cylindrical algebraic decomposition of three-dimensional space. S Mccallum, Journal of Symbolic Computation. 51-2S. McCallum. An improved projection operation for cylindrical alge- braic decomposition of three-dimensional space. Journal of Symbolic Computation, 5(1-2):141-161, 1988.
An improved projection operation for cylindrical algebraic decomposition. S Mccallum, Quantifier Elimination and Cylindrical Algebraic Decomposition, Texts & Monographs in Symbolic Computation. B. Caviness and J. JohnsonSpringer-VerlagS. McCallum. An improved projection operation for cylindrical algebraic decomposition. In B. Caviness and J. Johnson, editors, Quantifier Elimination and Cylindrical Algebraic Decomposition, Texts & Monographs in Symbolic Computation, pages 242-268. Springer- Verlag, 1998.
On projection in CAD-based quantifier elimination with equational constraint. S Mccallum, Proceedings of the 1999 International Symposium on Symbolic and Algebraic Computation, ISSAC '99. the 1999 International Symposium on Symbolic and Algebraic Computation, ISSAC '99ACMS. McCallum. On projection in CAD-based quantifier elimination with equational constraint. In Proceedings of the 1999 International Symposium on Symbolic and Algebraic Computation, ISSAC '99, pages 145-149. ACM, 1999.
On propagation of equational constraints in CADbased quantifier elimination. S Mccallum, Proceedings of the 2001 International Symposium on Symbolic and Algebraic Computation, ISSAC '01. the 2001 International Symposium on Symbolic and Algebraic Computation, ISSAC '01ACMS. McCallum. On propagation of equational constraints in CAD- based quantifier elimination. In Proceedings of the 2001 International Symposium on Symbolic and Algebraic Computation, ISSAC '01, pages 223-231. ACM, 2001.
Validity proof of Lazard's method for CAD construction. S Mccallum, A Parusińiski, L Paunescu, 1607:00264Preprint: ArxivS. McCallum, A. Parusińiski, and L. Paunescu. Validity proof of Lazard's method for CAD construction. Preprint: Arxiv 1607:00264, 2016.
Metitarski: Past and future. L C Paulson, Interactive Theorem Proving. L. Beringer and A. FeltySpringer7406L.C. Paulson. Metitarski: Past and future. In L. Beringer and A. Felty, editors, Interactive Theorem Proving, LNCS 7406, pages 1-10. Springer, 2012.
Real world verification. A Platzer, J D Quesel, P Rümmer, Automated Deduction (CADE-22). R.A. SchmidtBerlin HeidelbergSpringer5663A. Platzer, J.D. Quesel, and P. Rümmer. Real world verification. In R.A. Schmidt, editor, Automated Deduction (CADE-22), LNCS 5663, pages 485-501. Springer Berlin Heidelberg, 2009.
Kernel methods in computational biology. B Schölkopf, K Tsuda, J.-P Vert, MIT PressB. Schölkopf, K. Tsuda, and J.-P. Vert. Kernel methods in computational biology. MIT Press, 2004.
A generic projection operator for partial cylindrical algebraic decomposition. A Seidl, T Sturm, Proceedings of the 2003 International Symposium on Symbolic and Algebraic Computation, ISSAC '03. the 2003 International Symposium on Symbolic and Algebraic Computation, ISSAC '03ACMA. Seidl and T. Sturm. A generic projection operator for partial cylindri- cal algebraic decomposition. In Proceedings of the 2003 International Symposium on Symbolic and Algebraic Computation, ISSAC '03, pages 240-247. ACM, 2003.
Cylindrical algebraic decomposition using validated numerics. A Strzeboński, Journal of Symbolic Computation. 419A. Strzeboński. Cylindrical algebraic decomposition using validated numerics. Journal of Symbolic Computation, 41(9):1021-1038, 2006.
Computation with semialgebraic sets represented by cylindrical algebraic formulas. A Strzeboński, Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation, ISSAC '10. the 2010 International Symposium on Symbolic and Algebraic Computation, ISSAC '10ACMA. Strzeboński. Computation with semialgebraic sets represented by cylindrical algebraic formulas. In Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation, ISSAC '10, pages 61-68. ACM, 2010.
Cylindrical algebraic decomposition using local projections. A Strzeboński, Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, ISSAC '14. the 39th International Symposium on Symbolic and Algebraic Computation, ISSAC '14ACMA. Strzeboński. Cylindrical algebraic decomposition using local projec- tions. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, ISSAC '14, pages 389-396. ACM, 2014.
Real geometry and connectedness via triangular description: Cad example bank. D Wilson, D. Wilson. Real geometry and connectedness via triangular description: Cad example bank, 2013.
Cylindrical algebraic sub-decompositions. D Wilson, R Bradford, J H Davenport, M England, Mathematics in Computer Science. 8D. Wilson, R. Bradford, J.H. Davenport, and M. England. Cylindrical algebraic sub-decompositions. Mathematics in Computer Science, 8:263-288, 2014.
A "piano movers" problem reformulated. D Wilson, J H Davenport, M England, R Bradford, 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC '13. IEEED. Wilson, J.H. Davenport, M. England, and R. Bradford. A "piano movers" problem reformulated. In 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC '13, pages 53-60. IEEE, 2013.
Using the distribution of cells by dimension in a cylindrical algebraic decomposition. D Wilson, M England, J H Davenport, R Bradford, 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC '14. IEEED. Wilson, M. England, J.H. Davenport, and R. Bradford. Using the dis- tribution of cells by dimension in a cylindrical algebraic decomposition. In 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC '14, pages 53-60. IEEE, 2014.
A repository for CAD examples. D J Wilson, R J Bradford, J H Davenport, ACM Communications in Computer Algebra. 463D.J. Wilson, R.J. Bradford, and J.H. Davenport. A repository for CAD examples. ACM Communications in Computer Algebra, 46(3):67-69, 2012.
Speeding up cylindrical algebraic decomposition by Gröbner bases. D J Wilson, R J Bradford, J H Davenport, Intelligent Computer Mathematics. J. Jeuring, J.A. Campbell, J. Carette, G. Reis, P. Sojka, M. Wenzel, and V. SorgeSpringer7362D.J. Wilson, R.J. Bradford, and J.H. Davenport. Speeding up cylindrical algebraic decomposition by Gröbner bases. In J. Jeuring, J.A. Campbell, J. Carette, G. Reis, P. Sojka, M. Wenzel, and V. Sorge, editors, Intel- ligent Computer Mathematics, LNCS 7362, pages 280-294. Springer, 2012.
Development of SyNRAC. H Yanami, H Anai, Proceedings of the 6th international conference on Computational Science: Part II. the 6th international conference on Computational Science: Part II3992ICCS '06H. Yanami and H. Anai. Development of SyNRAC. In Proceedings of the 6th international conference on Computational Science: Part II. (LNCS vol 3992), ICCS '06, pages 462-469, 2006.
| [] |
LATENT STATE MARGINALIZATION AS A LOW-COST APPROACH FOR IMPROVING EXPLORATION

Dinghuai Zhang*, Aaron Courville, Yoshua Bengio (Mila, Université de Montréal); Qinqing Zheng, Amy Zhang, Ricky T. Q. Chen (Meta AI)

Published as a conference paper at ICLR 2023. arXiv:2210.00999. DOI: 10.48550/arxiv.2210.00999.

While the maximum entropy (MaxEnt) reinforcement learning (RL) framework, often touted for its exploration and robustness capabilities, is usually motivated from a probabilistic perspective, the use of deep probabilistic models has not gained much traction in practice due to their inherent complexity. In this work, we propose the adoption of latent variable policies within the MaxEnt framework, which we show can provably approximate any policy distribution, and additionally, naturally emerges under the use of world models with a latent belief state. We discuss why latent variable policies are difficult to train, how naïve approaches can fail, then subsequently introduce a series of improvements centered around low-cost marginalization of the latent state, allowing us to make full use of the latent state at minimal additional cost. We instantiate our method under the actor-critic framework, marginalizing both the actor and critic. The resulting algorithm, referred to as Stochastic Marginal Actor-Critic (SMAC), is simple yet effective. We experimentally validate our method on continuous control tasks, showing that effective marginalization can lead to better exploration and more robust training. Our implementation is open sourced at https://github.com/zdhNarsil/Stochastic-Marginal-Actor-Critic.

* Work done during an internship at Meta AI. Correspondence to: <[email protected]>.
INTRODUCTION
Figure 1: The world model infers latent states from observation inputs. While most existing methods only take one sample or the mean from this latent belief distribution, the agent of the proposed SMAC algorithm marginalizes out the latent state for improving exploration. Icons are adapted from Mendonca et al. (2021).
A fundamental goal of machine learning is to develop methods capable of sequential decision making, where reinforcement learning (RL) has achieved great success in recent decades. One of the core problems in RL is exploration, the process by which an agent learns to interact with its environment. To this end, a useful paradigm is the principle of maximum entropy, which defines the optimal solution to be one with the highest amount of randomness that solves the task at hand. While the maximum entropy (MaxEnt) RL framework (Todorov, 2006; Rawlik et al., 2012) is often motivated by the promise of learning complex multi-modal behaviors through a stochastic agent, the algorithms most often used in practice rely on simple agents that only make local perturbations around a single action. Part of this is due to the need to compute the entropy of the agent and use it as part of the training objective.
Meanwhile, the use of more expressive models has not gained nearly as much traction in the community. There exist works that have increased the flexibility of their agents by making use of more complex distributions such as energy-based models (Haarnoja et al., 2017), normalizing flows (Haarnoja et al., 2018a; Ward et al., 2019), and mixtures of experts (Ren et al., 2021).

Instead, we note that a relatively simple approach to increasing expressiveness is to make use of latent variables, providing the agent with its own inference procedure for modeling stochasticity in the observations, environment, and unseen rewards. Introducing latent variables into the policy makes it possible to capture a diverse set of scenarios that are compatible with the history of observations. In particular, a majority of approaches for handling partial observability make use of world models (Hafner et al., 2019; 2020), which already result in a latent variable policy, but existing training algorithms do not make use of the latent belief state to its fullest extent. This is due in part to the fact that latent variable policies do not admit a simple expression for their entropy, and we show that naïvely estimating the entropy can lead to catastrophic failures during policy optimization. Furthermore, high-variance stochastic updates for maximizing entropy do not immediately distinguish between local random perturbations and multi-modal exploration. We propose remedies to these downsides of latent variable policies, making use of recent advances in stochastic estimation and variance reduction. When instantiated in the actor-critic framework, the result is a simple yet effective policy optimization algorithm that can perform better exploration and lead to more robust training in both fully-observed and partially-observed settings.
Our contributions can be summarized as follows:
• We motivate the use of latent variable policies for improving exploration and robustness to partial observations, encompassing policies trained on world models as a special instance.
• We discuss the difficulties in applying latent variable policies within the MaxEnt RL paradigm. We then propose several stochastic estimation methods centered around cost-efficiency and variance reduction.
• When applied to the actor-critic framework, this yields an algorithm (SMAC; Figure 1) that is simple, effective, and adds minimal costs.
• We show through experiments that SMAC is more sample efficient and can more robustly find optimal solutions than competing actor-critic methods in both fully-observed and partially-observed continuous control tasks.
BACKGROUND
MAXIMUM ENTROPY REINFORCEMENT LEARNING
We first consider a standard Markov decision process (MDP) setting. We denote states $x_t \in \mathcal{S}$ and actions $a_t \in \mathcal{A}$, for timesteps $t \in \mathbb{N}$. There exists an initial state distribution $p(x_1)$, a stochastic transition distribution $p(x_t | x_{t-1}, a_{t-1})$, and a deterministic reward function $r_t : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$. We can then learn a policy $\pi(a_t | x_t)$ such that the expected sum of rewards is maximized under trajectories $\tau = (x_1, a_1, \ldots, x_T, a_T)$ sampled from the policy and the transition distributions.
While it is known that the fully-observed MDP setting admits at least one deterministic policy as a solution (Sutton & Barto, 2018; Puterman, 1990), efficiently searching for an optimal policy generally requires exploring a sufficiently large part of the state space and keeping track of a frontier of current best solutions. As such, many works focus on the use of stochastic policies, often in conjunction with the maximum entropy (MaxEnt) framework,
$$\max_\pi \; \mathbb{E}_{p(\tau)}\!\left[\sum_{t=0}^{\infty} \gamma^t \big(r_t(x_t, a_t) + \alpha \mathcal{H}(\pi(\cdot|x_t))\big)\right], \quad \text{where}\quad \mathcal{H}(\pi(\cdot|x_t)) = \mathbb{E}_{a_t \sim \pi(\cdot|x_t)}\!\left[-\log \pi(a_t|x_t)\right], \tag{1}$$

where $p(\tau)$ is the trajectory distribution under policy $\pi$, $\mathcal{H}(\cdot)$ is the entropy, and $\gamma$ is a discount factor.
The MaxEnt RL objective has appeared many times in the literature (e.g. Todorov (2006); Rawlik et al. (2012); Nachum et al. (2017)), and is recognized for its exploration (Hazan et al., 2019) and robustness (Eysenbach & Levine, 2022) capabilities. It can be equivalently interpreted as variational inference from a probabilistic modeling perspective (Norouzi et al., 2016; Levine, 2018; Lee et al., 2020a). Intuitively, MaxEnt RL encourages the policy to obtain sufficiently high reward while acting as randomly as possible, capturing the largest possible set of optimal actions. Furthermore, it also optimizes policies to reach future states where the policy has high entropy (Haarnoja et al., 2017), resulting in improved exploration.
Soft Actor-Critic. A popular algorithm for solving MaxEnt RL is Soft Actor-Critic (SAC; Haarnoja et al. (2018b)), which we directly build on in this work due to its reasonably good performance and relative simplicity. Briefly, SAC alternates between learning a soft Q-function $Q(x_t, a_t)$ that satisfies the soft Bellman equation,

$$Q(x_t, a_t) = r_t(x_t, a_t) + \gamma\, \mathbb{E}_{x_{t+1} \sim p(\cdot|x_t, a_t),\, a_{t+1} \sim \pi(\cdot|x_{t+1})}\!\left[Q(x_{t+1}, a_{t+1}) + \alpha \mathcal{H}(\pi(\cdot|x_{t+1}))\right], \tag{2}$$
and learning a policy with the maximum entropy objective,
$$\max_\pi \; \mathbb{E}_{x_t \sim \mathcal{D}}\, \mathbb{E}_{\pi(a_t|x_t)}\!\left[Q(x_t, a_t) + \alpha \mathcal{H}(\pi(\cdot|x_t))\right], \tag{3}$$

where states are sampled from a replay buffer $\mathcal{D}$ during training. In practice, SAC is often restricted to the use of policies whose entropy can be computed efficiently, e.g. a factorized Gaussian policy for continuous control environments. This allows random movements to occur as noise is added independently for each action dimension. Our proposed approach, on the other hand, introduces structure in the exploration noise.
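For concreteness, the SAC-style actor objective of Equation 3 with a diagonal-Gaussian policy can be sketched in a few lines. This is our own minimal illustration, not the paper's implementation: `q_value` stands in for a learned critic evaluated at one fixed state, and the Gaussian entropy is computed in closed form.

```python
import math, random

def gaussian_entropy(log_std):
    # Closed-form entropy of a factorized Gaussian: sum_i 0.5*log(2*pi*e) + log(sigma_i).
    return sum(0.5 * math.log(2 * math.pi * math.e) + ls for ls in log_std)

def actor_objective(mu, log_std, q_value, alpha, n_samples=128, seed=0):
    """Monte Carlo estimate of E_pi[Q(x, a)] + alpha * H(pi) at a fixed state."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Reparameterized sample: a = mu + sigma * eps, eps ~ N(0, I).
        a = [m + math.exp(ls) * rng.gauss(0.0, 1.0) for m, ls in zip(mu, log_std)]
        total += q_value(a)
    return total / n_samples + alpha * gaussian_entropy(log_std)

# Toy critic that prefers actions near the origin.
q = lambda a: -sum(x * x for x in a)
obj = actor_objective(mu=[0.5, -0.5], log_std=[-1.0, -1.0], q_value=q, alpha=0.1)
```

In a real implementation the Monte Carlo average would be differentiated through the reparameterized samples; here it only illustrates the structure of the objective.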
WORLD MODELS FOR PARTIALLY-OBSERVED ENVIRONMENTS
In many practically motivated settings, the agent only has access to certain observations, e.g. partial states, and the complete state must be inferred from observations. This can be modelled through the partially observed MDP (POMDP) graphical model, which encompasses a wide range of problem settings involving uncertainty. POMDPs can be used to model uncertainty in the state, reward, or even the transition model itself (Åström, 1964). Here, the optimal policy must take these uncertainties into account, naturally becoming stochastic and possibly exhibiting multi-modal behaviors (Todorov, 2006). Notationally, we only have access to observations $x_t \in \mathcal{X}$ with incomplete information, while the latent state $s_t \in \mathcal{S}$ is unobserved, leading to a latent state transition distribution $p(s_t | s_{t-1}, a_{t-1})$, observation distribution $p(x_t | s_t)$, and reward function $r_t(s_t, a_t)$.
In order to tackle this regime, people have resorted to learning world models (Deisenroth & Rasmussen, 2011; Ha & Schmidhuber, 2018) that attempt to learn a belief state conditioned on the history of observations and actions, typically viewed as performing variational inference on the POMDP. The world model is then responsible for tracking a belief state $s_t$, which is updated based on new observations through an inference model $q(s_t | s_{t-1}, a_{t-1}, x_t)$. The POMDP and the inference model are often jointly trained by maximizing a variational bound on the likelihood of observations,

$$\log p(x_{1:T} | a_{1:T}) \ge \mathbb{E}_{q}\!\left[\sum_{t=1}^{T} \log p(x_t | s_t) - D_{\mathrm{KL}}\big(q(s_t | s_{t-1}, a_{t-1}, x_t) \,\|\, p(s_t | s_{t-1}, a_{t-1})\big)\right]. \tag{4}$$
The world model is then typically paired with a policy that makes use of the belief state to take actions, i.e. $\pi(a_t | s_t)$ with $s_t \sim q(s_t | a_{<t}, x_{\le t})$, as the assumption is that the posterior distribution over $s_t$ contains all the information we have so far regarding the current state.
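For intuition, each timestep's contribution to the bound in Equation 4 reduces, when the posterior and prior are diagonal Gaussians, to a reconstruction log-likelihood minus a closed-form Gaussian KL. The sketch below is a simplified illustration under that assumption: `decode` stands in for a learned observation model, and the expectation over $q$ is crudely approximated at the posterior mean.

```python
import math

def gauss_kl(mu_q, std_q, mu_p, std_p):
    # KL( N(mu_q, std_q^2) || N(mu_p, std_p^2) ) for diagonal Gaussians, per dimension.
    kl = 0.0
    for mq, sq, mp, sp in zip(mu_q, std_q, mu_p, std_p):
        kl += math.log(sp / sq) + (sq ** 2 + (mq - mp) ** 2) / (2 * sp ** 2) - 0.5
    return kl

def gauss_logpdf(x, mu, std):
    return sum(-0.5 * math.log(2 * math.pi * s * s) - (xi - m) ** 2 / (2 * s * s)
               for xi, m, s in zip(x, mu, std))

def elbo_step(x_t, mu_q, std_q, mu_prior, std_prior, decode):
    """One timestep of Eq. (4): E_q[log p(x_t|s_t)] - KL(q || p),
    with the expectation approximated at the posterior mean for simplicity."""
    mu_x, std_x = decode(mu_q)  # p(x_t | s_t) evaluated at s_t = mu_q
    return gauss_logpdf(x_t, mu_x, std_x) - gauss_kl(mu_q, std_q, mu_prior, std_prior)

# Identity decoder with fixed observation noise, purely for illustration.
decode = lambda s: (list(s), [1.0] * len(s))
val = elbo_step([0.2], mu_q=[0.0], std_q=[1.0], mu_prior=[0.0], std_prior=[1.0], decode=decode)
```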
STOCHASTIC MARGINAL ACTOR-CRITIC (SMAC)
We now discuss the use of latent variables for parameterizing policy distributions, and how these appear naturally under the use of a world model. We discuss the difficulties in handling latent variable policies in reinforcement learning, and derive cost-efficient low-variance stochastic estimators for marginalizing the latent state. Finally, we put it all together in an actor-critic framework.
LATENT VARIABLE POLICIES
We advocate the use of latent variables for constructing policy distributions as an effective yet simple way of increasing flexibility. This generally adds minimal changes to existing stochastic policy algorithms. Starting with the MDP setting, a latent variable policy (LVP) can be expressed as

$$\pi(a_t | x_t) := \int \pi(a_t | s_t)\, q(s_t | x_t)\, ds_t, \tag{5}$$

where $s_t$ is a latent variable conditioned on the current observation. In the MDP setting, the introduction of a latent $q(s_t | x_t)$ mainly increases the expressiveness of the policy. This allows the policy to better capture a wider frontier of optimal actions, which can be especially helpful during initial exploration when we lack information regarding future rewards. We discuss extensions to POMDPs shortly in the following section, where the policy is conditioned on a history of observations.
For parameterization, we use factorized Gaussian distributions for both $\pi(a_t | s_t)$ and $q(s_t | x_t)$. Firstly, this results in a latent variable policy that is computationally efficient: sampling and density evaluations both remain cheap. Furthermore, this allows us to build upon existing stochastic policy algorithms and architectures that have been used with a single Gaussian distribution, by simply adding a new stochastic node $s_t$. Secondly, we can show that this is also a sufficient parameterization: with standard neural network architectures, a latent variable policy can universally approximate any distribution given sufficient capacity. Intuitively, it is known that a mixture of factorized Gaussians is universal as the number of mixture components increases, and we can roughly view a latent variable model with Gaussian-distributed $\pi$ and $q$ as an infinite mixture of Gaussian distributions.

Proposition 1. For any $d$-dimensional continuous distribution $p^*(x)$, there exists a sequence of two-level latent variable models $p_n(x) = \int p_n(x|z)\, p_n(z)\, dz$, $n \in \mathbb{N}^+$, that converges to it, where both $p_n(x|z)$ and $p_n(z)$ are factorized Gaussian distributions with mean and variance parameterized by neural networks.
Proof can be found in Appendix D.1.
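As a toy illustration of this flexibility (our own example, not from the paper), a one-dimensional two-level Gaussian policy already yields a bimodal marginal over actions, which no single factorized Gaussian can represent:

```python
import random

def sample_lvp(rng):
    # Latent level: s ~ q(s|x) = N(0, 1).
    s = rng.gauss(0.0, 1.0)
    # Action level: pi(a|s) is Gaussian, but its mean depends nonlinearly on s,
    # so the marginal over a has two modes near -3 and +3.
    mean = 3.0 if s > 0 else -3.0
    return rng.gauss(mean, 0.5)

rng = random.Random(0)
actions = [sample_lvp(rng) for _ in range(4000)]
frac_right = sum(a > 0 for a in actions) / len(actions)
```

Roughly half of the sampled actions fall near each mode, with essentially no mass around zero; a single Gaussian fit to these samples would instead place most of its mass at the empty region between the modes.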
WORLD MODELS INDUCE LATENT VARIABLE POLICIES
Perhaps unsurprisingly, latent variables already exist as part of many reinforcement learning works, in particular in the construction of probabilistic world models, used when the environment is highly complex or only partially observable. Some works only use intermediate components of the world model as a deterministic input to their policy distribution (e.g. Lee et al. (2020a)), disregarding the distributional aspect, while other approaches use iterative methods for producing an action (e.g. Hafner et al. (2020)). We instead simply view the world model for what it is, a latent state inference model, which naturally induces a latent variable policy,

$$\pi(a_t | a_{<t}, x_{\le t}) = \int \pi(a_t | s_t)\, q(s_t | a_{<t}, x_{\le t})\, ds_t. \tag{6}$$

This follows the form of Equation 5, where the context includes the entire history, i.e. $h_t = (a_{<t}, x_{\le t})$. Note that $\pi(a_t | s_t)$ conditions only on the current latent state due to a Markov assumption typically used in existing world models (see Figure 2), though our algorithms easily extend to non-Markov settings as well. Furthermore, this policy marginalizes over the full latent history due to the recurrence

$$q(s_t | a_{<t}, x_{\le t}) = \int q(s_t | s_{t-1}, a_{t-1}, x_t)\, q(s_{t-1} | a_{<t-1}, x_{\le t-1})\, ds_{t-1}, \tag{7}$$
which, when recursively applied, shows that the belief state $s_t$, and hence the policy, marginalizes over the entire history of belief states. A more thorough discussion is in Appendix B.2.
Our approaches for handling latent variables are agnostic to what $q$ conditions on, so to unify and simplify notation, we use the shorthand $h_t := (a_{<t}, x_{\le t})$ to denote the history information. This subsumes the MDP setting, where $q(s_t | h_t)$ is equivalent to $q(s_t | x_t)$ due to Markovian conditional independence.
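Concretely, one draw from $q(s_t | a_{<t}, x_{\le t})$ can be produced by ancestral sampling through the recurrence of Equation 7: carry a sampled belief forward and resample from the one-step inference model at each new observation. The sketch below is illustrative only; `infer` stands in for a learned $q(s_t | s_{t-1}, a_{t-1}, x_t)$.

```python
import random

def rollout_belief(observations, actions, infer, s0, rng):
    """Ancestrally sample s_1..s_T from q(s_t | a_{<t}, x_{<=t}) via the recurrence."""
    s, beliefs = s0, []
    for x_t, a_prev in zip(observations, actions):
        # One sample from q(s_t | s_{t-1}, a_{t-1}, x_t); recursing marginalizes the history.
        s = infer(s, a_prev, x_t, rng)
        beliefs.append(s)
    return beliefs

# Stand-in inference model: a noisy running average of observations.
infer = lambda s, a, x, rng: 0.9 * s + 0.1 * x + rng.gauss(0.0, 0.01)
beliefs = rollout_belief([1.0] * 50, [0.0] * 50, infer, s0=0.0, rng=random.Random(1))
```

Under this stand-in model, the sampled belief drifts toward the (constant) observation, mimicking how a filtered belief state accumulates evidence over time.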
MAXENT RL IN THE PRESENCE OF LATENT VARIABLES
The presence of latent variables makes training with the maximum entropy objective (equations 1 and 3) difficult. Firstly, it requires an accurate estimation of the entropy term, and the entropy of a latent variable model is notoriously hard to estimate due to the intractability of marginalization (Paninski, 2003;Lim et al., 2020). Secondly, the use of latent variables results in an increase in gradient variance, which we remedy with variance reduction methods at a negligible cost. Finally, the appearance of latent variables can also be used within the Q-function to better aggregate uncertainty. For each, we derive principled methods for handling latent variables, while the end result is actually fairly simple and only adds a minimal amount of extra cost compared to non-latent variable policies.
ESTIMATING THE MARGINAL ENTROPY
An immediate consequence of using latent variables is that the entropy, or marginal entropy, becomes intractable, due to the log-probability being intractable, i.e.

$$\mathcal{H}(\pi(\cdot|h_t)) = \mathbb{E}_{\pi(a_t|h_t)}\!\left[-\log \int \pi(a_t | s_t)\, q(s_t | h_t)\, ds_t\right]. \tag{8}$$
Failure cases of naïve entropy estimation. Applying methods developed for amortized variational inference (Kingma & Welling, 2013; Burda et al., 2016) can result in a bound on the entropy that is in the wrong direction. For instance, the standard evidence lower bound (ELBO) results in an entropy estimator

$$\hat{\mathcal{H}}_{\text{naïve}}(h_t) \triangleq \mathbb{E}_{\pi(a_t|h_t)}\, \mathbb{E}_{\tilde{q}(s_t|a_t,h_t)}\!\left[-\log \pi(a_t|s_t) + \log \tilde{q}(s_t|a_t,h_t) - \log q(s_t|h_t)\right], \tag{9}$$

where $\tilde{q}$ is any variational distribution, for example setting $\tilde{q}(s_t|a_t,h_t) = q(s_t|h_t)$. Adopting this naïve estimator will result in maximizing an upper bound on the MaxEnt RL objective, which we can see by writing out the error,

$$\hat{\mathcal{H}}_{\text{naïve}}(h_t) = \mathcal{H}(\pi(\cdot|h_t)) + \mathbb{E}_{\pi(a_t|h_t)}\!\left[D_{\mathrm{KL}}\big(\tilde{q}(s_t|a_t,h_t) \,\|\, p(s_t|a_t,h_t)\big)\right], \tag{10}$$

where $p(s_t|a_t,h_t)$ is the true posterior of the policy distribution.
Therefore, replacing the entropy in the MaxEnt RL objective (Equation 1) with $\hat{\mathcal{H}}_{\text{naïve}}$ will lead to maximizing the error, i.e. the KL divergence, incentivizing the variational distribution to be as far as it can from the true posterior $p(s_t|a_t,h_t)$. Furthermore, this error is unbounded, so it may become arbitrarily large without actually affecting the true entropy we want to be maximizing, $\mathcal{H}(\pi(\cdot|h_t))$, which leads to serious numerical instability issues. In Figure 3, we show the results from a preliminary experiment where this approach to entropy estimation during policy optimization led to extremely large values (on the scale of $10^{18}$), significantly overestimating the true entropy, and resulted in policies that did not learn. More details are in Appendix C. To overcome this overestimation issue, we propose the following method for achieving accurate estimation.
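The wrong-direction bound is easy to verify numerically. In the linear-Gaussian toy model below (our own example: $q(s) = \mathcal{N}(0, 1)$, $\pi(a|s) = \mathcal{N}(s, \sigma^2)$), the true marginal entropy is known in closed form since the marginal over $a$ is $\mathcal{N}(0, 1 + \sigma^2)$, and a Monte Carlo estimate of the naïve Equation 9 with $\tilde{q} = q$ (so the $\log \tilde{q} - \log q$ terms cancel) comes out far larger:

```python
import math, random

sigma = 0.5                                              # pi(a|s) = N(s, sigma^2)
true_H = 0.5 * math.log(2 * math.pi * math.e * (1 + sigma ** 2))  # a ~ N(0, 1 + sigma^2)

rng = random.Random(0)
n = 20000
naive = 0.0
for _ in range(n):
    s = rng.gauss(0.0, 1.0)       # s ~ q(s|h)
    a = rng.gauss(s, sigma)       # a ~ pi(a|h), sampled jointly with s
    s2 = rng.gauss(0.0, 1.0)      # q~(s|a,h) := q(s|h): an *independent* latent sample
    naive += 0.5 * math.log(2 * math.pi * sigma ** 2) + (a - s2) ** 2 / (2 * sigma ** 2)
naive /= n
# naive substantially exceeds true_H: the bound is in the wrong direction.
```

Here the naïve estimate lands several nats above the true entropy, so maximizing it mostly inflates the KL error term of Equation 10 rather than the entropy itself.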
Lower bounding the marginal entropy with a nested estimator. To be amenable to entropy maximization, we must construct a lower bound estimator of the marginal entropy. For this, inspired by advances in hierarchical inference (Yin & Zhou, 2018; Sobolev & Vetrov, 2019), the marginal entropy (Equation 8) can be estimated via a lower bound. Specifically, for any $K \in \mathbb{N}$, we define

$$\hat{\mathcal{H}}_K(h_t) \triangleq \mathbb{E}_{a_t \sim \pi(a_t|h_t)}\, \mathbb{E}_{s_t^{(0)} \sim p(s_t|a_t,h_t)}\, \mathbb{E}_{s_t^{(1:K)} \sim q(s_t|h_t)}\!\left[-\log \frac{1}{K+1} \sum_{k=0}^{K} \pi\big(a_t | s_t^{(k)}\big)\right], \tag{11}$$

where $p(s_t|a_t,h_t)$ is the (unknown) posterior of the policy distribution; however, we can easily sample from this by first sampling $s_t$. This results in a nested estimator where we effectively sample $K+1$ times from $q(s_t|h_t)$, use only the first latent variable $s_t^{(0)}$ for sampling the action, while using all the latent variables to estimate the marginal entropy. Note that this is not equivalent to replacing the expectation inside the logarithm with independent samples, which would correspond to an IWAE estimator (Burda et al., 2016). Equation 11 yields a nested estimator that is monotonically increasing in $K$ and, in the limit, becomes an unbiased estimator of the marginal entropy, i.e.

$$\hat{\mathcal{H}}_K(h_t) \le \mathcal{H}(\pi(\cdot|h_t)), \qquad \hat{\mathcal{H}}_K(h_t) \le \hat{\mathcal{H}}_{K+1}(h_t), \qquad \lim_{K \to \infty} \hat{\mathcal{H}}_K(h_t) = \mathcal{H}(\pi(\cdot|h_t)).$$

Thus, replacing the marginal entropy with $\hat{\mathcal{H}}_K$ results in maximizing a tight lower bound on the MaxEnt RL objective, and is much more numerically stable in practice. Proofs for these results are in Appendix D. In practice, we find that using reasonable values of $K$ does not increase computation time, since sampling multiple times is easily done in parallel and the evaluation of $\pi(a_t|s_t)$ is cheap relative to other components such as the world model.
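The nested estimator of Equation 11 is only a few lines of code. The sketch below evaluates it on a tractable linear-Gaussian toy model of our own choosing ($q(s) = \mathcal{N}(0, 1)$, $\pi(a|s) = \mathcal{N}(s, \sigma^2)$), where the true marginal entropy is available for comparison; note how the joint sample $(s^{(0)}, a_t)$ is drawn latent-first, as described above.

```python
import math, random

def gauss_pdf(x, mu, std):
    return math.exp(-(x - mu) ** 2 / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def nested_entropy(K, n=5000, sigma=0.5, seed=0):
    """Monte Carlo estimate of hat{H}_K for q(s)=N(0,1), pi(a|s)=N(s, sigma^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s0 = rng.gauss(0.0, 1.0)   # s^(0) ~ q(s|h) ...
        a = rng.gauss(s0, sigma)   # ... then a ~ pi(a|s^(0)); jointly, s^(0) is
                                   # a sample from the posterior p(s|a,h).
        latents = [s0] + [rng.gauss(0.0, 1.0) for _ in range(K)]
        avg = sum(gauss_pdf(a, s, sigma) for s in latents) / (K + 1)
        total += -math.log(avg)
    return total / n

true_H = 0.5 * math.log(2 * math.pi * math.e * (1 + 0.25))  # marginal a ~ N(0, 1 + sigma^2)
h1, h16 = nested_entropy(K=1), nested_entropy(K=16)         # lower bounds, tightening in K
```

Up to Monte Carlo noise, both estimates sit below the true entropy, with the $K = 16$ estimate noticeably tighter than $K = 1$.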
VARIANCE REDUCTION WITH ANTITHETIC MULTI-LEVEL MONTE CARLO
While latent variable policies can optimize the MaxEnt RL objective better in expectation, their reliance on stochastic estimation techniques introduces additional gradient variance. This higher variance can result in poorer sample efficiency, negating any gains obtained from using a more flexible distribution. In particular, it has been shown that multi-sample estimators like Equation 11 can result in more noise than signal as $K$ increases (Rainforth et al., 2018a). To remedy this, we adopt a simple yet reliable variance reduction method referred to as antithetic multi-level Monte Carlo (MLMC). While this method has been used in simulations of stochastic differential equations (Giles, 2008; Giles & Szpruch, 2014) and, more recently, in variational inference (Ishikawa & Goda, 2021; Shi & Cornish, 2021), it has not yet seen use in the context of reinforcement learning.
Applying MLMC to the estimator in Equation 11, we have
Ĥ_K^{MLMC} = Σ_{ℓ=0}^{log₂(K)} ΔĤ_{2^ℓ},  where  ΔĤ_{2^ℓ} = Ĥ_1 if ℓ = 0, and ΔĤ_{2^ℓ} = Ĥ_{2^ℓ} − (1/2)(Ĥ^{(a)}_{2^{ℓ−1}} + Ĥ^{(b)}_{2^{ℓ−1}}) otherwise.  (12)
At the ℓ-th level, after we have generated 2^ℓ i.i.d. samples, we use half of them to compute Ĥ^{(a)}_{2^{ℓ−1}}, the other half to compute Ĥ^{(b)}_{2^{ℓ−1}}, and all of the samples to compute Ĥ_{2^ℓ}. This antithetic sampling scheme is a key ingredient in reducing variance, and can achieve the optimal computational complexity for a given accuracy (Ishikawa & Goda, 2021). We compute all the ΔĤ_{2^ℓ} terms in parallel in our implementation, so there is an almost negligible additional cost compared to Ĥ_K. The only consideration involved in using Ĥ_K^{MLMC} is that K should be a power of two.
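A minimal NumPy sketch of the estimator above, operating directly on per-sample log-weights log π(a|s^(k)); the function names are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def log_mean_exp(logw):
    """Numerically stable log of the mean of exp(logw)."""
    m = np.max(logw)
    return m + np.log(np.mean(np.exp(logw - m)))

def mlmc_entropy(logw):
    """Antithetic MLMC entropy estimator (Equation 12).

    logw[k] = log pi(a | s^(k)) for K i.i.d. latent samples; K must be a
    power of two. Level 0 uses one sample; level l combines the estimate
    on 2^l samples with antithetic estimates on the two disjoint halves.
    """
    K = len(logw)
    assert K > 0 and K & (K - 1) == 0, "K must be a power of two"
    est = -logw[0]                                # level 0: H_1 from one sample
    n = 2
    while n <= K:
        half = n // 2
        h_n = -log_mean_exp(logw[:n])             # H_{2^l} on all n samples
        h_a = -log_mean_exp(logw[:half])          # H^(a) on the first half
        h_b = -log_mean_exp(logw[half:n])         # H^(b) on the second half
        est += h_n - 0.5 * (h_a + h_b)            # telescoping correction
        n *= 2
    return est
```

By telescoping, the expectation of this estimator matches that of the plain K-sample estimator, while the antithetic differences shrink the variance contributed by each level.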
ESTIMATING THE MARGINAL Q-FUNCTION
Under the POMDP setting, we aim to build a Q-function on the inferred belief states s_t, as these contain the relevant dynamics and reward information. However, while most existing approaches such as Lee et al. (2020a); Hafner et al. (2020) take only one sample of the inferred latents as input to the Q-function, we propose marginalizing out the latent distribution in the critic calculation. This can be seen by interpreting the Q-function through the probabilistic inference framework of Levine (2018), where the reward function is viewed as the log-likelihood of observing a binary optimality random variable O, i.e., satisfying p(O_t = 1|s_t, a_t) ∝ exp(r(s_t, a_t)). As a result, the Q-function is equivalent to Q(s_t, a_t) = log p(O_{t:T}|s_t, a_t). Since our latent belief state represents the uncertainty regarding the system in the context of POMDPs, including the current state and unseen rewards, we propose marginalizing the value function over the belief state. Through this probabilistic interpretation, the marginal Q-function is related to the Q-function over latent states through
Q(h_t, a_t) = log ∫ p(O_{t:T}|s_t, a_t) q(s_t|h_t) ds_t = log ∫ exp{Q(s_t, a_t)} q(s_t|h_t) ds_t.  (13)
Given this, we propose the following estimator to be used during policy optimization,
Q(h_t, a_t) ≈ Q̂_K(h_t, a_t) := log (1/K) Σ_{k=1}^K exp{Q(s_t^{(k)}, a_t)},  s_t^{(1:K)} ∼ q(s_t|h_t),  (14)
where Q(s_t, a_t) is trained to satisfy the soft Bellman equation in Equation 2. A closely related approach is that of Lee et al. (2020a), who similarly train a Q-function on latent states; however, they directly use Q(s_t, a_t) during policy optimization, which is the special case K = 1, whereas using K > 1 results in a closer approximation to the marginal Q-function. We found this construction for marginalizing the Q-function to be useful mainly in conjunction with a world model.
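Equation 14 is a log-mean-exp over latent-state Q-values, which can be sketched as follows (the critic q_fn is a placeholder callable, not the paper's network):

```python
import numpy as np

def marginal_q(q_fn, latents, action):
    """Marginal Q estimate Q_K(h, a) = log (1/K) sum_k exp Q(s^(k), a)
    (Equation 14), given latent samples s^(1:K) ~ q(s|h)."""
    vals = np.array([q_fn(s, action) for s in latents])
    m = vals.max()                    # logsumexp trick for numerical stability
    return m + np.log(np.mean(np.exp(vals - m)))
```

With K = 1 this reduces to Q(s_t, a_t), recovering the Lee et al. (2020a) special case; by Jensen's inequality the estimate is never below the average latent-state value.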
STOCHASTIC MARGINAL ACTOR-CRITIC (SMAC)
While each of the above methods can be applied to general MaxEnt RL algorithms, we instantiate a concrete algorithm termed Stochastic Marginal Actor-Critic (SMAC). SMAC is characterized by the use of a latent variable policy and maximizes a lower bound on the marginal MaxEnt RL objective. Specifically, we use the same method as SAC to train Q(s_t, a_t) on latent states, but we train the policy using a low-variance debiased objective that accounts for latent state marginalization,
max_π E_{h_t∼D, a_t∼π} [ Q̂_K(h_t, a_t) + α Ĥ_K^{MLMC}(h_t) ].  (15)
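As a sanity-check sketch of how the two estimators combine in Equation 15, the per-sample actor objective can be written as below; q_values and entropy_est stand in for outputs of a learned latent critic and the MLMC entropy estimator of Equation 12, and are not part of the paper's released code.

```python
import numpy as np

def smac_actor_objective(q_values, entropy_est, alpha):
    """Per-sample SMAC actor objective (Equation 15), to be maximized:
    Q_K(h, a) + alpha * H^MLMC_K(h).

    q_values: Q(s^(k), a) over K latent samples s^(k) ~ q(s|h);
    entropy_est: an MLMC estimate of the marginal entropy at h.
    """
    v = np.asarray(q_values, dtype=float)
    m = v.max()
    q_marginal = m + np.log(np.mean(np.exp(v - m)))   # Equation 14
    return q_marginal + alpha * entropy_est
```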
We train the inference model q(s_t|h_t) with standard amortized variational inference (Equation 4), and we train only π(a_t|s_t) using the objective in Equation 15. When not used with a world model, we train both π(a_t|s_t) and q(s_t|h_t) using Equation 15. See Algorithms 1 and 2 for a summary of the training procedures, and Appendix B for more implementation details of SMAC.

RELATED WORK

Maximum entropy reinforcement learning Prior works have demonstrated multiple benefits of MaxEnt RL, including improved exploration (Han & Sung, 2021), regularized behaviors (Neu et al., 2017; Vieillard et al., 2020a), better optimization properties (Ahmed et al., 2018), and stronger robustness (Eysenbach & Levine, 2022). Generally, policies optimizing the MaxEnt RL objective sample actions proportionally to the exponentiated reward, and the objective can alternatively be viewed as a noise injection procedure for better exploration (Attias, 2003; Ziebart, 2010; Haarnoja et al., 2017; Nachum et al., 2017; Levine, 2018; Abdolmaleki et al., 2018; Haarnoja et al., 2018b; Vieillard et al., 2020b; Pan et al., 2022; 2023; Lahlou et al., 2023). However, this noise injection is commonly done directly in action space, leading to only local perturbations, whereas we inject noise through a nonlinear mapping.
Latent variable modeling The usage of latent variable models originates from graphical models (Dayan et al., 1995; Hinton et al., 2006) and has recently been popularized in generative modeling (Kingma & Welling, 2013; Rezende et al., 2014; Zhang et al., 2021a; 2022b;a). The estimation of the log marginal probability and the marginal entropy has long been a central problem in Bayesian statistics and variational inference (Newton, 1994; Murray & Salakhutdinov, 2008; Nowozin, 2018; Ishikawa & Goda, 2021; Malkin et al., 2022). However, most of these works consider a lower bound on the log marginal probability for variational inference, which is not directly applicable to maximum entropy, as discussed in Section 3.2.1. A few works have proposed upper bounds (Sobolev & Vetrov, 2019; Dieng et al., 2017) or even unbiased estimators (Luo et al., 2020); while we initially experimented with a couple of these estimators, we found that many result in high gradient variance, and ultimately identified an approach based on hierarchical inference techniques for its efficiency and suitability in RL.
Latent structures in POMDPs and world models Settings with only partial observations are natural applications for probabilistic inference methods, which help learn latent belief states from observational data. As such, variational inference has been adopted for learning sequential latent variable models (Ghahramani & Hinton, 2000;Krishnan et al., 2015;Fraccaro et al., 2016;Karl et al., 2017;Singh et al., 2021). One paradigm is to use the learned recurrent model to help model-free RL algorithms (Wahlstrom et al., 2015;Tschiatschek et al., 2018;Buesing et al., 2018;Igl et al., 2018;Gregor et al., 2019;Han et al., 2020). Another approach is to use world models for solving POMDP and building model-based RL agents (Deisenroth & Rasmussen, 2011;Hausknecht & Stone, 2015;Watter et al., 2015;Zhang et al., 2019;Hafner et al., 2019;2020;2021;Nguyen et al., 2021;Chen et al., 2022) due to their planning capabilities. It is also sometimes the case that the world model is treated mainly as a representation, without much regard for the measure of uncertainty (Ha & Schmidhuber, 2018;Schrittwieser et al., 2020;Amos et al., 2021;Hansen et al., 2022).
EXPERIMENTS
We evaluate SMAC on a series of diverse continuous control tasks from DeepMind Control Suite (DMC; Tassa et al. (2018)). These tasks include challenging cases in the sense of having sparse rewards, high dimensional action space, or pixel observations. We also perform a preliminary unit test with a multi-modal reward in Appendix B.1.
STATE-BASED CONTINUOUS CONTROL ENVIRONMENTS
Setting We first compare SMAC with SAC and TD3 (Fujimoto et al., 2018) baselines on a variety of state-based environments to demonstrate the advantage of latent variable policies. We show eight environments in Figure 4, and leave more results to the Appendix due to space limitations.
Results
We find that even in the simple MDP setting, we can improve upon SAC by simply introducing a latent variable. Specifically, our method is almost never worse than SAC, implying that the extra gradient variance from the entropy estimation does not incur a penalty in sample efficiency. By tracking a wider frontier of optimal action trajectories, SMAC is more robust at finding optimal policies, particularly when the reward is sparse (e.g., cartpole swingup sparse).
Comparison with other probabilistic modeling approaches We further conduct extensive empirical comparisons with other probabilistic policy modeling methods including normalizing flow and mixture-of-experts (Ren et al., 2021) based SAC methods in Figure 11. Our proposed SMAC generally achieves the best sample efficiency on almost all environments. Due to limited space, we defer related discussion to Appendix C.
Marginalization Marginalizing over the latent state has a significant effect on training, though this often exhibits a diminishing rate, suggesting that using a reasonable number of particles is sufficient. Figure 5 shows this behavior for the quadruped walk task.
Variance-reduced updates We find that the use of MLMC as a variance reduction tool is crucial for the latent variable policy to perform well in some difficult environments such as the quadruped escape task. Figure 5b shows that using MLMC clearly reduces variance and makes training much more robust, whereas using only the nested estimator performs closer to the baseline SAC (see comparison in Figure 10).
PIXEL-BASED CONTINUOUS CONTROL ENVIRONMENTS
Setting We next compare different algorithms on a series of DMC environments with pixel-based input. Since this task is much more challenging, and pixels only provide partial observability, we make use of a world model (as described in Section 2.2) to supplement our algorithm. We use the recurrent state-space model (RSSM) architecture from Hafner et al. (2019) as the world model. We refer to this baseline as "Latent-SAC" and follow the practice in Wang et al. (2022), which samples from the belief distribution q(s_t|a_<t, x_≤t) and directly trains SAC on top of the belief state. A closely related work, SLAC (Lee et al., 2020a), only uses s_t as input to a learned Q-function, while the policy does not use s_t and instead uses intermediate layers of the world model as input. Finally, we also compare to Dreamer, a model-based RL (MBRL) algorithm that performs rollouts on the dynamics model (Hafner et al., 2020). This iterative procedure incurs a higher computational cost, as it requires iteratively sampling from the belief state and differentiating through the rollouts. In contrast, our proposed SMAC aggregates samples from the current belief state and does not require differentiating through the dynamics model. For training, we follow Hafner et al. (2020) and repeat each action 2 times. We show the comparison on eight tasks in Figure 6, and again relegate more results to the Appendix due to space constraints.
Results Comparing SMAC to the Latent-SAC baseline, we again find that we can often find an optimal policy with fewer environment interactions. We find that SLAC and Latent-SAC are roughly on par, while SLAC can also sometimes perform worse, as its policy does not condition on the latent state. The model-based approach has widely variable performance when compared to the actor-critic approaches. Interestingly, in most of the environments where the model-based approach performs well, we find that SMAC can often achieve comparable performance, even though it does not make use of planning. Overall, we find that our method improves upon actor-critic approaches and bridges the gap to planning-based approaches.
Robustness to noisy observations While the pixel-based setting already provides partial observations, we test the robustness of our approach in settings with higher noise levels. In Table 1 we report the episodic rewards of both SMAC and a SAC baseline on three environments (finger spin, hopper stand, and reacher easy) under noisy perturbations and missing pixels (Meng et al., 2021). We find that SMAC behaves more robustly than the baseline across almost all settings.
Efficiency Despite the extra estimation procedures, SMAC does not incur significant computational costs, as we can compute all terms in the estimators in parallel. Tested with an NVIDIA Quadro GV100 on the pixel-based environments, our SMAC implementation runs at 60 frames per second (FPS) on average, almost the same training speed as Latent-SAC (63 FPS), whereas differentiating through a single rollout of the dynamics model already reduces the speed to 51 FPS (roughly 20% slower).
CONCLUSION
We propose methods for better handling of latent variable policies under the MaxEnt RL framework, centered around cost-efficient computation and low-variance estimation, resulting in a tractable algorithm, SMAC, when instantiated in the actor-critic framework. We find that SMAC makes better use of the belief state than competing actor-critic methods and can more robustly find optimal policies, while adding only a minimal amount of extra compute time.
A NOTATIONS
Symbol: Description
q(s_t|x_t): Belief distribution over inferred states from observation x_t
q(s_t|a_<t, x_≤t): Belief distribution over inferred states from past data
q(s_t|h_t): Unifies notation for the above two
π(a_t|s_t): Policy conditioned on latent variable s_t
π(a_t|x_t): Latent variable policy, equals ∫ π(a_t|s_t) q(s|x) ds
π(a_t|a_<t, x_≤t): Latent variable policy, equals ∫ π(a_t|s_t) q(s_t|a_<t, x_≤t) ds_t
π(a_t|h_t): Unifies notation for the above two

While a latent variable policy can theoretically model any distribution (see Proposition 1), training this policy can still be difficult, especially if the true reward is actually multi-modal. Here, we test in a controlled setting whether our method can truly recover a multi-modal policy.
A standard interpretation of the MaxEnt RL objective is as a reverse KL objective (Levine, 2018),

max_π E_{p(x)π(a|x)} [r(x, a) − α log π(a|x)]  (16)
⇔ max_π E_{p(x)π(a|x)} [r(x, a)/α − log π(a|x)]  (17)
⇔ max_π E_{p(x)π(a|x)} [log exp(r(x, a)/α) − log π(a|x)]  (18)
⇔ min_π E_{p(x)} [D_KL(π(a|x) ‖ p*(a|x))]  (19)

where p*(a|x) ∝ exp(r(x, a)/α), i.e., a target distribution defined by the exponentiated reward function and annealed with temperature α.
Despite the ubiquity of the reverse KL objective, such as in standard posterior inference, the training of latent variable models for this objective is still relatively under-explored due to the difficulty of estimating it properly. Luo et al. (2020) showed that using improper bounds on the objective can lead to catastrophic failure, but only demonstrated successful training for a unimodal target distribution, while Sobolev & Vetrov (2019) discussed proper bounds but did not perform such an experiment. We experiment by setting a reward function that has multiple optimal actions. Using a sufficiently large α creates a target distribution with four modes (Figure 7a). In Figure 7b, we show that we can successfully learn a multi-modal distribution with a latent variable policy using the methods discussed in Sections 3.2.1 and 3.2.2. On the other hand, a Gaussian policy can only capture one of the four modes (Figure 7c), with the exact mode depending on the random initialization.
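In the discrete one-step case, this equivalence can be verified directly: the entropy-regularized objective is maximized exactly by the Boltzmann distribution p*(a) ∝ exp(r(a)/α). A small self-contained NumPy check, with a made-up multi-modal reward:

```python
import numpy as np

def maxent_objective(pi, r, alpha):
    """One-step MaxEnt objective E_pi[r] + alpha * H(pi) over discrete actions."""
    return float(np.sum(pi * r) - alpha * np.sum(pi * np.log(pi)))

def boltzmann(r, alpha):
    """Closed-form maximizer p*(a) proportional to exp(r(a) / alpha)."""
    w = np.exp((r - np.max(r)) / alpha)
    return w / np.sum(w)
```

For a reward such as r = (1, 0, 1, 0), the maximizer spreads its mass evenly over both high-reward actions instead of collapsing onto one mode, which is the multi-modal behavior tested in this appendix.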
B.2 WORLD MODEL LEARNING
In Figure 8 we visualize the graphical model for the RSSM, similarly to Hafner et al. (2019), as described in Section 2.2. We use solid arrows to denote the generative machinery (p in the following equations) and dotted arrows to denote the inference machinery (q in the following equations). A variational bound on the likelihood of an observed trajectory can be written as follows,
log p(x_{≤T}, r_{≤T}|a_{≤T}) ≥ E_{s_{≤T}∼q} [log p(x_{≤T}, r_{≤T}, s_{≤T}|a_{≤T}) − log q(s_{≤T}|x_{≤T}, a_{≤T})]  (20)
= E_q [ Σ_{t=1}^T ( log p(x_t|s_t) + log p(r_t|s_t) + log p(s_t|s_{t−1}, a_{t−1}) − log q(s_t|s_{t−1}, a_{t−1}, x_t) ) ]  (21)
= E_q [ Σ_{t=1}^T ( log p(x_t|s_t) + log p(r_t|s_t) − D_KL( q(s_t|s_{t−1}, a_{t−1}, x_t) ‖ p(s_t|s_{t−1}, a_{t−1}) ) ) ].  (22)

The world model / RSSM is then learned by maximizing Equation 22 with respect to the parameters of p(x|s), p(r|s), q(s_t|s_{t−1}, a_{t−1}, x_t), and p(s_t|s_{t−1}, a_{t−1}). Note that in Section 2.2 we omit the reward-modeling part for simplicity. Due to the Markovian assumption on the latent dynamics and the shorthand h_t = (a_<t, x_≤t), we can also use q(s_t|h_t) to denote q(s_t|s_{t−1}, a_{t−1}, x_t).

B.3 SMAC ALGORITHM

In this section, we present the algorithmic details of SMAC with and without a world model in Algorithm 1 and Algorithm 2, respectively. Both algorithms follow the commonly adopted off-policy actor-critic style and utilize a replay buffer to store data for the updates of the actor and critic networks (the dependency on the buffer D is omitted in the algorithms). SMAC is based on the SAC algorithm, whose critic is trained by minimizing the TD error,
J_Q = ( Q(x, a) − ( r + γ [ Q̄(x′, a′) + α Ĥ(π(·|x′)) ] ) )²,  (23)
where (x, a, r, x′) ∼ D, a′ ∼ π(·|x′), Q̄ denotes the critic with a stop-gradient operator applied, and Ĥ is an estimate of the policy entropy. In our case, we estimate the entropy of the latent variable policy with Equation 12, as discussed in Section 3.2. Moreover, the actor is updated by minimizing
J_π = −Q(x, a) − α Ĥ(π(·|x)),  (24)
where x ∼ D and a ∼ π(·|x), which is equivalent to a ∼ π(·|s), s ∼ q(s|x). In the algorithm box we omit the moving average of the critic network for simplicity, as it is standard practice. We remark that SMAC differs little from SAC in terms of algorithmic details, but mainly achieves its improvement through the structured exploration behavior obtained from latent variable modeling. For SMAC in conjunction with a world model, we learn the critic network by minimizing the TD error at the latent level,
J_Q = ( Q(s, a) − ( r + γ [ Q̄(s′, a′) + α Ĥ(π(·|s′)) ] ) )²,  (25)
whose terms can be seen as one-sample estimates of the corresponding terms in Equation 23. We also tried directly training the critic network at the observation level, but the empirical difference is negligible. As a result, we keep the latent-level TD learning for simplicity's sake.
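A minimal sketch of the soft Bellman target used in Equations 23 and 25; the grouping of the discount and entropy bonus follows the standard SAC target, and all inputs are placeholder scalars rather than network outputs.

```python
def soft_td_loss(q, r, gamma, q_target_next, entropy_next, alpha):
    """Squared soft TD error (Equations 23/25).

    q: Q(s, a) under the online critic;
    q_target_next: stop-gradient target Q-bar(s', a') with a' ~ pi(.|s');
    entropy_next: entropy estimate H(pi(.|s')), e.g. from Equation 12.
    """
    target = r + gamma * (q_target_next + alpha * entropy_next)
    return (q - target) ** 2
```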
We next state a method to encourage exploration through conditional entropy minimization. Entropy in a latent variable policy (Equation 5) can be increased either by dispersing the probability density to other modes, or by increasing the entropy at a single mode. The latter corresponds to increasing the entropy of the conditional distribution π(a_t|s_t), which can end up as a shortcut to increasing entropy and can result in spurious local minima during training. This is in fact a well-known issue that sampling-based objectives run into (Rainforth et al., 2018b; Midgley et al., 2022), resulting in a policy that explores the space of action trajectories at a slower pace. On the other hand, notice that in the proof of Proposition 1 in Section D.1 we require the decoder variance to be sufficiently small for the model to be expressive; thus we propose remedying this issue for MaxEnt RL by adding a conditional entropy term to the objective:
max_π E_{p(τ)} [ Σ_{t=0}^∞ γ^t ( r_t(x_t, a_t) + α H(π(·|x_t)) − β E_{q(s_t|x_t)}[H(π(·|s_t))] ) ].  (26)
The conditional entropy H(π(·|s_t)) represents the entropy around a single mode. By minimizing this in conjunction with maximizing the marginal entropy, we incentivize the policy to disperse its density to other modes, encouraging it to explore and find all regions with high reward. This allows the latent variable policy to make better use of its source of randomness s_t, and encourages a nonlinear mapping between s_t and the action space. This is in stark contrast to entropy maximization with a Gaussian policy, where random noise is simply added linearly and independently to the action. This technique is only useful for a few tasks (hopper hop, humanoid run, humanoid stand, quadruped escape, quadruped run, reacher easy, walker run) in the state-based model-free experiments. In the pixel-based experiments, where SMAC leverages a world model, the distribution of the latent variable is learned within the world model, so there is no need to further involve such a regularizer.
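The regularizer in Equation 26 only requires the closed-form entropy of the conditional distribution when π(a|s) is Gaussian; the following sketch of the combined entropy bonus makes that assumption explicit (names are illustrative, not from the paper's code):

```python
import numpy as np

def gaussian_entropy(sigma):
    """Closed-form entropy of a 1-D Gaussian N(mu, sigma^2)."""
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)

def entropy_bonus(marginal_entropy_est, cond_sigmas, alpha, beta):
    """Entropy terms of Equation 26: alpha * H(pi(.|x_t)) - beta * E_q[H(pi(.|s_t))].

    marginal_entropy_est: estimate of the marginal entropy (Equations 11/12);
    cond_sigmas: std-devs of pi(.|s^(k)) for latent samples s^(k) ~ q(s|x),
    assuming a Gaussian conditional policy.
    """
    cond_entropy = float(np.mean([gaussian_entropy(s) for s in cond_sigmas]))
    return alpha * marginal_entropy_est - beta * cond_entropy
```

Setting β > 0 rewards keeping each mode narrow while the α term still rewards dispersing mass across modes, matching the division of labor described above.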
Unadopted techniques Our proposed techniques are universal and could be applied to any MaxEnt RL algorithm. For example, we also tried combining Dreamer with MaxEnt in the policy optimization part, hoping to further improve its performance with our method. Nonetheless, as stated in the Appendix of Hafner et al. (2020), introducing the MaxEnt principle into the Dreamer implementation has little positive effect. Indeed, we find that applying MaxEnt to the actor of Dreamer in fact lowers sample efficiency. Therefore, we do not include these experiments in this work.
Another unused technique lies in latent variable modeling. MLMC with finite samples (i.e., K < ∞) still gives a biased estimator. On the other hand, the Russian Roulette estimator (Kahn, 1955) enables an unbiased estimate of an infinite series via randomized truncation, together with a corresponding upweighting of the retained terms. This technique is also used in many modern machine learning problems (Xu et al., 2019; Chen et al., 2019). As a result, we also tried introducing the Russian Roulette calculation into our MLMC estimator. However, we did not find evident improvement in our RL experiments, so we do not include this technique in the final SMAC algorithm.
C ADDITIONAL DETAILS REGARDING EXPERIMENTS
For all episode return curves, we report the mean over 5 seeds and 95% confidence intervals.
Published as a conference paper at ICLR 2023
Regarding Figure 3 We run SMAC on the DMC finger spin environment with different marginal log-probability estimators. We obtain the ground-truth value via expensive Monte Carlo estimation with 1 × 10^5 samples. From the figure, we can see that there is little hope of using a naïve upper bound on the entropy for MaxEnt RL, where a reasonable scale of the entropy term is ∼10^0. For the IWAE (Burda et al., 2016) implementation, we set the number of particles to 32. Other values for the number of particles give similar results.

Algorithm 1 SMAC (without a world model)
for t = 1 . . . T do
4:   a_t ∼ π(a_t|s_t), s_t ∼ q(s_t|s_{t−1}, a_{t−1}, x_t)
5:   r_t, x_{t+1} ← env.step(a_t)
6: end for
7: D ← D ∪ {(x_t, a_t, r_t)}_{t=1}^T
Figure 9: Ablation study of the number of particles for SMAC on the quadruped walk and reacher hard environments. The effect is evident on some but not all environments.
Regarding Section 5.1 For model-free experiments, the agents are fed with state-based inputs.
We follow the PyTorch model-free SAC implementation of Tandon (2020) for this part. The actor is parametrized by a tanh-Gaussian distribution. For our method, we additionally use a two-layer MLP to parametrize the latent distribution q(s|x). We set the neural network widths of the baselines and SMAC to 400 and 256, respectively, to keep a comparable number of parameters. For the entropy coefficients, we use the same autotuning approach as SAC (Haarnoja et al., 2018b). We follow Fujimoto et al. (2018) for the TD3 implementation details, except that we do not use its 1 × 10^-3 learning rate: we found that the original learning rate gives very poor performance in our experiments, so instead we set it to 3 × 10^-4, which is empirically much better and also consistent with the two other algorithms. We conduct an ablation study on the number of particles (i.e., K in Equation 12) in Figure 5, which indicates that the number of levels / particles used in the estimation has an effect on some of the environments. We choose the best hyperparameters (number of particles in {8, 16, 32}, dimension of the latent in {8, 16, 32}) for each environment. We show the full set of experimental results in Figure 10.

Figure 11: Comparison with other probabilistic policy modeling methods on different DMC environments with state-based observations. "Flow2" and "Flow4" refer to normalizing-flow-based policies with two or four RealNVP blocks as the backend, while "MOE5" and "MOE10" refer to probabilistic mixture-of-experts policies with five or ten mixtures. MOE, SAC, and SMAC share a similar number of parameters, while the flow methods have about two or four times more parameters. Our proposed SMAC generally achieves the best sample efficiency on a majority of environments.

Regarding other probabilistic policy modeling methods We further compare with normalizing-flow-based policies and mixture-of-experts. For the normalizing-flow-based method, we follow the practice of Haarnoja et al. (2018a); Ward et al. (2019) and use the RealNVP (Dinh et al., 2017) architecture for the policy distribution π(a|s). The neural network is also followed by a tanh transformation, as in the other methods. We experiment with two-block and four-block RealNVPs, which makes their number of parameters approximately two and four times that of the SAC and SMAC methods (these two share approximately the same number of parameters).
We show their performance in Figure 11 as "Flow2" and "Flow4", respectively. For the probabilistic mixture-of-experts (MOE) method, we follow the practice of Ren et al. (2021) and use mixtures of five and ten Gaussians. We show their performance in Figure 11 as "MOE5" and "MOE10", respectively. This probabilistic MOE shares roughly the same number of parameters and compute cost as SAC and SMAC. To conclude, SMAC achieves more favorable performance across a majority of the control tasks with an equal or smaller number of parameters (note that SMAC is the red curve in the figure). Normalizing-flow-based policies outperform SMAC on only one environment, while MOE requires the use of biased gradient estimators, which often require extra hyperparameter tuning or can lead to worse performance.
Regarding Section 5.2 For this part, the agents are fed with pixel-based inputs. We mainly follow the PyTorch world model implementation of Lin (2022). We implement the Latent-SAC algorithm according to the instructions and hyperparameters in Wang et al. (2022). We implement the SLAC (Lee et al., 2020a) algorithm following its original GitHub repo (Lee et al., 2020b). Note that Latent-SAC and SLAC are two different algorithms in terms of actor modeling and world model design, although they both build on a SAC backend. We select the hyperparameters in the same way as in the previous part. We show the full version of the experimental results in Figure 12. We do not plot results for the humanoid domain or quadruped fetch, as no method obtained meaningful results within 1 million frames.
For the robustness experiments, we add two kinds of noise in pixel space, namely Gaussian perturbation and sensor-missing perturbation. For the Gaussian perturbation, we add isotropic Gaussian noise with scale 0.01 or 0.05. For sensor missing, we independently drop each pixel value to zero according to a Bernoulli distribution (with parameter 0.01 or 0.05). For the results in Table 1, we report the best episodic reward across training iterations, with standard deviations estimated from 5 seeds.
D THEORETICAL DERIVATIONS

D.1 UNIVERSALITY OF LATENT VARIABLE MODELS
We first need the following lemma for the proof.
Lemma 2. For any continuous d-dimensional distribution p*(x) and any ε > 0, there exists a neural network Ψ: R → R^d with finite depth and width that satisfies W[Ψ#N(0, 1) ‖ p*(·)] ≤ ε. Here # is the push-forward operator, W(µ, ν) := inf_{π∈Π(µ,ν)} ∫ |x − y| dπ(x, y) is the Wasserstein metric, and N(0, 1) is the standard Gaussian distribution. We use this result to prove Proposition 1.
Proposition 1. For any d-dimensional continuous distribution p*(x), there exists a sequence of two-level latent variable models p_n(x) = ∫ p_n(x|z) p_n(z) dz, n ∈ N_+, that converges to it, where both p_n(x|z) and p_n(z) are factorized Gaussian distributions with means and variances parameterized by neural networks.
Proof of Proposition 1. We let z ∈ R and set p_n(z) = N(0, 1) for all n ∈ N_+. From Lemma 2 above, we know that for every n ∈ N_+ there exists a finite-size neural network Ψ_n: R → R^d such that W[Ψ_n#p_n(z) ‖ p*(x)] ≤ 1/n. We then set p_n(x|z) = N(x; Ψ_n(z), 1/n²). Note that this falls into the category of factorized Gaussians.
Let π₀ be a coupling between p_n(x) and Ψ_n#p_n(z), where π₀(x, x′) is the joint distribution over (x, x′) with x = x′ + ζ/n, ζ ∼ N(0, 1), x′ ∼ Ψ_n#p_n(z). We thus have W[p_n(x) ‖ Ψ_n#p_n(z)] ≤ ∫ |x − x′| dπ₀(x, x′) = √(2/π) · (1/n) < 1/n. Since the Wasserstein metric satisfies the triangle inequality, we have W[p_n(x) ‖ p*(x)] ≤ W[p_n(x) ‖ Ψ_n#p_n(z)] + W[Ψ_n#p_n(z) ‖ p*(x)] ≤ 2/n → 0 as n → ∞.
D.2 TIGHT LOWER BOUND ON THE MARGINAL ENTROPY
Proposition 3 (Lower bound on the marginal entropy). For a latent variable policy π(a|h) := ∫ π(a|s) q(s|h) ds with prior q(s|h) and likelihood π(a|s), consider

Ĥ_K(h) := E_{a∼π(a|h)} E_{s^{(0)}∼p(s|a,h)} E_{s^{(1:K)}∼q(s|h)} [ −log ( (1/(K+1)) Σ_{k=0}^K π(a|s^{(k)}) ) ].
where p(s|a, h) ∝ π(a|s)q(s|h) is the posterior and K is any positive integer, then the following holds:
(1) Ĥ_K(h) ≤ H(π(·|h)) = −∫_A π(a|h) log ( ∫_S π(a|s) q(s|h) ds ) da,

(2) Ĥ_K(h) ≤ Ĥ_{K+1}(h),

(3) lim_{K→∞} Ĥ_K(h) = H(π(·|h)).
The following proofs roughly follow the derivations from Sobolev & Vetrov (2019). We describe them here for completeness.
For (1):

Proof. We write H(π(·|h)) − Ĥ_K(h) = E_{a∼π(a|h)} E_{s^{(0)}∼p(s|a,h)} E_{s^{(1:K)}∼q(s|h)} [ log ( p(s^{(0)}|a, h) q(s^{(1:K)}|h) / w(s^{(0:K)}|a, h) ) ], where

w(s^{(0:K)}|a, h) := p(s^{(0)}|a, h) q(s^{(1:K)}|h) / [ (1/(K+1)) Σ_{k=0}^K p(s^{(k)}|a, h)/q(s^{(k)}|h) ].

We only need to show that w(s^{(0:K)}|a, h) is a normalized density function, since the expectation above is then a KL divergence and hence non-negative. Consider the following generative process:
1. sample K + 1 samples s̄^{(k)} ∼ q(s|h), k = 0, . . . , K;
2. set a weight w_k = π(a|s̄^{(k)}) for each sample;
3. sample a categorical random variable j ∼ Cat(w_0/Σ_k w_k, . . . , w_K/Σ_k w_k), and set s^{(0)} = s̄^{(j)}, s^{(1:K)} = s̄^{(\j)}.

Then the marginal of s^{(0:K)} is

Σ_{j=0}^K ∫ q(s̄^{(0:K)}|h) (w_j / Σ_{k=0}^K w_k) δ(s^{(0)} − s̄^{(j)}) δ(s^{(1:K)} − s̄^{(\j)}) ds̄^{(0:K)}
= (K + 1) ∫ q(s̄^{(0:K)}|h) (w_0 / Σ_{k=0}^K w_k) δ(s^{(0)} − s̄^{(0)}) δ(s^{(1:K)} − s̄^{(1:K)}) ds̄^{(0:K)}
= (K + 1) q(s^{(0:K)}|h) π(a|s^{(0)}) / Σ_{k=0}^K π(a|s^{(k)})
= p(s^{(0)}|a, h) q(s^{(1:K)}|h) / [ (1/(K+1)) Σ_{k=0}^K p(s^{(k)}|a, h)/q(s^{(k)}|h) ]
= w(s^{(0:K)}|a, h).

Thus w(s^{(0:K)}|a, h) is a normalized density function, which proves (1).

For (2):

Proof. We write

Ĥ_{K+1}(h) − Ĥ_K(h) = E_{a∼π(a|h)} E_{s^{(0)}∼p(s|a,h)} E_{s^{(1:K+1)}∼q(s|h)} [ log ( p(s^{(0)}|a, h) q(s^{(1:K+1)}|h) / v(s^{(0:K+1)}|a, h) ) ]
= D_KL( p(s^{(0)}|a, h) q(s^{(1:K+1)}|h) ‖ v(s^{(0:K+1)}|a, h) ) ≥ 0,

where v(s^{(0:K+1)}|a, h) = p(s^{(0)}|a, h) q(s^{(1:K+1)}|h) · [ (1/(K+2)) Σ_{k=0}^{K+1} π(a|s^{(k)}) ] / [ (1/(K+1)) Σ_{k=0}^K π(a|s^{(k)}) ]. We can then show that v(s^{(0:K+1)}|a, h) is a normalized density function similarly to (1).
For (3):

Proof. For the estimator, we have

(1/(K+1)) Σ_{k=0}^K π(a|s^{(k)}) = A_K + B_K · C_K,  where  A_K = (1/(K+1)) π(a|s^{(0)}),  B_K = K/(K+1),  C_K = (1/K) Σ_{k=1}^K π(a|s^{(k)}).

By the law of large numbers, we have A_K → 0, B_K → 1, and C_K → E_{q(s|h)}[π(a|s)] = π(a|h). Thus the quantity inside the logarithm converges to π(a|h), and hence lim_{K→∞} Ĥ_K(h) = H(π(·|h)).
Figure 2: Graphical model of the POMDP (solid), world model, and induced latent variable policy (dashed).

Figure 3: Training with naïve entropy estimators results in extremely loose upper bounds.

Figure 4: Experiments on eight DMC environments where agents are given state-based inputs. The SMAC approach improves upon SAC with better exploration and more robust training.

Figure 5: Ablation experiments.
Figure 6: Experiments on eight DMC environments where agents are given pixel-based inputs. The proposed SMAC approach achieves better expected return than the similar Latent-SAC and SLAC baselines. Notice that our method does not involve any planning, but still achieves comparable (sometimes even better) performance to the model-based RL algorithm.
p(x_t|s_t): Observation model in the world model
p(r_t|s_t): Reward model in the world model
p(s_{t+1}|s_t, a_t): Transition model, also the prior of the world model
q(s_t|s_{t−1}, a_{t−1}, x_t): Inferred posterior dynamics model of the learned world model

B ADDITIONAL DETAILS REGARDING METHODOLOGY

B.1 MULTI-MODALITY OF LATENT VARIABLE POLICIES
Figure 7: Optimizing a latent variable policy for a one-step multi-modal MaxEnt RL objective.
Figure 8: Graphical model of a POMDP (solid) and a world model (dashed).
Figure 10: Experiment results on different DMC environments with state-based observations.
Figure 12: Experiment results on different DMC environments with pixel-based observations.
Table 1: Experiments with noisy observations. We experiment with two perturbation levels for two kinds of noise. GAUSSIAN PERTURBATION adds independent white noise to the pixel images, while SENSOR MISSING randomly turns a portion of the pixels to black. SMAC is trained with a world model, while L-SAC denotes a SAC baseline trained with a world model.

                 GAUSSIAN PERTURBATION                    SENSOR MISSING
NOISE            SMALL               LARGE                SMALL               LARGE
METHOD           L-SAC     SMAC     L-SAC     SMAC       L-SAC     SMAC     L-SAC     SMAC
FINGER           912±138   959±30   880±109   924±43     933±63    955±36   921±44    921±52
HOPPER           571±411   731±407  516±385   702±391    721±16    872±62   703±7     866±61
REACHER          869±28    928±10   782±89    925±92     883±112   937±89   854±169   925±59
Proof. Theorem 5.1 of Perekrestenko et al. (2020) shows that for any p*(x) and ε > 0, there exists a nonlinear ReLU neural network of finite size, Ψ̃ : R → R^d, that satisfies W(Ψ̃#U[0,1] ‖ p*(·)) ≤ ε, where U[0,1] is the uniform distribution on [0,1]. On the other hand, it is well known that the cumulative distribution function (cdf) of the standard Gaussian, Φ(·) : R → R, maps the standard Gaussian to the uniform distribution U[0,1]; thus we have the construction Ψ := Ψ̃ ∘ Φ.
ACKNOWLEDGEMENT

The authors would like to thank Zhixuan Lin, Tianwei Ni, Chinwei Huang, Brandon Amos, Ling Pan, Max Schwarzer, Yuandong Tian, Tianjun Zhang, Shixiang Gu, and anonymous reviewers for helpful discussions. Dinghuai also expresses gratitude towards his fellow interns at FAIR for creating a lot of joyful memories during the summer in New York City.

Proof. We write H(π(·|h)) − H_K(h) = E_{s^{(0)} ∼ p(s|a,h)} E_{s^{(1:K)} ∼ q(s|h)} [ log ( p(s^{(0)}|a,h) q(s^{(1:K)}|h) / w(s^{(0:K)}|a,h) ) ]. We only need to show that w(s^{(0:K)}|a,h) is a normalized density function. Consider the following generation process:

1. sample K + 1 candidates s̃^{(k)} ∼ q(s|h), k = 0, . . . , K;
2. set a weight for each sample, w_k = π(a | s̃^{(k)});
3. sample a categorical random variable h ∼ Cat(w_0 / Σ_k w_k, . . . , w_K / Σ_k w_k).

It is easy to see that the joint probability of this generation process is

p(s^{(0:K)}, s̃^{(0:K)}, h) = q(s̃^{(0:K)}|h) · (w_h / Σ_{k=0}^{K} w_k) · δ(s^{(0)} − s̃^{(h)}) δ(s^{(1:K)} − s̃^{(\h)}).
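The three-step generation process in the proof is easy to simulate. The sketch below uses toy stand-ins `q_sample` and `pi_a` for q(s|h) and π(a|s) (these are assumptions for illustration, not the paper's models); it makes explicit that the weights w_h / Σ_k w_k form a normalized categorical distribution, which is the heart of the normalization claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_via_weights(K, q_sample, pi_a):
    """Steps 1-3 of the proof: draw K+1 candidates from q, weight each
    by pi(a|s), then pick index h with probability w_h / sum_k w_k
    (self-normalized importance sampling)."""
    cands = [q_sample() for _ in range(K + 1)]
    w = np.array([pi_a(s) for s in cands])
    probs = w / w.sum()            # normalized categorical weights
    h = rng.choice(K + 1, p=probs)
    return cands[h], probs
```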
Computer Vision and Image Understanding

SImProv: scalable image provenance framework for robust content attribution

Alexander Black (CVSSP, University of Surrey, UK) · Tu Bui (CVSSP, University of Surrey, UK) · Simon Jenni (Adobe Research) · Zhifei Zhang (Adobe Research) · Viswanathan Swaminanthan (Adobe Research) · John Collomosse (CVSSP, University of Surrey, UK; Adobe Research)

DOI: 10.48550/arxiv.2206.14245 · arXiv: 2206.14245 (https://export.arxiv.org/pdf/2206.14245v2.pdf)

Abstract. We present SImProv - a scalable image provenance framework to match a query image back to a trusted database of originals and identify possible manipulations on the query. SImProv consists of three stages: a scalable search stage for retrieving the top-k most similar images; a re-ranking and near-duplicate detection stage for identifying the original among the candidates; and finally a manipulation detection and visualization stage for localizing regions within the query that may have been manipulated to differ from the original. SImProv is robust to benign image transformations that commonly occur during online redistribution, such as artifacts due to noise and recompression degradation, as well as out-of-place transformations due to image padding, warping, and changes in size and shape. Robustness towards out-of-place transformations is achieved via the end-to-end training of a differentiable warping module within the comparator architecture. We demonstrate effective retrieval and manipulation detection over a dataset of 100 million images.
Introduction
Images are a powerful way to share stories and spread information. However, images can easily be manipulated to tell altered or even completely false stories. As both the number of images shared online each day and the ease of image manipulation grow, the need for tools that provide content provenance information rises. This is addressed by the recently introduced C2PA standard (Coalition for Content Provenance and Authenticity, 2021), which specifies how provenance information can be encapsulated as metadata alongside the image content. If an image follows the C2PA standard, users can extract its entire edit story via its secondary-stream metadata. This paper addresses the common scenario where metadata is stripped from an image during its online redistribution. It contributes a technique for robustly matching a query (without metadata) to an original from a trusted database (with full metadata), followed by an intuitive visualization of the image regions that have been manipulated to differ from the original.
Robust image matching poses many challenges. Images spread online are often subject to benign transformations such as changes to quality, resolution, aspect ratio, format, etc. Additionally, we aim to match images that have been manipulated for editorial reasons that alter or falsify their stories (we also call these editorial changes, as opposed to benign changes). We note that cryptographic (bit-level) hashes cannot be relied upon for matching, nor can simple pixel-difference operations be used to visualize changes due solely to manipulation. We propose SImProv - a robust and scalable content provenance framework that complements C2PA. SImProv makes two technical contributions:
Robust Near-Duplicate Image Search. We learn a visual search embedding that is robust to both benign transformations and content manipulations. We train a convolutional neural network (CNN) using a contrastive learning approach. We use a dataset of original photographs modified in Adobe Photoshop TM , combined with data augmentations simulating benign image modifications. This yields a search embedding for robustly matching a near-duplicate query image circulating 'in the wild' to a trusted database of original images (hereon, we use the term 'near-duplicate' to refer to images that undergo certain transformations regardless of such transformations being benign or editorial changes).
An earlier version of SImProv was proposed at the CVPR workshop on Media Forensics 2021 (Black et al., 2021). The proposed method improves upon this using instance-level feature pooling methods to improve near-duplicate image search. We show that incorporating these into our image fingerprinting descriptor improves performance scalability, using a corpus of up to 100 million diverse photographic and artistic images from Behance.Net. These adaptations demonstrate the utility of our approach for web-scale content authenticity applications.

Pairwise Image Comparison. We propose a novel CNN architecture for pairwise image comparison that learns a joint image-pair representation. We use this architecture to train two models, for near-duplicate detection and editorial change localization respectively. In the near-duplicate detection model, the pair representation is used in conjunction with the individual visual search embeddings of both images to decide whether the two input images are two versions of the same image or completely unrelated distinct images. In the editorial change visualization model, the pair representation is used to produce a heatmap that localizes visual discrepancies due to editorial manipulation. The network incorporates both a de-warping and an image correlation module, and is trained end-to-end to ignore out-of-place transformations of content, e.g. due to padding or warping, as well as in-place corruption due to noise. In this extension of the earlier pairwise approach (Black et al., 2021), we show that fusing end-to-end features from the image embedding together with the pairwise embedding model improves the performance of near-duplicate detection and re-ranking. These tasks were previously trained and applied as two sequential, entirely decoupled processes.
Related Work
The issue of visual content authenticity has been extensively studied from two main perspectives: detection and attribution.
Detection typically involves identifying instances of visual tampering or generative content - 'deep fakes' (Dolhansky et al., 2020). This usually requires "blind" detection, where the image in question is the only available information. Different statistical approaches have been explored to localize manipulated regions (Wang et al., 2019, 2010). Other methods can identify whether a generative adversarial network (GAN) was used to create the content (Nguyen et al., 2022), or even specify which GAN, using GAN fingerprints (Yu et al., 2019). Most frequently, these methods focus on the detection of fake faces in particular (Guo et al., 2021; Rössler et al., 2019).
Image Attribution methods aim to link an image to data on its provenance, using embedded metadata (CAI; Aythora et al., 2020), watermarking (K. Hameed, 2006; Devi et al., 2019; Profrock et al., 2006; Baba et al., 2019), or perceptual hashing (Pan, 2019; Liu et al., 2016; Cao et al., 2017; Khelifi and Bouridane, 2017). Emerging standards securely transport a cryptographically signed edit history within image metadata (CAI; Aythora et al., 2020; Coalition for Content Provenance and Authenticity, 2021). However, social media platforms often strip metadata from uploaded images, and may even replace it with false information to misattribute the image (Council, 2020).
CSQ (Yuan et al., 2020) approaches hashing as a retrieval/attribution optimization task. Deep Supervised Hashing (DSH) (Liu et al., 2016) and HashNet (Cao et al., 2017) train a siamese convolutional neural network to learn visual hashes. They use a ranking loss, which is commonly used in visual search (Gordo et al., 2016). DSDH (Li et al., 2017) learns metric ranking and classification directly from the hash code. Our approach uses deep metric learning as well, but differs in that we use contrastive training (Chen et al., 2020) and data augmentation to learn invariances relevant to benign and editorial image transformation. Compared to the image similarity detection challenge and dataset (Douze et al., 2021), which focuses on large-scale retrieval of images subjected to benign transformations, our approach addresses the more complex problem of detecting editorial changes.
Localization of image manipulation focuses on identifying image splicing (M. Huh et al., 2018) or the use of photoretouching tools (Wang et al., 2019) in blind detection tasks. We tackle the problem by combining perceptual hashing and pair-wise comparison. Our image comparator, the second contribution of this paper, assumes that a trusted "original" image can be found using visual search (the first contribution of this paper). The comparator is able to ignore differences caused by benign image transformations but is sensitive to editorial manipulations. This is achieved through the use of a differential optical flow (Teed and Deng, 2020) and dewarping module in our two-stream architecture. Two-stream networks have previously been used to predict the types of editing operations applied to pairs of images (Jenni and Favaro, 2018). Our approach is different because it produces a heatmap of editing operations that is insensitive to specific transformation classes. Another feature of our method is a classification score that can be used at inference to determine whether an image is a benign or manipulated version, or a different image altogether.
Method
Our approach to image provenance assumes the existence of a trusted database D = {I_1, I_2, ..., I_N} containing N original images and their associated provenance information (e.g. curated by a trusted publisher, or via a decentralized immutable datastore such as a blockchain). Given a query image q, our goals are: (i) determining whether there exists an original version of q in D; and (ii) localizing editorial changes if a match is found. The two goals appear to conflict with each other, since the former requires robustness to both benign and editorial changes while the latter should be sensitive to editorial manipulations. Learning a single model to achieve both goals is therefore extremely challenging. We instead propose a multi-stage framework. SImProv consists of 3 stages: (i) a visual search stage, followed by (ii) re-ranking and near-duplicate detection, and finally (iii) detection and visualization of editorial changes (Fig. 1).
Firstly, in 3.1 we describe the representation learning process used for near-duplicate image search (stage 1). We develop a model that learns 256-D representations of images that are further binarized into a 128-bit hash for scalable search (Johnson et al., 2019). The search is used to identify the most similar images to a user's query image.
Secondly, 3.2 describes the Pairwise Embedding Network (PEN), which delivers a pairwise representation of the query and a candidate image. PEN is the core design for the later stages of SImProv. In 3.3, PEN is integrated into our stage-2 Pairwise Similarity Evaluation Network (PSEN) to re-rank the top-k images (k=100) and estimate the likelihood that a candidate image is a near-duplicate version of the query, as opposed to being just another distinct image. Finally, 3.4 describes how PEN is leveraged to identify whether the query image is a manipulated or benignly transformed version of the original (stage 3). If the query is identified as manipulated, we visualize a heatmap of the manipulated region on top of the image.
Near-Duplicate Image Search
We train a CNN model f_r(.) to encode an image I into a compact embedding r = f_r(I) ∈ R^256. Additionally, we perform KMeans quantization with 1024 clusters, followed by product quantization, resulting in a 128-bit descriptor of the 256-dimensional embedding (Johnson et al., 2019). We employ a ResNet50 (He et al., 2016) backbone for f_r, replacing the final layer with a 256-D fully connected (fc) layer as the embedding. We initialize from DeepAugMix (Hendrycks et al., 2020) pretrained weights and finetune with a multiple-positives contrastive loss as described in (Black et al., 2021).
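The quantization step can be illustrated with a from-scratch toy product quantizer. The paper uses the faiss library (Johnson et al., 2019) for this, so the k-means routine and parameter names below are illustrative assumptions rather than the actual implementation; with m = 16 sub-spaces and k = 256 centroids each, every 256-D embedding compresses to 16 one-byte codes, i.e. 128 bits.

```python
import numpy as np

def train_pq(embs, m=16, k=256, iters=5, seed=0):
    """Toy product quantizer: split each embedding into m sub-vectors and
    run k-means (k centroids) independently in every sub-space.
    Requires len(embs) >= k training points."""
    rng = np.random.default_rng(seed)
    d = embs.shape[1] // m
    books = []
    for i in range(m):
        sub = embs[:, i * d:(i + 1) * d]
        cent = sub[rng.choice(len(sub), size=k, replace=False)].copy()
        for _ in range(iters):  # Lloyd iterations
            assign = np.argmin(((sub[:, None] - cent[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                pts = sub[assign == j]
                if len(pts):
                    cent[j] = pts.mean(0)
        books.append(cent)
    return books

def pq_encode(x, books):
    """Quantize one embedding to its per-sub-space nearest-centroid codes."""
    d = len(x) // len(books)
    return [int(np.argmin(((c - x[i * d:(i + 1) * d]) ** 2).sum(-1)))
            for i, c in enumerate(books)]
```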
During inference, we find that our model benefits from GeM pooling (Radenović et al., 2018) in two ways. Firstly, the model becomes resolution-agnostic and can take larger images as input, capturing more information. Secondly, it allows the model to focus on local features, which is more beneficial for matching out-of-place transformed images.
For a set of K spatial feature map activations Φ = [φ_1, ..., φ_K], the GeM (Radenović et al., 2018) pooling operation G is defined as G(Φ) = [g_1, ..., g_k, ..., g_K], with g_k = ( (1/|φ_k|) Σ_{x∈φ_k} x^{p_k} )^{1/p_k}, where p_k is a hyper-parameter (p_k is fixed at a default value of 3 in our experiments).
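A minimal NumPy sketch of the GeM formula above (channel-last layout assumed); p = 1 recovers average pooling and large p approaches max pooling.

```python
import numpy as np

def gem_pool(feature_map, p=3.0, eps=1e-6):
    """Generalized-mean (GeM) pooling over spatial positions.

    feature_map: array of shape (H, W, K), K channel activations.
    Returns a K-dim descriptor. Assumes non-negative (e.g. post-ReLU)
    activations; eps guards against zero inputs."""
    x = np.clip(feature_map, eps, None)
    return (x ** p).mean(axis=(0, 1)) ** (1.0 / p)
```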
Pairwise Embedding Network
We propose a Pairwise Embedding Network (PEN) that learns a joint representation of two input images (Fig. 2). This architecture is later utilized for two purposes: near-duplicate detection (3.3.1) and localization of editorial changes (3.4), which correspond to stages 2 and 3 of SImProv respectively.
The PEN accepts a pair of query-candidate images as input and outputs an n-dimensional (n = 256 in our experiments) representation of the image pair. The PEN architecture consists of 2 modules: a geometrical alignment module, F_A, followed by a projection module, F_P (Fig. 2). Below we describe our designs for F_A and F_P.
Geometric Alignment Module is used to account for the fact that the query q may undergo geometric transformations which alter pixel placement. We correct its alignment prior to joint representation learning. In this work we use RAFT (Teed and Deng, 2020), but any flow estimation network that can be trained end-to-end is suitable.
Our de-warping unit (DWU) then uses the predicted optical flow to align the query image with the candidate image:

M : (x, y) → (x + ρ_x(x), y + ρ_y(y))    (1)
DWU(q | ρ_x, ρ_y) = S(M) ∈ R^{H×W}    (2)
where (x, y) refers to the pixel coordinates in the query image q, which are mapped to their corresponding coordinates M in the candidate image according to the optical flow (ρ_x, ρ_y). S(.) is a bilinear sampler that effectively fits a local grid around M: S(M) = {M + ΔM | ΔM ∈ R^2, |ΔM| ≤ 1}, where output coordinates are computed through linear interpolation.

Projection Module takes the candidate I and the aligned query q' = F_A(q|I) and outputs a single feature z. We first extract local features of each image using a shared CNN module: z_q = f_E(q'), z_I = f_E(I) ∈ R^{H'×W'×C}, where H', W' and C are the new height, width and feature dimension respectively. Our feature extractor f_E(.) is a 3-layer convolutional neural network (CNN) with ReLU activations, batch normalization, and max pooling layers. It outputs features at 1/4 resolution (H' = H/4, W' = W/4, and we set C = 128). The combined features are then fed into another CNN to learn a fused representation z = f_S([z_q, z_I]) ∈ R^256, where [,] is concatenation and f_S(.) is made up of 4 ResNet residual blocks (He et al., 2016) followed by average pooling and a fully-connected (FC) layer that outputs 256-dimensional features. The PEN output is used for re-ranking, near-duplicate detection, and manipulation localization, as described below.
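The bilinear sampler S(.) inside the DWU can be sketched as follows. This NumPy version works on a single-channel image and is only an illustration of Eqs. (1)-(2); the paper's module is the differentiable version trained end-to-end inside the network.

```python
import numpy as np

def dewarp(query, flow_x, flow_y):
    """Bilinear-sampling de-warp: each output pixel (x, y) reads the query
    at the flowed location (x + flow_x, y + flow_y), linearly interpolating
    between the 4 surrounding grid points (coordinates clipped to the image)."""
    H, W = query.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    mx = np.clip(xs + flow_x, 0, W - 1)
    my = np.clip(ys + flow_y, 0, H - 1)
    x0 = np.floor(mx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(my).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx, wy = mx - x0, my - y0
    return ((1 - wy) * (1 - wx) * query[y0, x0] + (1 - wy) * wx * query[y0, x1]
            + wy * (1 - wx) * query[y1, x0] + wy * wx * query[y1, x1])
```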
Near-duplicate detection and Re-ranking
Our near-duplicate image search method is designed to produce compact descriptors that enable interactive-speed search through millions of images. However, the increase in speed comes at a cost in precision: the correct image often ends up near the top of the retrieval results, but in many cases is not ranked first. We propose a re-ranking model based on pairwise comparison of the query image with each of the top-k retrieval candidates. Such pairwise comparison is much slower and is not feasible for search through millions of images, but allows us to identify the most likely match within a shortlist. The re-ranking consists of two steps: pairwise similarity evaluation and final reordering. Similarity evaluation produces a similarity confidence score for each of the 100 query-candidate image pairs. Re-ordering looks at the full list of 100 confidence scores and decides which of the candidate images is the most likely match to the query. On a single GTX 1080 Ti GPU, the initial search takes 40 ms and reordering 400 ms on average. Below we describe similarity evaluation (3.3.1) and reordering (3.3.2).
Image Similarity Evaluation
We propose a Pairwise Similarity Evaluation Network (PSEN) that takes two images as input: the query image q and a candidate image c retrieved by the near-duplicate search model (3.1). The PSEN uses the previously obtained individual embeddings of the two images, as well as a PEN joint embedding learned from the stack of the two images together (Fig. 1).
The final output of the model is a confidence score s, indicating the likelihood that the query and candidate images are the same image under different transformations:
s = S([f_r(q), f_r(c), PEN([q, c])]) ∈ [0, 1]    (3)
where [, ] is concatenation, f r (.) is the search embedding (3.1), PEN([., .]) is the pair representation (3.2) and S (.) is a binary classification fully connected layer. The model is trained with binary cross-entropy loss.
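Eq. (3) amounts to a logistic layer over the concatenation of the three 256-D embeddings. A sketch with placeholder (untrained) weights W and b, which are assumptions for illustration:

```python
import numpy as np

def similarity_score(r_q, r_c, z_pair, W, b):
    """Concatenate the two search embeddings with the PEN pair embedding
    (768-D fused feature) and apply a single logistic layer S(.),
    yielding a same/different confidence s in [0, 1]."""
    feat = np.concatenate([r_q, r_c, z_pair])
    return 1.0 / (1.0 + np.exp(-(W @ feat + b)))
```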
Re-ordering
The role of the re-orderer is to decide which of the candidate images is the most likely match to the query, based on two pieces of information: the initial ranking of the near-duplicate image search (distance between candidate and query embeddings) and the similarity confidence scores. The index n of the most likely match is defined as n = R(s_0, ..., s_99) ∈ [0, 100], where s_i = S([f_r(q), f_r(c_i), PEN([q, c_i])]) is the similarity confidence score between the query image and the i-th retrieved candidate image, and R(.) is a neural network consisting of three fully connected layers of sizes [8192, 1024, 101], respectively. The re-orderer is trained as a 101-way classifier with cross-entropy loss: the first 100 classes correspond to indices of candidate images, and the final class indicates that the correct match to the query is not present among the candidates.
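The re-orderer can be sketched as a small MLP over the 100 confidence scores producing 101 logits. The real network is deeper ([8192, 1024, 101]) and also consumes the stage-1 embedding distances, so the toy two-layer weights below are assumptions for illustration only.

```python
import numpy as np

def reorder(scores, W1, W2):
    """Map 100 pairwise confidence scores to 101 logits -- one per
    candidate index plus a final 'no match' class -- and return the
    argmax index in [0, 100]."""
    h = np.maximum(W1 @ scores, 0.0)   # ReLU hidden layer
    logits = W2 @ h                    # 101 logits
    return int(np.argmax(logits))
```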
Detecting and Localizing Editorial Change
This stage assumes a near-duplicate image to the query has been found after stage 2 (3.3). In order to predict the benign/manipulated relationship and visualize the possibly manipulated regions, we train a second PEN model using a combination of two different loss functions (Fig. 1). The first loss is a binary cross-entropy loss L_C(.) that predicts whether the pair is benign (i.e., the query image q is either identical to or a benign transformed version of the candidate image I) or manipulated (i.e., q is a manipulated version of I). The second loss minimizes the cosine distance L_T(.) between the manipulation heatmap derived from z and the ground truth heatmap. This heatmap is generated at a resolution of t × t using a fully-connected layer E_t(z) ∈ R^{t^2}.

The total loss is L(.) = w_c L_C(.) + w_t L_T(.), where the loss weights w_c = w_t = 0.5 are set empirically.
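The combined objective can be written out directly. A NumPy sketch with the sigmoid/BCE and cosine-distance terms (the small epsilon constants are numerical-stability assumptions, not from the paper):

```python
import numpy as np

def edit_loss(logit, label, pred_heatmap, gt_heatmap, w_c=0.5, w_t=0.5):
    """L = w_c * L_C + w_t * L_T: binary cross-entropy on the
    benign/manipulated logit plus cosine distance between the predicted
    and ground-truth t x t heatmaps (weights 0.5/0.5 as in the paper)."""
    p = 1.0 / (1.0 + np.exp(-logit))                 # sigmoid probability
    bce = -(label * np.log(p + 1e-9) + (1 - label) * np.log(1 - p + 1e-9))
    u, v = pred_heatmap.ravel(), gt_heatmap.ravel()
    cos_dist = 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return w_c * bce + w_t * cos_dist
```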
Experiments and Discussion
Datasets
The PSBattles (Heller et al., 2018) dataset is used to train and evaluate our models. We use the splits and edit localization annotations from (Black et al., 2021). The dataset is split into a training set (PSBat-Train) and a test set (PSBat-Test), with 6,364/21,197 and 807/2,960 original/manipulated images in the training and test sets, respectively. The PSBat-Train set is used to train our image retrieval, similarity evaluation, and edit localization models, while the PSBat-Test set is used for the two benchmarks we evaluate.
Additionally, we evaluate the retrieval efficacy of SImProv on BAM-100M, a large-scale dataset consisting of 100M artworks from Behance. BAM-100M is significantly larger and more diverse than ImageNet, since its collection spans many fields beyond photography, such as paintings, graphic designs, advertising and graffiti. We note this is the largest experiment in terms of dataset size for image provenance to date. We create two query sets, BAM-Q-Res and BAM-Q-Aug, from 1K images sampled at random from BAM-100M. To make BAM-Q-Res, we downscale images by a random ratio in the range 0.1-0.9 (up to 10x downscaling) with bilinear interpolation, keeping the aspect ratio. To make BAM-Q-Aug, we apply the same augmentation strategy as in PSBat-Ret.

Table 1: Retrieval performance (on 2M images, PSBat-Ret) reported as IR score at ranks [1, 10, 100], for query images subjected to benign transforms, manipulation, or both. Stage 1 refers to nearest-neighbor search only. (Columns: Method; IR@1/IR@10/IR@100 for Benign, Manip, Manip+Benign, and Average.)
Metrics
To evaluate near-duplicate search, we measure the ratio of queries that return the relevant image within the top-k retrieval, using the Instance Retrieval (IR@k) metric. Formally,

IR@k = (1/Q) Σ_{i=1}^{Q} Σ_{j=1}^{k} r(q_i, j),

where Q is the number of queries and the relevance function r(q_i, j) = 1 if the image returned at rank j is relevant to the query q_i (there is only one such image in PSBat-Ret), and 0 otherwise.
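Since each query has a single relevant image, IR@k reduces to a top-k membership test; a minimal sketch:

```python
def ir_at_k(ranked_lists, relevant_ids, k):
    """IR@k: fraction of queries whose single relevant image id appears
    within the top-k entries of its ranked retrieval list."""
    hits = sum(rel in ranked[:k]
               for ranked, rel in zip(ranked_lists, relevant_ids))
    return hits / len(ranked_lists)
```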
We use Average Precision (AP) to measure the accuracy of both classifiers: the same/different similarity evaluation network and benign/manipulated classifier branch of the edit localization network.
For the generated heatmap, we compute the Intersection over Union (IoU) with the ground truth by up-sampling the 7×7 heatmap to the image resolution H × W, converting it to binary with a threshold, and then calculating the intersection and union. This is expressed as

IoU = (1/Q) Σ_{i=1}^{Q} |S(U_i) ∩ T_i| / |S(U_i) ∪ T_i|,

where T_i is the H × W binary ground truth heatmap, U_i is the predicted heatmap after interpolation and thresholding, and S(U_i) is the set of values in U_i. We improve the heatmap using the image pair classification result, setting S(U_i) = U_i if the query is classified as manipulated, 0^{H×W} if benign, and 1^{H×W} if distinct.
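The IoU computation with the classification override can be sketched as follows (the verdict strings are illustrative names, not the paper's API; the threshold 0.35 matches the value used in our experiments):

```python
import numpy as np

def heatmap_iou(pred, gt, thresh=0.35, verdict="manipulated"):
    """IoU between the thresholded predicted heatmap and the binary ground
    truth, with the classification override: the prediction is forced to
    all-zeros if the pair is classified benign, and all-ones if the images
    are classified distinct."""
    if verdict == "benign":
        p = np.zeros(gt.shape, dtype=bool)
    elif verdict == "distinct":
        p = np.ones(gt.shape, dtype=bool)
    else:
        p = pred >= thresh
    g = gt.astype(bool)
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else 1.0
```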
Evaluating Near-Duplicate Search
We compare our retrieval method (both before and after re-ranking) against 9 baselines. ICN (Black et al., 2021) is the initial workshop version of SImProv. ImageNet fine. and MSResNet fine. are models finetuned on PSBat-Train using our training strategy. All methods produce 128-bit hash codes except pHash (64-bit).

Tab. 1 compares retrieval performance. The two online hashing methods, CSQ (Yuan et al., 2020) and HashNet (Cao et al., 2017), are among the worst performers: they struggle to cope with the strong ImageNet-C transformations present during training and test, resulting in lower performance than the classical pHash. ImageNet (He et al., 2016), MSResNet (Lenyk and Park, 2021) and DeepAugMix (Hendrycks et al., 2020) perform strongly on the Manip set but poorly when images undergo benign transformations. When trained via our contrastive loss (3.1), all models gain, with our proposed SImProv (stage 1) achieving a 25% improvement on Benign IR@1 and 24% on Manip+Benign versus the pretrained DeepAugMix model. SImProv (stage 1) also outperforms the finetuned ImageNet/MSResNet by a large margin on all top-k scores and query sets. The improvement of SImProv over the workshop version (ICN (stage 1)) (Black et al., 2021) can be attributed to geometric pooling, which allows higher-resolution input. We also demonstrate significant performance improvement of the proposed re-ranking method (SImProv (stage 1+2)) compared to the naive re-ranking approach used in ICN (stage 1+2).
Feature Pooling
We show the superiority of GeM features over the traditional output features from the last FC layer of the retrieval model (Black et al., 2021), as well as their dependence on input image resolution, in Fig. 3. GeM features work best at 384×384 resolution, outperforming (Black et al., 2021) by 4% on the challenging Manip.+Benign test set. The presence of benign transformations hampers GeM performance as the resolution increases, underperforming (Black et al., 2021) from 512×512 resolution on the Benign set and from 640×640 on the Manip.+Benign set. Tab. 2 compares GeM with (Black et al., 2021) and a similar feature pooling method, RMAC (Tolias et al., 2016), at two pooling levels, L=3 and L=4. It can be seen that the pooling level does not much affect the performance of either GeM or RMAC. Additionally, RMAC is comparable to GeM, slightly outperforming GeM on the Benign set at L=3 but underperforming on the Manip. set at L=4. However, we choose GeM as the proposed method since it is significantly faster than RMAC: RMAC takes 28.72 seconds to perform 1000 iterations, while GeM is ∼18 times faster, at just 1.63 seconds for the same setup.
Evaluating Localization of Editorial Changes
The proposed method is compared with four baselines in terms of localization performance. The first baseline, Sum of Squared Distances (SSD), calculates SSD between two images at the pixel level, resizes it to 7 × 7, and then resizes it back before thresholding to create continuity in the detected heatmap. The second baseline, ResNetConv, extracts 7 × 7 × 2048 features from a pre-trained ImageNet ResNet50 model for both query and original images. These are averaged across channels to produce a 7 × 7 heatmap. ErrAnalysis, inspired by the blind detection technique in (Wang et al., 2010), performs JPEG compression on the query image and compares it with itself.

Table 3: Evaluating heatmap accuracy and interpretability for baseline methods. Our proposed SImProv method is compared against baselines both objectively for accuracy (IoU) and subjectively via users to determine which exhibits the best interpretability (% method preference). F_A indicates the geometric alignment module is applied. (Columns: Method; Accuracy (IoU); % method preference.)
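A minimal sketch of the SSD baseline described above: per-pixel squared error, block-averaged to a 7 × 7 grid (a stand-in for the resize step), then normalised and thresholded. The grid size and threshold follow the values quoted in the text; everything else is illustrative.

```python
import numpy as np

def ssd_heatmap(img_a, img_b, grid=7, thresh=0.35):
    """Pixel-wise SSD baseline: squared error summed over channels,
    average-pooled into a grid x grid map, min-max normalised, and
    thresholded into a binary change mask. Assumes image height and
    width are divisible by `grid`."""
    err = ((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2).sum(axis=-1)
    h, w = err.shape
    hm = err.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    hm = (hm - hm.min()) / (hm.max() - hm.min() + 1e-12)
    return hm > thresh
```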
MantraNet is a supervised blind detection method (Wu et al., 2019) that detects anomalous regions. Additionally, we evaluate baselines with images passed through our alignment module. We compare the heatmaps generated by our SImProv with baseline methods. Heatmaps are produced by upsampling the 7×7 heatmap output of SImProv to the size of the image using bicubic interpolation. Heatmaps may be presented on a false-colour scale (e.g. jet) in this form, or thresholded to produce an outline of the predicted manipulated region. In our experiments, we threshold the normalized heatmaps at 0.35, determined empirically. Tab. 3 (first column) reports the IoU metric between the predicted heatmap and the ground truth, both with and without thresholding. Whilst most baselines are improved through use of our geometric alignment (F_A) process, our SImProv significantly exceeds baseline performances by at least 0.30. Change localization examples for SImProv are shown in Fig. 4.
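The IoU evaluation described above can be sketched as follows. Nearest-neighbour upsampling is used here as a crude stand-in for the bicubic interpolation mentioned in the text, and 0.35 is the paper's empirically chosen threshold; function names are illustrative.

```python
import numpy as np

def upsample_nn(hm, size):
    """Nearest-neighbour upsampling of a square heatmap to size x size
    (a crude stand-in for bicubic interpolation; assumes `size` is a
    multiple of the heatmap side)."""
    fy, fx = size // hm.shape[0], size // hm.shape[1]
    return np.kron(hm, np.ones((fy, fx)))

def heatmap_iou(pred, gt, thresh=0.35):
    """IoU between a heatmap thresholded at `thresh` and a binary
    ground-truth manipulation mask."""
    p, g = pred >= thresh, gt.astype(bool)
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else 1.0
```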
The effectiveness of heatmap interpretability is compared to baseline methods using a crowd-sourced study on Amazon Mechanical Turk (MTurk). Participants are shown an original image and an image that has been altered, along with the ground truth for the altered image. The altered image is also accompanied by a grid of heatmaps generated by nine different methods. The 9 methods included our own, 4 baselines (SSD, MantraNet, ErrAnalysis, and ResNetConv), and 4 warp-corrected baselines that used F A for geometric alignment. Each of the 200 tasks was annotated by 5 different participants.
Tab. 3 (final col.) presents the results, which favor our proposed method, even when the image pair are pre-aligned.
Large Scale Retrieval
We evaluate the scalability of our method by indexing the BAM-100M database. We compare the IR@k performance of SImProv to its earlier version ICN (Black et al., 2021). Fig. 6 shows IR@k versus database size curves of SImProv and ICN on BAM-100M with BAM-Q-Res as the query set. We demonstrate that SImProv's performance does not degrade nearly as much as ICN's as the database size increases. For SImProv, the IR@k remains nearly 1.0 at all database sizes, dipping to 0.999 for the most challenging case of IR@1 at database sizes above 30M. ICN, on the other hand, is far more affected by database size, with IR@1 dropping from 0.997 at 1M images to 0.985 at 100M images. Results for the more challenging query set BAM-Q-Aug are depicted in Fig. 5. SImProv outperforms ICN by a large margin at early k values, on both BAM-100M and a subset of 1M images. The performance drop when increasing the database size from 1M to 100M is also lower for SImProv than for ICN. The IR@k curves converge as k reaches 100, and saturated performance is achieved at IR@100 for both methods regardless of database size, which justifies our design choice of selecting the top-100 images for SImProv's subsequent stages.
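The IR@k metric used in these curves can be computed from ranked result lists as in the following sketch (a plain illustration; names and toy data are not from the paper):

```python
def ir_at_k(ranked_ids, true_ids, k):
    """Instance retrieval at k: fraction of queries whose ground-truth
    image id appears within the top-k of its ranked result list."""
    hits = sum(t in r[:k] for r, t in zip(ranked_ids, true_ids))
    return hits / len(true_ids)

# Toy example: 3 queries, each with a ranked list of database ids.
ranked = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
truth = [2, 6, 0]  # query 3's original is not in the index at all
```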
Evaluating Classification
We evaluate the classification performance of two classifiers: same/different in SImProv stage 2 and benign/manipulated in stage 3. Same/different classification is an output of PSEN (Sec. 3.2), which classifies a pair of images as either being two entirely different images, or the same image, potentially under different transformations. Benign/manipulated classification is an output of the change localization network (Sec. 3.4), which assumes that the input images are not distinct and focuses on classifying whether the differences between them are benign or editorial. We compare the performance of our approach with ICN (Black et al., 2021), which has a single 3-way (benign, manipulated, distinct) classifier. For the same/different evaluation, we combine the confidences of 'benign' and 'manipulated' to count as 'same'. We evaluate the performance of the 2-way same/different classification by comparing each original image in the test set with: itself, a benign transformed version of itself, a manipulated version, a version both manipulated and benign transformed, and an entirely different image chosen at random. All of the cases except the last are expected to be classified as 'same' and the last one as 'distinct'. Tab. 4 shows the Average Precision (AP) scores achieved for each case. A non-modified original-original pair is always correctly classified as the same image by both methods. The introduction of benign transformations reduces the accuracy of ICN slightly, but does not affect our approach. The most challenging case is queries that are both manipulated and benign transformed; however, both methods maintain AP near 0.99 in all of the cases.
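The mapping from a 3-way (benign, manipulated, distinct) confidence vector to the 2-way same/different evaluation described above can be sketched as follows, together with a standard average-precision computation. This is an illustration with made-up confidences, not the authors' evaluation code.

```python
import numpy as np

def average_precision(scores, labels):
    """AP of binary labels (1 = positive) ranked by descending score."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    cum_pos = np.cumsum(labels)
    precision = cum_pos / (np.arange(len(labels)) + 1)
    return (precision * labels).sum() / max(labels.sum(), 1)

# Hypothetical 3-way confidences (benign, manipulated, distinct);
# the 'same' score is the sum of the first two columns.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.05, 0.05, 0.9]])
same_score = probs[:, 0] + probs[:, 1]
ap = average_precision(same_score, [1, 1, 0])  # 1 = truly 'same'
```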
The bigger difference in performance can be seen in Tab. 5 which shows the AP scores for benign/manipulated classification. Here, our approach outperforms ICN by 2% in the cases where the query image is either just benignly transformed or just manipulated. The difference in performance grows to 9.9% when the query is both manipulated and benign transformed.
Conclusion
We presented a Scalable Image Provenance (SImProv) framework for large-scale retrieval and visual comparison of a pair of images in order to detect and localize manipulated regions. SImProv enables users to match images circulating 'in the wild' to a trusted database of original images. When a query image is matched to an original, SImProv generates a heatmap that highlights areas of manipulation while ignoring benign transformations that can occur when images are shared online. We introduced two main architecture changes compared to an earlier version of the work (Black et al., 2021): incorporation of instance-level feature pooling for image retrieval, and combination of individual and pairwise descriptors for near-duplicate detection, followed by re-ranking. We show that feature pooling improves retrieval performance by enabling the use of queries of larger resolution. We use a large corpus of 100 million diverse images to demonstrate that these changes improve retrieval performance and make our approach applicable to the web-scale content authenticity problem.
Fig. 1: Architecture diagrams of the second and third stages of the proposed SImProv framework. Stage 2 (left) performs re-ranking of the top-100 retrieved results, utilizing a pairwise embedding z to re-order the results and identify a correct match to the query. Stage 3 (right) uses the Pairwise Embedding Network (PEN) to identify whether the query image has been manipulated and localizes the manipulation with a heatmap.
Fig. 2: Architecture of the proposed Pairwise Embedding Network (PEN). A candidate match to the user-queried image is obtained from near-duplicate search (not shown). Image alignment is performed via a differentiable de-warping unit (DWU) based on a dense optical flow estimate provided by the flow estimator. The resulting image pair are separately encoded via a feature extractor f_E(.) and the concatenated features passed through f_S(.) to obtain the combined feature z.
Fig. 3: Top-1 performance of GeM features on different input image resolutions. Dashed lines represent the performance without feature pooling.
Fig. 5: Retrieval performance comparison of ICN (Black et al., 2021) and SImProv on a 1M subset and the full set of BAM-100M, using BAM-Q-Aug queries.
Fig. 6: Top-k retrieval performance comparison of ICN (Black et al., 2021) and SImProv versus database size, using BAM-Q-Res queries.
Table 2: Top-1 retrieval performance of GeM and RMAC features and output from the last FC layer (CNN) on the 384x384 resolution test sets.

                  RMAC            GeM             CNN
                  L=3     L=4     L=3     L=4
  Benign          0.9503  0.9444  0.9450  0.9441  0.9272
  Manip.          0.8777  0.8777  0.8838  0.8926  0.8520
  Manip.+Benign   0.8025  0.8027  0.8064  0.8068  0.7570

Fig. 4: Heatmap results. Left col.: Original image. Middle col.: Manipulated image also subjected to benign transformation. Right col.: Heatmap output (green) ignoring benign transformation and highlighting manipulation (ground truth in yellow).
Table 4: SImProv stage 2 - same/different classifier.

  Test            Average Precision (AP)
                  ICN (Black et al., 2021)   Ours
  Original        1.000                      1.000
  Benign          0.9996                     1.000
  Manip.          0.9976                     0.9922
  Benign+Manip.   0.9962                     0.9909
  Distinct        0.9895                     0.9973
Table 5: SImProv stage 3 - benign/manipulated classifier.

  Test            Average Precision (AP)
                  ICN (Black et al., 2021)   Ours
  Original        1.000                      1.000
  Benign          0.9635                     0.9800
  Manip.          0.9726                     0.9932
  Benign+Manip.   0.8807                     0.9793
Aythora, J., Burke-Agüero, R., Chamayou, A., Clebsch, S., Costa, M., Earnshaw, N., Ellis, L., Eng, P., 2020. Multi-stakeholder media provenance management to counter synthetic media risks in news publishing, in: Proc. Intl. Broadcasting Convention (IBC).
Baba, S., Krekor, L., Arif, T., Shaaban, Z., 2019. Watermarking scheme for copyright protection of digital images. IJCSNS 9.
Black, A., Bui, T., Jin, H., Swaminathan, V., Collomosse, J., 2021. Deep image comparator: Learning to visualize editorial change, in: Proc. CVPR WS, IEEE. pp. 972-980.
C.A.I., 2020. Setting the Standard for Content Attribution. Technical Report. Adobe Inc.
Cao, Z., Long, M., Wang, J., Yu, P.S., 2017. Hashnet: Deep learning to hash by continuation, in: Proc. CVPR, pp. 5608-5617.
Chen, T., Kornblith, S., Norouzi, M., Hinton, G., 2020. A simple framework for contrastive learning of visual representations, in: Proc. ICML, pp. 1597-1607.
Coalition for Content Provenance and Authenticity, 2021. Draft Technical Specification 0.7. Technical Report. C2PA.
Council, I., 2020. Social media sites photo metadata test results. http://embeddedmetadata.org/social-media-test-results.php.
Devi, P., Venkatesan, M., Duraiswamy, K., 2019. A fragile watermarking scheme for image authentication with tamper localization using integer wavelet transform. J. Computer Science 5, 831-837.
Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang, M., Ferrer, C.C., 2020. The deepfake detection challenge (DFDC) dataset. CoRR abs/2006.07397. arXiv:2006.07397.
Douze, M., Tolias, G., Pizzi, E., Papakipos, Z., Chanussot, L., Radenovic, F., Jenicek, T., Maximov, M., Leal-Taixé, L., Elezi, I., Chum, O., Ferrer, C.C., 2021. The 2021 image similarity dataset and challenge. doi:10.48550/ARXIV.2106.09672.
Gordo, A., Almazán, J., Revaud, J., Larlus, D., 2016. Deep image retrieval: Learning global representations for image search, in: Proc. ECCV, pp. 241-257.
Guo, Z., Yang, G., Chen, J., Sun, X., 2021. Fake face detection via adaptive manipulation traces extraction network. Computer Vision and Image Understanding 204, 103170.
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, in: Proc. CVPR, pp. 770-778.
Heller, S., Rossetto, L., Schuldt, H., 2018. The PS-Battles Dataset - an Image Collection for Image Manipulation Detection. CoRR abs/1804.04866.
Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., et al., 2020. The many faces of robustness: A critical analysis of out-of-distribution generalization. arXiv preprint arXiv:2006.16241.
Jenni, S., Favaro, P., 2018. Self-supervised feature learning by learning to spot artifacts, in: Proc. CVPR.
Johnson, J., Douze, M., Jegou, H., 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data.
K. Hameed, A. Mumtax, S.G., 2006. Digital image watermarking in the wavelet transform domain. WASET 13, 86-89.
Khelifi, F., Bouridane, A., 2017. Perceptual video hashing for content identification and authentication. IEEE TCSVT 1.
Lenyk, Z., Park, J., 2021. Microsoft vision model resnet-50 combines web-scale data and multi-task learning to achieve state of the art. https://pypi.org/project/microsoftvision/.
Li, Q., Sun, Z., He, R., Tan, T., 2017. Deep supervised discrete hashing, in: Proc. NeurIPS, pp. 2482-2491.
Liu, H., Wang, R., Shan, S., Chen, X., 2016. Deep supervised hashing for fast image retrieval, in: Proc. CVPR, pp. 2064-2072.
Huh, M., Liu, A., Owens, A., Efros, A., 2018. Fighting fake news: Image splice detection via learned self-consistency, in: Proc. ECCV.
Nguyen, T.T., Nguyen, Q.V.H., Nguyen, D.T., Nguyen, D.T., Huynh-The, T., Nahavandi, S., Nguyen, T.T., Pham, Q.V., Nguyen, C.M., 2022. Deep learning for deepfakes creation and detection: A survey. Computer Vision and Image Understanding 223, 103525.
Pan, T., 2019. Digital-content-based identification: Similarity hashing for content identification in decentralized environments, in: Proc. Blockchain for Science.
Profrock, D., Schlauweg, M., Muller, E., 2006. Content-based watermarking by geometric wrapping and feature-based image segmentation, in: Proc. SITIS, pp. 572-581.
Radenović, F., Tolias, G., Chum, O., 2018. Fine-tuning cnn image retrieval with no human annotation. IEEE TPAMI 41, 1655-1668.
Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nießner, M., 2019. FaceForensics++: Learning to detect manipulated facial images, in: International Conference on Computer Vision (ICCV).
Teed, Z., Deng, J., 2020. Raft: Recurrent all-pairs field transforms for optical flow, in: Proc. ECCV, Springer. pp. 402-419.
Tolias, G., Sicre, R., Jégou, H., 2016. Particular object retrieval with integral max-pooling of cnn activations, in: Proc. ICLR, pp. 1-12.
Wang, S.Y., Wang, O., Owens, A., Zhang, R., Efros, A., 2019. Detecting photoshopped faces by scripting photoshop, in: Proc. ICCV.
Wang, S.Y., Wang, O., Zhang, R., Owens, A., Efros, A., 2020. Cnn-generated images are surprisingly easy to spot... for now, in: Proc. CVPR.
Wang, W., Dong, J., Tan, T., 2010. Tampered region localization of digital color images based on jpeg compression noise, in: Intl. WS Digital Watermarking, Springer. pp. 120-133.
Wu, Y., AbdAlmageed, W., Natarajan, P., 2019. Mantra-net: Manipulation tracing network for detection and localization of image forgeries with anomalous features, in: Proc. CVPR, pp. 9543-9552.
Yu, N., Davis, L., Fritz, M., 2019. Attributing fake images to gans: Learning and analyzing gan fingerprints, in: Proc. ICCV.
Yuan, L., Wang, T., Zhang, X., Tay, F., Jie, Z., Liu, W., Feng, J., 2020. Central similarity quantization for efficient image and video retrieval, in: Proc. CVPR, pp. 3083-3092.
Zauner, C., 2010. Implementation and Benchmarking of Perceptual Image Hash Functions. Master's thesis. Upper Austria University of Applied Sciences, Hagenberg.
| [] |
[
"RadiOrchestra: Proactive Management of Millimeter-wave Self-backhauled Small Cells via Joint Optimization of Beamforming, User Association, Rate Selection, and Admission Control",
"RadiOrchestra: Proactive Management of Millimeter-wave Self-backhauled Small Cells via Joint Optimization of Beamforming, User Association, Rate Selection, and Admission Control"
] | [
"Luis F Abanto-Leon \nTechnische Universität Darmstadt\nGermany\n",
"Arash Asadi \nTechnische Universität Darmstadt\nGermany\n",
"Andres Garcia-Saavedra ‡[email protected] \nNEC Laboratories Europe GmbH\n\n",
"Hong Gek ",
"Allyson Sim \nTechnische Universität Darmstadt\nGermany\n",
"Matthias Hollick [email protected] \nTechnische Universität Darmstadt\nGermany\n"
] | [
"Technische Universität Darmstadt\nGermany",
"Technische Universität Darmstadt\nGermany",
"NEC Laboratories Europe GmbH\n",
"Technische Universität Darmstadt\nGermany",
"Technische Universität Darmstadt\nGermany"
] | [] | Millimeter-wave self-backhauled small cells are a key component of next-generation wireless networks. Their dense deployment will increase data rates, reduce latency, and enable efficient data transport between the access and backhaul networks, providing greater flexibility not previously possible with optical fiber. Despite their high potential, operating dense selfbackhauled networks optimally is an open challenge, particularly for radio resource management (RRM). This paper presents, RadiOrchestra, a holistic RRM framework that models and optimizes beamforming, rate selection as well as user association and admission control for self-backhauled networks. The framework is designed to account for practical challenges such as hardware limitations of base stations (e.g., computational capacity, discrete rates), the need for adaptability of backhaul links, and the presence of interference. Our framework is formulated as a nonconvex mixed-integer nonlinear program, which is challenging to solve. To approach this problem, we propose three algorithms that provide a trade-off between complexity and optimality. Furthermore, we derive upper and lower bounds to characterize the performance limits of the system. We evaluate the developed strategies in various scenarios, showing the feasibility of deploying practical self-backhauling in future networks. | 10.1109/twc.2022.3191744 | [
"https://arxiv.org/pdf/2201.10297v2.pdf"
] | 246,275,866 | 2201.10297 | e60b78c17c7be897c3247a2935575e40a0fb7b53 |
RadiOrchestra: Proactive Management of Millimeter-wave Self-backhauled Small Cells via Joint Optimization of Beamforming, User Association, Rate Selection, and Admission Control
13 Jul 2022
Luis F Abanto-Leon
Technische Universität Darmstadt
Germany
Arash Asadi
Technische Universität Darmstadt
Germany
Andres Garcia-Saavedra ‡[email protected]
NEC Laboratories Europe GmbH
Hong Gek
Allyson Sim
Technische Universität Darmstadt
Germany
Matthias Hollick [email protected]
Technische Universität Darmstadt
Germany
RadiOrchestra: Proactive Management of Millimeter-wave Self-backhauled Small Cells via Joint Optimization of Beamforming, User Association, Rate Selection, and Admission Control
13 Jul 20221Index Terms-radio resource managementself-backhaulingmillimeter-wavebeamformingscheduling
Millimeter-wave self-backhauled small cells are a key component of next-generation wireless networks. Their dense deployment will increase data rates, reduce latency, and enable efficient data transport between the access and backhaul networks, providing greater flexibility not previously possible with optical fiber. Despite their high potential, operating dense selfbackhauled networks optimally is an open challenge, particularly for radio resource management (RRM). This paper presents, RadiOrchestra, a holistic RRM framework that models and optimizes beamforming, rate selection as well as user association and admission control for self-backhauled networks. The framework is designed to account for practical challenges such as hardware limitations of base stations (e.g., computational capacity, discrete rates), the need for adaptability of backhaul links, and the presence of interference. Our framework is formulated as a nonconvex mixed-integer nonlinear program, which is challenging to solve. To approach this problem, we propose three algorithms that provide a trade-off between complexity and optimality. Furthermore, we derive upper and lower bounds to characterize the performance limits of the system. We evaluate the developed strategies in various scenarios, showing the feasibility of deploying practical self-backhauling in future networks.
I. INTRODUCTION
Network densification, through the deployment of small cells, is indispensable to meet the increasing user demands for emerging wireless services [1]. Small cells are realized by lowcost radio access nodes, known as small base stations (SBSs), that provide wireless connectivity to undersized geographical areas [2]. SBSs are strategically installed in close proximity to the end users, bolstering the quality of experience and improving the radio access network (RAN) performance. In this way, dense small cell deployments are expected to increase data rates, maintain low latency, extend coverage and support a large number of users, thereby enabling the rollout of a wide range of new services.
As small cell deployments become denser, more efficient forms of backhauling data traffic between SBSs and the core network will be needed [3]. Optical fiber has been the predominant means for this task, but its installation and maintenance are costly. Self-backhauling, standardized under the name of integrated access and backhaul (IAB) [4], is an innovative technology that promises to reduce costs by sharing the wireless spectrum in time/frequency/space between RAN and backhaul links [5]. Small cells with self-backhauling capabilities benefit from a tight integration of access and backhaul functions, leading to high reconfigurability and facilitating self-adaptation to a wide range of cases.
Self-backhauled small cells require wide bandwidth to cope with the growing access-backhaul traffic. The millimeter-wave spectrum offers the necessary bandwidth to meet this requirement, but it poses challenges, e.g., limited transmission range. Fortunately, recent advances in beamforming [6] have overcome the physical drawbacks of millimeter-waves by taking advantage of the small antenna size, which enables large antenna arrays. Thus, millimeter-wave self-backhauled small cell networks, realized by multi-antenna SBSs, will play a key role in next-generation wireless networks. Their dense deployment will reduce costs and enable efficient transport of massive data traffic between access and backhaul networks. In addition, the flexibility of millimeter-wave self-backhauled small cells will provide higher adaptability to various topologies and network conditions, previously not possible with fiber.

Despite consensus on the potential of millimeter-wave self-backhauling, designing an optimal system remains an open research challenge [7], which requires efficient radio resource management (RRM) across the access and backhaul networks. To date, the body of work in this area often overlooks practical challenges inherent to realistic wireless communications systems, such as discrete modulation and coding schemes (MCSs), or the low computing capabilities of SBSs. Our work is motivated by the absence of holistic RRM frameworks providing a realistic model and a practical solution for millimeter-wave self-backhauled small cell deployments. In the following, we introduce these challenges and put them in perspective with the literature. Challenge 1: Scalable self-backhauling design. The majority of prior works relies on point-to-point links, e.g., [8], [9], between the macro base station (MBS) and SBSs, which is unscalable in dense SBS deployments. The scalability issue is addressed in a handful of works; e.g., [10], [11] assume that SBSs are capable of multi-layer successive interference cancellation (SIC). While this assumption simplifies traffic transport, it involves heavy computational tasks (i.e., SIC) not suited for SBSs. Thus, to keep SBSs economical for operators, it is necessary to reduce the computational burden
Despite consensus on the potential of millimeter-wave selfbackhauling, designing an optimal system remains an open research challenge [7], which requires efficient radio resource management (RRM) across the access and backhaul networks. To date, the body of work in this area often overlooks practical challenges inherent to realistic wireless communications systems, such as discrete modulations and coding schemes (MCSs), or low computing capabilities of SBSs. Our work is motivated by the absence of holistic RRM frameworks providing a realistic model and a practical solution for millimeter-wave self-backhauled small cells deployments. In the following, we introduce these challenges and put them in perspective with the literature. Challenge 1: Scalable self-backhauling design. The majority of prior works relies on point-to-point links, e.g., [8], [9], between macro base station (MBS) and SBSs, which is unscalable in dense SBS deployments. The scalability issue is addressed in a handful of works, e.g., [10], [11] assume that SBSs are capable of multi-layer successive interference cancellation (SIC). While this assumption simplifies traffic transport, it involves heavy computational tasks (i.e., SIC) not suited for SBSs. Thus, to keep SBS economical for the operators, it is necessary to reduce the computational burden The connection between the MBS and SBSs is called backhaul link, which is a convention in small cells literature. However, in a cloud-RAN context, MBSs are called central processors or BBUs, SBSs are called RRHs, and the connection between MBS and SBSs are called fronthaul links. In Table I, we have considered both kinds of nomenclatures since the problems originated from these two contexts are essentially the same.
of SBSs by developing practical backhauling mechanisms. Challenge 2: Adaptive backhaul capacity. Although selfbackhauling relies on wireless media, whose capacity is inherently highly variable due to noise and interference, the assumption of unlimited or fixed capacity prevails in many prior works, e.g., [9], [12]- [14]. However, it is necessary to consider the capacity limitation of backhaul links as well as their variability in real systems. Challenge 3: User association. It is conventionally assumed that users are served by a single SBS [13], [15] or by all SBSs within a given range [9], [12]. While these assumptions simplify the problem formulation and solution, they are neither realistic nor optimal. Thus, a general scheme is needed where users are associated to multiple SBSs in a flexible manner without considering extremes cases. Challenge 4: Admission control. Many works assume that all users can be served simultaneously [14], [16], [17], which is unrealistic due to limitations in power, number of antennas or RF chains. Admission control (or user scheduling) is crucial to guarantee the quality of service requirements for at least a subset of admitted users, thereby circumventing unfeasibility issues. Challenge 5: Discrete data rates. It is usually assumed that data rates are continuous-valued, e.g., [8], [10], [17]- [19]. However, in practice they are limited to a number of possible choices, i.e., finite set of MCSs. It is critical to consider the discreteness of rates since results obtained from solving problems for continuous values cannot be easily applied to real systems and are not expected to work properly. In contrast to prior art, we propose a comprehensive RRM framework that includes the challenges mentioned above, allowing us to more realistically validate millimeter-wave selfbackhauled small cell deployments. Our approach makes the following novel contributions. 
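To illustrate Challenge 5, discrete rate selection reduces to picking the highest MCS whose SINR threshold is met (rather than using the continuous Shannon rate). The MCS table below is hypothetical, purely for illustration; real tables come from the standard.

```python
# Hypothetical MCS table: (minimum SINR in dB, spectral efficiency in bit/s/Hz).
MCS = [(-5.0, 0.25), (0.0, 1.0), (5.0, 2.0), (10.0, 4.0), (15.0, 6.0)]

def select_rate(sinr_db):
    """Highest discrete rate whose SINR threshold is satisfied;
    returns 0 (user not admitted) if even the lowest MCS is infeasible."""
    feasible = [rate for thresh, rate in MCS if sinr_db >= thresh]
    return max(feasible) if feasible else 0.0
```

This coupling between the continuous SINR (set by beamforming and power) and the discrete rate choice is one reason the joint problem becomes a mixed-integer program.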
Contribution 1: In Section II, we address Challenge 1 by proposing a simple yet effective clustering mechanism for SBSs and users that results in multiple non-overlapping virtual cells or clusters. This allows us to exploit multigroup multicast beamforming for backhaul traffic transmissions. Our clustering approach simplifies the backhaul design and reduces hardware/computational requirements at the sending and receiving nodes. Contribution 2: In Section II-A and Section II-B we model Challenge 2, Challenge 3, Challenge 4, Challenge 5 considering the access-backhaul interdependencies between MBS, SBSs and users. In Section II-C, we include these challenges in our formulation to jointly optimize beamforming, user association, rate selection, admission control in the access network and beamforming, rate selection in the backhaul network for maximizing the access network downlink weighted sum-rate. We cast the problem as a nonconvex mixed-integer nonlinear program (MINLP), which to the best of our knowledge, has not been investigated before. Contribution 3: To tackle the nonconvex MINLP, we propose three formulations and their corresponding algorithms. In Section IV, we recast the nonconvex MINLP as a mixedinteger second-order cone program (MISOCP), which can be solved optimally. Due to the large number of integral variables, the cost of solving the MISOCP via branch-and-cut (BnC) techniques is prohibitive. To cope with this issue, in Section VI we propose a formulation solved via an iterative algorithm that tackles a SOCP at every instance. In Section VII, a much simpler SOCP formulation further decreases the complexity by reducing the number of variables, and optimizing only the beamformers gains. In particular, the complexity of the latter algorithm with respect to the former decreases roughly by a factor equal to the third power of the number of antennas at the SBS. 
Contribution 4: In Section V, we derive an upper bound to provide insights on the performance gaps and trade-offs of RadiOrchestra. We also provide a simple lower bound characterizing the worst-case performance. We note that the upper bound is a novel problem in itself that has not been investigated before. Contribution 5: In Section VIII, we examine RadiOrchestra exhaustively under several scenarios, including varying transmit power, number of clusters, and channel estimation errors.
There is a plethora of literature on self-backhauling for sub-6GHz spectrum, e.g., [19], [32], [33], which assumes signal properties that do not hold for millimeter-wave. Many works have focused on the design of either the backhaul, e.g., [8], [29], [34], [35], or the access network, e.g., [27], [28], alone.

Figure 1: Overview of the steps to formulate and solve the problem in RadiOrchestra.

However, the growth that mobile networks are experiencing
calls for heterogeneous networks with wireless backhauling, which require joint optimization. Considering linear antenna arrays, many works have optimized beamforming, e.g., [18], [33]. However, planar arrays are capable of 3D beamforming and hence are more suitable for dense deployments. The joint optimization of beamforming and user association (Challenge 3), admission control (Challenge 4), or rate selection (Challenge 5) generally requires solving complex nonconvex MINLPs. Thus, many works facing these challenges split the problem into stages and solve them separately. For instance, the integer variables are eliminated first by assuming a given set of scheduled users, e.g., [17], [18]. Then, the nonconvex functions are linearized and the problem is solved in the continuous domain. Although this staged approach is simpler, decoupling the variables sacrifices optimality because their interdependencies are discarded. To meet continuously growing demands, resources have to be exploited more efficiently. Therefore, RRM problems need to be solved as a whole, without relying on variable partitioning, which translates into inefficient radio resource usage. After a thorough study of the state of the art, we found that the works most related to ours are [10], [19]. Like us, the authors of [10] assumed a multicast topology in the backhaul network, with a MBS transmitting multiple signals to various SBSs using multigroup multicast beamforming (each signal carrying the data of a user). Since a single SBS may serve several users, SBSs are therefore required to decode many signal layers via SIC, which entails a heavy computational burden for low-cost SBSs. Further, the decoding order of the signals is known to affect performance, leading to potentially high decoding errors and making SIC impractical, which was not evaluated in [10]. The authors of [19] considered multiple SBS groups served in a multicast manner using time division multiplexing (TDM), i.e., each group at a time.
However, as the number of clusters grows, the multiplexing time incurs longer latency, which is unavoidable since the SBSs need to transmit to users in a coordinated manner, making the scheme less practical. In addition, these works do not consider discrete rates, admission control, millimeter-wave spectrum or 3D beamforming. For completeness, we summarize in Table I the related literature on RRM for small cells.
Overview: Not surprisingly, the inherent couplings among all the different parameters of the system result in a complex problem that is difficult to address. However, our framework helps to realize the true potential of self-backhauled mobile networks, in particular in the presence of real-world constraints. To the best of our knowledge, this is the first work that models an integrated access-backhaul system with such practical constraints and proposes solutions to assess its performance. The investigated problem is unique and hence existing solutions are not applicable to it. In the following, we provide an overview of the steps taken to solve our problem, from a systems-design perspective as well as in terms of the mathematical treatment. Systems aspect. 3GPP specifications for 5G leave several design choices to the operators, such as the spectrum allocation of backhaul and access. We leverage these degrees of freedom to reduce the complexity of the problem while maintaining a realistic setup. The wireless nature of the access and backhaul links, coupled with the dense deployment of SBSs and users, creates a very complex interference environment. In RadiOrchestra, we choose an out-of-band system where backhaul and access links use different frequency bands, thus disentangling the interference between the two networks. Conventionally, the MBS sends individual backhaul signals to each SBS, thus producing interference, which is handled via (point-to-point) unicast beamforming. In dense deployments this solution does not scale well due to the need to multiplex various data streams. Thus, we propose a clustering strategy where the MBS divides the SBSs into clusters, which are served simultaneously via (point-to-multipoint) multigroup multicast beamforming.
This has three advantages: (i) Enhancing the scalability of self-backhauling by avoiding point-to-point transmissions, which cause higher interference; (ii) Eliminating the need for heavy signal processing (e.g., SIC operation) at the SBSs [10], [11]; (iii) Reducing hardware requirements and costs, since the MBS becomes more cost-efficient, only requiring as many RF chains as SBS clusters, which is far fewer than in the point-to-point topology (i.e., a dedicated RF chain per link). Problem formulation and solution. Considering our design choices above, we model the system and propose solutions in a series of steps that are demonstrated in Fig. 1. We formulate a RRM problem for integrated access-backhaul networks considering real-world constraints, which results in a nonconvex MINLP with entangled variables (see Section II). We adopt a series of procedures to simplify the structure of the nonconvex MINLP without altering its optimality. Thus, we (i) improve its tractability by eliminating additive binary couplings and multiplicative mixed-integer couplings, and (ii) reduce the search space by adding cuts. Although the problem structure is greatly simplified after these procedures, it still remains a nonconvex MINLP. However, its more amenable layout allows us to tailor algorithms for its solution (see Section III). We transform some of the nonconvex constraints into equivalent (convex) SOC constraints and remodel others as convex inner SOC approximations. As a result, we recast the nonconvex MINLP into a MISOCP, which can be solved optimally (see Section IV). Although solving the proposed MISOCP guarantees an optimal solution, it requires a considerable amount of time due to the numerous integral variables.
These integral variables translate into many more branch evaluations by the BnC method; to cope with this, we propose a reformulation based on relaxation and penalization of the integral variables that only requires iteratively solving a SOCP, and is guaranteed to attain a local optimum (see Section VI). To further reduce the computational burden and expedite the solving time, we offer a much simpler reformulation that reduces the number of continuous variables, in which we predesign the access and backhaul beamforming vectors and only optimize their gains. As a result, we only solve a low-complexity SOCP problem iteratively (see Section VII). Finally, we derive an upper bound for the problem, which we use to characterize the performance of the developed algorithms (see Section V).
II. SYSTEM MODEL AND PROBLEM FORMULATION We consider that data is transported from the core network to the user equipments (UEs) via a MBS and a deployment of SBSs, as shown in Fig. 2. The SBSs are connected to the MBS through wireless backhaul links. We assume an out-of-band full-duplex access-backhaul system, i.e., the backhaul network (connecting SBSs to the MBS) and the access network (connecting UEs to SBSs) operate simultaneously over orthogonal bands. In the following, we detail the modeling assumptions. Backhaul model: We rely on an advantageous clustering approach, where we divide the SBSs into L non-overlapping virtual cells or clusters, each formed by B SBSs (as in distributed antenna systems). In this way, data streams sent from the MBS to a SBS cluster contain the aggregate content for all the served UEs in that cluster, as shown in Fig. 3. The SBSs are deployed in a planned fashion and grouped based on their proximity. The antenna arrays are oriented towards the cluster center, as shown in Fig. 4. Access model: Each UE is pre-associated with a SBS cluster, based on geographical distance or a given operator policy. Without loss of generality, we assume that each cluster has U UEs. Thus, the SBSs in a cluster transmit collaboratively to UEs only within that cluster. However, not all SBSs are necessarily involved in serving a particular UE, and not all UEs may be served. The information for all the served UEs is co-processed by all SBSs, thus allowing interference to be handled more efficiently. Channel model: The backhaul links operate over a bandwidth W backhaul BW and we assume line-of-sight (LOS) connectivity, since the MBS and SBSs are usually strategically installed in the planning phase. Besides, the access network operates over a bandwidth W access BW and its channels (i.e., between SBSs and UEs) exhibit multipath scattering containing both LOS and non-line-of-sight (NLOS) components.
Both access and backhaul channels are modeled according to [36]. Optimization model: In line with the related literature, we assume that the MBS has knowledge of the access channels between the SBSs and UEs. In particular, 3GPP specifies channel training procedures in the access network that we can rely upon. In addition, the MBS also knows the backhaul channels, i.e., between itself and the SBSs. This knowledge is even simpler to acquire than the access channels since backhaul links are rather static with small variability. In summary, the MBS collects knowledge of all the wireless channels and, accordingly, optimizes all the radio resources of the system.
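As an illustrative sketch (not the paper's prescribed mechanism), the proximity-based grouping of SBSs into L non-overlapping clusters of B members each could be realized by a greedy nearest-center assignment; the positions, cluster centers and B below are hypothetical inputs.

```python
import math

def cluster_sbs(positions, centers, B):
    """Group SBSs into non-overlapping clusters of at most B members each,
    assigning every SBS to the nearest cluster center with spare capacity."""
    clusters = [[] for _ in centers]
    for i, (x, y) in enumerate(positions):
        # Cluster centers ordered by distance to this SBS.
        order = sorted(range(len(centers)),
                       key=lambda c: math.hypot(x - centers[c][0], y - centers[c][1]))
        for c in order:
            if len(clusters[c]) < B:
                clusters[c].append(i)
                break
    return clusters

# Example: four SBSs, two cluster centers, B = 2 SBSs per cluster.
clusters = cluster_sbs([(1, 0), (9, 0), (0, 1), (10, 1)], [(0, 0), (10, 0)], B=2)
```

Any capacity-aware assignment rule would do here; the point is only that the resulting clusters are disjoint and of bounded size, matching the backhaul model above.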
For the sake of clarity, variables and parameters used in the following sections are summarized in Table II.
A. Backhaul Network: Multicast Transmissions from MBS to SBSs
In the backhaul network, two important aspects are dealt with. First, rate selection, i.e., choosing appropriate data rates at which the MBS transmits information to the SBSs. Second, beamforming, i.e., adjusting the amplitudes and phases of the signals at the MBS to guarantee the selected rates. Beamforming: The MBS is equipped with a planar array of N MBS tx transmit antennas operating on Band 1 used for communication with the SBSs, which have N SBS rx = 1 receive antenna. The MBS transmits as many streams as clusters. Every stream contains the aggregate data for the served UEs in their respective clusters (see Fig. 3). The instantaneous multicast symbol for the SBSs in cluster B l is denoted by z l , with E[z l ] = 0 and E[|z l | 2 ] = 1. The beamforming vector conveying z l is denoted by m l . The composite signal transmitted from the MBS to all SBS clusters is given by
x MBS = l∈L m l z l . The received signal at SBS b ∈ B l is expressed as y SBS b = g H b x MBS + n b = g H b m l z l signal for SBS b + l ′ ∈L,l ′ =l g H b m l ′ z l ′ interference + n b noise ,(1)
where g b is the channel between SBS b ∈ B l and the MBS whereas n b ∼ CN 0, σ 2 SBS symbolizes circularly symmetric Gaussian noise. The signal-to-interference-plus-noise ratio (SINR) at SBS b is
SINR SBS b = g H b m l 2 l ′ ∈L,l ′ =l g H b m l ′ 2 + σ 2 SBS .(2)
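As a minimal numeric sketch of Eq. (2) with hypothetical scalar channels and beamformers, the per-SBS SINR and the worst-case (effective) cluster SINR can be computed as follows:

```python
def sbs_sinr(g, beams, l, noise_var):
    """Eq. (2): |g^H m_l|^2 over inter-cluster interference plus noise at one SBS."""
    def rx_power(m):
        return abs(sum(gi.conjugate() * mi for gi, mi in zip(g, m))) ** 2
    signal = rx_power(beams[l])
    interference = sum(rx_power(beams[k]) for k in range(len(beams)) if k != l)
    return signal / (interference + noise_var)

def cluster_sinr(cluster_channels, beams, l, noise_var):
    """Effective SINR of cluster l: the worst SBS in the cluster sets the rate."""
    return min(sbs_sinr(g, beams, l, noise_var) for g in cluster_channels)

# Two clusters, two MBS antennas; hypothetical channel g and beamformers m_0, m_1.
g = [1 + 0j, 0.1 + 0j]
beams = [[1 + 0j, 0j], [0j, 1 + 0j]]  # m_0 serves cluster 0, m_1 serves cluster 1
sinr = sbs_sinr(g, beams, 0, noise_var=0.5)  # signal 1, leakage 0.01 -> 1/0.51
```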
Since all SBSs within a cluster receive the same common information (i.e. aggregate UE content), the effective rate/SINR per cluster is determined by the SBS with the worst conditions. As a result, a more sensible means of quantifying the maximal SINR per cluster is the following
SINR SBS l = min b∈B l SINR SBS b
, ∀l ∈ L. REMARK: This system is known as multigroup multicast beamforming [37] and has been studied for transmissions from a MBS/SBS to multiple clusters of UEs. We exploit that same idea to transmit data streams from the MBS to the SBSs. We assume that the number of streams that the MBS can handle is sufficient to serve all SBS clusters, i.e., N MBS streams ≥ L. Rate Selection: In practical wireless communications systems, the set of eligible data rates is finite [38, p. 64]. These predefined rates are uniquely identified by their associated CQI index, and each corresponds to a specific MCS. In addition, for each rate, a minimum received SINR is required in order to ensure a target block error rate (BLER) [39]. While the rates and MCSs are standardized, the corresponding target SINRs are usually vendor- and equipment-specific. We consider the target SINRs in [40], which are shown in Table III (in linear scale) and approximately double from one rate to the next, starting from R SBS 1 = 0.2344 bps/Hz. In order to assign R SBS j to the l-th SBS cluster, it is required
that SINR SBS l ≥ Γ SBS j , j ∈ J SBS ,
where J SBS represent the set of possible rates. To represent the rate assignment, we introduce the binary variables β l,j ∈ {0, 1} with β l,j = 1 denoting that the SBSs in B l are allocated R SBS j . We assume that all SBS clusters are served, which is ensured through j∈J SBS β l,j = 1, ∀l ∈ L and N MBS streams ≥ L. Thus, to guarantee the predefined target BLER for cluster B l , it must
hold that SINR SBS l ≥ j∈J SBS β l,j Γ SBS j .
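Discrete rate selection then amounts to picking the highest MCS rate whose target SINR is met; the (rate, target SINR) pairs below are hypothetical placeholders standing in for Table III (only R SBS 1 = 0.2344 bps/Hz is taken from the text):

```python
# Hypothetical (rate in bps/Hz, target linear SINR) pairs, sorted by rate,
# standing in for Table III; only the first rate value is from the paper.
MCS_TABLE = [(0.2344, 0.2), (0.4688, 0.5), (0.9375, 1.2), (1.875, 2.9)]

def select_rate(sinr):
    """Highest rate whose target SINR the link satisfies; None if even the
    lowest MCS is infeasible (the link cannot meet the target BLER)."""
    feasible = [rate for rate, target in MCS_TABLE if sinr >= target]
    return feasible[-1] if feasible else None
```

For a cluster, `sinr` would be the minimum SINR over its SBSs, so the worst SBS dictates the cluster's backhaul rate, consistent with the min in the constraint above.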
B. Access Network: Distributed Unicast Transmissions from SBSs to UEs
In the access network, four pivotal aspects are addressed. First, admission control, i.e., deciding which UEs are served. Second, rate selection, i.e., choosing data rates for the served UEs. Third, user association, i.e., determining which subset of SBSs transmits to a served UE. Fourth, beamforming. Beamforming and User Association: Each SBS is equipped with a planar array of N SBS tx transmit antennas operating on Band 2 and used for communication with the UEs, which have N UE rx = 1 receive antenna. A SBS b ∈ B l serving a subset of UEs in U l transmits multiple unicast signals simultaneously, each signal targeting a specific UE. The instantaneous unicast symbol for UE u ∈ U l is denoted by s l,u , with E[s l,u ] = 0 and E[|s l,u | 2 ] = 1. In addition, the beamforming vector from SBS b ∈ B l transmitting s l,u to UE u ∈ U l is denoted by w b,u . Therefore, the composite signal that SBS b in B l sends to the UEs in U l is represented by
x SBS b = u∈U l w b,u s l,u κ b,u , y UE u = b∈B l h H b,u w b,u s l,u κ b,u signal for UE u in cluster U l + b∈B l u ′ ∈U l u ′ =u h H b,u w b,u ′ s l,u ′ κ b,u ′ interference originated in cluster U l + l ′ ∈L l ′ =l b ′ ∈B l ′ u ′ ∈U l ′ h H b ′ ,u w b ′ ,u ′ s l ′ ,u ′ κ b ′ ,u ′ aggregate interference originated in clusters U l ′ =l + n u noise (3) SINR UE u = b∈B l h H b,u w b,u κ b,u 2 u ′ ∈U l u ′ =u b∈B l h H b,u w b,u ′ κ b,u ′ 2 + l ′ ∈L l ′ =l u ′ ∈U l ′ b ′ ∈B l ′ h H b ′ ,u w b ′ ,u ′ κ b ′ ,u ′ 2 + σ 2 UE .(4)P ′ : max m l ,w b,u ,αu,j ,β l,j ,κ b,u R access w−sum (α) ≡ l∈L u∈U l ω u j∈J UE α u,j R UE j s.t. C 1 : α u,j = {0, 1} , ∀l ∈ L, u ∈ U l , j ∈ J UE , C 2 : j∈J UE α u,j ≤ 1, ∀l ∈ L, u ∈ U l , C 3 : l∈L m l 2 2 ≤ P MBS tx , C 4 : u∈U l w b,u κ b,u 2 2 ≤ P SBS tx , ∀l ∈ L, b ∈ B l ,C 5 : SINR UE u ≥ j∈J UE α u,j Γ UE j , ∀l ∈ L, u ∈ U l , C 6 : κ b,u = {0, 1} , ∀l ∈ L, b ∈ B l , u ∈ U l , C 7 : u∈U l κ b,u ≤ N SBS streams , ∀l ∈ L, b ∈ B l , C 8 : u∈U l κ b,u ≥ 1, ∀l ∈ L, b ∈ B l , C 9 : b∈B l κ b,u ≤ B max j∈J UE α u,j , ∀l ∈ L, u ∈ U l , C 10 : b∈B l κ b,u ≥ B min j∈J UE α u,j , ∀l ∈ L, u ∈ U l , C 11 : β l,j = {0, 1} , ∀l ∈ L, j ∈ J SBS , C 12 :
j∈J SBS β l,j = 1, ∀l ∈ L,
C 13 : W access BW u∈U l j∈J UE α u,j R UE j ≤ W backhaul BW j∈J SBS β l,j R SBS j , ∀l ∈ L, C 14 : u∈U l j∈J UE α u,j = U served , ∀l ∈ L, C 15 : SINR SBS l ≥ j∈J SBS β l,j Γ SBS j , ∀l ∈ L,
where κ b,u is a binary variable that is 1 when SBS b ∈ B l serves UE u ∈ U l and 0 otherwise. A served UE u ∈ U l receives its information from at least B min = 1 and at most B max = B SBSs in B l . The signal received by UE u in U l is given by (3), where n u ∼ CN 0, σ 2 UE and h b,u represents the channel between SBS b and UE u. Every UE perceives interference from within its own cluster and from neighboring clusters. The SINR at UE u in U l is defined by (4). When κ b,u = 0, no information is sent to the UE. The effective beamforming vector is κ b,u · w b,u , which becomes a zero-vector for unserved UEs, thus accomplishing the association between UEs and SBSs. Rate Selection and Admission Control: Similarly to Section II-A, the rate assigned to a served UE can only be one within a set of predefined values. To depict the rate selection for the UEs, we introduce the binary variables α u,j ∈ {0, 1}. These variables perform the dual task of admission control and rate selection, which is ensured by j∈J UE α u,j ≤ 1, ∀l ∈ L, u ∈ U l , where J UE represents the set of possible rate values. A UE u is served when j∈J UE α u,j = 1, meaning that one rate has been assigned. Otherwise, when j∈J UE α u,j = 0, the UE is not served. We denote the rates and target SINRs for UEs with R UE j and Γ UE j , respectively. To assign R UE j to UE u, it is required that SINR UE u ≥ Γ UE j , j ∈ J UE , for which we assume the same values shown in Table III in Section III. Further, not all UEs shall be admitted since each SBS can support up to N SBS streams streams simultaneously.
C. Problem Formulation
We investigate the problem of joint optimization of beamforming, user association, rate selection, admission control

Proposition 1. Due to the existence of C 1 − C 2 , constraintC 5 can be equivalently rewritten as
C 5 : SINR UE u ≥ α u,j Γ UE j , ∀l ∈ L, u ∈ U l , j ∈ J UE . Proof: Because of C 2 ,
there is at most one variable at a time that is 1. As a result, the SINR constraints can be decomposed into multiple constraints, each being related to only one binary variable.

Proposition 2. Due to the existence of C 6 , constraintsC 4 − C 5 can be equivalently rewritten as C 17 , C 18 , C 19 , C 20 ,C 21 , where
C 4 − C 5 = C 17 : p b,u ≥ 0, ∀l ∈ L, b ∈ B l , u ∈ U l , C 18 : u∈U l p b,u ≤ P SBS tx , ∀l ∈ L, b ∈ B l , C 19 : p b,u ≤ κ b,u P SBS tx , ∀l ∈ L, b ∈ B l , u ∈ U l , C 20 : 2w H b,u , κ b,u − p b,u 2 ≤ κ b,u + p b,u , ∀l ∈ L, b ∈ B l , u ∈ U l , C 21 : b∈B l h H b,u w b,u 2 u ′ ∈U l u ′ =u b∈B l h H b,u w b,u ′ 2 + l ′ ∈L l ′ =l u ′ ∈U l ′ b ′ ∈B l ′ h H b ′ ,u w b ′ ,u ′ 2 +σ 2 UE ≥ α u,j Γ UE j , ∀l ∈ L, u ∈ U l , j ∈ J UE , Proof: See Appendix A.
Proposition 3. Due to the existence of C 1 , constraintC 21 can be rewritten as C 21 , where
C 21 : l ′ ∈L u ′ ∈U l ′ b ′ ∈B l ′ h H b ′ ,u w b ′ ,u ′ 2 + σ 2 UE ≤ 1 + Γ UE j −1 b∈B l h H b,u w b,u 2 + (1 − α u,j ) 2 Q 2 u , ∀l ∈ L, u ∈ U l , j ∈ J UE , and Q 2 u = P SBS tx l ′ ∈L b ′ ∈B l ′ h b ′ ,u 2 2 + σ 2 UE is an upper bound for the left-hand side (LHS) term of C 21 . Proof: See Appendix B.
in the access network, together with beamforming and rate selection in the backhaul network, aiming to maximize the weighted sum-rate at the access network (i.e., for the UEs), which is formulated as P ′ on the previous page.
In P ′ , R access w−sum (α) denotes the weighted sum-rate achieved by all UEs in the access network. Besides, ω u represents the weight associated with UE u, which can be adjusted by the network operator to assign different priorities, for instance, to balance fairness among UEs. Formally, the objective function is expressed as R access
w−sum (α) ≡ W access BW l∈L u∈U l ω u j∈J UE α u,j R UE j . However, since W access BW is constant, we have redefined it as R access w−sum (α) ≡ l∈L u∈U l ω u j∈J UE α u,j R UE j
without altering the nature of the problem.
Constraints C 1 , C 2 ,C 4 ,C 5 , C 6 , C 7 , C 8 , C 9 , C 10 , C 14 are related to the access network, C 3 , C 11 , C 12 ,C 15 are related to the backhaul network whereas C 13 is related to both networks. Constraints C 1 − C 2 depict the rate selection for all UEs, constraint C 3 restricts the transmit power of the MBS, constraintC 4 restricts the transmit power of the SBSs, constraintC 5 guarantees that the unicast SINR is larger than the corresponding target SINR (specified in Table III), constraints C 6 − C 8 ensure that each SBS serves at least one UE but cannot serve more UEs than the number of streams it can handle, constraints C 9 − C 10 ensure that each admitted UE is served by at least B min and by at most B max SBSs, constraints C 11 − C 12 guarantee a rate selection for every SBS cluster, constraint C 13 guarantees that the total access throughput in a cluster does not exceed the throughput of the corresponding serving backhaul link, C 14 ensures that there are U served served UEs per cluster, constraintC 15 guarantees that the SINR per SBS cluster is larger than the selected target SINR (specified in Table III).
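Among these, the cross-network constraint C 13 admits a simple per-cluster sanity check, sketched below with illustrative bandwidths and rates:

```python
def c13_holds(ue_rates, W_access, backhaul_rate, W_backhaul):
    """C13: the aggregate access throughput of a cluster must not exceed
    the throughput of its serving backhaul link."""
    return W_access * sum(ue_rates) <= W_backhaul * backhaul_rate

# Two served UEs at 0.2344 and 0.9375 bps/Hz over a 100 MHz access band,
# backhauled at 0.877 bps/Hz over a 200 MHz band: 117.19 <= 175.4 Mbps.
ok = c13_holds([0.2344, 0.9375], 100e6, 0.877, 200e6)
```

In the optimization, α and β jointly determine both sides of this inequality, which is what couples the admission/rate decisions of the two networks.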
REMARK: In the strict sense, the integrality constraints (i.e., C 1 , C 6 , C 11 ) make P ′ nonconvex. Nevertheless, in the MINLP literature, a MINLP is referred to as nonconvex if it remains nonconvex even after excluding the integral variables. Otherwise, it is called convex [41]. In general, both convex and nonconvex MINLPs are NP-hard, but the latter are more challenging to solve. Specifically, P ′ is a nonconvex MINLP, and its nonconvexity stems from the constraintsC 4 ,C 5 ,C 15 .
III. PROPOSED PROBLEM REFORMULATION In this section, we propose a series of transformations to simplify the nonconvex constraintsC 4 ,C 5 ,C 15 . The resulting reformulation P (shown in Section III-D) is used in Section IV, Section VI, Section VII, where we propose three algorithms: BnC-MISOCP, RnP-SOCP-1 and RnP-SOCP-2.
A. Eliminating Additive Coupling between Binary Variables
To deal with the additive coupling of the binary variables at the right-hand side (RHS) ofC 5 (i.e. sum of variables), we separateC 5 into multiple equivalent constraints, as described in Proposition 1.
B. Eliminating the Multiplicative Coupling between Continuous and Binary Variables
To deal with the multiplicative coupling between the unicast beamforming vectors and binary variables (in the form w b,u κ b,u ) inC 4 − C 5 , we reformulate such interdependencies as equivalent additive couplings, which are simpler to handle, as described in Proposition 2. In addition, note that C 17 − C 20 are convex, whereasC 21 is a nonconvex mixed-integer nonlinear constraint. To circumvent the involved structure ofC 21 , we remodel it (without loss of optimality) harnessing the big-M method [42], which allows removing the multiplicative tie

Proposition 4. Due to the existence of C 11 , constraintC 15 can be equivalently recast as C 15 , where
C 15 : l ′ ∈L g H b m l ′ 2 + σ 2 SBS ≤ 1 + Γ SBS j −1 g H b m l 2 + (1 − β l,j ) 2 Q 2 b , ∀l ∈ L, b ∈ B l , j ∈ J SBS , and Q 2 b = P MBS tx g b 2 2 + σ 2 SBS
is an upper bound for the LHS term of C 15 . Proof: The proof is along the same lines as the procedures adopted in Proposition 1, Proposition 2 and Proposition 3. Therefore, it is omitted.
Proposition 5. The nonconvex constraints C 21 − C 22 can be equivalently expressed as SOC constraints C 23 − C 25 , i.e.,
C 21 − C 22 = C 23 : h H u W, σ UE 2 ≤ 1 + Γ UE j −1 Re h H u w u + (1 − α u,j ) Q u , ∀l ∈ L, u ∈ U l , j ∈ J UE , C 24 : Re h H u w u ≥ α u,j Γ UE j σ UE , ∀l ∈ L, u ∈ U l , j ∈ J UE , C 25 : Im h H u w u = 0, ∀l ∈ L, u ∈ U l , j ∈ J UE . Proof: See Appendix C.
Proposition 6. The nonconvex constraints C 15 − C 16 can be recast as the more conservative SOC constraints C 26 − C 27 , where
C 15 − C 16 = C 26 : g H b M, σ SBS 2 ≤ 1 + Γ SBS j −1 Re g H b m l + (1 − β l,j ) Q b , ∀l ∈ L, b ∈ B l , j ∈ J SBS , C 27 : Re g H b m l ≥ β l,j Γ SBS j σ SBS , ∀l ∈ L, b ∈ B l , j ∈ J SBS . Proof: See Appendix D.
between the beamformers and binary variables, as described in Proposition 3. Finally, because constraintC 15 has a similar structure toC 5 , we can reformulate it in an equivalent manner, as described in Proposition 4.
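The effect of the big-M step in Proposition 3 can be checked numerically: with Q² upper-bounding the left-hand side, the constraint reduces to the SINR requirement when α = 1 and becomes vacuous when α = 0. All values below are illustrative:

```python
def c21_holds(total_rx_power, desired_power, gamma, alpha, Q2):
    """C21: total received power (desired + interference + noise) must not
    exceed (1 + 1/gamma) * desired_power, relaxed by (1 - alpha)^2 * Q2."""
    return total_rx_power <= (1 + 1 / gamma) * desired_power + (1 - alpha) ** 2 * Q2

# desired |h^H w|^2 = 4, interference + noise = 1 -> SINR = 4 >= gamma = 2: holds.
assert c21_holds(total_rx_power=5.0, desired_power=4.0, gamma=2.0, alpha=1, Q2=10.0)
# interference + noise = 3 -> SINR = 4/3 < 2: constraint active and violated.
assert not c21_holds(7.0, 4.0, 2.0, alpha=1, Q2=10.0)
# alpha = 0: Q2 bounds the LHS, so the constraint is automatically satisfied.
assert c21_holds(7.0, 4.0, 2.0, alpha=0, Q2=10.0)
```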
C. Adding Cuts to Tighten the Feasible Set
To reduce the number of branches to be evaluated by MINLP solvers, we include valid inequalities (cuts) for certain constraints involving integer variables. Thus, we add the constraints C 16 and C 22 , defined as
C 16 : g H b m l 2 ≥ β l,j Γ SBS j σ 2 SBS , ∀l ∈ L, b ∈ B l , j ∈ J SBS , C 22 : b∈B l h H b,u w b,u 2 ≥ α u,j Γ UE j σ 2 UE , ∀l ∈ L, u ∈ U l , j ∈ J UE .
Note that C 16 is a lower bound for the multicast SINR numerator, which becomes tight when the interference term is zero. This constraint is always satisfied when the β l,j are binary, thus reducing the feasible set and tightening the problem relaxation when the binary variables are recast as real values (as in the proposed algorithms in Section VI and Section VII). Adding C 16 does not change the nature of the problem nor affect its optimality. Similarly, C 22 is a lower bound for the unicast SINR numerator.
D. Redefining the Problem
After applying the transformations in Section III-A, Section III-B and Section III-C, the nonconvex constraintsC 4 ,C 5 ,C 15 have been replaced by the convex constraints C 17 , C 18 , C 19 , C 20 and the nonconvex constraints C 15 , C 21 . In addition, the nonconvex constraints C 16 , C 22 have been added to contract the feasible set. Collecting these outcomes, we define P as P : max m l ,w b,u ,p b,u , αu,j ,β l,j ,κ b,u convex: R access w−sum (α)
s.t. convex: C 2 − C 3 , C 7 − C 10 , C 12 − C 14 , C 17 − C 20 , nonconvex: C 15 − C 16 , C 21 − C 22 , binary: C 1 , C 6 , C 11 .
REMARK: Notice that P is also a nonconvex MINLP and has the same optimal solution as P ′ since the introduced transformations do not affect the original feasible set. However, the structure of P is simpler, thus allowing us to tailor algorithms for solving the problem more efficiently.
IV. BNC-MISOCP: PROPOSED MISOCP FORMULATION In this section, we recast P as a MISOCP by transforming the nonconvex constraints into convex ones. We remodel C 21 − C 22 as convex constraints and replace C 15 − C 16 with convex inner surrogates.
A. Transforming Nonconvex Constraints into Convex Constraints
To deal with the nonconvex constraints C 21 − C 22 , we recast them as convex conic constraints, as they have hidden convexity. To simplify notation, we first rewrite C 21 − C 22 as shown below, where h u and w u denote, respectively, the channels and beamforming vectors from all SBSs in the cluster in which UE u is located. Further, h u denotes the channel between UE u and all SBSs in the system, whereas W is a block diagonal matrix collecting all beamforming vectors between SBSs and UEs. After applying these changes, we are in the position of expressing the nonconvex constraints C 21 − C 22 as exactly equivalent SOC constraints, as described in Proposition 5.
C 21 : h H u W 2 + σ 2 UE ≤ 1 + Γ UE j −1 h H u w u 2 + (1 − α u,j ) 2 Q 2 u , ∀l ∈ L, u ∈ U l , j ∈ J UE , C 22 : α u,j Γ UE j σ 2 UE ≤ h H u w u 2 , ∀l ∈ L, u ∈ U l , j ∈ J UE , where b∈B l h H b,u w b,u 2 = h H u w u 2 , u ∈ U l and l ′ ∈L u ′ ∈U l ′ b ′ ∈B l ′ h H b ′ ,u w b ′ ,u ′ 2 = h H u W 2 .
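The hidden convexity can be verified numerically: squaring the SOC form of C 23 recovers the SINR threshold whenever Im(h^H w u ) = 0 and α = 1. A small self-check with illustrative numbers:

```python
import math

def soc_c23_holds(desired_real, interference, sigma2, gamma, alpha, Q):
    """SOC form of C23 with |h^H W|^2 = desired_real^2 + interference:
    left-hand norm vs sqrt(1 + 1/gamma) times the real desired amplitude,
    big-M relaxed by (1 - alpha) * Q."""
    lhs = math.sqrt(desired_real ** 2 + interference + sigma2)
    rhs = math.sqrt(1 + 1 / gamma) * desired_real + (1 - alpha) * Q
    return lhs <= rhs

def sinr_ok(desired_real, interference, sigma2, gamma):
    """The original SINR requirement the SOC constraint encodes."""
    return desired_real ** 2 / (interference + sigma2) >= gamma

# For alpha = 1 the SOC constraint and the SINR threshold agree
# (here SINR = 4/2 = 2, so gamma = 1.9 passes and gamma = 3.0 fails).
for gamma in (1.9, 3.0):
    assert soc_c23_holds(2.0, 1.0, 1.0, gamma, alpha=1, Q=5.0) == \
           sinr_ok(2.0, 1.0, 1.0, gamma)
```

Squaring both sides shows why: gain² + I + σ² ≤ (1 + 1/Γ) gain² collapses to I + σ² ≤ gain²/Γ, i.e., SINR ≥ Γ.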
B. Recasting Nonconvex Constraints as Convex Inner Approximations
To circumvent the nonconvex constraints C 15 , C 16 , we replace them by convex surrogates. Assuming that M = [m 1 , · · · , m L ], we express C 15 as
C 15 : g H b M 2 2 + σ 2 SBS ≤ 1 + Γ SBS j −1 g H b m l 2 + (1 − β l,j ) 2 Q 2 b , ∀l ∈ L, b ∈ B l , j ∈ J SBS .
Using this expression, we reformulate C 15 − C 16 as convex inner SOC approximations, as stated in Proposition 6. If constraints C 26 − C 27 are satisfied, then C 15 − C 16 are automatically guaranteed because the feasible set of C 26 −C 27 is contained in that of C 15 − C 16 . Therefore, they are called inner approximations.
C. Summarizing the Changes
After applying the transformations above, we define the following problem, P 0 : max
m l ,w b,u ,p b,u , αu,j ,β l,j ,κ b,u convex: R access w−sum (α) s.t. convex: C 2 − C 3 , C 7 − C 10 , C 12 − C 14 , C 17 − C 20 , C 23 − C 27 , binary: C 1 , C 6 , C 11 ,
which is an inner approximation of problem P due to convexification of its original feasible set upon replacing C 15 − C 16 by C 26 − C 27 . Thus, any feasible solution to P 0 will also be feasible to P ′ and P. Here, P 0 has N v = 2LN MBS tx +2LBU N SBS tx +2LBU +LJ SBS +LU J UE variables, N l = 3L + 2LU + 3LB + 2LBU + 3LU J UE + LBJ SBS linear constraints and N c = 1 + LBU + LU J UE + LBJ SBS convex constraints. The complexity is O N s (N v ) 3 (N l + N c ) , where N s is the total number of evaluations needed by the mixed-integer programming (MIP) solver.
REMARK: Note that P 0 is a convex MINLP, and as such it can be solved optimally by MIP solvers, which exploit BnC techniques to prune infeasible solutions, thus reducing the search space of the problem. Although BnC techniques can explore the binary space more efficiently and are faster than exhaustive search, they may still require a considerable amount of time to find the optimum, especially when the number of integral variables is large, as in P 0 . Thus, in order to expedite this process, we propose suboptimal algorithms in Section VI and Section VII based on integrality relaxation and penalization.
V. PROPOSED BOUNDS We derive an upper bound and a lower bound for P 0 . The upper bound is defined as a MISOCP whereas the lower bound is a system- and problem-specific rate value. When it is not possible to obtain a solution for P 0 (due to high time complexity), the upper and lower bounds will be used as benchmarks for the algorithms developed in Section VI and Section VII. Upper Bound (UB): While the weighted sum-rate is a mechanism to balance rates, i.e., to give higher priorities to the least favored UEs, the actual aggregate rate in the network is given by the sum-rate R access sum (α) = W access BW l∈L u∈U l j∈J UE α u,j R UE j (without the weights). Note that R access sum (α) is related to constraint C 13 , which ensures that the access sum-rate per cluster does not exceed the rate of the serving backhaul link. Therefore, the access sum-rate R access sum (α) is bounded from above by the backhaul sum-rate, defined as R backhaul sum (β) W backhaul BW l∈L j∈J SBS β l,j R SBS j , i.e., R access sum (α) ≤ R backhaul sum (β). Since the backhaul sum-rate depends only on m l and β l,j , the upper bound is given by
P UB : max m l ,β l,j R backhaul sum (β) s.t. C 3 , C 11 , C 12 , C 26 , C 27 ,
which is a MISOCP that can be solved optimally. The upper bound essentially maximizes the backhaul network throughput without considering the access network requirements. Note that P UB has N v = LJ SBS + 2LN MBS tx variables, N l = L + LBJ SBS linear constraints and N c = 1 + LBJ SBS convex constraints. Thus, its complexity is
O N s (N v ) 3 (N l + N c ) ,
where N s represents the total number of evaluations needed by the MIP solver.
REMARK: P UB can be interpreted as joint multigroup multicast beamforming and rate selection, which has not been investigated before. A similar problem was studied in [43]
but with continuous rates. Although we do not investigate this new problem alone but in conjunction with the additional access network constraints, we believe it is important to highlight its novelty, as it represents the discrete counterpart of the aforementioned problem, thus filling a gap in the existing literature and opening new avenues of research. Lower Bound (LB):
The lower bound is based on the analysis of P 0 . From constraint C 14 , a number of U served UEs per cluster needs to be served. In the worst case, these UEs are allocated the lowest possible rate, which, based on Table III, corresponds to R UE 1 = 0.2344 bps/Hz. With L clusters, the minimum sum-rate at the access network is defined as R access sum−min = R UE 1 · W access BW · U served · L bps. We underline that this bound corresponds to the worst possible case, in which the UEs are minimally served while still satisfying the system constraints.
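For concreteness, with L = 4 clusters, U served = 2 UEs per cluster, and a hypothetical access bandwidth of 100 MHz, the worst-case bound evaluates as:

```python
R_UE_1 = 0.2344      # bps/Hz, lowest MCS rate (Table III)
W_access_BW = 100e6  # Hz, hypothetical access bandwidth
U_served = 2         # served UEs per cluster
L = 4                # number of clusters

# R_sum-min = R_1 * W * U_served * L  ->  about 187.52 Mbps
R_access_sum_min = R_UE_1 * W_access_BW * U_served * L
```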
VI. RNP-SOCP-1: PROPOSED SOCP FORMULATION This formulation is derived from problem P 0 . We propose a relax-and-penalize SOCP algorithm denoted by RnP-SOCP-1, which iteratively optimizes a SOCP. To cope with the integrality constraints C 1 , C 6 , C 11 , we replace them with the intersection of two continuous sets [44], as described in Proposition 7. Proposition 7. The constraints C 1 , C 6 , C 11 can be equivalently expressed as,
C_1 = {X_1: 0 ≤ α_{u,j} ≤ 1} ∩ {Z_1: Σ_{l,u,j} (α_{u,j} − α²_{u,j}) ≤ 0},
C_6 = {X_2: 0 ≤ κ_{b,u} ≤ 1} ∩ {Z_2: Σ_{l,b,u} (κ_{b,u} − κ²_{b,u}) ≤ 0},
C_11 = {X_3: 0 ≤ β_{l,j} ≤ 1} ∩ {Z_3: Σ_{l,j} (β_{l,j} − β²_{l,j}) ≤ 0}.

Proof: It is straightforward to see that X_1 and Z_1 intersect only at points whose entries are in {0, 1}. Thus, we omit further details.
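Proposition 7's intersection argument can be checked numerically for a small vector: on [0, 1] every term x − x² is nonnegative, so the sum is nonpositive only when every entry is exactly 0 or 1.

```python
# Numerical check of Proposition 7: on [0,1]^n, sum(x - x^2) <= 0 holds
# only when every coordinate is exactly 0 or 1.
def in_intersection(x, tol=1e-12):
    box = all(0.0 <= xi <= 1.0 for xi in x)
    penalty = sum(xi - xi * xi for xi in x)
    return box and penalty <= tol

assert in_intersection([0.0, 1.0, 1.0])      # binary point: inside
assert not in_intersection([0.5, 1.0])       # fractional point: outside
assert not in_intersection([0.99, 0.01])     # near-binary but not exact
```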
Notice that constraints X_1−X_3 are convex whereas Z_1−Z_3 are nonconvex. For later reference, the penalized problem and its per-iteration surrogate are

P̄_1: max_{Θ∈D}  R̄(α, β, κ) ≜ R^access_w−sum(α) − λ_α f_α(α) − λ_β f_β(β) − λ_κ f_κ(κ),   (7)

where the penalty terms are the nonconvex DC functions

f_α(α) ≜ p_α(α) + q_α(α),  with  p_α(α) ≜ Σ_{l∈L} Σ_{u∈U_l} Σ_{j∈J^UE} α_{u,j}  and  q_α(α) ≜ −Σ_{l∈L} Σ_{u∈U_l} Σ_{j∈J^UE} α²_{u,j},
f_β(β) ≜ p_β(β) + q_β(β),  with  p_β(β) ≜ Σ_{l∈L} Σ_{j∈J^SBS} β_{l,j}  and  q_β(β) ≜ −Σ_{l∈L} Σ_{j∈J^SBS} β²_{l,j},
f_κ(κ) ≜ p_κ(κ) + q_κ(κ),  with  p_κ(κ) ≜ Σ_{l∈L} Σ_{b∈B_l} Σ_{u∈U_l} κ_{b,u}  and  q_κ(κ) ≜ −Σ_{l∈L} Σ_{b∈B_l} Σ_{u∈U_l} κ²_{b,u},

and

P̄^{(t)}_1: max_{Θ∈D}  R̃^{(t)}(α, β, κ) ≜ R^access_w−sum(α) − λ_α f̃^{(t)}_α(α) − λ_β f̃^{(t)}_β(β) − λ_κ f̃^{(t)}_κ(κ),   (8)

where f̃^{(t)}_α(α) ≜ p_α(α) + q̃^{(t)}_α(α), f̃^{(t)}_β(β) ≜ p_β(β) + q̃^{(t)}_β(β) and f̃^{(t)}_κ(κ) ≜ p_κ(κ) + q̃^{(t)}_κ(κ).

Considering Proposition 7, we define
P_1: max_Θ  R^access_w−sum(α)   s.t.  Θ ∈ D (convex),  Z_1 − Z_3 (nonconvex),

which is equivalent to P_0. Here, Θ = (M, W, p, α, β, κ) groups all the optimization variables and D denotes the feasible set spanned by the convex constraints X_1−X_3, C_2−C_3, C_7−C_10, C_12−C_14, C_17−C_20, C_23−C_27. Although P_1 is a nonconvex MINLP, its nonconvexity is due only to the simple polynomial constraints Z_1−Z_3, which belong to the class of difference-of-convex (DC) functions.
Since P_1 is challenging to solve optimally, we aim to obtain a locally optimal solution. To find a solution for P_1, we devise an algorithm based on the minorization-maximization (MM) principle. To cope with Z_1−Z_3, we include them as penalty terms in the objective function [45]. Thus, we define P̄_1 in (7), where λ_α ≥ 0, λ_β ≥ 0, λ_κ ≥ 0. Whenever α, β, κ are not binary, the functions f_α(α), f_β(β), f_κ(κ) are positive. By including them in the objective, they serve as a measure of the degree of satisfaction of the binary constraints, with λ_α, λ_β, λ_κ acting as penalty factors. Problems P_1 and P̄_1 are related in the following sense: as stated in Proposition 8, P_1 and P̄_1 become equivalent for sufficiently large penalties [45], [46].

Proposition 8. The optimization problems P_1 and P̄_1 are equivalent for sufficiently large values of λ_α, λ_β, λ_κ, in which case both problems attain the same optimal value and solution.
Proof: See Appendix E.
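Proposition 8's exact-penalty behavior can be illustrated on a one-dimensional toy problem (the reward function below is made up for illustration): with no penalty the relaxed maximizer is fractional, while a sufficiently large λ pushes it to a binary value.

```python
# Toy illustration of the exact-penalty idea behind P_1_bar: maximize a
# concave reward over x in [0,1]; the penalty lam*(x - x^2) pushes the
# maximizer to a binary value once lam is large enough.
def argmax_penalized(lam, grid=10001):
    best_x, best_f = None, float("-inf")
    for i in range(grid):
        x = i / (grid - 1)
        f = -(x - 0.6) ** 2 - lam * (x - x * x)  # reward minus penalty
        if f > best_f:
            best_x, best_f = x, f
    return best_x

assert abs(argmax_penalized(0.0) - 0.6) < 1e-3   # unpenalized: fractional
assert argmax_penalized(10.0) in (0.0, 1.0)      # large penalty: binary
```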
To solve P̄_1, the complication is in the objective, since f_α(α), f_β(β), f_κ(κ) are nonconvex DC functions. Thus, we apply first-order approximations to q_α(α), q_β(β), q_κ(κ), and define

q̃^{(t)}_α(α) ≜ q_α(α^{(t−1)}) + ∇_α q_α(α^{(t−1)})^T (α − α^{(t−1)}),
q̃^{(t)}_β(β) ≜ q_β(β^{(t−1)}) + ∇_β q_β(β^{(t−1)})^T (β − β^{(t−1)}),
q̃^{(t)}_κ(κ) ≜ q_κ(κ^{(t−1)}) + ∇_κ q_κ(κ^{(t−1)})^T (κ − κ^{(t−1)}),

where q̃^{(t)}_α(α) ≥ q_α(α), q̃^{(t)}_β(β) ≥ q_β(β), q̃^{(t)}_κ(κ) ≥ q_κ(κ) are outer linear approximations of q_α(α), q_β(β), q_κ(κ), respectively.
Here, α^{(t−1)}, β^{(t−1)}, κ^{(t−1)} denote a feasible solution (i.e., the reference point for linearization), whereas ∇_x represents the derivative with respect to variable x. Using the MM principle and constructing the surrogate functions q̃^{(t)}_α(α), q̃^{(t)}_β(β), q̃^{(t)}_κ(κ) at every iteration t, we solve problem P̄^{(t)}_1 defined in (8), which is a SOCP, where f̃^{(t)}_α(α) ≥ f_α(α), f̃^{(t)}_β(β) ≥ f_β(β), f̃^{(t)}_κ(κ) ≥ f_κ(κ). In particular, problem P̄^{(t)}_1 is convex and can be solved using interior-point methods. By iteratively solving P̄^{(t)}_1, we obtain a sequence of enhanced points for P̄_1, which converges to a KKT point, as established in Propositions 9 and 10 (proofs in Appendices F and G).
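The outer-linearization property q̃^{(t)}(·) ≥ q(·) that underpins Proposition 9 can be checked for a single coordinate, where q(x) = −x² and the tangent at a reference point x_0 is a global upper bound by concavity:

```python
# The surrogate q_tilde is the first-order expansion of the concave
# function q(x) = -x^2 at a reference point x0; concavity makes the
# tangent a global upper bound: q_tilde(x) >= q(x) for all x.
def q(x):
    return -x * x

def q_tilde(x, x0):
    return q(x0) + (-2.0 * x0) * (x - x0)  # gradient of -x^2 is -2x

x0 = 0.3
pts = [i / 100.0 for i in range(-200, 201)]
assert all(q_tilde(x, x0) >= q(x) - 1e-12 for x in pts)
assert abs(q_tilde(x0, x0) - q(x0)) < 1e-12  # tight at the reference point
```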
To solve P̄^{(t)}_1, a feasible point Θ^(0) is needed to guarantee convergence, as explained in Proposition 10. We generate random initial points and test them for feasibility, as described in [47]. We use the best of these points as the initial Θ^(0), and iteratively solve P̄^{(t)}_1 as summarized in Algorithm 1:

Step 1: Define N_iter, δ, λ_α, λ_β, λ_κ.
Step 2: Find an initial point Θ^(0) = (·, ·, ·, α^(0), β^(0), κ^(0)) using {0, 1} values.
Step 3: Initialize t = 1.
Step 4: Solve P̄^{(t)}_1 using Θ^{(t−1)}.
Step 5: Assign Θ^{(t)} ← the solution obtained in Step 4.
Step 6: Update the iteration index t by one, i.e., t = t + 1.
Step 7: Verify whether the stopping criterion is attained. Otherwise, return to Step 4.
We stop the iterative process when a criterion has been met, i.e., t = N_iter or R̃^{(t)}(α, β, κ) − R̃^{(t−1)}(α, β, κ) ≤ δ. The per-iteration computational complexity of P̄^{(t)}_1 is similar to that of one evaluation of P_0. In particular,
N_v = 2L·N^MBS_tx + 2LBU·N^SBS_tx + 2LBU + L·J^SBS + LU·J^UE variables, N_l = 3L + 2LU + 3LB + 2L·J^SBS + 4LBU + 5LU·J^UE + LB·J^SBS linear constraints and N_c = 1 + LBU + LU·J^UE + LB·J^SBS convex constraints.

For the gain-only parameterization of Section VII, the affected constraints C_3, C_20 and C_23−C_27 are rewritten as follows:

(C_3) L_1: Σ_{l∈L} t²_l ≤ P^MBS_tx,
(C_20) L_2: ‖[2 w̄^H_{b,u} v_{b,u}, κ_{b,u} − p_{b,u}]‖_2 ≤ κ_{b,u} + p_{b,u}, ∀l ∈ L, b ∈ B_l, u ∈ U_l,
(C_23) L_3: ‖[S_b v, σ^UE]‖_2 ≤ √(1 + (Γ^UE_j)^{−1}) Re{Σ_{b∈B_l} c_{b,u} v_{b,u}} + (1 − α_{u,j}) Q_u, ∀l ∈ L, u ∈ U_l, j ∈ J^UE,
(C_24) L_4: Re{Σ_{b∈B_l} c_{b,u} v_{b,u}} ≥ α_{u,j} Γ^UE_j σ^UE, ∀l ∈ L, u ∈ U_l, j ∈ J^UE,
(C_25) L_5: Im{Σ_{b∈B_l} c_{b,u} v_{b,u}} = 0, ∀l ∈ L, u ∈ U_l, j ∈ J^UE,
(C_26) L_6: ‖[R_b t, σ^SBS]‖_2 ≤ √(1 + (Γ^SBS_j)^{−1}) r_{b,l} t_l + (1 − β_{l,j}) Q_b, ∀l ∈ L, b ∈ B_l, j ∈ J^SBS,
(C_27) L_7: r_{b,l} t_l ≥ β_{l,j} Γ^SBS_j σ^SBS, ∀l ∈ L, b ∈ B_l, j ∈ J^SBS.

Given these counts, the complexity of RnP-SOCP-1 is
O(N_iter (N_v)^3 (N_l + N_c)), where N_iter is the number of iterations.
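As a sketch only: the relax-and-penalize MM loop of Algorithm 1 can be illustrated on a toy item-selection problem, where the SOCP solve of Step 4 is replaced by the closed-form vertex maximizer of the linear surrogate over a box with a cardinality constraint (the problem data below are made up).

```python
# Relax-and-penalize MM sketch: maximize c.x - lam*sum(x - x^2) over
# x in [0,1]^n with sum(x) <= k. Linearizing the concave part
# q(x) = -sum(x^2) at x_prev makes each surrogate linear in x, so the
# inner "solve" reduces to picking the k largest positive coefficients.
def mm_select(c, k, lam=10.0, iters=20):
    n = len(c)
    x = [0.5] * n  # relaxed (fractional) starting point
    for _ in range(iters):
        # surrogate coefficient of x_i: c_i - lam + 2*lam*x_prev_i
        coef = [c[i] - lam + 2.0 * lam * x[i] for i in range(n)]
        order = sorted(range(n), key=lambda i: -coef[i])
        x_new = [0.0] * n
        for i in order[:k]:
            if coef[i] > 0:
                x_new[i] = 1.0  # vertex maximizer of the linear surrogate
        if x_new == x:  # surrogate objective no longer improves
            break
        x = x_new
    return x

x = mm_select(c=[3.0, 1.0, 2.5, 0.5], k=2)
print(x)  # the two largest rewards are selected: [1.0, 0.0, 1.0, 0.0]
```

The iterates become binary after the first surrogate solve, mirroring the convergence of the relaxed variables observed in Fig. 5f.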
VII. RNP-SOCP-2: PROPOSED SOCP FORMULATION

This formulation is derived from problem P_1. We propose an alternative relax-and-penalize SOCP algorithm, denoted RnP-SOCP-2, whose main characteristic is a reduced number of optimization variables compared to RnP-SOCP-1, which allows solutions to be obtained faster. To decrease the large number of optimization variables in P_1 (essentially dominated by the numbers of antennas at the MBS and SBSs), we adopt a simpler approach in which, instead of optimizing high-dimensional beamforming vectors, we only optimize their gains.
In particular, we define the variables v_{b,u} and t_l as the gains (i.e., amplitude and phase) of predefined unicast (i.e., access) and multicast (i.e., backhaul) beamforming vectors w̄_{b,u} and m̄_l, respectively, such that m_l = t_l m̄_l, w_{b,u} = v_{b,u} w̄_{b,u}, ‖m̄_l‖²_2 = 1, ‖w̄_{b,u}‖²_2 = 1. We design the unit-norm unicast beamforming vectors w̄_{b,u} using the zero-forcing (ZF) criterion. On the other hand, the unit-norm multicast beamforming vectors m̄_l are obtained experimentally by evaluating the upper bound P_UB for multiple realizations with varying degrees of shadowing and small-scale fading, and then averaging the resulting beamforming vectors. This procedure yields a fair estimate of the multicast beamforming vectors because the SBSs are stationary and therefore the geometry of the MBS-SBS channels does not change substantially. Thus, the constraints affected by m_l = t_l m̄_l and w_{b,u} = v_{b,u} w̄_{b,u} are C_3, C_20 and C_23−C_27, which are redefined as L_1−L_7 above, where S_b is a block-diagonal matrix containing the combinations of beamformers w̄_{b,u} and channels for UE u, and

c_{b,u} = h^H_{b,u} w̄_{b,u},   R_b = diag(g^H_b M̄),   r_{b,l} = Re{g^H_b m̄_l}.
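A minimal sketch of the ZF design of the unit-norm directions w̄_{b,u} mentioned above, using random placeholder channels (the paper does not prescribe this exact implementation): the pseudo-inverse of the stacked user channels gives directions that null cross-user interference, after which only the complex gains remain to be optimized.

```python
import numpy as np

# ZF directions for one SBS: W = pinv(H) with unit-norm columns, so
# h_u^H w_bar_v = 0 for u != v. Channels are random placeholders.
rng = np.random.default_rng(0)
n_tx, n_ue = 16, 4
H = rng.standard_normal((n_ue, n_tx)) + 1j * rng.standard_normal((n_ue, n_tx))

W = np.linalg.pinv(H)                  # n_tx x n_ue ZF directions
W = W / np.linalg.norm(W, axis=0)      # unit-norm columns: ||w_bar|| = 1

G = H @ W                              # effective channel after ZF
leak = np.abs(G - np.diag(np.diag(G))) # cross-user interference terms
assert leak.max() < 1e-10              # ZF nulls the other users' channels
assert np.allclose(np.linalg.norm(W, axis=0), 1.0)
```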
After applying these changes, we define

P_2: max_Θ  R^access_w−sum(α)   s.t.  Θ ∈ D (convex),  Z_1 − Z_3 (nonconvex),

where Θ = (t, v, p, α, β, κ), with D denoting the feasible set spanned by the constraints L_1−L_7, X_1−X_3, C_2, C_7−C_10, C_12−C_14, C_17−C_19. In a similar manner as with P_1, we define P̄_2, and thereupon its linearized version P̄^{(t)}_2, which can be solved via Algorithm 1. P̄^{(t)}_2 is a SOCP with N_v = 2L + 4LBU + L·J^SBS + LU·J^UE decision variables, which is roughly half of that used in BnC-MISOCP and RnP-SOCP-1 (for the evaluated settings). In addition, P̄^{(t)}_2 has N_l = 3L + 2LU + 3LB + 2LBU + 3LU·J^UE + LB·J^SBS linear constraints and N_c = 1 + LBU + LU·J^UE + LB·J^SBS convex constraints. Thus, the complexity of RnP-SOCP-2 is O(N_iter (N_v)^3 (N_l + N_c)), with N_iter denoting the number of iterations. Further, we note that RnP-SOCP-2 exhibits reduced complexity compared to RnP-SOCP-1.
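For a rough size comparison, the decision-variable counts of the two relax-and-penalize formulations can be evaluated using the expressions above; the configuration below is illustrative (close to Scenario S_1).

```python
# Decision-variable counts N_v for RnP-SOCP-1 and RnP-SOCP-2, using the
# expressions given in the text; configuration values are illustrative.
def nv_rnp1(L, B, U, J_SBS, J_UE, N_MBS_tx, N_SBS_tx):
    return (2 * L * N_MBS_tx + 2 * L * B * U * N_SBS_tx
            + 2 * L * B * U + L * J_SBS + L * U * J_UE)

def nv_rnp2(L, B, U, J_SBS, J_UE):
    return 2 * L + 4 * L * B * U + L * J_SBS + L * U * J_UE

cfg = dict(L=2, B=3, U=6, J_SBS=5, J_UE=5)
n1 = nv_rnp1(N_MBS_tx=64, N_SBS_tx=16, **cfg)
n2 = nv_rnp2(**cfg)
print(n1, n2)  # 1550 218: RnP-SOCP-2 drops the per-antenna variables
assert n2 < n1
```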
VIII. SIMULATION RESULTS

We evaluate the performance of RadiOrchestra in different scenarios with varying conditions. Throughout all simulations, we consider the following default parameters, unless specified otherwise. The carrier frequency is f_c = 41 GHz (V-band in FR2) with W^access_BW = W^backhaul_BW = 100 MHz bandwidth [36]. The channel models are UMa LOS for the backhaul and UMi LOS/NLOS for the access [36], which include path loss, shadowing and small-scale fading. We assume that SBSs can support up to four UEs (N^SBS_streams = 4) simultaneously, and there are U_served = 4 UEs served concurrently (i.e., in one slot) in each cluster. Further, all UEs have the same priority, i.e., ω_u = 1/(L·U) and Σ_{l∈L} Σ_{u∈U_l} ω_u = 1. In Table IV, we show the parameters for each scenario. The algorithms have been implemented using CVX and MOSEK on a computer with 16 GB RAM and an Intel Core i7-6700 processor.

Scenario S_1: Optimality gap and computational complexity. We benchmark the algorithms considering a small setting, with the purpose of obtaining an optimal solution for BnC-MISOCP within a reasonable amount of time and comparing its performance against that of RnP-SOCP-1 and RnP-SOCP-2. Fig. 5a, Fig. 5b and Fig. 5c show the access throughput with various MBS and SBS transmit powers. In particular, RnP-SOCP-1 and RnP-SOCP-2 are 5.1% and 9.7% below BnC-MISOCP when P^SBS_tx = 14 dBm (see Fig. 5c). Also, UB becomes tighter with increasing P^SBS_tx, e.g., within only 9.6% of BnC-MISOCP in Fig. 5c. This occurs because UB only considers the backhaul throughput optimization, which depends on P^MBS_tx. Thus, as long as the bottleneck originates in the access network (due to low transmit power at the SBSs), UB will not capture such limitations. With higher P^SBS_tx, as shown in Fig. 5c, the access throughput limitation is removed and shifted to the backhaul network, where P^MBS_tx is varied from a low to a high transmit power. As a result, in Fig. 5c the access throughput limitation is dominated by P^MBS_tx, where we recognize a high degree of similarity between UB and BnC-MISOCP. Therefore, UB can be used as a tight bound to evaluate the performance of the system whenever the SBSs can transmit at sufficiently high power.

Besides, we note that UB can be used for quick benchmarking when the access throughput bottleneck originates in the backhaul network. In addition, we note that LB is loose, as it is agnostic to the network conditions, but it provides an idea of the worst-case scenario without solving any problem. It becomes valuable when evaluating cases wherein the transmit powers at the MBS or SBSs are limited, as in Fig. 5a, because under such conditions the lowest rates will very likely be allocated.
On the other hand, Fig. 5d and Fig. 5e provide the time complexities when P^SBS_tx = 14 dBm (as in Fig. 5c), showing that RnP-SOCP-1 and RnP-SOCP-2 are roughly 1000 and 2000 times faster than BnC-MISOCP, respectively. Similarly, the time complexity of UB is approximately 100 times lower than that of BnC-MISOCP. This huge difference arises because the complexity of BnC-MISOCP is combinatorial, i.e., collapsing to exhaustive search in the worst case. Although this case may not be reached in practice, BnC-MISOCP requires solving multiple convex problems to prune the infeasible branches and thus abridge the search process. However, RnP-SOCP-1 and RnP-SOCP-2 circumvent this issue by relaxing the binary variables, penalizing them and solving the problem in the continuous domain, which explains their reduced complexity. Besides, UB has a small number of optimization variables compared to BnC-MISOCP, explaining its faster solving time. Note that the time complexities grow with increasing P^MBS_tx because a higher P^MBS_tx enables the allocation of a wider range of rates, thus requiring more evaluations, especially by BnC-MISOCP and UB. Further, Fig. 5f shows the convergence of RnP-SOCP-1 and RnP-SOCP-2 for 5 different realizations. Here, we measured the error of the binary variables with respect to their rounded versions and computed the mean squared error (MSE), showing that after 6 or 7 iterations the error converges to zero, i.e., the relaxed binary variables become integer.

Figure 7: Evaluation of Scenario S_3. We note that maximizing the access throughput is highly dependent on both backhaul and access network parameters, which highlights the importance of jointly optimizing them.

Scenario S_2: Upper bound as a means of network planning. Since UB is much simpler to solve than BnC-MISOCP (as shown in Fig. 5d and Fig. 5e), we can use UB in larger settings to examine multiple configurations of number of antennas, transmit power, number of clusters and cluster size. From the planning perspective, these results are valuable as they allow us to choose suitable operating points for the network. In Fig. 6a, we show the backhaul throughput (i.e., the objective of UB) for various combinations of P^MBS_tx, N^MBS_tx and L, where the bottommost and uppermost layers represent L = 1 (one cluster) and L = 6 (six clusters), respectively. We observe that the backhaul throughput improves with an increasing number of antennas and transmit power because more antennas enhance the multiplexing capability while a higher power allows transmitting at higher rates. However, when the number of clusters grows from L = 5 to L = 6, the throughput saturates, showing marginal improvement, because the scenario becomes more interference-limited (due to more SBSs being deployed). We realize that with N^MBS_tx = 64 antennas, P^MBS_tx = 36 dBm transmit power and L = 5 clusters, the backhaul network can be operated at its full capacity. In Fig. 6a, we considered B = 3, but we validate this decision in Fig. 6b, where we illustrate the backhaul throughput for various combinations of P^MBS_tx and B when L = 5.
We note that the throughput decreases when the cluster size increases from B = 1 to B = 6 because, to reach more SBSs, more MBS power is consumed but also more interference is generated due to more SBSs being scattered. However, a larger SBS cluster is preferred because (i) more UEs can be served (each SBS can serve a limited number of UEs) and (ii) UEs can be allocated higher rates by being connected to more SBSs. With B = 3, the maximum backhaul throughput can still be achieved.

Scenario S_3: Impact of the transmit power. Fig. 7a, Fig. 7b, Fig. 7c and Fig. 7d illustrate how the variation of transmit power at the MBS and SBSs impacts the access network throughput. Fig. 7a shows the case when P^SBS_tx = 14 dBm and P^MBS_tx is varied. As observed, the access throughput improves as the MBS increases its transmit power, which is logical since the backhaul capacity is naturally expanded with higher power. Similarly, Fig. 7b shows the case when P^MBS_tx = 36 dBm and P^SBS_tx is varied. We note that the access throughput improves as the SBSs increase their transmit power. This occurs because higher SBS power enables UEs to be assigned higher rates. We observe in Fig. 7a and Fig. 7b that when P^MBS_tx = 36 dBm and P^SBS_tx = 14 dBm, both RnP-SOCP-1 and RnP-SOCP-2 achieve nearly the same performance, although RnP-SOCP-2 grows at a slower rate. This slower improvement stems from the fact that the beamforming vectors for RnP-SOCP-2 are predesigned and only their gains can be optimized, allowing less flexibility compared to RnP-SOCP-1. Thus, their performance meets only in the presence of high MBS/SBS transmit power. At this point, the gap compared to UB is 14.8% and 16.5% for RnP-SOCP-1 and RnP-SOCP-2, respectively. Fig. 7c and Fig. 7d show the effect of varying both P^SBS_tx and P^MBS_tx. In Fig. 7e, Fig. 7f, Fig. 7g and Fig. 7h, we show the allocation of UE rates when P^MBS_tx = 36 dBm and P^SBS_tx is varied gradually from a low to a high power. At lower P^SBS_tx, as in Fig. 7e, the UEs are mainly assigned the lowest rates. As P^SBS_tx becomes higher, it becomes possible to allocate higher rates to the UEs, as observed in Fig. 7h.

Scenario S_4: Impact of the number of clusters. Fig. 8a, Fig. 8b and Fig. 8c show the access throughput when P^SBS_tx = 14 dBm and the number of clusters is varied from L = 2 to L = 6 for different P^MBS_tx values. The access throughput improves with increasing L because more clusters translate into more served UEs (there are U_served UEs per cluster), and hence a higher aggregate rate.
Besides, higher P^MBS_tx also improves the access throughput because it boosts the backhaul network capacity. In particular, we observe throughput saturation when increasing from L = 5 to L = 6, which is consistent with the behavior observed in Fig. 6a, where the backhaul network throughput was evaluated. Further, we note that RnP-SOCP-1 outperforms RnP-SOCP-2 when P^MBS_tx = {18, 27} dBm. However, for sufficiently high P^MBS_tx = 36 dBm, the performance of both is comparable. Besides, we examine the UE rate allocation in Fig. 8d, Fig. 8e, Fig. 8f and Fig. 8g assuming P^MBS_tx = 18 dBm, P^SBS_tx = 14 dBm. We observe that when the number of clusters is small, e.g., L = 2 (see Fig. 8d), the rates assigned to the UEs span a wider range compared to the case when L = 5 (see Fig. 8g). The reason for this behavior is that more interference is generated in the backhaul network with L = 5 than with L = 2. In particular, with L = 2, only two signals are transmitted whereas with L = 5, five different signals are sent from the MBS, thus generating more interference at the receiving SBSs. In Fig. 8h, Fig. 8i, Fig. 8j and Fig. 8k, we also examine the UE rates assuming P^MBS_tx = 36 dBm, P^SBS_tx = 14 dBm. In this case, the backhaul network has sufficiently high power. As a result, throughout Fig. 8h, Fig. 8i, Fig. 8j and Fig. 8k, the distribution of rates remains more or less similar.

Figure 9: Evaluation of Scenario S_5. We have used the model c = √(1 − χ²) ĉ + χ p to emulate imprecise channel conditions, where c is the estimated channel, ĉ is the exact access/backhaul channel (but unknown), χ ∈ [0, 1] is the degree at which the perturbation contaminates the channel, and p ∼ (0, ‖ĉ‖²_2 I/K) is a random perturbation, where K is the length of ĉ. We note the importance of careful provisioning of the backhaul network because it is the link with the highest importance, delivering data to the UEs. A potential disruption affecting this link causes a degradation of the whole network, whereas impairments in the individual access links do not have a significant impact on the overall network performance. We underline a fundamental difference regarding the impact of imperfect CSI in system models assuming discrete or continuous rates. While CSI variations affect both systems, the consequences are more detrimental in the discrete-rate case. For instance, in continuous-rate models, a CSI variation will produce a SINR different from the expected one, thus also affecting the rate. However, the resulting rate will still be feasible for the model due to being continuous. On the contrary, in discrete-rate models, if the SINR is below the required target, the data will not be decoded by the SBS/UE, causing the resulting rate to drop to zero.

Scenario S_5: Impact of imprecise channel estimation. Fig. 9a shows the access throughput when the access channels are estimated perfectly but the backhaul channels inaccurately. Here, the channel energy variation is represented by ξ_backhaul. Although backhaul channels are generally static due to the fixed positions of the MBS and SBSs, it is important to test the network against estimation errors that may arise due to hardware miscalibration or impairments. We observe that as the degree of error in the backhaul channels increases, the access throughput is affected more severely due to information that cannot be decoded by the SBSs and therefore not relayed to the UEs. Further, RnP-SOCP-1 is more robust than RnP-SOCP-2 in dealing with such imprecisions because RnP-SOCP-2 only optimizes the beamformer gains, making it less robust to perturbations. With RnP-SOCP-1 and RnP-SOCP-2, the throughput decreases 4.2% and 18.4%, respectively, when the channel energy varies within ξ_backhaul = 5%, and 10.1% and 58.5%, respectively, when the channel energy varies within ξ_backhaul = 10%. Fig. 9b shows the access throughput when the access channels are estimated inaccurately but the backhaul channels perfectly, with the error energy represented by ξ_access. The access channels may be inaccurately estimated due to UE mobility, feedback quantization or unmanaged interference from other networks.
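The imprecise-CSI model from the caption of Fig. 9 can be sanity-checked numerically; the √(1−χ²) coefficient (as reconstructed here) is what keeps the expected channel energy equal to that of the exact channel.

```python
import math, random

# Imprecise-CSI model (as reconstructed from the Fig. 9 caption):
# c = sqrt(1 - chi^2) * c_hat + chi * p, with p zero-mean of per-entry
# variance ||c_hat||^2 / K, so that E[||c||^2] = ||c_hat||^2.
random.seed(1)
K, chi = 64, 0.3
c_hat = [random.gauss(0.0, 1.0) for _ in range(K)]
e_hat = sum(x * x for x in c_hat)

def perturb(c_hat, chi):
    std = math.sqrt(e_hat / K)
    return [math.sqrt(1 - chi ** 2) * x + chi * random.gauss(0.0, std)
            for x in c_hat]

# Average energy over many draws stays close to the exact channel energy.
trials = 2000
avg = sum(sum(x * x for x in perturb(c_hat, chi)) for _ in range(trials)) / trials
assert abs(avg / e_hat - 1.0) < 0.05
```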
We note that the access throughput with RnP-SOCP-1 and RnP-SOCP-2 only suffers a decay of 9.9% and 31.6%, respectively, even when the access channels change within ξ_access = 40%, which is much less than in the case of Fig. 9a. The reason for this outcome is that a disruption in an access link may cause only a single UE not to be able to decode its information (since its SINR may decrease). In contrast, a disruption in a backhaul link may leave many SBSs in a cluster automatically unsupplied, making them unable to deliver data to the UEs. In addition, the multicast topology of the backhaul network is more susceptible to channel variations, since the link with the weakest condition limits the data rate for the whole SBS cluster. On the other hand, Fig. 9c and Fig. 9d show the access throughput performance when both the access and backhaul channels contain estimation errors.

Scenario S_6: Time-slotted evaluation. We have evaluated the access throughput considering that all UEs have the same priorities. However, the UE priorities (weights) can be adjusted, for instance, to balance the cumulative throughput so that all UEs experience a similar degree of fairness over time. To realize this, we evaluate the algorithms in a slotted manner. Assuming L = 5, U = 20, U_served = 4, the network needs 5 slots to allocate the 100 UEs, i.e., in each slot, 20 UEs are simultaneously served with 4 UEs per cluster. In Fig. 10a, we show the access throughput for RnP-SOCP-1 and RnP-SOCP-2 during 50 slots of equal duration T = T_n − T_{n−1}, assuming that the channel is estimated every 5 slots, i.e., once all the UEs have been served, a new UE scheduling with a different channel is considered. In particular, in every cluster, in time slot T_1, 4 UEs out of 20 are chosen; in slot T_2, 4 out of 16; in slot T_3, 4 out of 12; in slot T_4, 4 out of 8; and in slot T_5 the remaining 4 UEs are served. In slot T_6, the weights are updated based on the cumulative rate each UE has experienced in slots T_1−T_5, and another 4 UEs out of 20 are chosen (possibly a different UE batch than in slot T_1). The process continues in this manner, updating the weights every 5 slots. In Fig. 10b, we show the individual cumulative throughput for all 20 UEs in cluster U_1.
We realize that the throughputs experienced by the UEs tend to be similar, as the deviation among them is small, which is achieved thanks to the adaptation of the weights.
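The excerpt does not reproduce the exact weight-update formula, but one plausible rule consistent with the described fairness goal (an assumption, not the paper's definition) is to make each UE's priority inversely proportional to its cumulative throughput so far, renormalized to sum to one:

```python
# Hypothetical fairness-oriented weight update: UEs that accumulated
# less throughput receive higher priority in the next scheduling round.
def update_weights(cum_rate, eps=1e-9):
    inv = [1.0 / (r + eps) for r in cum_rate]  # inverse cumulative rate
    s = sum(inv)
    return [w / s for w in inv]                # normalize: sum(w) = 1

w = update_weights([100.0, 50.0, 25.0])
assert abs(sum(w) - 1.0) < 1e-12
assert w[2] > w[1] > w[0]  # UEs that got less so far are prioritized
```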
IX. CONCLUSIONS

Self-backhauling millimeter-wave networks are a key enabler for dense deployments by virtue of reducing costs (no fiber links needed) and providing higher flexibility through the use of wireless links. However, designing efficient and practical solutions for such systems is extremely complex due to the intertwined nature of the backhaul and radio access networks, which are not straightforward to model and intrinsically result in complex problems with coupled optimization variables that are challenging to solve. In this paper, RadiOrchestra demonstrated how to tame this complexity through a series of design choices in the system, and by providing a mathematical formulation and optimization of the radio resources. We proposed three formulations and their respective algorithms, BnC-MISOCP, RnP-SOCP-1 and RnP-SOCP-2, to jointly optimize beamforming, user association, rate selection and admission control with the aim of maximizing the access network throughput. Our complexity analysis showed that RnP-SOCP-1 and RnP-SOCP-2 are less complex than BnC-MISOCP, while the simulation results illustrated that their performance remains within 16.5% of the upper bound. We believe this attractive complexity-performance trade-off is key to the potential adoption of RadiOrchestra in future systems. RadiOrchestra can be extended in several directions. In RadiOrchestra, we considered that both the access and backhaul networks operate over a fixed bandwidth. However, to make the approach more flexible and therefore capable of dealing with unbalanced channel conditions, bandwidth optimization could be incorporated as an additional degree of freedom. Another direction is extending RadiOrchestra to be robust against channel imprecisions at both the access and backhaul networks, to ultimately preserve the integrity of data.

While current networks are centralized, enabling distributed optimization algorithms is desirable due to lower latency. Thus, a possible direction for extending RadiOrchestra is to parallelize the optimization, letting each SBS cluster optimize its resources without a central coordinator. In RadiOrchestra, we assumed that the UEs are pre-associated to a given SBS cluster. In dynamic networks, however, this association can change. Therefore, it is interesting to investigate these changes in the context of transitions between different clusters.
Figure 2: Self-backhauled SBSs grouped into clusters. The backhaul exploits multigroup multicast beamforming for data sharing whereas the access network is based on distributed unicast beamforming.

Figure 3: SBS clustering allows merging the data of all the served users into one stream, minimizing interference and simplifying data decoding at the SBSs.

Figure 4: SBS distribution and clustering with a MBS transmitting multicast streams to three different clusters.
As shown in Proposition 9 and Proposition 10, P̄^{(t)}_1 is a global lower bound of P̄_1 and the obtained solution is a KKT point.

Proposition 9. Problem P̄^{(t)}_1 is a global lower bound for P̄_1, since R̃^{(t)}(α, β, κ) ≤ R̄(α, β, κ). Proof: See Appendix F.

Proposition 10. Starting from a feasible point Θ^(0) = (·, ·, ·, α^(0), β^(0), κ^(0)), the sequence of solutions Θ^(t) = (M^(t), W^(t), p^(t), α^(t), β^(t), κ^(t)), for t ≥ 1, constitutes a sequence of enhanced points for P̄_1, which converges to a KKT point.
Algorithm 1: Optimization of P̄_1.
In the system, there are L = 5 clusters, each having B = 3 SBSs and U = 20 UEs, thus making a total of B_total = 15 SBSs and U_total = 100 UEs. The MBS has a maximum transmit power of P^MBS_tx = 36 dBm and is equipped with a 16 × 4 antenna array (N^MBS_tx = 64), whereas the SBSs can transmit at a maximum power of P^SBS_tx = 14 dBm and have smaller 4 × 4 arrays (N^SBS_tx = 16).
Figure 5: Evaluation of Scenario S_1. We notice the small performance gap of RnP-SOCP-1 and RnP-SOCP-2 with respect to BnC-MISOCP, which is reasonable considering that their time complexities are smaller by 3 orders of magnitude. Because CVX needs to parse the mathematical model into a suitable structure for MOSEK, the results showing time complexity consider the raw solving time while neglecting the parsing time.
Figure 8: Evaluation of Scenario S_4. We note that the overall access throughput can be expanded with more clusters (i.e., more SBSs and UEs). However, this improvement may saturate beyond a certain number of clusters due to more interference or insufficient transmit power.
Table I: Categorization of related work

| Solution | Approach | Spectrum | Network | Access: beamforming | Access: topology | Access: user assoc. | Access: rate sel. | Access: adm. ctrl. | Backhaul: beamforming | Backhaul: link | Backhaul: medium | Backhaul: topology | Backhaul: rate sel. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [20], [21] | Joint | Sub-6GHz | Single-SBS | Unicast | ✓ | ✗ | ✓ | ✓ | N/A | N/A | N/A | N/A | N/A |
| [22]-[25] | Joint | Sub-6GHz | Multi-SBS | Unicast | ✓ | Many | ✗ | ✗ | N/A | N/A | N/A | N/A | N/A |
| [26], [27] | Decoupled | Sub-6GHz | Single-SBS | Multicast | ✓ | ✗ | ✗ | ✓ | N/A | N/A | N/A | N/A | N/A |
| [28] | Joint | Millimeter-wave | Multi-SBS | Unicast | 3D | Many | ✗ | ✗ | N/A | N/A | N/A | N/A | N/A |
| [8], [29] | Joint | Millimeter-wave | Multi-SBS | N/A | N/A | N/A | N/A | N/A | Unicast | Adaptive | Wireless | 2D | ✗ |
| [30] | Joint | Sub-6GHz | Multi-SBS | Multicast | 2D | Many | ✗ | ✗ | Unicast | Fixed | Wired | ✗ | ✗ |
| [31], [32] | Joint | Sub-6GHz | Multi-SBS | Both | 2D | Many | ✗ | ✗ | Unicast | Fixed | Wired | ✗ | ✗ |
| [9], [12] | Decoupled | Sub-6GHz | Multi-SBS | Unicast | 2D | ✗ | ✗ | ✓ | Unicast | Unbounded | Wired | ✗ | ✗ |
| [13] | Joint | Sub-6GHz | Multi-SBS | Unicast | 2D | One | ✗ | ✓ | Unicast | Fixed | Wireless | ✗ | ✗ |
| [17] | Decoupled | Millimeter-wave | Multi-SBS | Unicast | 3D | Many | ✗ | ✗ | Unicast | Adaptive | Wireless/TDM | 3D | ✗ |
| [18] | Decoupled | Sub-6GHz | Multi-SBS | Unicast | 2D | One | ✗ | ✓ | Unicast | Adaptive | Wireless/SDM | 2D | ✗ |
| [10] | Decoupled | Sub-6GHz | Multi-SBS | Unicast | 2D | Many | ✗ | ✗ | Multicast | Adaptive/SIC | Wireless/SDM | 2D | ✗ |
| [15] | Joint | Sub-6GHz | Multi-SBS | Unicast | 2D | One | ✗ | ✓ | Unicast | Adaptive | Wireless/SDM | 2D | ✗ |
| [14], [16] | Joint | Sub-6GHz | Multi-SBS | Multicast | 2D | ✗ | ✗ | ✗ | Unicast | Fixed | Wireless | ✗ | ✗ |
| [33] | Joint | Sub-6GHz | Multi-SBS | Unicast | 2D | One | ✗ | ✓ | Unicast | Adaptive | Wireless/SDM | 2D | ✗ |
| [19] | Joint | Sub-6GHz | Multi-SBS | Unicast | 2D | Many | ✗ | ✗ | Multicast | Adaptive | Wireless/TDM | 2D | ✗ |
| [11] | Joint | Sub-6GHz | Multi-SBS | Unicast | 2D | ✗ | ✗ | ✗ | Unicast | Adaptive/SIC | Wireless/SDM | 2D | ✗ |
| Proposed | Joint | Millimeter-wave | Multi-SBS | Unicast | 3D | Many | ✓ | ✓ | Multicast | Adaptive | Wireless/SDM | 3D | ✓ |
Table II: Parameters and variables of the system

| Parameters and variables | Notation |
|---|---|
| Multicast precoder from the MBS to SBS cluster B_l | m_l |
| Unicast precoder from SBS b to UE u | w_{b,u} |
| Binary variable for UE rate/SINR selection | α_{u,j} |
| Binary variable for SBS rate/SINR selection | β_{l,j} |
| Binary variable for UE association | κ_{b,u} |
| Number of transmit antennas at the MBS and SBSs | N^MBS_tx, N^SBS_tx |
| Maximum transmit power at the MBS and SBSs | P^MBS_tx, P^SBS_tx |
| Number of clusters in the system | L |
| Number of UEs per cluster | U |
| Number of SBSs per cluster | B |
| Number of predefined rate/SINR values | J^UE, J^SBS |
| Bandwidth of the access and backhaul networks | W^access_BW, W^backhaul_BW |
| Set of clusters | L = {1, ..., L} |
| Set of SBSs | B = ∪_{l∈L} B_l |
| Set of UEs | U = ∪_{l∈L} U_l |
| Set of predefined rate/SINR values at SBSs | J^SBS |
| Set of predefined rate/SINR values at UEs | J^UE |
| Set of UEs in the l-th cluster | U_l |
| Set of SBSs in the l-th cluster | B_l |
| Channel between the MBS and SBS b | g_b |
| Channel between SBS b and UE u | h_{b,u} |
Table III: Rates and target SINR values

| Coding rate (modulation) | Rate R^SBS_j [bps/Hz] | SINR Γ^SBS_j |
|---|---|---|
| 120/1024 (QPSK) | 0.2344 | 0.2159 |
| 308/1024 (QPSK) | 0.6016 | 0.6610 |
| 602/1024 (QPSK) | 1.1758 | 1.7474 |
| 466/1024 (QAM) | 2.7305 | 10.6316 |
| 948/1024 (QAM) | 5.5547 | 95.6974 |
Table IV: Simulation settings (columns 2-7: backhaul network; columns 8-12: access network)

| Scenario | N^MBS_tx | P^MBS_tx [dBm] | L | B | B_total | χ_backhaul | P^SBS_tx [dBm] | U | U_total | U_served | χ_access |
|---|---|---|---|---|---|---|---|---|---|---|---|
| S1 | 64 | 9, 12, ..., 27 | 2 | 3 | 6 | 0 | 6, 10, 14 | 6 | 12 | 3 | 0 |
| S2 | 16, 32, 48, 64 | 15, 18, ..., 36 | 1, 2, ..., 6 | 3 | 3, 6, ..., 18 | 0 | − | − | − | − | − |
| S2 | 64 | 15, 18, ..., 36 | 5 | 1, 2, ..., 6 | 5, 10, ..., 30 | 0 | − | − | − | − | − |
| S3 | 64 | 15, 18, ..., 36 | 5 | 3 | 15 | 0 | 0, 2, ..., 14 | 20 | 100 | 4 | 0 |
| S4 | 64 | 18, 27, 36 | 2, 3, ..., 6 | 3 | 6, 9, ..., 18 | 0 | 14 | 20 | 100 | 4 | 0 |
| S5 | 64 | 36 | 5 | 3 | 15 | [0, 1] | 14 | 20 | 100 | 4 | [0, 1] |
| S6 | 64 | 36 | 5 | 3 | 15 | 0 | 14 | 20 | 100 | 4 (slotted) | 0 |
[Figure 5 plots; legend: UB, LB, BnC-MISOCP, RnP-SOCP-1, RnP-SOCP-2. Panels: (a) varying P^MBS_tx when P^SBS_tx = 6 dBm; (b) varying P^MBS_tx when P^SBS_tx = 10 dBm; (c) varying P^MBS_tx when P^SBS_tx = 14 dBm (access throughput [Mbps] vs. P^MBS_tx [dBm]); (d) time complexity [ms] when P^SBS_tx = 14 dBm; (e) time complexity [s].]
[Figure 8 plots; legend: UB, LB, RnP-SOCP-1, RnP-SOCP-2. Panels: (a) P^MBS_tx = 18 dBm and P^SBS_tx = 14 dBm; (b) P^MBS_tx = 27 dBm and P^SBS_tx = 14 dBm; (c) P^MBS_tx = 36 dBm and P^SBS_tx = 14 dBm (access throughput [Mbps] vs. L); (d)-(k) frequency of allocated UE rates R^UE_1 to R^UE_5 for various L and P^MBS_tx, e.g., (d) L = 2 with P^MBS_tx = 18 dBm and P^SBS_tx = 14 dBm.]
Figure 10: Evaluation of Scenario S6. (a) Access throughput [Mbps] per time slot when serving U_served = 4 UEs per cluster per slot. (b) Cumulative access throughput per UE [Mbps] over time slots T_5 to T_50, displaying the individual rates of all UEs in cluster U_1. We observe that it is possible to serve all UEs in a system by allocating them in multiple slots, showing that RadiOrchestra is scalable. In addition, the UE rates can be adapted to enforce different priorities based on any network policy of the operator. In this example, we aimed at improving fairness among UEs.
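The slotted operation of Scenario S6 (U_served = 4 out of 20 UEs per cluster per slot) can be realized with a simple round-robin rotation. The paper's actual scheduler is not shown in this excerpt, so the sketch below is a hypothetical illustration of how rotating the served subset spreads service evenly across UEs, which is one way to obtain the fairness behavior described above.

```python
# Hypothetical round-robin slot allocation for Scenario S6: each cluster has
# 20 UEs but only U_served = 4 can be scheduled per slot, so rotating the
# selection serves every UE equally often across slots (fairness).
def round_robin_slots(num_ues: int = 20, served_per_slot: int = 4, num_slots: int = 10):
    schedule = []
    for t in range(num_slots):
        start = (t * served_per_slot) % num_ues
        slot = [(start + i) % num_ues for i in range(served_per_slot)]
        schedule.append(slot)
    return schedule

sched = round_robin_slots()
counts = [sum(u in slot for slot in sched) for u in range(20)]
print(counts)  # every UE is served exactly twice over 10 slots
```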
APPENDIX A: PROOF OF PROPOSITION 2

In constraints C 4 and C 5, the beamformer w_b,u and the binary variable κ_b,u are tied. This leads to zero-beamformers for unserved UEs. To ensure the same effect after removing the multiplicative coupling between w_b,u and κ_b,u, additional constraints are required. First, we define the auxiliary variable p_b,u, representing the power of the beamformer from SBS b to UE u, which leads us to declare the following constraint, C 17: p_b,u ≥ 0, ∀l ∈ L, b ∈ B_l, u ∈ U_l. Considering the newly introduced variable, constraint C 4 is redefined as C 18: Σ_{u∈U_l} p_b,u ≤ P_tx^SBS, ∀l ∈ L, b ∈ B_l. In addition, the power p_b,u of a beamformer needs to be zero for unserved UEs and positive for served UEs, which is enforced via C 19: p_b,u ≤ κ_b,u P_tx^SBS, ∀l ∈ L, b ∈ B_l, u ∈ U_l. To connect the beamformer w_b,u and its power p_b,u, we define ‖w_b,u‖_2^2 ≤ κ_b,u p_b,u, which ensures that the beamformer is a zero-vector when κ_b,u = 0. Note that ‖w_b,u‖_2^2 ≤ κ_b,u p_b,u is nonconvex, but it can be recast as an SOC constraint as shown in the following. Using the difference of squares, the product can be written as κ_b,u p_b,u = (1/4)[(κ_b,u + p_b,u)^2 − (κ_b,u − p_b,u)^2], which allows us to rearrange the constraint as the SOC constraint C 20: ‖[2 w_b,u^T, κ_b,u − p_b,u]^T‖_2 ≤ κ_b,u + p_b,u, ∀l ∈ L, b ∈ B_l, u ∈ U_l. After these changes, w_b,u and κ_b,u have been decoupled while still guaranteeing the same effect as if coupled. Thus, the product w_b,u κ_b,u can be replaced by w_b,u upon including C 17 − C 20. Then, constraint C 21 is obtained after replacing w_b,u κ_b,u by w_b,u in C 5.

APPENDIX B: PROOF OF PROPOSITION 3

We follow a similar procedure as in [20]. We exchange positions between the SINR denominator and the right-hand side (RHS) of C 21, and then apply the same rearrangement to both sides, thus yielding an equivalent constraint. To deal with this nonconvex constraint, we first derive expressions for its two cases, ∀u ∈ U_l, j ∈ J_UE. In case 1, the inequality is satisfied by default. Besides, it is possible to find an upper bound Q_u^2 to prevent using ∞.
By harnessing the big-M method, we can equivalently combine the two cases into C 21, shown at the top of this page. The upper bound is obtained by maximizing the LHS of C 21.

APPENDIX C: PROOF OF PROPOSITION 5

Assuming that x = [h_u^H W, σ_UE], constraint C 21 can be expressed in terms of x. Taking the square root at both sides and applying Jensen's inequality to the RHS expression, we obtain a relaxed form. When α_u,j = 1, the inequality is tight, because the RHS and LHS of the expression above become equivalent. As a result, the resulting expression is equivalent to C 21. Notice that the beamforming vectors are invariant to phase shift. In particular, w_u and w_u e^{jθ_u} yield the same received SINR at the UE u. Thus, it is possible to choose a phase e^{jθ_u} such that h_u^H w_u becomes purely real and nonnegative [48, ch. 18]. Therefore, the desired reformulation holds.

APPENDIX D: PROOF OF PROPOSITION 6

Note that |g_b^H m_l| ≥ Re{g_b^H m_l} always holds true. The inequality becomes tight when the phase of g_b^H m_l is zero [37], [49]. This is, in general, not true unless there is a single SBS per cluster. Using this conservative relation, we replace C 15 − C 16 by C 26 − C 27. However, these inequalities can be recast as convex SOC constraints, where Jensen's inequality has been applied to C 26.

APPENDIX E: PROOF OF PROPOSITION 8

We define the Lagrange dual function of P 1 as φ(λ_α, λ_β, λ_κ) = max_{Θ∈D} L(α, β, κ, λ_α, λ_β, λ_κ), where L(α, β, κ, λ_α, λ_β, λ_κ) = R_w−sum^access(α) − λ_α f_α(α) − λ_β f_β(β) − λ_κ f_κ(κ). In addition, we define

primal: p* = max_{Θ∈D} min_{λ_α,λ_β,λ_κ≥0} L(α, β, κ, λ_α, λ_β, λ_κ) = max(P 1),
dual: d* = min_{λ_α,λ_β,λ_κ≥0} max_{Θ∈D} L(α, β, κ, λ_α, λ_β, λ_κ) = min_{λ_α,λ_β,λ_κ≥0} φ(λ_α, λ_β, λ_κ).

According to the weak duality theorem, d* ≥ p* holds. (E.1) Thus, the Lagrangian L(α, β, κ, λ_α, λ_β, λ_κ) is monotonically decreasing with respect to λ_α, λ_β, λ_κ when Θ ∈ D.
Further, this means that φ(λ_α, λ_β, λ_κ) is monotonically decreasing with respect to λ_α, λ_β, λ_κ and is bounded by the optimal value of P 1. We distinguish the following two cases.

Case 1: Suppose that f_α(α_0) = 0, f_β(β_0) = 0, f_κ(κ_0) = 0 for some λ_α0 < ∞, λ_β0 < ∞, λ_κ0 < ∞, implying that α_0, β_0, κ_0 are binary. Therefore, α_0, β_0, κ_0 are also feasible for P 1. Replacing this solution in the primal problem, we obtain L(α_0, β_0, κ_0, λ_α0, λ_β0, λ_κ0) = R_w−sum^access(α_0) ≤ p*. Now, considering the dual problem and (E.1), we have that φ(λ_α0, λ_β0, λ_κ0) = L(α_0, β_0, κ_0, λ_α0, λ_β0, λ_κ0) = R_w−sum^access(α_0) ≥ p*, which implies that p* = d*, i.e., strong duality holds. Based on the previous result, we realize that φ(λ_α0, λ_β0, λ_κ0) = min_{λ_α,λ_β,λ_κ≥0} φ(λ_α, λ_β, λ_κ), which means that for any λ_α, λ_β, λ_κ such that λ_α0 < λ_α < ∞, λ_β0 < λ_β < ∞, λ_κ0 < λ_κ < ∞, the original and the penalized problems share the same optimal value and optimal solution. Thus, P 1 can be solved by means of its penalized counterpart for appropriately chosen large values of λ_α, λ_β, λ_κ.

Case 2: Suppose that f_α(α_0) > 0, f_β(β_0) > 0, f_κ(κ_0) > 0 for λ_α0 > 0, λ_β0 > 0, λ_κ0 > 0, implying that some elements of α_0, β_0, κ_0 take values between 0 and 1. From the dual problem, we have that φ(λ_α0, λ_β0, λ_κ0) → −∞. However, this contradicts the weak duality theorem, which states that φ(λ_α, λ_β, λ_κ) is bounded from below by the primal solution, which is at worst zero. Thus, this case is not valid.
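The argument above only uses the property that the penalty terms f_α, f_β, f_κ vanish exactly at binary points. Their explicit form is not shown in this excerpt; a common choice in such penalty reformulations, assumed here purely for illustration, is f(a) = Σ_i a_i(1 − a_i):

```python
# Assumed penalty form f(a) = sum(a_i * (1 - a_i)): nonnegative on [0, 1]
# and zero exactly when every a_i is binary, matching the role played by
# f_alpha, f_beta, f_kappa in the Lagrangian of Appendix E.
def penalty(a):
    assert all(0.0 <= x <= 1.0 for x in a), "relaxed variables live in [0, 1]"
    return sum(x * (1.0 - x) for x in a)

print(penalty([0.0, 1.0, 1.0, 0.0]))  # binary point -> 0.0
print(penalty([0.5, 1.0]))            # fractional entry -> 0.25 > 0
```

With this choice, driving the penalty to zero (Case 1 above) is the same as driving the relaxed variables to binary values.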
APPENDIX F: PROOF OF PROPOSITION 9

Note that q_α(α), q_β(β), q_κ(κ) are concave. Therefore, their first-order approximations are global upper bounds. Considering the expressions above, the objective function of P 1 can be rewritten as R(α, β, κ) = g_1(α, β, κ) − g_2(α, β, κ), whereas the objective of the approximated problem can be rewritten as R̃^(t)(α, β, κ) = g_1(α, β, κ) − g̃_2^(t)(α, β, κ). Since g_2(α, β, κ) ≤ g̃_2^(t)(α, β, κ), R̃^(t)(α, β, κ) is a lower bound for the objective of P 1, i.e., R̃^(t)(α, β, κ) ≤ R(α, β, κ). Further, the equality holds when α = α^(t−1), β = β^(t−1), κ = κ^(t−1), showing the tightness of the bound, whereas Θ^(t) is its optimal solution. For iteration t, we have that R(α, β, κ) ≥ R̃^(t)(α, β, κ) and R(α^(t−1), β^(t−1), κ^(t−1)) = R̃^(t)(α^(t−1), β^(t−1), κ^(t−1)). Using these relations, the iterate Θ^(t) is equally or more optimal for P 1 than (M^(t−1), W^(t−1), p^(t−1)) due to the linkage with C 20, and is thus more befitting for P 1 than Θ^(t−1). As a result, the sequence of points Θ^(t) constitutes a sequence of enhanced points for P 1. In addition, Θ^(t) is bounded because R̃^(t)(α, β, κ) is upper-bounded by R(α, β, κ), and R(α, β, κ) is upper-bounded by the multicast rate, which is ultimately constrained by the maximum transmit power of the MBS. By Cauchy's theorem, there must exist a convergent subsequence Θ^(t_n) such that

lim_{n→∞} R(α^(t_n), β^(t_n), κ^(t_n)) − R(α*, β*, κ*) = 0, (G.1)

where Θ* = (M*, W*, p*, α*, β*, κ*) is a limit point of Θ^(t_n). Thus, for each iteration t, there exists some n such that t_n ≤ t ≤ t_{n+1}. From (G.1) we obtain

ε^(t_n) = lim_{n→∞} R(α^(t_n), β^(t_n), κ^(t_n)) − R(α*, β*, κ*) = 0,
ε^(t_{n+1}) = lim_{n→∞} R(α^(t_{n+1}), β^(t_{n+1}), κ^(t_{n+1})) − R(α*, β*, κ*) = 0,

showing that ε^(t_n) ≤ ε^(t) ≤ ε^(t_{n+1}) and lim_{t→∞} R(α^(t), β^(t), κ^(t)) = R(α*, β*, κ*). Therefore, each accumulation point Θ* = (M*, W*, p*, α*, β*, κ*) is a KKT point [32], [50].
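The majorize-then-maximize mechanics behind this proof (replace the concave part g_2 by its first-order upper bound, maximize the resulting tight lower bound, repeat) can be seen on a toy one-dimensional difference-of-concave objective. The functions below are illustrative stand-ins, not the paper's R; the loop produces non-decreasing objective values and converges to a stationary (KKT) point, mirroring the statement above.

```python
import math

# Toy successive-convex-approximation (SCA) loop for a DC objective
# R(x) = g1(x) - g2(x) with g1(x) = log(1 + x) and g2(x) = sqrt(x), both
# concave on x > 0. Linearizing g2 at x_t gives a concave surrogate that
# lower-bounds R and is tight at x_t, so maximizing the surrogate yields a
# sequence of non-decreasing objective values.
def R(x):
    return math.log(1.0 + x) - math.sqrt(x)

x = 4.0
values = [R(x)]
for _ in range(500):
    # argmax of log(1+x) - [sqrt(x_t) + (x - x_t) / (2 sqrt(x_t))]:
    # d/dx = 1/(1+x) - 1/(2 sqrt(x_t)) = 0  ->  x = 2 sqrt(x_t) - 1
    x = max(2.0 * math.sqrt(x) - 1.0, 1e-9)
    values.append(R(x))

print(round(x, 3))  # close to 1.0, a stationary point of R on x > 0
```

Note that the limit x ≈ 1 is a stationary point of R, not necessarily its global maximizer; that is exactly the KKT-point guarantee the proof establishes.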
5G Ultra-Dense Cellular Networks. X Ge, S Tu, G Mao, C.-X Wang, T Han, IEEE Wireless Communications. 231X. Ge, S. Tu, G. Mao, C.-X. Wang, and T. Han, "5G Ultra-Dense Cellular Networks," IEEE Wireless Communications, vol. 23, no. 1, pp. 72-79, 2016.
Achieving Sustainable Ultra-Dense Heterogeneous Networks for 5G. J An, K Yang, J Wu, N Ye, S Guo, Z Liao, IEEE Communications Magazine. 5512J. An, K. Yang, J. Wu, N. Ye, S. Guo, and Z. Liao, "Achieving Sustainable Ultra-Dense Heterogeneous Networks for 5G," IEEE Com- munications Magazine, vol. 55, no. 12, pp. 84-90, 2017.
Backhauling 5G Small Cells: A Radio Resource Management Perspective. N Wang, E Hossain, V K Bhargava, IEEE Wireless Communications. 225N. Wang, E. Hossain, and V. K. Bhargava, "Backhauling 5G Small Cells: A Radio Resource Management Perspective," IEEE Wireless Communications, vol. 22, no. 5, pp. 41-49, 2015.
5G; NR; Integrated Access and Backhaul (IAB) Electromagnetic Compatibility (EMC). 3GPP. 175Technical Specification (TS) 38. version 16.0.03GPP, "5G; NR; Integrated Access and Backhaul (IAB) Electromagnetic Compatibility (EMC)," 3rd Generation Partnership Project (3GPP), Technical Specification (TS) 38.175, 11 2020, version 16.0.0.
Innovations in 5G Backhaul Technologies. G Americas, 06White PaperG. Americas, "Innovations in 5G Backhaul Technologies," White Paper, 06 2020.
Massive MIMO for Next Generation Wireless Systems. E G Larsson, O Edfors, F Tufvesson, T L Marzetta, IEEE Communications Magazine. 522E. G. Larsson, O. Edfors, F. Tufvesson, and T. L. Marzetta, "Massive MIMO for Next Generation Wireless Systems," IEEE Communications Magazine, vol. 52, no. 2, pp. 186-195, 2014.
Integrated Access and Backhaul in 5G mmWave Networks: Potential and Challenges. M Polese, M Giordani, T Zugno, A Roy, S Goyal, D Castor, M Zorzi, IEEE Communications Magazine. 583M. Polese, M. Giordani, T. Zugno, A. Roy, S. Goyal, D. Castor, and M. Zorzi, "Integrated Access and Backhaul in 5G mmWave Networks: Potential and Challenges," IEEE Communications Magazine, vol. 58, no. 3, pp. 62-68, 2020.
Millimeter Wave Beamforming for Wireless Backhaul and Access in Small Cell Networks. S Hur, T Kim, D J Love, J V Krogmeier, T A Thomas, A Ghosh, IEEE Trans. Commun. 6110S. Hur, T. Kim, D. J. Love, J. V. Krogmeier, T. A. Thomas, and A. Ghosh, "Millimeter Wave Beamforming for Wireless Backhaul and Access in Small Cell Networks," IEEE Trans. Commun., vol. 61, no. 10, pp. 4391-4403, 2013.
Joint Precoding and RRH Selection for User-Centric Green MIMO C-RAN. C Pan, H Zhu, N J Gomes, J Wang, IEEE Trans. Wireless Commun. 165C. Pan, H. Zhu, N. J. Gomes, and J. Wang, "Joint Precoding and RRH Selection for User-Centric Green MIMO C-RAN," IEEE Trans. Wireless Commun., vol. 16, no. 5, pp. 2891-2906, 2017.
User-Centric Joint Access-Backhaul Design for Full-Duplex Self-Backhauled Wireless Networks. E Chen, M Tao, N Zhang, IEEE Trans. Commun. 6711E. Chen, M. Tao, and N. Zhang, "User-Centric Joint Access-Backhaul Design for Full-Duplex Self-Backhauled Wireless Networks," IEEE Trans. Commun., vol. 67, no. 11, pp. 7980-7993, 2019.
NOMA Aided Interference Management for Full-Duplex Self-Backhauling HetNets. L Lei, E Lagunas, S Chatzinotas, B Ottersten, IEEE Communications Letters. 228L. Lei, E. Lagunas, S. Chatzinotas, and B. Ottersten, "NOMA Aided Interference Management for Full-Duplex Self-Backhauling HetNets," IEEE Communications Letters, vol. 22, no. 8, pp. 1696-1699, 2018.
Joint Scheduling and Beamforming Coordination in Cloud Radio Access Networks With QoS Guarantees. X Huang, G Xue, R Yu, S Leng, IEEE Transactions on Vehicular Technology. 657X. Huang, G. Xue, R. Yu, and S. Leng, "Joint Scheduling and Beam- forming Coordination in Cloud Radio Access Networks With QoS Guarantees," IEEE Transactions on Vehicular Technology, vol. 65, no. 7, pp. 5449-5460, 2016.
Energy Efficient Resource Allocation Optimization in Fog Radio Access Networks With Outdated Channel Knowledge. T H L Dinh, M Kaneko, E H Fukuda, L Boukhatem, IEEE Trans. Green Commun. Netw. 51T. H. L. Dinh, M. Kaneko, E. H. Fukuda, and L. Boukhatem, "En- ergy Efficient Resource Allocation Optimization in Fog Radio Access Networks With Outdated Channel Knowledge," IEEE Trans. Green Commun. Netw., vol. 5, no. 1, pp. 146-159, 2021.
Nonsmooth Optimization Algorithms for Multicast Beamforming in Content-Centric Fog Radio Access Networks. H T Nguyen, H D Tuan, T Q Duong, H V Poor, W.-J Hwang, IEEE Trans. Signal Process. 68H. T. Nguyen, H. D. Tuan, T. Q. Duong, H. V. Poor, and W.-J. Hwang, "Nonsmooth Optimization Algorithms for Multicast Beamforming in Content-Centric Fog Radio Access Networks," IEEE Trans. Signal Process., vol. 68, pp. 1455-1469, 2020.
Green Full-Duplex Self-Backhaul and Energy Harvesting Small Cell Networks With Massive MIMO. L Chen, F R Yu, H Ji, B Rong, X Li, V C M Leung, IEEE J. Sel. Topics Signal Process. 3412L. Chen, F. R. Yu, H. Ji, B. Rong, X. Li, and V. C. M. Leung, "Green Full-Duplex Self-Backhaul and Energy Harvesting Small Cell Networks With Massive MIMO," IEEE J. Sel. Topics Signal Process., vol. 34, no. 12, pp. 3709-3724, 2016.
Robust Multigroup Multicast Beamforming Design for Backhaul-Limited Cloud Radio Access Network. Y Chen, S He, Y Huang, J Ren, L Yang, IEEE Signal Processing Letters. 261Y. Chen, S. He, Y. Huang, J. Ren, and L. Yang, "Robust Multigroup Multicast Beamforming Design for Backhaul-Limited Cloud Radio Access Network," IEEE Signal Processing Letters, vol. 26, no. 1, pp. 189-193, 2019.
Joint User Association and Beamforming Design for Millimeter Wave UDN With Wireless Backhaul. G Kwon, H Park, IEEE J. Sel. Topics Signal Process. 3712G. Kwon and H. Park, "Joint User Association and Beamforming Design for Millimeter Wave UDN With Wireless Backhaul," IEEE J. Sel. Topics Signal Process., vol. 37, no. 12, pp. 2653-2668, 2019.
Joint Load Balancing and Interference Mitigation in 5G Heterogeneous Networks. T K Vu, M Bennis, S Samarakoon, M Debbah, M Latva-Aho, IEEE Trans. Wireless Commun. 169T. K. Vu, M. Bennis, S. Samarakoon, M. Debbah, and M. Latva-aho, "Joint Load Balancing and Interference Mitigation in 5G Heterogeneous Networks," IEEE Trans. Wireless Commun., vol. 16, no. 9, pp. 6032- 6046, 2017.
Joint Fronthaul Multicast Beamforming and User-Centric Clustering in Downlink C-RANs. B Hu, C Hua, J Zhang, C Chen, X Guan, IEEE Trans. Wireless Commun. 168B. Hu, C. Hua, J. Zhang, C. Chen, and X. Guan, "Joint Fronthaul Multicast Beamforming and User-Centric Clustering in Downlink C- RANs," IEEE Trans. Wireless Commun., vol. 16, no. 8, pp. 5395-5409, 2017.
Dynamic Rate Adaptation and Multiuser Downlink Beamforming Using Mixed Integer Conic Programming. Y Cheng, A Philipp, M Pesavento, EURASIP European Signal Processing Conference (EUSIPCO). Y. Cheng, A. Philipp, and M. Pesavento, "Dynamic Rate Adaptation and Multiuser Downlink Beamforming Using Mixed Integer Conic Programming," in EURASIP European Signal Processing Conference (EUSIPCO), 2012, pp. 824-828.
Joint Discrete Rate Adaptation and Downlink Beamforming Using Mixed Integer Conic Programming. Y Cheng, M Pesavento, IEEE Trans. Signal Process. 637Y. Cheng and M. Pesavento, "Joint Discrete Rate Adaptation and Downlink Beamforming Using Mixed Integer Conic Programming," IEEE Trans. Signal Process., vol. 63, no. 7, pp. 1750-1764, 2015.
Joint Network Optimization and Downlink Beamforming for CoMP Transmissions Using Mixed Integer Conic Programming. Y Cheng, M Pesavento, A Philipp, IEEE Trans. Signal Process. 6116Y. Cheng, M. Pesavento, and A. Philipp, "Joint Network Optimization and Downlink Beamforming for CoMP Transmissions Using Mixed Integer Conic Programming," IEEE Trans. Signal Process., vol. 61, no. 16, pp. 3972-3987, 2013.
Optimal Dynamic Point Selection for Power Minimization in Multiuser Downlink CoMP. D H N Nguyen, L B Le, T Le-Ngoc, IEEE Trans. Wireless Commun. 161D. H. N. Nguyen, L. B. Le, and T. Le-Ngoc, "Optimal Dynamic Point Selection for Power Minimization in Multiuser Downlink CoMP," IEEE Trans. Wireless Commun., vol. 16, no. 1, pp. 619-633, 2017.
Optimal Joint Base Station Assignment and Beamforming for Heterogeneous Networks. M Sanjabi, M Razaviyayn, Z.-Q Luo, IEEE Trans. Signal Process. 628M. Sanjabi, M. Razaviyayn, and Z.-Q. Luo, "Optimal Joint Base Station Assignment and Beamforming for Heterogeneous Networks," IEEE Trans. Signal Process., vol. 62, no. 8, pp. 1950-1961, 2014.
User Assignment in C-RAN Systems: Algorithms and Bounds. H Ghauch, M M U Rahman, S Imtiaz, C Qvarfordt, M Skoglund, J Gross, IEEE Trans. Wireless Commun. 176H. Ghauch, M. M. U. Rahman, S. Imtiaz, C. Qvarfordt, M. Skoglund, and J. Gross, "User Assignment in C-RAN Systems: Algorithms and Bounds," IEEE Trans. Wireless Commun., vol. 17, no. 6, pp. 3889-3902, 2018.
Mixed-Integer Semidefinite Relaxation of Joint Admission Control and Beamforming: An SOC-Based Outer Approximation Approach with Provable Guarantees. S X , -Y Ni, A , M.-C So, IEEE SPAWC. S. X.-Y. Ni and A. M.-C. So, "Mixed-Integer Semidefinite Relaxation of Joint Admission Control and Beamforming: An SOC-Based Outer Approximation Approach with Provable Guarantees," in IEEE SPAWC, 2018, pp. 1-5.
Joint User Grouping, Scheduling, and Precoding for Multicast Energy Efficiency in Multigroup Multicast Systems. A Bandi, M R B Shankar, S Chatzinotas, B Ottersten, IEEE Trans. Wireless Commun. 1912A. Bandi, M. R. B. Shankar, S. Chatzinotas, and B. Ottersten, "Joint User Grouping, Scheduling, and Precoding for Multicast Energy Efficiency in Multigroup Multicast Systems," IEEE Trans. Wireless Commun., vol. 19, no. 12, pp. 8195-8210, 2020.
Load Balancing User Association in Millimeter Wave MIMO Networks. A Alizadeh, M Vu, IEEE Trans. Wireless Commun. 186A. Alizadeh and M. Vu, "Load Balancing User Association in Millimeter Wave MIMO Networks," IEEE Trans. Wireless Commun., vol. 18, no. 6, pp. 2932-2945, 2019.
Optimal Design of Energy-Efficient Millimeter Wave Hybrid Transceivers for Wireless Backhaul. A Pizzo, L Sanguinetti, International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt. A. Pizzo and L. Sanguinetti, "Optimal Design of Energy-Efficient Millimeter Wave Hybrid Transceivers for Wireless Backhaul," in In- ternational Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), 2017, pp. 1-8.
Content-Centric Sparse Multicast Beamforming for Cache-Enabled Cloud RAN. M Tao, E Chen, H Zhou, W Yu, IEEE Trans. Wireless Commun. 159M. Tao, E. Chen, H. Zhou, and W. Yu, "Content-Centric Sparse Multicast Beamforming for Cache-Enabled Cloud RAN," IEEE Trans. Wireless Commun., vol. 15, no. 9, pp. 6118-6131, 2016.
Joint Base Station Clustering and Beamforming for Non-Orthogonal Multicast and Unicast Transmission With Backhaul Constraints. E Chen, M Tao, Y.-F Liu, IEEE Trans. Wireless Commun. 179E. Chen, M. Tao, and Y.-F. Liu, "Joint Base Station Clustering and Beamforming for Non-Orthogonal Multicast and Unicast Transmission With Backhaul Constraints," IEEE Trans. Wireless Commun., vol. 17, no. 9, pp. 6265-6279, 2018.
Joint Load Balancing and Interference Management for Small-Cell Heterogeneous Networks With Limited Backhaul Capacity. H H M Tam, H D Tuan, D T Ngo, T Q Duong, H V Poor, IEEE Trans. Wireless Commun. 162H. H. M. Tam, H. D. Tuan, D. T. Ngo, T. Q. Duong, and H. V. Poor, "Joint Load Balancing and Interference Management for Small-Cell Heterogeneous Networks With Limited Backhaul Capacity," IEEE Trans. Wireless Commun., vol. 16, no. 2, pp. 872-884, 2017.
Joint Beamformer Design for Wireless Fronthaul and Access Links in C-RANs. B Hu, C Hua, C Chen, X Guan, IEEE Trans. Wireless Commun. 175B. Hu, C. Hua, C. Chen, and X. Guan, "Joint Beamformer Design for Wireless Fronthaul and Access Links in C-RANs," IEEE Trans. Wireless Commun., vol. 17, no. 5, pp. 2869-2881, 2018.
Optimal and Approximation Algorithms for Joint Routing and Scheduling in Millimeter-Wave Cellular Networks. D Yuan, H.-Y Lin, J Widmer, M Hollick, IEEE/ACM Trans. Netw. 285D. Yuan, H.-Y. Lin, J. Widmer, and M. Hollick, "Optimal and Approxi- mation Algorithms for Joint Routing and Scheduling in Millimeter-Wave Cellular Networks," IEEE/ACM Trans. Netw., vol. 28, no. 5, pp. 2188- 2202, 2020.
SCAROS: A Scalable and Robust Self-Backhauling Solution for Highly Dynamic Millimeter-Wave Networks. A Ortiz, A Asadi, G H Sim, D Steinmetzer, M Hollick, IEEE J. Sel. Areas Commun. 3712A. Ortiz, A. Asadi, G. H. Sim, D. Steinmetzer, and M. Hollick, "SCAROS: A Scalable and Robust Self-Backhauling Solution for Highly Dynamic Millimeter-Wave Networks," IEEE J. Sel. Areas Commun., vol. 37, no. 12, pp. 2685-2698, 2019.
Study on Channel Model for Frequencies from 0.5 to 100 GHz. 3GPP, 3rd Generation Partnership Project (3GPP), Technical Report (TR) 38.901, 2017, version 14.00.
Distributed Beamforming for Multi-Group Multicasting Relay Networks. N Bornhorst, M Pesavento, A B Gershman, IEEE Trans. Signal Process. 601N. Bornhorst, M. Pesavento, and A. B. Gershman, "Distributed Beam- forming for Multi-Group Multicasting Relay Networks," IEEE Trans. Signal Process., vol. 60, no. 1, pp. 221-232, 2012.
5G; NR; Physical Layer Procedures for Data. 3GPP. 3rd Generation Partnership Project (3GPP), Technical Specification (TS) 38.214, 2020, version 16.2.03GPP, "5G; NR; Physical Layer Procedures for Data," 3rd Generation Partnership Project (3GPP), Technical Specification (TS) 38.214, 2020, version 16.2.0.
Integrated Link Adaptation and Power Control to Improve Error and Throughput Performance in Broadband Wireless Packet Networks. K K Leung, W Li-Chun, IEEE Trans. Wireless Commun. 14K. K. Leung and W. Li-Chun, "Integrated Link Adaptation and Power Control to Improve Error and Throughput Performance in Broadband Wireless Packet Networks," IEEE Trans. Wireless Commun., vol. 1, no. 4, pp. 619-629, 2002.
An Accurate Approximation of Resource Request Distributions in Millimeter Wave 3GPP New Radio Systems. R Kovalchukov, D Moltchanov, Y Gaidamaka, E Bobrikova, Internet of Things, Smart Spaces, and Next Generation Networks and Systems. O. Galinina, S. Andreev, S. Balandin, and Y. KoucheryavySpringer International PublishingR. Kovalchukov, D. Moltchanov, Y. Gaidamaka, and E. Bobrikova, "An Accurate Approximation of Resource Request Distributions in Millimeter Wave 3GPP New Radio Systems," in Internet of Things, Smart Spaces, and Next Generation Networks and Systems, O. Galinina, S. Andreev, S. Balandin, and Y. Koucheryavy, Eds. Cham: Springer International Publishing, 2019, pp. 572-585.
Mixed-integer Nonlinear Programming. N V Sahinidis, Optim. Eng. 202N. V. Sahinidis, "Mixed-integer Nonlinear Programming 2018," Optim. Eng., vol. 20, no. 2, p. 301-306, 2019.
Programming in a Linear Structure. G B Dantzig, uSA, Washington D.CG. B. Dantzig, "Programming in a Linear Structure," 1948, uSA, Washington D.C.
Multicast Multigroup Precoding and User Scheduling for Frame-Based Satellite Communications. D Christopoulos, S Chatzinotas, B Ottersten, IEEE Trans. Wireless Commun. 149D. Christopoulos, S. Chatzinotas, and B. Ottersten, "Multicast Multi- group Precoding and User Scheduling for Frame-Based Satellite Com- munications," IEEE Trans. Wireless Commun., vol. 14, no. 9, pp. 4695- 4707, 2015.
Joint Optimization of Cooperative Beamforming and Relay Assignment in Multi-User Wireless Relay Networks. E Che, H D Tuan, H H Nguyen, IEEE Trans. Wireless Commun. 1310E. Che, H. D. Tuan, and H. H. Nguyen, "Joint Optimization of Co- operative Beamforming and Relay Assignment in Multi-User Wireless Relay Networks," IEEE Trans. Wireless Commun., vol. 13, no. 10, pp. 5481-5495, 2014.
Numerical Optimization. J Nocedal, S Wright, Springer-VerlagNew York, USAJ. Nocedal and S. Wright, Numerical Optimization. New York, USA: Springer-Verlag, 2006.
Nonsmooth Optimization for Efficient Beamforming in Cognitive Radio Multicast Transmission. A H Phan, H D Tuan, H H Kha, D T Ngo, IEEE Trans. Signal Process. 606A. H. Phan, H. D. Tuan, H. H. Kha, and D. T. Ngo, "Nonsmooth Optimization for Efficient Beamforming in Cognitive Radio Multicast Transmission," IEEE Trans. Signal Process., vol. 60, no. 6, pp. 2941- 2951, 2012.
A Joint Solution for Scheduling and Precoding in Multiuser MISO Downlink Channels. A Bandi, M R B Shankar, S Chatzinotas, B Ottersten, IEEE Trans. Wireless Commun. 191A. Bandi, M. R. B. Shankar, S. Chatzinotas, and B. Ottersten, "A Joint Solution for Scheduling and Precoding in Multiuser MISO Downlink Channels," IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 475-490, 2020.
Optimal and Suboptimal Transmit Beamforming. M Bengtsson, B Ottersten, Handbook of Antennas in Wireless Communications, L. C. Godara. CRC PressM. Bengtsson and B. Ottersten, "Optimal and Suboptimal Transmit Beamforming," in Handbook of Antennas in Wireless Communications, L. C. Godara, Ed., CRC Press, 2001.
Distributed Peerto-Peer Beamforming for Multiuser Relay Networks. H Chen, A B Gershman, S Shahbazpanahi, IEEE International Conference on Acoustics, Speech and Signal Processing. H. Chen, A. B. Gershman, and S. Shahbazpanahi, "Distributed Peer- to-Peer Beamforming for Multiuser Relay Networks," in IEEE In- ternational Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009, pp. 2265-2268.
A General Inner Approximation Algorithm for Nonconvex Mathematical Programs. B R Marks, G P Wright, Operations Research. 264B. R. Marks and G. P. Wright, "A General Inner Approximation Al- gorithm for Nonconvex Mathematical Programs," Operations Research, vol. 26, no. 4, pp. 681-683, 1978.
DOI: 10.48550/arxiv.2207.10397 · arXiv: 2207.10397 · corpus ID: 250920542 · PDF: https://export.arxiv.org/pdf/2207.10397v2.pdf
CODET: CODE GENERATION WITH GENERATED TESTS
Bei Chen [email protected]
Microsoft Corporation
Fengji Zhang [email protected]
Microsoft Corporation
Anh Nguyen [email protected]
Microsoft Corporation
Daoguang Zan
Microsoft Corporation
Zeqi Lin [email protected]
Microsoft Corporation
Jian-Guang Lou [email protected]
Microsoft Corporation
Weizhu Chen [email protected]
Microsoft Corporation
ABSTRACT
The task of generating code solutions for a given programming problem can benefit from the use of pre-trained language models such as Codex, which can produce multiple diverse samples. However, a major challenge for this task is to select the most appropriate solution from the multiple samples generated by the pretrained language models. A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but the manual creation of such test cases is often costly and time-consuming. In this paper, we propose a novel method, CODET, that leverages the same pre-trained language models to automatically generate test cases for the code samples, thus reducing the human effort and increasing the coverage of the test scenarios. CODET then executes the code samples using the generated test cases and performs a dual execution agreement, which considers both the consistency of the outputs against the generated test cases and the agreement of the outputs with other code samples. We conduct comprehensive experiments on four benchmarks, HumanEval, MBPP, APPS, and CodeContests, using five different pre-trained language models with varying sizes and capabilities. Our results show that CODET can significantly improve the performance of code solution selection over previous methods, achieving remarkable and consistent gains across different models and benchmarks. For instance, CODET improves the pass@1 metric on HumanEval to 65.8%, which represents an absolute improvement of 18.8% over the code-davinci-002 model, and an absolute improvement of more than 20% over the previous state-of-the-art results. * The first three authors contributed equally.
INTRODUCTION
Despite the remarkable progress in pre-training techniques for code generation, selecting a single correct solution from multiple candidates generated by large language models remains a hard problem. For instance, Codex (Chen et al., 2021), a state-of-the-art pre-trained language model for code generation, can achieve a pass@100 (pass if one or more among 100 generated solutions for a given problem can pass the corresponding test cases) of 77.4%, but a pass@1 (correct rate of a single solution) of only 33.5% on the HumanEval benchmark (Chen et al., 2021). This huge gap limits the practical usefulness of code generation models and motivates us to explore how to pick the correct or best solution from multiple candidates.
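For context, pass@k numbers like these are conventionally computed with the unbiased estimator introduced in the Codex paper (Chen et al., 2021): draw n ≥ k samples per problem, count the number c of correct ones, and estimate pass@k = 1 − C(n−c, k)/C(n, k).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator of Chen et al. (2021):
    n = total generated samples, c = number of correct samples, k <= n."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(100, 50, 1))  # -> 0.5
```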
A straightforward way to verify the correctness of a solution is to execute it and check if it passes all corresponding test cases. This execution-guided approach has been widely adopted in various code-related tasks, such as code generation (Chen et al., 2021; Li et al., 2022b; Shi et al., 2022), code translation (Roziere et al., 2021), and program synthesis (Chen et al., 2018; Ellis et al., 2019). However, this approach relies heavily on the quality and quantity of test cases, which are often costly and time-consuming to create and maintain. Moreover, in real-world applications like Copilot, a code generation tool that assists developers in writing code, it is unrealistic to expect users to provide test cases for every problem they want to solve. Therefore, we propose to automatically generate test cases for arbitrary programming problems and use them to quickly verify any solution.

Figure 1: The illustration of CODET. Both the code solutions and the test cases are generated by the pre-trained language model. The best code solution is then selected by a dual execution agreement.
In this paper, we propose CODET: CODE generation with generated Test-driven dual execution agreement, as illustrated in Figure 1. First, we leverage the same pre-trained language model that generates code solutions, such as Codex, to generate a large number of test cases for each programming problem by providing an elaborate instruction as prompt. Next, we use a dual execution agreement approach inspired by the classical RANSAC algorithm (Fischler & Bolles, 1981). We execute each generated code solution on each generated test case, and iteratively find multiple groups of code solution and test case pairs. Each group, or consensus set, has solutions that pass the same test cases, indicating that they have the same functionality, even if they are different in implementation. We expect that a solution that passes more test cases is more correct, and that a solution that has more similar solutions, i.e., solutions in the same consensus set, is more consistent with the problem specification. So, we rank each consensus set by both the number of test cases and solutions in it, and choose the best solution from the highest-ranked consensus set.
Our method is simple and efficient, as it does not require any labelled data or additional rankers, but it achieves surprisingly exceptional performance. We evaluate our method on five different pre-trained language models for code generation: three OpenAI Codex models (Chen et al., 2021), INCODER (Fried et al., 2022b), and CODEGEN (Nijkamp et al., 2022), as well as four established benchmarks for code generation: HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), APPS (Hendrycks et al., 2021), and CodeContests (Li et al., 2022b). The experimental results show that our method can effectively select the correct solution from multiple candidates, improving the pass@1 score significantly on all benchmarks in the zero-shot setting. For instance, CODET achieves improvements using code-davinci-002: HumanEval (47.0% → 65.8%), MBPP (58.1% → 67.7%), APPS INTRODUCTORY (27.2% → 34.6%), and CodeContests (0.7% → 2.1%). Moreover, when we combine code-davinci-002, the most powerful pre-trained model, and CODET, we outperform previous state-of-the-art methods by a large margin, e.g., HumanEval: 42.7% (Inala et al., 2022) → 65.8%. We also conduct a thorough analysis to provide more insights. Our work is publicly available at https://github.com/microsoft/CodeT.
METHODOLOGY
The task of code generation is to solve a programming problem: generate code solution x based on context c. As shown in Figure 2, context c contains a natural language problem description in the form of a code comment and a code snippet that includes statements such as imports and the function header. A code solution is a code snippet that solves the programming problem described in the context. Generally, we sample a set of code solutions, denoted as X = {x 1 , x 2 , · · ·, x N }, based on the context c using a pre-trained language model M, which can be formulated as X = M(c). Our goal is to select the best code solution x̂ from the set of generated code solutions X, where x̂ is the most likely solution to correctly solve the given programming problem. To this end, we propose CODET in the hope of unleashing the inherent power of the pre-trained language model M. Specifically, we use M to generate test cases for the programming problem (Section 2.1), and then select the best code solution x̂ based on a dual execution agreement (Section 2.2).

Figure 2: Code generation and test case generation: an example from the HumanEval benchmark. Example input-output cases are removed from the context.
TEST CASE GENERATION
Besides generating code solutions, we also need to generate test cases to evaluate the correctness of the code solutions. A test case is a pair of input and expected output for the function defined in the context. For example, Figure 2 shows a test case for the programming problem of checking whether there exist close elements in a list that are less than a threshold. To generate test cases, we use the same pre-trained language model M that we use for generating code solutions, but we add an instruction p to the context c as a prompt to indicate that we want test cases instead of code solutions. As shown in Figure 2, the instruction p consists of three parts: (1) a "pass" statement as a placeholder of the function body, which signals that we do not need to generate code for the function, (2) a comment "check the correctness of [entry point]" to clarify the intention of generating test cases, where "[entry point]" is the name of the function, and (3) an "assert" statement to start the test case generation, which specifies the format of the test cases as input-output pairs.
We then feed the concatenated context and instruction, concat(c, p), to the language model M, and sample a set of test cases, denoted as Y = {y 1 , y 2 , · · ·, y M }, from the model output. The process of test case generation can be formulated as Y = M(concat(c, p)). The language model will try to complete the instruction by generating plausible input-output pairs for the function. Note that we remove all example input-output cases from the context c before generating code solutions and test cases, to avoid exposing real test cases to the language model and to increase the diversity and difficulty of the generated test cases.
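As a concrete sketch, the prompt assembly above amounts to plain string concatenation. The helper below is illustrative only: the placeholder and comment strings follow the description in the text, but the helper name and the example context are our own assumptions.

```python
def build_test_prompt(context: str, entry_point: str) -> str:
    # Instruction p has three parts: a "pass" body placeholder, a comment
    # stating the intention, and an "assert" line that starts the first test.
    instruction = (
        "    pass\n\n"
        f"# check the correctness of {entry_point}\n"
        f"assert {entry_point}("
    )
    return context + instruction

# Hypothetical context: function header plus docstring, example I/O removed.
prompt = build_test_prompt(
    "def has_close_elements(numbers, threshold):\n"
    '    """Check if any two numbers are closer than threshold."""\n',
    "has_close_elements",
)
```

Feeding `prompt` to the model then lets it complete the open "assert" line with plausible input-output pairs.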
DUAL EXECUTION AGREEMENT
In this subsection, we explain how we select the best code solution x̂ from the set of generated code solutions X = {x 1 , x 2 , · · ·, x N }, using the set of generated test cases Y = {y 1 , y 2 , · · ·, y M } as a criterion. We can execute a code solution x on a test case y, which means running the function defined by x on the input part of y and comparing the output with the output part of y. If the code solution x can be executed without errors and the output matches the expected output, then we say the code solution x can pass the test case y. Furthermore, we say there is a functionality agreement between two code solutions x i and x j if they can pass the same set of test cases in Y. Our approach is based on the following assumptions: (1) the code solutions and the test cases are independently and randomly sampled from the pre-trained language model M given a certain programming problem, and (2) incorrect code solutions are often diverse, and the probability of having a functionality agreement between two incorrect code solutions by chance is very low. These assumptions are similar to those of the classical RANSAC algorithm (Fischler & Bolles, 1981), which is a robust method for finding consensus among noisy data. Inspired by RANSAC, we propose our approach CODET to perform dual execution agreement, which is an iterative approach as follows:
• We randomly select a pair (x, y) from the set of all possible pairs D = {(x, y)|x ∈ X, y ∈ Y}. We then try to execute the code solution x on the test case y. If x can pass y, then we say that the pair (x, y) is a hypothetical inlier, because it hypothetically describes the correct functionality for the programming problem. Otherwise, we say that (x, y) is an outlier, because it fails to describe the correct functionality. Figure 3 shows a simple example of the programming problem "return the square of a number". (x 1 , y 1 ) and (x 3 , y 2 ) are two of the hypothetical inliers, while (x 1 , y 4 ) and (x 3 , y 1 ) are two of the outliers.

Table 1: Statistics of benchmarks: the total number of problems in the benchmark (Problems), the average number of ground-truth test cases per problem (GT Tests), and the number of sampling code solutions for each problem (n).
• If (x, y) is a hypothetical inlier, we collect all other pairs from D that agree with this hypothetical inlier, forming a set S called consensus set. To find the pairs that agree with (x, y), we first find all test cases that x can pass, denoted as S y . Then, we find all code solutions that can pass exactly the same test cases as x, denoted as S x . Finally, the consensus set is the set of all pairs that consist of a code solution from S x and a test case from S y , i.e., S = {(x, y)|x ∈ S x , y ∈ S y }. For example in Figure 3, we can get S x = {x 1 , x 2 }, S y = {y 1 , y 2 , y 3 } from the hypothetical inlier (x 1 , y 1 ) (shown in green box), and S x = {x 3 }, S y = {y 2 , y 3 , y 4 , y 5 } from (x 3 , y 2 ) (shown in purple box).
• We score the consensus set as f (S) = |S x ||S y |, where |S x | is the number of code solutions in S x and |S y | is the number of test cases in S y . This score is equal to the number of pairs in the consensus set. The intuition is that the more pairs that agree with the hypothetical functionality, the more likely this functionality is correct, according to our assumptions. Following the example in Figure 3, the consensus set scores are 6 and 4 for the hypothetical inliers (x 1 , y 1 ) and (x 3 , y 2 ), respectively.
We repeat the above procedure for a fixed number of times, each time producing a consensus set with its score. Finally, we get the best code solution x̂ by selecting any code solution from the consensus set with the highest score. If we want to obtain k code solutions, we can select the top k consensus sets with the highest scores, and one code solution is picked from each of the k consensus sets.
In practice, when the number of code solutions in D is not large, we can simplify the above method by examining all possible pairs in D instead of sampling pairs from D. Specifically, for each code solution x ∈ X, we run it with every test case in Y and keep track of which test cases it passes. We group together code solutions that pass the same test cases, because they have the same functionality. This way, we divide all code solutions in X into groups based on their functionality, which we write as X = {S 1 x , S 2 x , · · ·, S K x }, where K is the number of code solution groups. Each group S x has a set of test cases that it passes, which we write as S y . Then, we get K consensus sets, each of which has the form S = {(x, y)|x ∈ S x , y ∈ S y }. We can score each consensus set by f (S) = |S x ||S y |, as before. This naive version captures the same underlying intuition, but it finds all consensus sets right away, without sampling pairs repeatedly.
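The simplified (non-iterative) version can be sketched in a few lines of Python. Here `passes(x, y)` is a placeholder for sandboxed execution of a solution on a test case, and the toy solutions and tests are our own example, not from the paper.

```python
from collections import defaultdict

def dual_execution_agreement(solutions, test_cases, passes):
    """Group solutions by the exact set of test cases they pass and rank the
    resulting consensus sets by f(S) = |S_x| * |S_y|."""
    groups = defaultdict(list)  # frozenset of passed tests -> solutions
    for x in solutions:
        passed = frozenset(y for y in test_cases if passes(x, y))
        groups[passed].append(x)
    # Sort consensus sets best-first by their score.
    return sorted(groups.items(),
                  key=lambda kv: len(kv[1]) * len(kv[0]),
                  reverse=True)

# Toy problem "return the square of a number": two equivalent correct
# solutions, one incorrect one, and three generated tests (one of them wrong).
sols = [lambda n: n * n, lambda n: n ** 2, lambda n: n * 2]
tests = [(2, 4), (3, 9), (3, 6)]
ranked = dual_execution_agreement(sols, tests, lambda f, t: f(t[0]) == t[1])
best = ranked[0][1][0]  # any solution from the highest-scoring consensus set
```

Here the two correct solutions pass tests (2, 4) and (3, 9) for a score of 2 × 2 = 4, beating the incorrect solution's score of 1 × 2 = 2.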
EXPERIMENTAL SETUP
Models Our experiments are based on Codex (Chen et al., 2021), INCODER (Fried et al., 2022a) and CODEGEN (Nijkamp et al., 2022). Codex is a descendant of GPT-3 (Brown et al., 2020) and proficient in understanding the provided context and generating functional programs. We use three Codex models with different capabilities provided by OpenAI: code-cushman-001, code-davinci-001, and code-davinci-002. INCODER is a unified generative model that can perform left-to-right code generation and code infilling, while CODEGEN is a family of large-scale language models for conversational program synthesis. We use the INCODER 6.7B version (INCODER-6B) and the CODEGEN 16B Python mono-lingual version (CODEGEN-MONO-16B).
Table 2: Pass@k (%) on the HumanEval and MBPP benchmarks. AlphaCode-C is our replication of the clustering method in Li et al. (2022b). The numbers in red indicate the absolute improvements of CODET over baseline on pass@1 and pass@10. We also list the baseline results from Fried et al. (2022a) and Nijkamp et al. (2022) for reference in gray, where the settings of context are not exactly the same as ours. For CODET, temperature is set to 0.8 and sampling number is set to 100. We do not show CODET pass@100, since it is the same as the baseline pass@100.
Metrics and Baseline
We use the metric pass@k (with n samples) for performance evaluation and take advantage of ground-truth test cases to determine the functional correctness of code solutions. For each problem, we sample n code solutions and then select k of them for evaluation. If any of the k code solutions passes all ground-truth test cases, the problem is considered solved. Then pass@k is the percentage of solved problems. We use the unbiased definition of pass@k as our baseline (Chen et al., 2021), where k solutions are randomly picked from n samples. Our CODET uses a dual execution agreement mechanism to select k solutions from n samples, as mentioned in Section 2.2. In addition, we include a clustering method from Li et al. (2022b) for comparison, denoted as AlphaCode-C. Our replication is to use the test inputs generated by CODET, run the solutions on the test inputs, group the solutions by test outputs, and rank the clusters by size (details in Appendix I).
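For reference, the unbiased pass@k estimator of Chen et al. (2021) computes, for each problem with n samples of which c are correct, the probability that at least one of k randomly drawn solutions is correct:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than k: some draw must be correct
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 100 samples of which 20 are correct, a single random pick
# succeeds with probability 0.2.
assert abs(pass_at_k(100, 20, 1) - 0.2) < 1e-9
```

The benchmark-level number is this quantity averaged over problems; CODET differs only in that the k evaluated solutions are chosen by consensus-set rank rather than at random.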
Benchmarks We conduct experiments on four public code generation benchmarks in the zero-shot setting. The statistics of the benchmarks are shown in Table 1. To enable zero-shot inference, we construct the context for APPS and CodeContests as follows: the original problem description is treated as a comment, where input-output examples are removed, and a simple function header "def solution(stdin: str) -> str:" is placed after the comment to accommodate the input/output data format. More implementation details can be found in Appendix A.
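As an illustrative sketch of this preprocessing (the helper name and example description are ours), the context is simply the description commented out, followed by the fixed function header:

```python
def build_apps_context(description: str) -> str:
    # Turn the problem description into a comment (example I/O is assumed
    # to have been removed upstream) and append the stdin->stdout header.
    comment = "\n".join("# " + line for line in description.splitlines())
    return comment + "\ndef solution(stdin: str) -> str:\n"

ctx = build_apps_context("Read an integer n.\nPrint n doubled.")
```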
EXPERIMENTAL RESULTS
In this section, we evaluate CODET on five different pre-trained models and four benchmarks to verify its effectiveness, followed by test case analysis and case studies to provide more insights.
RESULTS ON HUMANEVAL AND MBPP
The experimental results of various models on the HumanEval and MBPP benchmarks are summarized in Table 2. If we compare the pass@100 to pass@1 in the Baseline column, it is clear that the former is significantly better than the latter, indicating the potential to select the best code solution from the 100 generated samples.

Table 3: Pass@k (%) results on the APPS and CodeContests benchmarks using code-davinci-002 in the zero-shot setting. The numbers in red indicate the absolute improvements of CODET over baseline on pass@1, pass@10 and pass@100. For CODET, temperature is set to 0.8 and sampling number is set to 50 for APPS and 1,000 for CodeContests.
For three Codex models, when we compare the CODET column with the Baseline column, CODET pass@1 achieves an absolute improvement of about 10% over the baseline pass@1. The improvements are consistently above 10% on HumanEval. Surprisingly, even for the strongest baseline, code-davinci-002, the improvement is 18.8%, boosting the pass@1 to 65.8%, which is a 20+% absolute improvement over the best previously reported results (Inala et al., 2022). We attribute this larger improvement to the higher quality of test cases generated by code-davinci-002, providing a deeper analysis in Section 4.3. CODET also achieves exceptional performance on the MBPP benchmark, although the magnitude of the improvements is slightly less than that of HumanEval. Using the code-davinci-002 as an example, the pass@1 improves by 9.6%. We also report pass@2 and pass@10 of CODET to further show its superiority. The pass@2 results of CODET are close to the baseline pass@10 results. Meanwhile, the improvements on pass@10 are also consistently over 10% on the HumanEval benchmark.
The experimental results of INCODER-6B and CODEGEN-MONO-16B further verify the effectiveness of CODET. It is obvious that CODET can significantly improve the pass@1, with absolute improvements in the range of 4.2% to 13.1%. INCODER-6B achieves the greatest improvement with a gain of 13.1% on the MBPP benchmark. Similar to the experimental results of Codex, the pass@2 results are close to the baseline pass@10. All the results demonstrate that CODET can boost the performance of various pre-trained language models consistently.
As for AlphaCode-C, it is consistently inferior to CODET on both benchmarks using different models, demonstrating the superiority of our dual execution agreement that takes test case information into consideration. In addition, we notice that duplication exists in the generated code solutions and test cases. We perform an ablation study in Appendix D to show that de-duplication has little influence on the results of CODET. Moreover, we discuss the sensitivity of CODET to the temperature in Appendix E, showing the rationality of choosing a rather high temperature at 0.8.
RESULTS ON APPS AND CODECONTESTS
We also conduct experiments on two more challenging benchmarks, APPS and CodeContests. We build the zero-shot versions of APPS and CodeContests to be in line with our setting of HumanEval and MBPP by removing the example input-output cases in the problem descriptions. We employ code-davinci-002 for code solution and test case generation. The sampling number is set to 50 for APPS to save computation cost on the 5,000 testing problems, while for CodeContests, following Li et al. (2022b), the sampling number is set to 1,000 to solve especially hard problems. From the results summarized in Table 3, we can clearly observe the consistent performance improvements on both benchmarks using CODET. The absolute pass@1 improvement is 7.4% for introductory problems in APPS, while the improvements are not significant for competition-level problems in APPS and CodeContests, indicating their difficulty. In addition, we notice that code-davinci-002 may generate many trivial code solutions for the problems in APPS and CodeContests due to the superior difficulty of these two benchmarks. We perform a comprehensive study in Appendix F to demonstrate the robustness of CODET to this issue. Inspired by Chen et al. (2021) and Li et al. (2022b), we also conduct experiments in the one-shot setting, which is detailed in Appendix G.

Table 4: Pass@k (%) on the HumanEval and MBPP benchmarks with code-cushman-001, code-davinci-001, INCODER, and CODEGEN using the test cases generated by code-davinci-002. The numbers in orange indicate the absolute improvements of pass@k using code-davinci-002 test cases over that using their own generated test cases.
ANALYSIS ON TEST CASES
The test cases are vital to CODET since the core idea is based on test-driven execution agreement. Hence, in this subsection, we analyze the test cases by answering the following research questions.
Q1. What is the quality of the generated test cases?
We evaluate the correctness of the generated test cases using the canonical solutions. A test case is considered correct if the canonical solution can pass it. Figure 4a summarizes the distributions of test case accuracy on HumanEval, where the horizontal axis represents the accuracy value for each problem and the vertical axis represents the probability density of problems with the corresponding accuracy value. We can see that the test cases generated by the Codex models are of much higher accuracy than those of CODEGEN/INCODER. Besides accuracy, we also introduce the test case toxicity rate as a measurement of quality. We consider a test case to be "toxic" if any generated code solution can pass it while the canonical solution cannot. Toxic test cases may hinder the scoring of consensus sets and lead to the failure of CODET. As shown in Figure 4b, we can find that the toxicity rate correlates highly with the test case accuracy with respect to different models, where the proportions of toxic test cases for the Codex models are smaller than for CODEGEN/INCODER. We also evaluate the code coverage of the generated test cases using two coverage criteria in Appendix H.2, where the Codex models still outperform CODEGEN/INCODER with an average coverage of over 95%. Comparing the test case quality and the performance of CODET shown in Table 2, we can find that the quality of test cases strongly correlates with the performance gain using CODET for different models.
Q2. Can better test cases further boost the performance of mediocre models?
From the above discussion with Figure 4, we can find that code-davinci-002 is the most capable model for generating high-quality test cases. Hence, we conduct an experiment to boost the performance of the other four models (code-cushman-001, code-davinci-001, INCODER, and CODEGEN) using test cases generated by code-davinci-002. Table 4 summarizes the performance gain with respect to different models on the HumanEval and MBPP benchmarks. In general, using the test cases generated by code-davinci-002 can significantly improve the performance over using the test cases generated by the less capable models themselves.

Figure 5: Case studies with code-cushman-001 on the HumanEval benchmark, with correct and incorrect consensus sets: (a) the below_threshold problem ("Return True if all numbers in the list l are below threshold t"); (b) the sort_array problem ("return a copy of the given array after sorting, you will sort the given array in ascending order if the sum(first index value, last index value) is odd, or sort it in descending order if the sum(first index value, last index value) is even").

Q3. How effective is CODET when there are fewer test cases?

From Table 5, we can conclude that using more test cases in CODET generally leads to better performance, while the performance gap narrows when Sampling Number ≥ 50 and Limit ≥ 3. Moreover, CODET improves the pass@1 by 9.5% with only 10 test cases using code-davinci-002, suggesting high test case efficiency. We can use a smaller Sampling Number in real-world applications to balance performance and computation cost. More results can be found in Appendix H.3.
CASE STUDY
In CODET, we design the dual execution agreement based on the idea that a good code solution can pass the most test cases and agree with the most solutions of the same functionality. We use "dual" because both the code solutions and the test cases are critical. Figure 5a shows a case from the HumanEval benchmark using code-cushman-001. The highest scoring consensus set has the correct functionality that returns true if all numbers in the list are below threshold t, while the consensus set ranked 2 does not understand the boundary condition exactly. The solutions in the second consensus set can pass more test cases (i.e., 226) than that in the first consensus set (i.e., 218). However, considering both code solutions and test cases, CODET can successfully rank the consensus sets and find the correct solutions. Such cases are not rare, suggesting that our design of the dual execution agreement is reasonable. For further statistical demonstration, we conduct an ablation study to score the consensus set by considering only the number of code solutions or test cases. The results again support our claim, as detailed in Appendix I.
CODET is empowered by the pre-trained language models, but it is also limited by them. Therefore, the second assumption made in Section 2.2 does not always hold, leading to error cases where the correct code solution is generated, but is not in the top 1 consensus set. For CODET with code-cushman-001 on the HumanEval benchmark, we find that 53 out of 164 programming problems belong to this situation. We manually investigated these problems and found that 20% of them can be blamed on issues such as ambiguous problem descriptions, uncovered corner cases, and lack of import statements, while the remaining problems are attributed to the failure of the model to understand the problem descriptions. Figure 5b shows an error case caused by ambiguity. The correct understanding of the description "sum(first index value, last index value)" is to add the first and last values, while the code solutions that sum all values from the first to the last are ranked top 1. More real cases can be found in Appendix J. We hope the error analysis can provide inspiration for future studies on improving code generation for more difficult programming problems.
RELATED WORK
Code Generation with Large Models Recently, a number of large pre-trained language models have been proposed for code generation. Benefiting from billions of trainable parameters and massive publicly available source code, models can achieve surprisingly good performance. For instance, AlphaCode (Li et al., 2022b) reached the level of an average human competitor in programming competitions.

Code Selection from Multiple Samples Although large models have achieved great performance in code generation, the models need to sample many times to find the correct answer. Recently, several approaches were proposed to tackle this issue. In the domain of solving math word problems, Cobbe et al. (2021) chose the one with the highest rank by a trained verifier, and Shen et al. (2021) proposed to jointly train the generator and ranker through a multi-task framework. In the domain of general purpose code generation, Inala et al. (2022) trained a fault-aware ranker. Moreover, some work has been proposed to leverage the execution information (Shi et al., 2022; Li et al., 2022b; Lahiri et al., 2022). Unlike previous works that require model training, pre-existing test cases, or user interactions, we let the large models generate test cases for themselves and automatically rank the solutions based on the test-driven dual execution agreement. The idea of ranking based on agreement also appears in the domain of reasoning (Li et al., 2022a).
CONCLUSION AND FUTURE WORK
In this paper, we propose a simple yet effective approach, called CODET, leveraging pre-trained language models to generate both the code solutions and the test cases. CODET executes the code solutions using the test cases and chooses the best solution based on the dual execution agreement. We demonstrate that the dual agreement with both the test cases and other solutions is critical to the success of CODET, perform a thorough analysis on the quality of generated test cases and their impact on CODET, and study cases to provide more insights. Experimental results clearly demonstrate the superiority of CODET, improving the pass@1 numbers significantly on various benchmarks. Challenges remain: CODET only works for executable code generation, and it introduces extra computation cost for test case generation. In future work, we will explore ways to tackle these challenges and improve CODET to solve more difficult programming problems.
A IMPLEMENTATION DETAILS

We follow Chen et al. (2021) to truncate the generated content by five stop sequences: "\nclass", "\ndef", "\n#", "\nif", and "\nprint". For the implementation of INCODER and CODEGEN, we use the HuggingFace transformers library (Wolf et al., 2019) and run both models with half precision. In addition, when the number of consensus sets in CODET is smaller than k, the selection is done from the highest scoring consensus set to the lowest. When reaching the set with the lowest score, it repeats from the highest scoring consensus set. In most cases, the number of consensus sets is larger than k, as shown in Figure 6.
B RESULTS ON ORIGINAL HUMANEVAL
As mentioned in Section 3, for all benchmarks, we remove the example input-output cases from the original contexts to avoid exposing real test cases. To study the influence of such modification, we take HumanEval as an example and perform an additional experiment with its original contexts. The results are summarized in Table 6. On the one hand, the baseline pass@10 and pass@100 results on the original HumanEval benchmark outperform the modified version, which is reasonable because the example input-output cases may provide useful information for code generation. Nevertheless, the pass@1 results on the original benchmark are basically the same or even worse than the modified version, suggesting that the Codex models have not fully understood the semantics of the example input-output cases provided in the contexts. On the other hand, the performance of CODET is significantly improved using the original benchmark. This is as expected because the original contexts used for test case generation include real test cases, which could be borrowed by the models during the generation. Such real test cases will greatly empower CODET to distinguish correct code solutions. Hence, in our experiments, it is indispensable to remove the example input-output cases to avoid exposing the real test cases. In this way, the effectiveness of CODET can be fairly verified.
C ANALYSIS ON CODE SOLUTIONS
In CODET, code solutions that can pass exactly the same test cases are considered consistent in functionality and are grouped into the same consensus set. Since we employ top p sampling with a rather high temperature of 0.8, the functionality of the code solutions may vary significantly, which results in more consensus sets. We draw a histogram in Figure 6 to show the number of consensus sets produced by code-cushman-001 and CODET for each problem in the HumanEval benchmark. The average and median numbers are 26.8 and 25.5, respectively. We can find that most problems have less than 50 consensus sets, but the numbers have a high variance among different problems. We also draw the distribution of the numbers of code solutions for the top-ranked consensus sets in Figure 7. The consensus sets ranked top 1 tend to have more code solutions, with an average value of 9.8, and the numbers also have a high variance.

Figure 9: The baseline pass@100 and CODET pass@1 with code-cushman-001 at different temperature settings.
As mentioned in Appendix A, we use the square root of |S x | to reduce the impact caused by code solutions, because we believe passing more test cases is more important than having more code solutions with the same functionality. For example, there may be one code solution that can pass five test cases, whereas another five code solutions in a consensus set can pass only one test case. We intuitively consider that the former may be more likely correct. For validation, we perform an experiment by comparing the performance of CODET with the "sqrt", "log" functions, and without any constraint (i.e., "linear") on the number of code solutions. Figure 8 shows the results of three Codex models on the HumanEval benchmark. We can find that reducing the importance of code solutions can consistently improve the performance of CODET. Similar observations have been found in other models and benchmarks, where the performance of employing "sqrt" is always better than or competitive to "linear", indicating the rationality of our design.
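A minimal comparison of the weighting choices makes the example above concrete: with linear weighting, one solution passing five tests ties with five solutions passing one test, whereas the square root breaks the tie in favor of the former.

```python
import math

def consensus_score(n_solutions: int, n_tests: int, weighting: str = "sqrt") -> float:
    # Weightings compared in the ablation: "sqrt", "log", and "linear".
    w = {"sqrt": math.sqrt, "log": lambda n: math.log(n + 1), "linear": float}
    return w[weighting](n_solutions) * n_tests

assert consensus_score(1, 5) > consensus_score(5, 1)                       # sqrt prefers more tests
assert consensus_score(1, 5, "linear") == consensus_score(5, 1, "linear")  # linear ties them
```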
Table 8: Pass@k (%) results on the zero-shot APPS and CodeContests benchmarks using code-davinci-002 and CODET with/without the trivial code solutions filtered. The numbers in red indicate the absolute improvements after filtering the trivial solutions.

D INFLUENCE OF DE-DUPLICATION

We find that de-duplication has a slight and inconsistent influence on the performance of CODET. For the HumanEval benchmark, the pass@1 results using code solution de-duplication alone are better than under other settings. Nonetheless, for the MBPP benchmark, the best pass@1 results are achieved without de-duplication. Therefore, in our main experiments, we reserve all the generated code solutions and test cases when performing CODET and leave the study of more advanced de-duplication methods for future work.
E SENSITIVITY TO THE TEMPERATURE
The hyper-parameter temperature has a great impact on the quality of generated code solutions and test cases when using top p sampling. We use a high temperature of 0.8 in our main experiments, since CODET can benefit from a larger number of diverse samples. To investigate the sensitivity of CODET to the temperature, we perform an ablation study by using a range of temperatures to report the results of baseline pass@100 and CODET pass@1. Figure 9 shows the results of code-cushman-001 on the HumanEval benchmark at different temperature settings. We can find that a higher temperature does improve the baseline pass@100 and CODET pass@1, and that CODET achieves a good performance when temperature is set to 0.8.
F REMOVING TRIVIAL CODE SOLUTIONS
The problems in the APPS COMPETITION and CodeContests benchmarks are considerably more difficult than those in HumanEval and MBPP, leading to poor performance even from the most capable code-davinci-002 model. After inspecting the incorrect code solutions generated by code-davinci-002, we identify many trivial solutions that simply return an input argument or a constant value. Such solutions may hinder the ranking process of CODET if they pass any generated test case. A trivial solution can be easily identified from its input arguments and returned values: if a solution always returns the same output value for different inputs, or its returned values always equal its inputs, it must be trivial. To investigate the impact of trivial code solutions, we use code-davinci-002 on the zero-shot APPS and CodeContests benchmarks and perform CODET after filtering out all trivial solutions. On average, this removes 4.5 of the 50 generated solutions per problem for APPS and 91.6 of the 1,000 for CodeContests. However, as shown in Table 8, after removing this prominent fraction of trivial solutions there is little performance gain, which demonstrates the robustness of CODET.

Table 9: Pass@k (%) results on the APPS and CodeContests benchmarks using code-davinci-002 and the one-shot setting. The numbers in red indicate the absolute improvements of CODET (Filter) over Baseline (Filter) on pass@1, pass@10, and pass@100. For CODET (Filter), temperature is set to 0.8 and the sampling number is set to 50 for APPS and 1,000 for CodeContests. We do not report pass@1000 for "Baseline Filter" because the number of code solutions after filtering is less than the sampling number.
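The trivial-solution heuristic described in this section can be sketched as follows; the function name, signature, and crash handling are illustrative assumptions, not the paper's exact code.

```python
def is_trivial(solution_fn, test_inputs):
    """Flag solutions that always return the same value, or that
    always echo their input, as trivial (illustrative sketch)."""
    outputs = []
    for x in test_inputs:
        try:
            outputs.append(solution_fn(x))
        except Exception:
            return False  # a crashing solution is not flagged by this rule
    constant = len(outputs) > 0 and all(o == outputs[0] for o in outputs)
    echo = all(o == x for o, x in zip(outputs, test_inputs))
    return constant or echo
```

In practice the inputs would be drawn from the generated test cases, and flagged solutions would be dropped before ranking.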
G RESULTS ON APPS AND CODECONTESTS IN THE ONE-SHOT SETTING
Inspired by Chen et al. (2021) and Li et al. (2022b), we build one-shot versions of APPS and CodeContests by appending a single input-output example to the problem description as a formatting hint. After generation, we filter out the generated solutions that cannot pass the given example input-output cases; we call this the "Baseline Filter" method. After filtering, we can still perform CODET using the remaining code solutions, which we call the "CODET Filter" method. Following the zero-shot experiments on APPS and CodeContests, we employ code-davinci-002 for generation and set the sampling number to 50 for APPS and 1,000 for CodeContests.
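The "Baseline Filter" step can be sketched as below; names are illustrative and each example case is modeled as a predicate over a candidate solution, standing in for sandboxed execution.

```python
def baseline_filter(solutions, example_checks):
    """Keep only the generated solutions that pass every given
    example input-output check (illustrative sketch)."""
    return [sol for sol in solutions
            if all(check(sol) for check in example_checks)]
```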
We summarize the experimental results in Table 9, where the one-shot performance using CODET is much better than that reported in Table 3 in the zero-shot setting. The performance of the baselines can be significantly improved by filtering the solutions with the given example test cases. Moreover, "CODET Filter" further outperforms "Baseline Filter" on the APPS benchmark, especially for the introductory and interview problems. Nonetheless, for CodeContests and the competition-level problems in APPS, "CODET Filter" yields little improvement or even performs slightly worse than "Baseline Filter". After manual investigation, we attribute this issue to the generated low-quality test cases, which hinder the scoring of consensus sets. This suggests that test case generation for more challenging programming problems is a worthwhile direction for future study.
H MORE ANALYSIS ON TEST CASES

H.1 STATISTICS ON TEST CASES
How many valid test cases do the models generate for CODET? Taking the HumanEval benchmark as an example, we sample 100 times for each problem when generating test cases. As illustrated in Figure 2, at each time of sampling, we feed the context c along with an instruction p to the model and get the generated content that may contain multiple test cases. Then, as mentioned in Section 4.3, we further post-process the generated samples to get individual test cases that are syntactically correct. Finally, we only keep the first five valid test cases for each sample, which means a problem can be equipped with 500 test cases at most.
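The post-processing step described above (keeping the first five syntactically valid assertions per sample) can be sketched as follows; the function and variable names are illustrative, not the paper's exact code.

```python
import ast

def extract_test_cases(sample: str, entry_point: str, limit: int = 5):
    """From one generated sample, keep up to `limit` assertion lines
    that parse as valid Python and mention the entry-point function."""
    cases = []
    for line in sample.splitlines():
        line = line.strip()
        if not line.startswith("assert") or entry_point not in line:
            continue
        try:
            ast.parse(line)  # discard assertions that do not parse
        except SyntaxError:
            continue
        cases.append(line)
        if len(cases) == limit:
            break
    return cases
```

With 100 samples per problem and a limit of 5, this yields at most 500 test cases per problem, matching the setup above.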
I ABLATION STUDY ON THE SCORE OF CONSENSUS SET
In CODET, the score of a consensus set is calculated as f(S) = |S_x||S_y|, where S_x and S_y are the code solutions and test cases in the consensus set, respectively. We can naturally derive two variants of this scoring. One is f(S) = |S_x|, in line with the idea of self-consistency, which only considers the number of code solutions with the same functionality. The other is f(S) = |S_y|, which corresponds to simply counting the test cases that each code solution can pass. To evaluate these two variants, we perform an ablation study on the HumanEval benchmark using three Codex models. The experimental results are summarized in Table 13, from which we observe that considering only the number of code solutions or only the number of test cases for consensus set scoring performs consistently worse than CODET, and even worse than the baseline. It is therefore essential to account for both code solutions and test cases, which supports the design of our dual execution agreement.
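As an illustrative sketch of the dual execution agreement (the `passes` oracle stands in for sandboxed execution, and names are assumptions, not the paper's code), solutions passing exactly the same set of test cases form a consensus set S, scored as f(S) = |S_x| * |S_y|:

```python
from collections import defaultdict

def rank_consensus_sets(solutions, test_cases, passes):
    """Group solutions by the exact set of test cases they pass, then
    rank the resulting consensus sets by |S_x| * |S_y| (sketch)."""
    groups = defaultdict(list)
    for sol in solutions:
        passed = frozenset(c for c in test_cases if passes(sol, c))
        groups[passed].append(sol)
    ranked = sorted(groups.items(),
                    key=lambda item: len(item[1]) * len(item[0]),
                    reverse=True)
    return [(sols, set(cases)) for cases, sols in ranked]
```

The two ablation variants correspond to replacing the sort key with `len(item[1])` (code solutions only) or `len(item[0])` (test cases only).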
As mentioned in Section 3, AlphaCode (Li et al., 2022b) also includes a clustering method (denoted as AlphaCode-C) to select the generated code solutions, which shares a similar goal with our ablation variant f(S) = |S_x|: clustering code solutions based on code functionality and then scoring each cluster by size. AlphaCode-C requires a number of additional test inputs to produce outputs from code solutions, which are then used to determine functional equivalence. It relies on a separate test input generation model, which needs extra training and annotation; the model is unavailable and hard to replicate, as the paper does not provide sufficient details. We replicate AlphaCode-C by extracting test inputs from the test cases generated by CODET. We run all code solutions on the test inputs and group them by outputs. The clusters are ranked by size, and we then select code solutions from each cluster in order. From Table 2 and Table 13, we find that AlphaCode-C is inferior to f(S) = |S_x|, though they share a similar idea. The reason is that AlphaCode-C groups trivial code solutions (e.g., solutions that always output "None", "0", or an empty string regardless of inputs) together, leading to a large cluster of incorrect solutions that significantly affects performance. Such trivial code solutions, in contrast, rarely pass the generated test cases in CODET and thus receive low consensus scores for ranking. This confirms the effectiveness of incorporating test case information.
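Our AlphaCode-C replication can be sketched as below; `run(sol, x)` stands in for executing a solution on a test input, and the names are illustrative assumptions.

```python
from collections import defaultdict

def alphacode_c_clusters(solutions, test_inputs, run):
    """Cluster solutions with identical outputs on all test inputs,
    then rank clusters by size (AlphaCode-C-style sketch)."""
    clusters = defaultdict(list)
    for sol in solutions:
        signature = tuple(run(sol, x) for x in test_inputs)
        clusters[signature].append(sol)
    return sorted(clusters.values(), key=len, reverse=True)
```

Note that trivial solutions share one output signature and hence form a single large cluster, which is exactly the failure mode discussed above.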
Figure 10: Two cases from the HumanEval benchmark, where CODET can find the correct consensus sets though they have (a) fewer code solutions, or (b) fewer test cases.
J MORE EXAMPLES FOR CASE STUDY

Figure 10 illustrates two cases where CODET successfully finds the correct consensus sets. Specifically, the case in Figure 10a requires removing the vowels in the input text. There are 41 incorrect solutions and 147 test cases in the consensus set ranked 2, which forget to remove the upper-case vowels. Though the correct solutions in the top 1 consensus set are fewer (i.e., 31), they pass more test cases (i.e., 170) and thus have a higher score. The case in Figure 10b is to decide when the balance of an account will fall below zero. The functionality of the incorrect solutions in the second consensus set is to tell whether there are any withdrawing operations. Nevertheless, these incorrect solutions pass more test cases (i.e., 255) than the correct solutions (i.e., 248) in the top 1 consensus set. Fortunately, there are 79 correct solutions and only 6 incorrect ones, making it possible for CODET to rank the correct consensus set first. Both cases demonstrate the value of the dual execution agreement over solely considering the functional agreement between code solutions or the number of passed test cases.

Figure 11 illustrates cases where CODET fails to find the correct consensus sets. Specifically, Figure 11a demonstrates the situation where partially correct solutions fail at certain corner cases. In the example, there are 20 incorrect solutions in the top 1 consensus set that pass 205 test cases but fail if the input is a string of length 1. The correct consensus set ranked 3 has more test cases (i.e., 222), yet a lower consensus score due to its small number of code solutions (i.e., 9). The second example, in Figure 11b, shows the most common situation where CODET fails: the model cannot fully understand the problem. The incorrect solutions in the top 1 consensus set entirely miss the point of the given problem, and the model still tends to generate more incorrect solutions and test cases based on its wrong understanding. All these bad cases call for future improvements in the quality of generated code solutions and test cases.
Figure 3: A simple example of the programming problem "return the square of a number". The gray line between x and y indicates that x can pass y, i.e., (x, y) is a hypothetical inlier. The green or purple box indicates a consensus set.
(1) HumanEval (Chen et al., 2021) consists of hand-written Python programming problems. The original contexts include example input-output cases, which are removed in our experiments to avoid exposing real test cases. The experiment in Appendix B shows that this removal operation is reasonable and indispensable. (2) MBPP (Austin et al., 2021) (sanitized version) contains crowd-sourced Python programming problems, and we follow HumanEval to construct the context for it. (3) APPS (Hendrycks et al., 2021) consists of coding problems collected from open-access coding websites, which have different difficulty levels. (4) CodeContests (Li et al., 2022b) includes competitive programming problems scraped from the Codeforces platform.
Figure 6: The numbers of consensus sets that are produced by code-cushman-001 and CODET on the HumanEval benchmark.

Figure 7: The distribution of the code solution numbers for the top 5 consensus sets. The long tail distribution with number ≥ 20 is truncated.

Figure 8: The CODET results of three Codex models with and without constraint on the number of code solutions.
Figure 4: The distributions of (a) test case accuracy and (b) toxicity rate for each problem on HumanEval. Test cases are of better quality if they have higher accuracy and lower toxicity rate.

Benchmarks         HumanEval                            MBPP
k                  1           2           10           1           2           10
code-cushman-001   47.1(+2.6)  58.6(+8.5)  71.2(+5.5)   59.7(+4.3)  64.8(+3.1)  75.5(+2.8)
code-davinci-001   52.0(+1.8)  62.9(+4.0)  78.1(+2.3)   64.3(+2.4)  71.7(+2.6)  80.5(+1.2)
INCODER-6B         26.8(+6.2)  30.4(+2.8)  40.8(+3.7)
Rank #1: The consensus set has 61 solutions and 218 test cases. Rank #2: The consensus set has 30 solutions and 226 test cases.

Figure 5: Two real cases from the HumanEval benchmark with CODET and code-cushman-001.

For code-cushman-001 and code-davinci-001, the absolute improvements are in the range of 1.8% to 4.3% on pass@1, while for INCODER and CODEGEN, the range is from 6.2% to 15.9%. The above results indicate that the correct code solutions generated by mediocre models can be further exploited by adopting better test cases.

Q3. How effective is CODET when there are fewer test cases?
                   Sampling Number
Limit              10     20     50     100
1                  56.5   57.5   60.7   62.4
2                  62.2   62.8   63.2   63.6
3                  62.9   63.2   65.5   65.0
4                  64.1   64.5   65.7   65.0
5                  63.9   64.2   65.2   65.8

Table 5: Pass@1 (%) on HumanEval using CODET and code-davinci-002 with different numbers of test cases. Sampling Number denotes the number of samples generated by the model, and Limit denotes the test cases extracted per sample.

When generating test cases for the HumanEval benchmark, we sample 100 times for each problem, and each sample may include multiple assertion statements (i.e., test cases); we denote this as Sampling Number = 100. Then we extract the first 5 syntactically correct test cases from each sample, denoted as Limit = 5. This means each problem is equipped with 500 test cases at most. The actual numbers of extracted test cases are summarized in Appendix H.1. We perform an ablation study on the number of test cases by decreasing Sampling Number and Limit, as shown in Table 5.
claimed to have outperformed half of the human competitors in real-world programming competitions, and Codex (Chen et al., 2021) is empowering Copilot to provide real-time coding suggestions. Other open-source code generation models include GPT-Neo (Black et al., 2021), GPT-J (Wang & Komatsuzaki, 2021), CodeParrot (Tunstall et al., 2022), PolyCoder (Xu et al., 2022), CODEGEN (Nijkamp et al., 2022), and INCODER (Fried et al., 2022a). In our study, we take advantage of the Codex inference API provided by OpenAI as well as the two competitive open-source models CODEGEN and INCODER to perform zero-shot code generation.

Automatic Test Case Generation. Automated test case generation for programming problems can reduce the effort of writing test cases manually by developers. Early works including Randoop (Pacheco et al., 2007), EvoSuite (Fraser & Arcuri, 2011), MOSA (Panichella et al., 2015), DynaMOSA (Panichella et al., 2017), and MIO (Arcuri, 2017) were proposed to automatically generate test cases for statically typed programming languages like Java. The later proposed Pynguin (Lukasczyk & Fraser, 2022) can handle dynamically typed languages like Python. Nevertheless, these are all search-based heuristic methods, which limit the diversity and quantity of generated test cases. To combat these limitations, recently proposed approaches (Tufano et al., 2020; Li et al., 2022b) leveraged pre-trained language models like BART (Lewis et al., 2019) and T5 (Raffel et al., 2020) fine-tuned on labelled data for test case generation. Unlike previous works that require heuristic rules or model training, we directly sample test cases from powerful code generation models like Codex in the zero-shot setting with elaborate prompts.
Table 6: Pass@k (%) on the original HumanEval benchmark with Codex models. The numbers in orange indicate the absolute improvements of pass@k on the original benchmark over our modified benchmark in Table 2.

A MORE IMPLEMENTATION DETAILS

We set the temperature to 0.8, the top p to 0.95, the max generation length to 300, and the timeout of executing a test case to 0.1 seconds. Specifically, for baseline pass@1, we use the greedy search setting with temperature 0. The number of sampling test cases for each problem is set to 100 for the HumanEval and MBPP benchmarks, and 50 for the APPS and CodeContests benchmarks. When scoring consensus sets in CODET, we use the square root of |S_x| to reduce the impact caused by code solutions; a supporting experiment can be found in Appendix C. For code solution post-processing, we follow Chen et al. (2021).
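For reference, pass@k is conventionally computed with the unbiased estimator of Chen et al. (2021); a minimal implementation sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability
    that at least one of k samples drawn without replacement from n
    generated samples (c of them correct) is correct."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```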
Table 7: Pass@k (%) on the HumanEval and MBPP benchmarks using CODET and code-cushman-001 with different de-duplication settings. The setting "No No" in the first line means that neither the code solutions nor the test cases are de-duplicated, which is used in our main experiments.
Methods              CODET                 CODET (Remove Trivial)
k                    1     10    100       1           10          100
APPS INTRODUCTORY    34.6  53.2  -         34.9(+0.3)  53.4(+0.2)  -
APPS INTERVIEW       8.1   18.1  -         8.3(+0.2)   18.2(+0.1)  -
APPS COMPETITION     2.2   8.6   -         2.5(+0.3)   8.7(+0.1)   -
CodeContests         2.1   5.3   9.9       2.7(+0.6)   5.3(+0.0)   10.0(+0.1)

Table 8: Pass@k (%) results of CODET on the zero-shot APPS and CodeContests benchmarks using code-davinci-002, with and without removing trivial code solutions.
Table 10: The numbers of extracted test cases for each problem generated by five models on the HumanEval benchmark.
Methods             Statement coverage   Branch coverage
code-cushman-001    95.3                 98.1
code-davinci-001    94.9                 97.6
code-davinci-002    95.7                 98.5
INCODER             94.0                 96.3
CODEGEN             78.2                 78.6

Table 11: The code coverage (%) statistics of test cases generated by five models on the HumanEval benchmark.
(a) pass@1
                    Sampling Number
Limit               10     20     50     100
code-cushman-001
  1                 37.8   40.0   40.8   38.7
  2                 42.1   41.8   43.4   41.8
  3                 41.6   41.9   43.8   42.5
  4                 41.2   41.2   43.8   43.3
  5                 41.0   41.9   45.4   44.5
code-davinci-002
  1                 56.5   57.5   60.7   62.4
  2                 62.2   62.8   63.2   63.6
  3                 62.9   63.2   65.5   65.0
  4                 64.1   64.5   65.7   65.0
  5                 63.9   64.2   65.2   65.8

(b) pass@2
                    Sampling Number
Limit               10     20     50     100
code-cushman-001
  1                 43.3   48.1   48.2   49.1
  2                 48.1   48.1   49.5   49.8
  3                 49.0   47.7   48.7   48.7
  4                 49.2   47.9   49.4   49.1
  5                 48.3   48.5   48.9   50.1
code-davinci-002
  1                 65.1   67.8   71.9   71.5
  2                 71.7   73.2   74.2   74.1
  3                 73.2   73.5   75.1   75.0
  4                 73.3   74.1   75.5   74.3
  5                 73.5   74.3   74.5   75.1

(c) pass@10
                    Sampling Number
Limit               10     20     50     100
code-cushman-001
  1                 55.1   56.6   61.9   62.9
  2                 58.7   61.4   64.5   65.8
  3                 60.9   62.5   63.4   65.3
  4                 61.4   63.3   63.3   65.8
  5                 63.1   62.6   63.8   65.7
code-davinci-002
  1                 77.9   79.6   82.8   84.3
  2                 80.8   81.8   84.3   86.5
  3                 82.3   83.2   85.5   87.1
  4                 82.9   84.4   85.4   86.9
  5                 83.8   84.1   85.2   86.6

Table 12: Pass@k (%) on the HumanEval benchmark using CODET with different test case numbers. Sampling Number is the number of test case samples we generate for each problem. Each sample may contain multiple assertion statements. These assertion statements are potential test cases, but we do not use all of them. Instead, we extract a Limit number of syntactically correct assertion statements from each sample, and discard the rest.

H.2 CODE COVERAGE OF TEST CASES

To further inspect the quality of generated test cases, we utilize the code coverage measurement and report two coverage criteria: statement coverage and branch coverage. The statement coverage is the percentage of statements in a code solution that are executed by test cases. The branch coverage is the percentage of executed branches for the control structure (e.g., the if statement). We execute the canonical solution for each HumanEval problem on the test cases generated by five models, then collect the coverage results using Coverage.py. As a result, the average numbers of statements and branches in the canonical solution of a problem are 6.30 and 4.42, respectively. As shown in Table 11, all the models except CODEGEN perform well on both statement and branch coverage, reaching an average of over 94% coverage. Such results may be attributed to the relatively short canonical solutions and the massive sampling number of test cases. Nevertheless, there are still corner cases that the models cannot cover, which calls for future improvements.

H.3 RESULTS OF REDUCING THE NUMBER OF TEST CASES

To investigate the performance of CODET using fewer test cases, we perform an ablation study on the number of test cases that participate in the dual execution agreement. As shown in Table 12, we report the results on the HumanEval benchmark using code-cushman-001 and code-davinci-002 with a range of test case numbers. The number of test cases is related to two hyper-parameters.
One is the number of test case samples, which is set to 100 for HumanEval in our main experiments. The other is Limit, which controls the number of syntactically correct test cases we extract from each sample and is set to 5 for all benchmarks in our main experiments. Note that Limit multiplied by the Sampling Number is the maximum number of test cases for a problem, not the exact number, because not every sample contains the Limit number of valid test cases. A valid test case (i.e., assertion statement) should start with "assert" and contain the name of the corresponding entry point function. We can conclude from the results that using more test cases in CODET generally leads to better performance, while the performance gap narrows when Limit ≥ 3 and the sampling number ≥ 50. Moreover, using only 10 test cases per problem for CODET can still improve the baseline pass@1 performance of code-cushman-001 by an absolute 4.3% and code-davinci-002 by an absolute 9.5%. This demonstrates that CODET has high test case efficiency, and we can use a smaller Sampling Number in real-world applications to balance performance and computation cost.

Methods            Code Solution Only f                                Test Case Only f
k                  1                2            10                    1                 2            10
code-cushman-001   41.2(−3.3/+7.7)  49.2(−0.9)   61.9(−3.8/+7.6)       29.9(−14.6/−3.6)  36.6(−13.5)  59.5(−6.2/+5.2)
code-davinci-001   44.4(−5.8/+5.4)  54.7(−4.2)   69.0(−6.8/+8.4)       35.0(−15.2/−4.0)  46.0(−12.9)  70.2(−5.6/+9.6)
code-davinci-002   55.9(−9.9/+8.9)  67.0(−8.1)   82.7(−3.9/+7.8)       58.4(−7.4/+11.4)  65.1(−10.0)  86.1(−0.5/+11.2)

Table 13: Pass@k (%) on the HumanEval benchmark with ranking only on the number of code solutions (f(S) = |S_x|) or test cases (f(S) = |S_y|) in a consensus set. The annotated numbers indicate the absolute differences from CODET and from the baseline, respectively.
D INFLUENCE OF DE-DUPLICATION

Since we sample multiple times during generation, there is the chance that many of the generated code solutions and test cases are exactly the same. On the one hand, the number of duplicates may indicate the importance of a sample. On the other hand, duplicates may hinder the scoring of consensus sets in CODET when the quality of generation is unsatisfactory. Hence, we perform an ablation study to investigate the effects of removing duplicate code solutions and test cases. Specifically, we first format the generated Python code to conform to the PEP 8 style guide (https://peps.python.org/pep-0008), and then remove duplicate code solutions and test cases before performing CODET. The de-duplication results on the HumanEval and MBPP benchmarks using CODET and code-cushman-001 are shown in Table 7, where we can choose to de-duplicate the code solutions, or the test cases, or both.
4 https://coverage.readthedocs.io/en/6.4.2
Figure 11: Three incorrect cases from the HumanEval benchmark, where CODET cannot find the correct consensus sets due to (a) uncovered corner cases, or (b) failure of problem understanding.
Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven CH Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. arXiv preprint arXiv:2207.01780, 2022.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the advance of making language models better reasoners. arXiv preprint arXiv:2206.02336, 2022a.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. arXiv preprint arXiv:2203.07814, 2022b.
Stephan Lukasczyk and Gordon Fraser. Pynguin: Automated unit test generation for python. arXiv preprint arXiv:2202.05218, 2022.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. A conversational paradigm for program synthesis. arXiv preprint, 2022.
Carlos Pacheco, Shuvendu K Lahiri, Michael D Ernst, and Thomas Ball. Feedback-directed random test generation. In 29th International Conference on Software Engineering (ICSE'07), pp. 75-84. IEEE, 2007.
Annibale Panichella, Fitsum Meshesha Kifetew, and Paolo Tonella. Reformulating branch coverage as a many-objective optimization problem. In 2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST), pp. 1-10. IEEE, 2015.
Annibale Panichella, Fitsum Meshesha Kifetew, and Paolo Tonella. Automated test case generation as a many-objective optimisation problem with dynamic selection of the targets. IEEE Transactions on Software Engineering, 44(2):122-158, 2017.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67, 2020.
Baptiste Roziere, Jie M Zhang, Francois Charton, Mark Harman, Gabriel Synnaeve, and Guillaume Lample. Leveraging automated unit tests for unsupervised code translation. arXiv preprint arXiv:2110.06773, 2021.
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. Generate & rank: A multi-task framework for math word problems. arXiv preprint arXiv:2109.03034, 2021.
Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I Wang. Natural language to code translation with execution. arXiv preprint arXiv:2204.11454, 2022.
Michele Tufano, Dawn Drain, Alexey Svyatkovskiy, Shao Kun Deng, and Neel Sundaresan. Unit test case generation with transformers and focal context. arXiv preprint arXiv:2009.05617, 2020.
Lewis Tunstall, Leandro von Werra, and Thomas Wolf. Natural language processing with transformers. O'Reilly Media, Inc., 2022.
Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, and Jamie Brew. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, 2019.
Frank F Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. A systematic evaluation of large language models of code. In Deep Learning for Code Workshop, 2022.
| [
"https://github.com/microsoft/CodeT.",
"https://github.com/kingoflolz/mesh-transformer-jax,"
] |
[
"Far-Ultraviolet H 2 Emission from Circumstellar Disks",
"Far-Ultraviolet H 2 Emission from Circumstellar Disks"
] | [
"Laura Ingleby [email protected] \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Nuria Calvet [email protected] \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Edwin Bergin [email protected] \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Ashwin Yerasi [email protected] \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Catherine Espaillat \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Gregory Herczeg [email protected] \nMax-Planck-Institut fur extraterrestriche Physik\n1312, 85741Postfach, GarchingGermany\n",
"Evelyne Roueff [email protected] \nLUTH and UMR 8102 du CNRS, Observatoire de Paris, Section de Meudon, Place J. Janssen\n92195MeudonFrance\n",
"Hervé Abgrall [email protected] \nLUTH and UMR 8102 du CNRS, Observatoire de Paris, Section de Meudon, Place J. Janssen\n92195MeudonFrance\n",
"Jesus Hernández [email protected] \nCentro de Investigaciones de Astronomía (CIDA)\n5101MéridaVenezuela\n",
"César Briceño [email protected] \nCentro de Investigaciones de Astronomía (CIDA)\n5101MéridaVenezuela\n",
"Ilaria Pascucci [email protected] \nDepartment of Physics and Astronomy\nJohns Hopkins University\n21218BaltimoreMD\n",
"Jon Miller [email protected] \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Jeffrey Fogel [email protected] \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Lee Hartmann \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Michael Meyer [email protected] \nPhysics Department\nETH Hoenggerberg Campus\nCH-8093ZurichSwitzerland\n",
"John Carpenter \nDepartment of Astronomy\nCalifornia Institute of Technology\nMail Code 249-17, 1200 East California Boulevard91125PasadenaCA\n",
"Nathan Crockett [email protected] \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n",
"Melissa Mcclure \nDepartment of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI\n"
] | [
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Max-Planck-Institut fur extraterrestriche Physik\n1312, 85741Postfach, GarchingGermany",
"LUTH and UMR 8102 du CNRS, Observatoire de Paris, Section de Meudon, Place J. Janssen\n92195MeudonFrance",
"LUTH and UMR 8102 du CNRS, Observatoire de Paris, Section de Meudon, Place J. Janssen\n92195MeudonFrance",
"Centro de Investigaciones de Astronomía (CIDA)\n5101MéridaVenezuela",
"Centro de Investigaciones de Astronomía (CIDA)\n5101MéridaVenezuela",
"Department of Physics and Astronomy\nJohns Hopkins University\n21218BaltimoreMD",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Physics Department\nETH Hoenggerberg Campus\nCH-8093ZurichSwitzerland",
"Department of Astronomy\nCalifornia Institute of Technology\nMail Code 249-17, 1200 East California Boulevard91125PasadenaCA",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI",
"Department of Astronomy\nUniversity of Michigan\n830 Dennison Building, 500 Church Street48109Ann ArborMI"
] | [] | We analyze the far-ultraviolet (FUV) spectra of 33 classical T Tauri stars (CTTS), including 20 new spectra obtained with the Advanced Camera for Surveys Solar Blind Channel (ACS/SBC) on the Hubble Space Telescope. Of the sources, 28 are in the ∼1 Myr old Taurus-Auriga complex or Orion Molecular Cloud, 4 in the 8-10 Myr old Orion OB1a complex and one, TW Hya, in the 10 Myr old TW Hydrae Association. We also obtained FUV ACS/SBC spectra of 10 non-accreting sources surrounded by debris disks with ages between 10 and 125 Myr. We use a feature in the FUV spectra due mostly to electron impact excitation of H 2 to study the evolution of the gas in the inner disk. We find that the H 2 feature is absent in non-accreting sources, but is detected in the spectra of CTTS and correlates with accretion luminosity. Since all young stars have active chromospheres which produce strong X-ray and UV emission capable of exciting H 2 in the disk, the fact that the non-accreting sources show no H 2 emission implies that the H 2 gas in the inner disk has dissipated in the non-accreting sources, although dust (and possibly gas) remains at larger radii. Using the flux at 1600Å, we estimate that the column density of H 2 left in the inner regions of the debris disks in our sample is less than ∼ 3 × 10 −6 g cm −2 , nine orders of magnitude below the surface density of the minimum mass solar nebula at 1 AU. | 10.1088/0004-637x/703/2/l137 | [
"https://arxiv.org/pdf/0909.0688v1.pdf"
] | 2,730,737 | 0909.0688 | b902d15d426dbcfc2ef9e5da1c6b6b7931f0eb13 |
Far-Ultraviolet H2 Emission from Circumstellar Disks

3 Sep 2009

Laura Ingleby ([email protected]) (1), Nuria Calvet (1), Edwin Bergin (1), Ashwin Yerasi (1), Catherine Espaillat (1), Gregory Herczeg (2), Evelyne Roueff (3), Hervé Abgrall (3), Jesús Hernández (4), César Briceño (4), Ilaria Pascucci (5), Jon Miller (1), Jeffrey Fogel (1), Lee Hartmann (1), Michael Meyer (6), John Carpenter (7), Nathan Crockett (1), Melissa McClure (1)

(1) Department of Astronomy, University of Michigan, 830 Dennison Building, 500 Church Street, Ann Arbor, MI 48109
(2) Max-Planck-Institut für extraterrestrische Physik, Postfach 1312, 85741 Garching, Germany
(3) LUTH and UMR 8102 du CNRS, Observatoire de Paris, Section de Meudon, Place J. Janssen, 92195 Meudon, France
(4) Centro de Investigaciones de Astronomía (CIDA), Mérida 5101, Venezuela
(5) Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218
(6) Physics Department, ETH Hoenggerberg Campus, CH-8093 Zurich, Switzerland
(7) Department of Astronomy, California Institute of Technology, Mail Code 249-17, 1200 East California Boulevard, Pasadena, CA 91125
Abstract

We analyze the far-ultraviolet (FUV) spectra of 33 classical T Tauri stars (CTTS), including 20 new spectra obtained with the Advanced Camera for Surveys Solar Blind Channel (ACS/SBC) on the Hubble Space Telescope. Of the sources, 28 are in the ∼1 Myr old Taurus-Auriga complex or Orion Molecular Cloud, 4 in the 8-10 Myr old Orion OB1a complex and one, TW Hya, in the 10 Myr old TW Hydrae Association. We also obtained FUV ACS/SBC spectra of 10 non-accreting sources surrounded by debris disks with ages between 10 and 125 Myr. We use a feature in the FUV spectra due mostly to electron impact excitation of H2 to study the evolution of the gas in the inner disk. We find that the H2 feature is absent in non-accreting sources, but is detected in the spectra of CTTS and correlates with accretion luminosity. Since all young stars have active chromospheres which produce strong X-ray and UV emission capable of exciting H2 in the disk, the fact that the non-accreting sources show no H2 emission implies that the H2 gas in the inner disk has dissipated in the non-accreting sources, although dust (and possibly gas) remains at larger radii. Using the flux at 1600 Å, we estimate that the column density of H2 left in the inner regions of the debris disks in our sample is less than ∼3 × 10⁻⁶ g cm⁻², nine orders of magnitude below the surface density of the minimum mass solar nebula at 1 AU.
Introduction
Gas comprises 99% of the mass of primordial disks. As time passes, it is accreted onto the star, formed into planets, and lost by photoevaporation, leaving behind a debris disk in which most of the mass is locked into planets and other solid bodies, traced by secondary dust arising from collisions. Although the general outline of this process is agreed upon, many specific questions remain unanswered, mainly because the gas is difficult to observe. As a result, only ∼1% of the disk mass, the dust, has been used as a probe of disk evolution. However, although interconnected, the evolution of gas and dust may take different paths (Pascucci et al. 2009), making observations of the gas itself necessary to understand these processes. Of particular importance are observations of the gas in the inner disk, because it sets the chemical and physical conditions for planet formation. The bulk of the gas in these cold disks is in H2, which lacks a permanent dipole moment, so the pure rotational and rovibrational lines are weak. Nonetheless, extensive surveys of these lines in primordial disks have been carried out (Bary et al. 2008; Bitner et al. 2008, and references therein), and they have been detected in a handful of objects. Searches using less abundant molecules have also succeeded and provided information on the gas in the inner region of gas-rich disks (Carr & Najita 2008; Salyk et al. 2008; Pascucci et al. 2009; Najita et al. 2008). Gas has also been searched for in disks of more evolved sources which are no longer accreting, within the age range when the transition from primordial to debris disk is supposed to happen, ∼5-20 Myr. In particular, Pascucci et al. (2006) looked for H2 in the disks of several non-accreting sources and found that the amount of gas still present at 5-20 Myr is not large enough to form the gas giant planets at that time.
This observation agrees with results indicating that the amount of hot gas in disks of non-accreting sources is reduced compared to accreting sources (Carmona et al. 2007).
UV observations are very promising for detecting the gas. The strong stellar Lyα radiation bathes the UV-thin regions of the circumstellar material and, as long as the H2 has a temperature of a few thousand degrees, the line excites electrons to upper electronic states, which produces a plethora of emission lines in the UV when they de-excite (Herczeg et al. 2006, hereafter H06, and references therein). At the same time, the stellar high energy radiation fields eject electrons from heavy metals, and the resulting free electrons produce additional electrons by ionizing H and He atoms; these secondary electrons then excite H2 to upper levels, resulting in a characteristic spectrum of lines and continuum in the UV (Spitzer & Tomasko 1968; Bergin et al. 2004, hereafter B04). For electron excitation to work efficiently, temperatures need to be high enough for neutral H to be present. The relatively high temperature requirement means that the H2 detected by these means must either be close to the star or be excited by shocks. UV H2 emission has been found to be extended in objects surrounded by substantial natal material, in the regions where the stellar outflow shocks this material, or in fast accretors, where the H2 may arise in the high density outflow itself (H06). However, in objects without remnant envelopes, such as those in this study (T Tau being the only known exception), the most likely place to find the required high temperatures is in the inner disk. This makes the UV H2 emission ideal for probing the H2 gas in the innermost regions of disks, regions which are difficult to access by other means.
We obtained ACS/SBC prism spectra of a fair number of accreting classical T Tauri stars (CTTS), non-accreting weak T Tauri stars (WTTS), and more evolved debris disks (DD), covering the interesting age range, ∼1-100 Myr. Our goal was to search for UV H2 emission and study its evolution. The poor spectral resolution of the ACS spectra made the identification of Lyα fluorescent lines impossible. However, we were able to identify a feature around ∼1600 Å, first proposed by B04 as due mostly to electron impact excitation of H2. In this letter we present and analyze these spectra. We show that the H2 feature is absent in all non-accreting and evolved stars while present in all accreting stars, and use the UV fluxes to give very rough upper limits on the remaining surface density of H2 in the non-accreting sources.
Observations
We obtained observations of 20 CTTS and 10 non-accreting and evolved targets using the Advanced Camera for Surveys Solar Blind Channel (ACS/SBC) on the Hubble Space Telescope in 2007. The observations were obtained in GO programs 10810 (PI: Bergin), 10840 (PI: Calvet) and 11199 (PI: Hartmann). Each ACS observation consists of a brief image in the F165LP filter and a longer image obtained with the PR130L prism. Images appear unresolved. Offsets between the target location in the filter and prism images, including the wavelength solution, were obtained from Larsen (2006). The target spectrum was then extracted from a 41-pixel (1.3") wide extraction window. Background count rates of 0.05-0.1 counts s⁻¹ were calculated from offset windows and subtracted from the extracted spectrum. The absolute wavelength solution was then determined by fitting the bright C IV λ1549 doublet. Fluxes were calibrated from the sensitivity function obtained from white dwarf standard stars by Bohlin (2007). The spectra range from 1230 to 1900 Å with a 2-pixel resolution of ∼300 at 1230 Å and ∼80 at 1600 Å. Table 1 lists the ACS targets used in this analysis and the properties of these objects. The CTTS sources include 16 objects in the Taurus-Auriga molecular cloud and four sources in the 25 Ori aggregate in the Orion OB1a subassociation. Spectral types for the CTTS in Taurus are from Furlan et al. (2006), and ages from Hartmann (2003). To correct for reddening we used the law towards the star HD 29647 (Whittet et al. 2004) and estimated A_V by de-reddening the median photometry of Herbst et al. (1994) to fit the fluxes of a standard star in the region of the spectrum (V to J bands) where the emission is mostly photospheric¹. We obtained accretion luminosities L_acc for the Taurus sources using the U band excesses following Gullbring et al. (1998), and the median U from the photometry of Herbst et al. (1994).
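The two calibration steps just described, extinction correction and the conversion of U-band excess to accretion luminosity, can be sketched numerically. This is an illustrative sketch, not the actual reduction: the Whittet et al. (2004) reddening law is not reproduced (A_λ is simply an input), the L_acc conversion uses the Gullbring et al. (1998) calibration log(L_acc/L_⊙) = 1.09 log(L_U/L_⊙) + 0.98, and the example input values are made up.

```python
import math

def deredden(flux_obs, a_lambda):
    """Correct an observed flux for extinction of a_lambda magnitudes."""
    return flux_obs * 10.0 ** (0.4 * a_lambda)

def lacc_from_lu(l_u):
    """Accretion luminosity (L_sun) from the U-band excess luminosity (L_sun),
    using log(L_acc) = 1.09 log(L_U) + 0.98 (Gullbring et al. 1998)."""
    return 10.0 ** (1.09 * math.log10(l_u) + 0.98)

# Example (illustrative numbers): an extinction of 2.5 mag multiplies the
# flux by 10; a U-band excess of 0.01 L_sun maps to L_acc ~ 0.06 L_sun.
corrected = deredden(1.0e-14, 2.5)
lacc = lacc_from_lu(0.01)
```

The same two functions, applied band by band with the appropriate A_λ, reproduce the de-reddening-then-excess logic of the text.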
The ages, spectral types, luminosities, A_V's, and L_acc for the sources in 25 Ori were taken from Briceño et al. (2007), Hernández et al. (2007) and Calvet et al. (2005).
The non-accreting sources (WTTS/DD) were selected to have no evidence of accretion and to have excesses in either Spitzer Space Telescope Infrared Spectrograph (IRS) spectra or 24 and 70 µm Multiband Imaging Photometer (MIPS) photometry, indicating the presence of debris disks. The sources in the TW Hydrae Association have been identified as WTTS by spectral observations which showed Hα in emission (Webb et al. 1999) and strong Li 6707 absorption (Kastner et al. 1997). The WTTS/DD and their properties were discussed in Carpenter et al. (2008, 2009), Hillenbrand et al. (2008), Verrier & Evans (2008), Chen et al. (2005) and Low et al. (2005). Examples of the ACS target spectra are shown in Figure 1.
We supplemented the ACS data with previously published medium and high resolution STIS data of CTTS (Calvet et al. 2004; Herczeg et al. 2002, 2004). The source properties, listed in Table 1, were taken from Calvet et al. (2004) for the Orion Molecular Cloud sources and, for the STIS Taurus sources, derived as described above for the ACS Taurus sources. We adopt the spectral type and age from Webb et al. (1999) and A_V from Herczeg et al. (2004) for TW Hya. Accretion luminosities for the STIS sample were taken from Calvet et al. (2004) and Ingleby et al. (2009).
Results
Following B04, we identified a feature in the STIS spectra at 1600 Å which is due mostly to electron impact H2 emission. Due to the low resolution of the ACS spectra, we used the high resolution spectrum of TW Hya (Herczeg et al. 2004) to identify this feature in the ACS spectra; in Figure 2 we compare the feature in the observed STIS spectrum of TW Hya and in the STIS spectrum smoothed to the resolution of the ACS spectra. While the individual H2 lines are no longer observable in the smoothed spectrum, the feature at 1600 Å is.
In addition to electron impact H2 emission, the flux at 1600 Å has contributions from accretion shock emission and Lyα fluorescent lines (Ingleby et al. 2009). To isolate an indicator that is due to electron impact H2 emission, we measured the flux between 1575 and 1625 Å and subtracted from it the continuum and the contribution from nearby strong lines (He II λ1640 and C IV λ1550). Since it is unclear how strong the emission from additional sources is at 1600 Å, we calculated the continuum in three ways: first, by joining the troughs in the spectrum on either side of the 1600 Å feature; second, by fitting a 5th order polynomial to the entire FUV spectrum; third, by adopting a continuum which assumes that the rise in the spectrum at 1600 Å is due entirely to electron impact H2 emission. Figure 2 shows the location of the subtracted continuum for each method in TW Hya, and Figure 3 shows examples of the measurements for three ACS targets. These three methods for measuring the feature luminosity were used to estimate the errors. Comparing the TW Hya spectra at both resolutions indicates that the feature luminosity decreases by a factor of ∼2 in the low resolution spectrum because some of the flux is blended into the continuum. This error is small compared to the uncertainty in the continuum location.
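The continuum-subtraction step can be illustrated on synthetic data. The sketch below applies two of the three continuum estimates (edge interpolation and a 5th-order polynomial fit) to a fake spectrum consisting of a flat continuum plus a Gaussian bump at 1600 Å; all numbers are invented for the demonstration and this is not the actual measurement pipeline:

```python
import numpy as np

# Synthetic FUV spectrum: flat continuum plus a Gaussian "bump" near 1600 A
# standing in for the electron-impact H2 feature (illustrative values only).
wave = np.linspace(1230.0, 1900.0, 1000)         # wavelength grid, Angstrom
cont_true = np.full_like(wave, 2.0e-14)          # erg s^-1 cm^-2 A^-1
bump = 5.0e-14 * np.exp(-0.5 * ((wave - 1600.0) / 10.0) ** 2)
flux = cont_true + bump

band = (wave >= 1575.0) & (wave <= 1625.0)       # integration window
dw = wave[1] - wave[0]

# Method 1: linear continuum joining the spectrum at the band edges
f_edges = np.interp([1575.0, 1625.0], wave, flux)
cont1 = np.interp(wave[band], [1575.0, 1625.0], f_edges)

# Method 2: 5th-order polynomial fit to the full spectrum, feature masked;
# the wavelength axis is rescaled to keep the fit well conditioned.
x = (wave - wave.mean()) / (wave.max() - wave.min())
coeff = np.polyfit(x[~band], flux[~band], 5)
cont2 = np.polyval(coeff, x[band])

# Feature flux = integral of (flux - continuum) over 1575-1625 A
f1 = np.sum(flux[band] - cont1) * dw
f2 = np.sum(flux[band] - cont2) * dw
```

The spread between f1 and f2 (and the third method, omitted here) plays the role of the error estimate described in the text.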
Using these procedures, we measured the luminosity of the 1600 Å feature in both the ACS spectra and the STIS spectra smoothed to the resolution of ACS; the feature luminosities are given in Table 1. For the WTTS/DD, we find that the H2 feature is not observable, and the values presented in Table 1 are upper limits based on the rms fluctuations from 1575 to 1625 Å. We thus find that the H2 feature shows only in the accreting sources. This is not an age effect; our sample includes CTTS and WTTS of similar age at ∼10 Myr (left panel of Figure 4), but only the accreting sources show the H2 feature. Moreover, we find a clear correlation of the strength of the feature with L_acc in the CTTS (right panel of Figure 4), with a Pearson correlation coefficient of 0.68, indicating that the H2 emission depends on the accretion properties of the source and not on the age. A similar result was found by Carmona et al. (2007), where the probability of detecting near-IR H2 lines was greater in sources with higher accretion rates.
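The quoted correlation can be checked directly from the tabulated values. The sketch below computes the Pearson coefficient of log L_acc versus log L(H2) for the ACS CTTS in Table 1 that have both quantities; since the published 0.68 may include a different source selection, this is a consistency check rather than a reproduction of that exact number:

```python
import numpy as np

# (L_acc [L_sun], H2 feature [1e-5 L_sun]) for ACS CTTS with both values
# measured, taken from Table 1.
data = [
    (0.13, 79.9),    # AA Tau
    (0.47, 3.3),     # CI Tau
    (0.16, 2.9),     # DE Tau
    (0.32, 3.3),     # DL Tau
    (0.04, 0.49),    # DN Tau
    (0.29, 46.1),    # DO Tau
    (0.01, 4.2),     # DP Tau
    (1.03, 14.1),    # DR Tau
    (0.30, 16.0),    # FM Tau
    (0.001, 0.021),  # FP Tau
    (0.06, 0.98),    # GK Tau
    (0.07, 16.6),    # HN Tau A
    (0.02, 0.61),    # IP Tau
    (0.02, 0.80),    # UZ Tau A
    (0.02, 1.5),     # UZ Tau B
    (0.02, 3.2),     # CVSO 35
]
lacc, lh2 = np.array(data).T

# Pearson correlation of the logarithmic luminosities
r = np.corrcoef(np.log10(lacc), np.log10(lh2))[0, 1]
```

On this subset the coefficient comes out clearly positive and of the same order as the value quoted in the text.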
Discussion
Free electrons are required for the process of electron excitation to be effective (§1). Since, in turn, high energy radiation fields are necessary to produce fast electrons, the absence of H2 emission in the WTTS/DD could in principle be due to a low level of X-ray or EUV emission in these objects relative to the CTTS. However, Telleschi et al. (2007) found little difference between the X-ray luminosities of CTTS and WTTS in their X-ray survey of pre-main sequence objects in Taurus. Even though there is a soft X-ray excess created in the accretion shock region of CTTS (Günther et al. 2007, and references therein), it does not significantly increase the X-ray production in most young stars (Telleschi et al. 2007). Similarly, Kastner et al. (1997) showed that CTTS and WTTS in the 10 Myr TW Hya Association have similar X-ray luminosities. Moreover, the X-ray luminosity does not decrease significantly over the first 100 Myr for low mass stars (Briceno et al. 1997; Kastner et al. 1997), so the CTTS and WTTS/DD in our sample should have comparable X-ray luminosities.
The EUV radiation field, including emission from approximately 100 to 1000 Å, is also responsible for the ionization of heavy atoms, contributing to the population of free electrons available to excite H2. The EUV is difficult to investigate because the radiation is strongly absorbed by interstellar hydrogen. Alexander et al. (2005) find that the EUV flux level does not change in the first ∼10 Myr, from studies of the He II λ1640/C IV λ1550 ratio. If we assume that the FUV level is an indicator of the strength of the EUV emission, we come to similar conclusions. Figure 3 shows one CTTS and one DD that have the same FUV luminosity, so one would expect a strong enough EUV radiation field in both sources to create the free electrons needed to excite H2 if it were present. However, the excess emission at 1600 Å is clearly seen in the CTTS (FP Tau) and absent in the DD (MML 36).
Since the high energy radiation fields in CTTS and WTTS/DD are comparable in strength, the most likely explanation for the lack of H2 emission in the WTTS/DD is that there is essentially no gas left in their inner disks. Given the close relationship between the H2 feature strength and L_acc shown in Figure 4, our results suggest that the H2 gas dissipates on timescales consistent with the cessation of accretion; once the gas in the inner disk is dissipated, there is no material left to accrete.
We use the observations to make a rough estimate of the column density of H2 being collisionally excited. We assume that the H2 emission arises in an optically thin region of the disk with area A and thickness z. The emitted luminosity per unit volume is E_λ = hν σ_λ v χ_e n_H2², where hν is the energy of the emitted photon, σ_λ the H2 cross section, v the impacting electron velocity, n_e the electron number density, χ_e the electron fraction, and n_H2 the number density of H2. The expected flux at 1600 Å due to electron impact excitation is then
    F_1600 = hν σ_1600 v χ_e Σ² R² / (16 m_H² z d²),    (1)
where Σ is the surface density of H2 excited by electron impacts, m_H the mass of hydrogen, R the radius of the emitting region, and d the distance. In Ingleby et al. (2009) we find that the electron excitation model that provides the best fit to the 1600 Å feature in our sample of CTTS with STIS spectra is characterized by a temperature T ∼ 5000 K and an electron energy of ∼12 eV. For these values, σ_1600 = 10⁻²⁰ cm² Å⁻¹ (Abgrall et al. 1997). According to the thermal models of Meijerink et al. (2008, hereafter M08), gas reaches T ∼ 5000 K within 1 AU of the star, which is consistent with the upper limit on the extent of the H2 emitting region set by the STIS resolution in the case of TW Hya (Herczeg et al. 2002). We further assume that most electrons are capable of exciting H2 and adopt χ_e = 5 × 10⁻³, as well as R ∼ 1 AU and z ∼ 0.1 AU (M08). Using these numbers, and assuming that all the flux at 1600 Å is due to electron impact excitation, we obtain the estimates of Σ in Table 1, which for CTTS are consistent with predicted formation in the uppermost levels of the disk (M08).
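To make the arithmetic concrete, the sketch below inverts Eq. (1) for Σ in cgs units, using the parameter values quoted above (χ_e = 5 × 10⁻³, σ_1600 = 10⁻²⁰ cm² Å⁻¹, 12 eV electrons, R ∼ 1 AU, z ∼ 0.1 AU). The input flux and the 140 pc distance are made-up illustrative numbers, not measurements from this work; with them the result lands in the same range (tens of 10⁻⁶ g cm⁻²) as the Table 1 entries.

```python
import math

# Physical constants (cgs)
H = 6.626e-27        # Planck constant, erg s
C = 2.998e10         # speed of light, cm s^-1
M_H = 1.6726e-24     # hydrogen mass, g
M_E = 9.109e-28      # electron mass, g
ERG_PER_EV = 1.602e-12
AU = 1.496e13        # cm
PC = 3.086e18        # cm

def sigma_h2(f_1600, d_cm, r_cm=1.0 * AU, z_cm=0.1 * AU,
             chi_e=5e-3, sigma_cross=1e-20, e_ev=12.0):
    """Invert Eq. (1) for the surface density of electron-excited H2.

    f_1600 : flux density at 1600 A (erg s^-1 cm^-2 A^-1)
    d_cm   : distance to the source (cm)
    """
    h_nu = H * C / 1600e-8                           # photon energy, erg
    v = math.sqrt(2.0 * e_ev * ERG_PER_EV / M_E)     # electron speed, cm s^-1
    num = 16.0 * M_H**2 * z_cm * d_cm**2 * f_1600
    den = h_nu * sigma_cross * v * chi_e * r_cm**2
    return math.sqrt(num / den)                      # g cm^-2

# Example: an assumed F_1600 = 1e-14 erg s^-1 cm^-2 A^-1 at d = 140 pc
sigma = sigma_h2(1e-14, 140.0 * PC)
```

Because F_1600 scales as Σ², the inferred column density grows only as the square root of the measured flux.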
A similar estimate can be made for the column density of electron-excited H2 in the WTTS/DD in our sample, which have some dust remaining at larger radii but no detected IR H2 lines (Carpenter et al. 2008, 2009; Hillenbrand et al. 2008; Verrier & Evans 2008; Chen et al. 2005; Low et al. 2005). These estimates are given in Table 1. We used the flux of MML 36, the WTTS/DD with the highest flux at 1600 Å in our sample, to estimate the mass of H2 inside ∼1 AU; we found that there must be less than 10⁻⁷ Earth masses, or 10⁻⁷% of the MMSN, lower than the 0.01% of the MMSN estimated by Pascucci et al. (2006). This has important implications for the formation of terrestrial planets, especially if gas is needed to circularize orbits (Agnor & Ward 2002). Kominami & Ida (2002) theorize that at least 0.01% of the MMSN must be present during the formation of proto-planets, which form around 10 Myr according to simulations by Kenyon & Bromley (2006). Our column density estimates indicate that the amount of H2 gas present in WTTS/DD with ages of 10-100 Myr is too small to circularize the orbits of the terrestrial planets being formed at that time. Our results support the conclusion of Pascucci et al. (2006) that there must be an additional source responsible for damping eccentricities, one possibility being dynamical friction with remaining planetesimals. Another possibility is that other gas species remain after the H2 has been depleted; for example, C and O have been detected around the 10 Myr old debris disk β Pic (Fernández et al. 2006; Roberge et al. 2006). C and O do not feel strong radiation pressure due to the low FUV flux in WTTS and may therefore remain after the H2 has been depleted (Roberge et al. 2006).
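The mass limit above follows from simple geometry, M ≈ Σ π R². A minimal sketch, using the ∼3 × 10⁻⁶ g cm⁻² column density limit from the abstract and a uniform disk out to 1 AU; this is a cruder assumption than the MML 36 estimate in the text, so it agrees with the quoted < 10⁻⁷ Earth-mass limit only to order of magnitude:

```python
import math

AU = 1.496e13        # cm
M_EARTH = 5.972e27   # g

def gas_mass_inside(sigma_g_cm2, r_au=1.0):
    """Mass (g) of gas with uniform surface density sigma inside radius r_au."""
    r_cm = r_au * AU
    return sigma_g_cm2 * math.pi * r_cm**2

# Upper limit from the abstract: Sigma < ~3e-6 g cm^-2
m_g = gas_mass_inside(3e-6)
m_earth = m_g / M_EARTH   # a few 1e-7 Earth masses for this uniform disk
```

The factor-of-a-few gap between this uniform-disk number and the text's limit simply reflects the roughness of the geometry assumed here.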
Acknowledgments
We thank Al Glassgold for discussions clarifying the ionization mechanisms in the disk. This work was supported by NASA through grants GO-08317, GO-09081, GO-9374, GO-10810 and GO-10840 from the Space Telescope Science Institute. This material is also based upon work supported by the National Science Foundation under Grant No. 0707777 to EAB.

Fig. 1.- Sample of ACS CTTS spectra. Spectra have been corrected for reddening using the values of A_V listed in Table 1 and have been scaled vertically for clarity. The bottom spectrum (dash-dotted line) in each panel is the STIS TW Hya spectrum smoothed to the resolution of the ACS spectra for comparison and offset by -1.2. The vertical line at 1600 Å marks the center of the feature used to identify the H2. Left panel, from top to bottom (offsets in parentheses): DP Tau (+2.7), DR Tau (+1.5), FM Tau (+0.5), FP Tau (+2.2) and GK Tau (+0.3). Middle panel, from top to bottom: HN Tau A (+1.9), HN Tau B (+3.0), IP Tau (+1.5), UZ Tau A (+1.2) and UZ Tau B (+0.45). The right panel shows ACS spectra of WTTS/DD, from top to bottom: HD 12039 (+3.4), HD 202917 (+2.5), HD 61005 (+2.0), HD 92945 (+1.7) and HD 98800 (+0.8).

Fig. 2.- Observed and convolved spectra for TW Hya. The bottom spectrum is the high resolution STIS FUV spectrum. The top spectrum is the TW Hya spectrum convolved to the ACS spectral resolution and offset by +1.0. The solid and dashed lines on the smoothed spectrum show the three subtracted continua. These three continua are also shown plotted on the high resolution spectrum and indicate that the lowest continuum may provide the best measure of the luminosity. The strong emission lines are labeled along with the H2 feature.

Fig. 3.- H2 measurements for ACS sources. The first three panels show ACS sources and the location of the subtracted continua, shown as the solid, dashed and dot-dashed lines. Also plotted are the He II and C IV emission lines as the thick solid line. The final panel compares accreting and non-accreting sources with the same luminosity. An excess in FP Tau is observed at 1600 Å, which is due to electron impact H2 emission, and also between the Si IV and C IV lines, which is likely due to blended electron impact and Lyα fluorescent lines.

Fig. 4.- Left: luminosity of the H2 feature vs. age. Filled circles represent WTTS and open circles represent CTTS. For the WTTS we show only an upper limit on the luminosity of the H2. Right: H2 luminosity vs. L_acc. The H2 luminosity is observed to increase with L_acc. Errors on L_acc are calculated using the scatter in the correlation with L_U presented in Gullbring et al. (1998).

Note to Table 1. - * Stellar properties for binaries are from White & Ghez (2001). UZ Tau A and B are themselves binaries; UZ Tau A is a spectroscopic binary and UZ Tau B is a binary system (White & Ghez 2001) but is unresolved by ACS/SBC. † CVSO 224 is a CTTS surrounded by a transitional disk (Espaillat et al. 2008) and has a very low Ṁ. The ACS/SBC spectrum of this target is noisy, and while we cannot confidently quantify the H2 emission, we do see the rise in the spectrum at 1600 Å which indicates its presence.
Table 1. Sources. Uncertainties on L_acc and the H2 feature are listed as +upper/-lower.

Object     | SpT | L (L_⊙) | A_V (mag) | Age (Myr) | L_acc (L_⊙)          | H2 feature (10⁻⁵ L_⊙) | Σ (10⁻⁶ g cm⁻²)

ACS CTTS:
AA Tau     | M0  | 1.1  | 1.4 | 1 | 0.13 +0.15/-0.03     | 79.9 +0/-67     | >49.3
CI Tau     | K6  | 1.3  | 2.1 | 1 | 0.47 +0.34/-0.11     | 3.3 +2.0/-0     | >9.9
DE Tau     | M1  | 1.2  | 1.1 | 1 | 0.16 +0.16/-0.05     | 2.9 +6.9/-1.4   | >36.3
DL Tau     | K7  | 1.0  | 1.6 | 1 | 0.32 +0.26/-0.08     | 3.3 +4.1/-2.1   | >22.5
DN Tau     | M0  | 1.2  | 0.8 | 1 | 0.04 +0.07/-0.01     | 0.49 +4.5/-0.13 | >18.4
DO Tau     | M0  | 1.4  | 2.4 | 1 | 0.29 +0.24/-0.08     | 46.1 +13/-8.6   | >84.6
DP Tau     | M0  | 0.2  | 0.5 | 1 | 0.01 +0.02/-0.003    | 4.2 +1.9/-1.4   | >17.6
DR Tau     | K7  | 1.7  | 1.0 | 1 | 1.03 +0.53/-0.22     | 14.1 +4.3/-3.7  | >43.2
FM Tau     | M0  | 0.5  | 1.9 | 1 | 0.30 +0.26/-0.07     | 16.0 +6.8/-13   | >61.7
FP Tau     | M3  | 0.4  | 0.1 | 1 | 0.001 +0.004/-0.0004 | 0.021 +0.10/-0.004 | >4.9
GK Tau     | M0  | 1.4  | 1.1 | 1 | 0.06 +0.08/-0.02     | 0.98 +1.7/-0.58 | >18.1
HN Tau A*  | K5  | 0.2  | 1.2 | 1 | 0.07 +0.10/-0.02     | 16.6 +9.2/-1.8  | >38.8
HN Tau B*  | M4  | 0.03 | 0.9 | 1 | -                    | 0.15 +0.53/-0.04 | >5.8
IP Tau     | M0  | 0.7  | 0.9 | 1 | 0.02 +0.05/-0.004    | 0.61 +2.8/-0.4  | >15.0
UZ Tau A*  | M1  | 0.3  | 0.5 | 1 | 0.02 +0.07/-0.02     | 0.80 +2.8/-0    | >14.3
UZ Tau B*  | M2  | 0.3  | 1.0 | 1 | 0.02 +0.07/-0.02     | 1.5 +2.9/-0.91  | >8.5
CVSO 206   | K6  | 0.2  | 0.2 | 9 | -                    | 1.2 +0.17/-0.51 | >13.9
CVSO 35    | K7  | 0.7  | 0.7 | 9 | 0.02 +0.01/-0.01     | 3.2 +2.3/-2.7   | >16.6
CVSO 224†  | M3  | 0.1  | 0.5 | 9 | -                    | -               | -
OB1a 1630  | M2  | 1.0  | 0.0 | 9 | -                    | 1.3 +1.3/-1.0   | >13.3

STIS CTTS:
BP Tau     | K7  | 1.3  | 1.0 | 1 | 0.23 +0.29/-0.20     | 14.1 +23/-7.4   | >41.6
DM Tau     | M1  | 0.3  | 0.6 | 1 | 0.08 +0.10/-0.07     | 15.4 +15/-5.5   | >39.5
GM Aur     | K3  | 1.2  | 1.1 | 1 | 0.18 +0.21/-0.16     | 19.7 +48/-4.9   | >48.7
LkCa 15    | K5  | 1.0  | 1.0 | 1 | 0.03 +0.06/-0.02     | 8.6 +6.8/-2.5   | >26.4
RY Tau     | G1  | 9.6  | 2.2 | 1 | 1.6 +2.4/-0.80       | 338.0 +400/-120 | >148.4
SU Aur     | G1  | 7.8  | 0.9 | 1 | 0.10 +0.20/-0.01     | 6.8 +14/-1.5    | >30.0
T Tau      | G6  | 7.8  | 1.8 | 1 | 0.90 +1.2/-0.60      | 104.5 +37/-18   | >103.9
CO Ori     | G0  | 22.3 | 2.0 | 1 | 1.7 +2.5/-0.90       | 303.5 +550/-110 | >149.0
EZ Ori     | G3  | 5.9  | 0.6 | 1 | 0.10 +0/-0           | 20.0 +8.5/-7.9  | >41.1
¹ Targets with high mass accretion rates, such as DL Tau and DR Tau, show significant veiling at J (Edwards et al. 2006), so the estimated extinction may be in error, although it is consistent with values from Taurus.
References

Abgrall, H., Roueff, E., Liu, X., & Shemansky, D. E. 1997, ApJ, 481, 557
Agnor, C. B., & Ward, W. R. 2002, ApJ, 567, 579
Alexander, R. D., Clarke, C. J., & Pringle, J. E. 2005, MNRAS, 358, 283
Bary, J. S., Weintraub, D. A., Shukla, S. J., Leisenring, J. M., & Kastner, J. H. 2008, ApJ, 678, 1088
Bergin, E., et al. 2004, ApJ, 614, L133
Bitner, M. A., et al. 2008, ApJ, 688, 1326
Bohlin, R. C. 2007, Instrument Science Report ACS 2007-06
Briceno, C., Hartmann, L. W., Stauffer, J. R., Gagne, M., Stern, R. A., & Caillault, J.-P. 1997, AJ, 113, 740
Briceño, C., Hartmann, L., Hernández, J., Calvet, N., Vivas, A. K., Furesz, G., & Szentgyorgyi, A. 2007, ApJ, 661, 1119
Calvet, N., Muzerolle, J., Briceño, C., Hernández, J., Hartmann, L., Saucedo, J. L., & Gordon, K. D. 2004, AJ, 128, 1294
Calvet, N., Briceño, C., Hernández, J., Hoyer, S., Hartmann, L., Sicilia-Aguilar, A., Megeath, S. T., & D'Alessio, P. 2005, AJ, 129, 935
Carmona, A., van den Ancker, M. E., Henning, T., Goto, M., Fedele, D., & Stecklum, B. 2007, A&A, 476, 853
Carpenter, J. M., et al. 2008, ApJS, 179, 423
Carpenter, J. M., et al. 2009, ApJS, 181, 197
Carr, J. S., & Najita, J. R. 2008, Science, 319, 1504
Chen, C. H., et al. 2005, ApJ, 634, 1372
Edwards, S., Fischer, W., Hillenbrand, L., & Kwan, J. 2006, ApJ, 646, 319
Espaillat, C., et al. 2008, ApJ, 689, L145
Fernández, R., Brandeker, A., & Wu, Y. 2006, ApJ, 643, 509
Furlan, E., et al. 2006, ApJS, 165, 568
Gullbring, E., Hartmann, L., Briceno, C., & Calvet, N. 1998, ApJ, 492, 323
Günther, H. M., Schmitt, J. H. M. M., Robrade, J., & Liefke, C. 2007, A&A, 466, 1111
Hartmann, L. 2003, ApJ, 585, 398
Herbst, W., Herbst, D. K., Grossman, E. J., & Weinstein, D. 1994, AJ, 108, 1906
Herczeg, G. J., Linsky, J. L., Valenti, J. A., Johns-Krull, C. M., & Wood, B. E. 2002, ApJ, 572, 310
Herczeg, G. J., Wood, B. E., Linsky, J. L., Valenti, J. A., & Johns-Krull, C. M. 2004, ApJ, 607, 369
Herczeg, G. J., Linsky, J. L., Walter, F. M., Gahm, G. F., & Johns-Krull, C. M. 2006, ApJS, 165, 256
Hernández, J., et al. 2007, ApJ, 671, 1784
Hillenbrand, L. A., et al. 2008, ApJ, 677, 630
Ingleby, L., et al. 2009, in prep.
Kastner, J. H., Zuckerman, B., Weintraub, D. A., & Forveille, T. 1997, Science, 277, 67
Kenyon, S. J., & Bromley, B. C. 2006, AJ, 131, 1837
Kominami, J., & Ida, S. 2002, Icarus, 157, 43
Larsen, S. S. 2006, Instrument Science Report ACS 2006-02
Low, F. J., Smith, P. S., Werner, M., Chen, C., Krause, V., Jura, M., & Hines, D. C. 2005, ApJ, 631, 1170
Najita, J. R., Crockett, N., & Carr, J. S. 2008, ApJ, 687, 1168
Meijerink, R., Glassgold, A. E., & Najita, J. R. 2008, ApJ, 676, 518
Pascucci, I., et al. 2006, ApJ, 651, 1177
Pascucci, I., Apai, D., Luhman, K., Henning, T., Bouwman, J., Meyer, M. R., Lahuis, F., & Natta, A. 2009, ApJ, 696, 143
Roberge, A., Feldman, P. D., Weinberger, A. J., Deleuil, M., & Bouret, J.-C. 2006, Nature, 441, 724
Salyk, C., Pontoppidan, K. M., Blake, G. A., Lahuis, F., van Dishoeck, E. F., & Evans, N. J., II 2008, ApJ, 676, L49
Spitzer, L. J., & Tomasko, M. G. 1968, ApJ, 152, 971
Telleschi, A., Güdel, M., Briggs, K. R., Audard, M., & Palla, F. 2007, A&A, 468, 425
Verrier, P. E., & Evans, N. W. 2008, MNRAS, 390, 1377
. R A Webb, B Zuckerman, I Platais, J Patience, R J White, M J Schwartz, C Mccarthy, ApJ. 51263Webb, R. A., Zuckerman, B., Platais, I., Patience, J., White, R. J., Schwartz, M. J., & McCarthy, C. 1999, ApJ, 512, L63
. R J White, A M Ghez, ApJ. 556265White, R. J., & Ghez, A. M. 2001, ApJ, 556, 265
. D C B Whittet, S S Shenoy, G C Clayton, K D Gordon, ApJ. 60229Whittet, D. C. B., Shenoy, S. S., Clayton, G. C., & Gordon, K. D. 2004, ApJ, 602, 29
| [] |
[
"The Fast Evolving, Tremendous and Blue Superoutburst in ASASSN-21au Reveals a Dichotomy in the Outbursts of Long-period AM CVns",
"The Fast Evolving, Tremendous and Blue Superoutburst in ASASSN-21au Reveals a Dichotomy in the Outbursts of Long-period AM CVns"
] | [
"L E Rivera Sandoval \nDepartment of Physics\nUniversity of Alberta\nCCIS 4-183T6G 2E1EdmontonABCanada\n",
"C O Heinke \nDepartment of Physics\nUniversity of Alberta\nCCIS 4-183T6G 2E1EdmontonABCanada\n",
"J M Hameury \nUMR 7550\nObservatoire astronomique de Strasbourg\nUniversité de Strasbourg\nCNRS\n67000StrasbourgFrance\n",
"Y Cavecchi \nUniversidad Nacional Autónoma de México\nInstituto de Astronomía\nCiudad UniversitariaCDMX 04510Mexico\n",
"T Vanmunster \nCBA Belgium Observatory & CBA Extremadura Observatory\nWalhostraat 1aB-3401LandenBelgium\n",
"T Tordai \nPolaris Observatory\nHungarian Astronomical Association\nLaborc u. 2/c1037BudapestHungary\n",
"F Romanov "
] | [
"Department of Physics\nUniversity of Alberta\nCCIS 4-183T6G 2E1EdmontonABCanada",
"Department of Physics\nUniversity of Alberta\nCCIS 4-183T6G 2E1EdmontonABCanada",
"UMR 7550\nObservatoire astronomique de Strasbourg\nUniversité de Strasbourg\nCNRS\n67000StrasbourgFrance",
"Universidad Nacional Autónoma de México\nInstituto de Astronomía\nCiudad UniversitariaCDMX 04510Mexico",
"CBA Belgium Observatory & CBA Extremadura Observatory\nWalhostraat 1aB-3401LandenBelgium",
"Polaris Observatory\nHungarian Astronomical Association\nLaborc u. 2/c1037BudapestHungary"
] | [] | ASASSN-21au is an ultracompact accreting white dwarf binary (AM CVn) with a period of ∼ 58 min. Using multiwavelength observations of the system, we discovered a dichotomy in the behavior of outbursts in AM CVns. The binary showed an initial brightness increase which lasted for at least 82 days, followed by an additional increase which lasted 2 weeks. Afterwards ASASSN-21au went into superoutburst with a total duration of 19 days, showing an amplitude with respect to quiescence of ∼ 7.5 mags in g, with a precursor and an echo outburst. A correlation between Xrays, UV and optical was identified for the first time in an AM CVn during this stage. The color evolution of ASASSN-21au indicates that during the superoutburst the dominant component was the accretion disk. The short duration, large amplitude and color evolution of the superoutburst agree with expectations from the disk instability model. These characteristics are opposite to the ones observed in SDSS J080710+485259 and SDSS J113732+405458, which have periods of ∼ 53 min and ∼ 60 min, respectively. The initially slow brightness increase in the light curve of ASASSN-21au and the behavior after the superoutburst favors a scenario in which changes in the mass-transfer rate led to disk instabilities, while the outburst mechanism of SDSS J080710+485259 and SDSS J113732+405458 has been attributed to enhanced mass-transfer alone. Further observations are needed to understand the origin of this dichotomy. | 10.3847/1538-4357/ac3fb7 | [
"https://export.arxiv.org/pdf/2107.11006v2.pdf"
] | 236,318,528 | 2107.11006 | 5adb21001c8c3e9de27496c5d0cac5cc119c1557 |
The Fast Evolving, Tremendous and Blue Superoutburst in ASASSN-21au Reveals a Dichotomy in the Outbursts of Long-period AM CVns
January 31, 2023
L E Rivera Sandoval
Department of Physics
University of Alberta
CCIS 4-183T6G 2E1EdmontonABCanada
C O Heinke
Department of Physics
University of Alberta
CCIS 4-183T6G 2E1EdmontonABCanada
J M Hameury
UMR 7550
Observatoire astronomique de Strasbourg
Université de Strasbourg
CNRS
67000StrasbourgFrance
Y Cavecchi
Universidad Nacional Autónoma de México
Instituto de Astronomía
Ciudad UniversitariaCDMX 04510Mexico
T Vanmunster
CBA Belgium Observatory & CBA Extremadura Observatory
Walhostraat 1aB-3401LandenBelgium
T Tordai
Polaris Observatory
Hungarian Astronomical Association
Laborc u. 2/c1037BudapestHungary
F Romanov
The Fast Evolving, Tremendous and Blue Superoutburst in ASASSN-21au Reveals a Dichotomy in the Outbursts of Long-period AM CVns
January 31, 2023. Draft version. Typeset using LaTeX twocolumn style in AASTeX631. Keywords: Stars: individual (ASASSN-21au); AM Canum Venaticorum stars; White dwarf stars; Compact binary stars; Stellar accretion disks; Dwarf novae; Hydrogen deficient stars; Interacting binary stars; Cataclysmic variable stars; Transient detection; Stellar accretion
ASASSN-21au is an ultracompact accreting white dwarf binary (AM CVn) with a period of ∼ 58 min. Using multiwavelength observations of the system, we discovered a dichotomy in the behavior of outbursts in AM CVns. The binary showed an initial brightness increase which lasted for at least 82 days, followed by an additional increase which lasted 2 weeks. Afterwards ASASSN-21au went into superoutburst with a total duration of 19 days, showing an amplitude with respect to quiescence of ∼ 7.5 mags in g, with a precursor and an echo outburst. A correlation between Xrays, UV and optical was identified for the first time in an AM CVn during this stage. The color evolution of ASASSN-21au indicates that during the superoutburst the dominant component was the accretion disk. The short duration, large amplitude and color evolution of the superoutburst agree with expectations from the disk instability model. These characteristics are opposite to the ones observed in SDSS J080710+485259 and SDSS J113732+405458, which have periods of ∼ 53 min and ∼ 60 min, respectively. The initially slow brightness increase in the light curve of ASASSN-21au and the behavior after the superoutburst favors a scenario in which changes in the mass-transfer rate led to disk instabilities, while the outburst mechanism of SDSS J080710+485259 and SDSS J113732+405458 has been attributed to enhanced mass-transfer alone. Further observations are needed to understand the origin of this dichotomy.
INTRODUCTION
AM CVns are a relatively poorly studied class of accreting white dwarf (WD) binaries. Having typical orbital periods (P orb) between 5 and 70 min (e.g. Green et al. 2020; Ramsay et al. 2018), AM CVns include the accreting binaries with the shortest orbits known so far, which makes them important sources of low frequency gravitational waves that can be detected by LISA (Tutukov et al. 1985; Nelson et al. 1986). Depending on which evolutionary channel is followed, the chemical composition of the donor will change (e.g. Nelemans et al. 2010). The orbital periods of CVs are also much longer, with typical values of 85 min to 10 hrs.
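As a quick illustrative check (our own, not from the paper): the dominant gravitational-wave frequency of a circular binary is twice the orbital frequency, f_GW = 2/P_orb, which for AM CVn periods of 5–70 min indeed lands inside the usual LISA band of roughly 10^−4 to 10^−1 Hz.

```python
def gw_frequency_hz(p_orb_min):
    """Dominant GW frequency of a circular binary: f = 2 / P_orb."""
    return 2.0 / (p_orb_min * 60.0)

# An ASASSN-21au-like period of ~58 min
f = gw_frequency_hz(58.0)
print(f"{f:.2e} Hz")  # 5.75e-04 Hz, inside the LISA band
```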
As occurs in the case of CVs, the evolution of AM CVns after reaching the period minimum (e.g. Iben & Tutukov 1991;Yungelson 2008) dictates that the mass-transfer rate from the companion reduces as the binary evolves. The degenerate nature of the donor star causes the binary's orbit to become wider with time (e.g. Tutukov et al. 1985). These binaries also become progressively fainter with time, which makes them difficult to detect when in quiescence. However, for the AM CVns that show a transient behavior (like DNe in regular CVs), their brightness increases by several magnitudes during DN-like outbursts, making them detectable to all sky optical surveys (e.g. Levitan et al. 2013;van Roestel et al. 2021a). Most AM CVns known so far have been identified in this way.
According to the standard disk instability model (see review by Hameury 2020, and references therein), the presence of outbursts in AM CVns depends on the value of the mass-transfer rate. Observationally a dividing line occurs at periods around P orb = 20 min. For shorter periods, the high mass-transfer rate ensures the disks are hot and stable, hence no outbursts are expected, while at longer periods the disk builds up to intermittent outbursts. However, unlike the case of CVs, for AM CVns the mass-transfer rate at relatively long orbital periods is expected to be so low that the accretion disk becomes cold and stable again, so outbursts are not expected to be observed either. The lower orbital period limit for that phenomenon is not really clear, either theoretically (due to the several free parameters in the models) or observationally.
Further complications have been recently added by observations of long-period AM CVns in outbursts (Rivera Sandoval et al. 2020), which suggest that enhanced mass-transfer also plays a very important role as an outburst triggering mechanism.
In this paper we present multiwavelength observations of a recently discovered AM CVn star, known as ASASSN-21au, which was identified spectroscopically during outburst and which had a variable superhump period (a period in which the disk precesses, close to the orbital period; Patterson et al. 2005) between 57−60 min (Isogai et al. 2021, vsnet-alert 25369). The observations of ASASSN-21au discussed here demonstrate that there is a dichotomy in the behavior of outbursts in long-period AM CVns.
OBSERVATIONS AND DATA ANALYSIS
Swift X-ray and UV Observations
We have obtained X-ray and UV data of ASASSN-21au during its first superoutburst and post-outburst cooling phase with the XRT and UVOT instruments on board the Neil Gehrels Swift Observatory (Swift). Data were taken from 2021-02-17 to 2021-06-11 with a total exposure time of 12 ks distributed in 14 observations. ASASSN-21au was detected in X-rays in 9 of the observations, with upper limits on the remaining measurements, mainly due to the short exposure times. The XRT data analysis was performed using XSPEC (Arnaud 1996, Version 12.11). Count rates were obtained in the X-ray energy range 0.3-10 keV for each of the individual observations. For the spectral analysis we used an absorbed power-law model (tbabs*powerlaw), fixing N H to the Galactic value of 2.87 × 10 20 cm −2 toward the position of the source. In order to obtain meaningful values for the spectral fits we divided the observations into 2 groups: those obtained during the plateau phase of the superoutburst and those taken during the post-outburst cooling phase where the binary was detected. For each group we binned the data considering at least 5 counts per bin and used Cash statistics (Cash 1979) due to the low number of counts acquired.
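For intuition, the Cash-type fit statistic used for such low-count Poisson data (in the form commonly implemented in XSPEC) is C = 2 Σ_i (m_i − n_i + n_i ln(n_i/m_i)), where n_i are observed counts and m_i model-predicted counts, with the n_i = 0 term reducing to 2 m_i. A minimal sketch of the statistic itself, purely illustrative and not a substitute for the actual XSPEC fit:

```python
import math

def cash_stat(observed, model):
    """C = 2 * sum(m - n + n*ln(n/m)); bins with n = 0 contribute 2*m."""
    c = 0.0
    for n, m in zip(observed, model):
        if n > 0:
            c += m - n + n * math.log(n / m)
        else:
            c += m
    return 2.0 * c

# A perfect model gives C = 0; small deviations give small C.
print(round(cash_stat([5, 4], [5, 5]), 4))  # 0.2149
```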
For each individual Swift observation, UVOT data were obtained in the 6 available filters: UVW2, UVM2, UVW1, U, B, V, which in total cover the wavelength range from 1600 to 8000 Å. Data were analysed with the suggested tools in the Swift threads. For the background subtraction we used a circular region of 5″ radius around the target, and a circular region with a radius of 25″ for the background determination. Photometry was calibrated to the AB system.
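The background-subtracted source rate from such apertures follows the usual area scaling, net = C_src − C_bkg × (A_src/A_bkg). A minimal sketch using the region sizes quoted above; the count values and exposure time here are made up for illustration only:

```python
def net_count_rate(c_src, c_bkg, r_src, r_bkg, t_exp):
    """Background-subtracted count rate from circular source/background apertures."""
    area_ratio = (r_src / r_bkg) ** 2   # circular apertures: areas scale as r^2
    return (c_src - c_bkg * area_ratio) / t_exp

# 5" source and 25" background regions, hypothetical counts in a 1 ks exposure
rate = net_count_rate(c_src=100.0, c_bkg=250.0, r_src=5.0, r_bkg=25.0, t_exp=1000.0)
print(round(rate, 4))  # 0.09 counts/s
```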
Optical and Additional UV Observations
Data in the clear and V filters were obtained by members of the AAVSO (Kafka, S. 2021) and VSNET with the first measurement starting on 2021-02-12 and the last one taken on 2021-04-15. Several epochs with cadences of 1 min per observation were obtained. We have aligned the data from the different observers into a common frame by using the first dataset of observations as reference, which is consistent with the results from the Swift V filter.
For comparison purposes we have also made use of public observations from the ATLAS project in the cyan and orange bands (Tonry et al. 2018) and from ZTF (Masci et al. 2019) in the g and r bands. Only data with good quality flags have been used for the analysis. Additionally, in the case of the ZTF data, we excluded points with airmass > 1.8 since the differential chromatic refraction that produces color biases dominates above that value (Masci et al. 2019). We also used data from ASAS-SN (Shappee et al. 2014; Kochanek et al. 2017) in the g band, data from Pan-STARRS (Chambers et al. 2016) in the g, r, i and z bands, from GALEX (Bianchi et al. 2011) in the NUV, and from Gaia EDR3 (Gaia Collaboration 2020) and the Gaia alerts (http://gsaweb.ast.cam.ac.uk/alerts/alert/Gaia21cbs/) in the G band. Photometry from these surveys is here reported in the AB system.
We also checked for indications of previous outburst activity from the binary in the ATLAS, ASAS-SN, ZTF, CSS (Drake et al. 2009) and DASCH (Grindlay et al. 2009) databases, but no outbursts of similar amplitude to the one reported here were previously recorded.
RESULTS
The Multiwavelength Light Curve of ASASSN-21au
The multiwavelength light curves of ASASSN-21au are shown in Figure 1 and Figure 2. The first brightness increase of ASASSN-21au was detected by Gaia on 2020-10-30 (JD * = 252.7, where JD * = JD − 2458900), when the binary was 1 mag above its original G quiescence level (marked as increased level 1). On 2020-11-24 (JD * = 278) ZTF g detected the binary at the same increased level 1 (corresponding to ∼ 0.8 mags above its ZTF g quiescent level). Constraints due to the Sun prevented a good coverage of the rise phase with ZTF, but based on the current ZTF and Gaia data, a limit on the duration of the rise and the increased level 1 (τ L1) can be determined as 67 days < τ L1 < 172 days. On JD * = 333 the binary reached a second increased brightness level (marked with an orange line in Figures 1 and 2), which was 1.5 mags above the original Gaia G quiescent value. The system took approximately 2 additional weeks to reach that level and remained in such a state for another 2 weeks. On 2021-02-03 (JD * = 349) ASASSN-21au showed a sudden brightness increase due to the precursor of the superoutburst (denoted with a blue vertical line in Figures 1 and 2).
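The JD* = JD − 2458900 convention used above is easy to reproduce; a minimal sketch using Python's proleptic-Gregorian day ordinal (JD at 00:00 UT of a date is toordinal() + 1721424.5):

```python
from datetime import date

def jd_star(year, month, day):
    """JD* = JD - 2458900, with JD evaluated at 00:00 UT of the given date."""
    jd = date(year, month, day).toordinal() + 1721424.5
    return jd - 2458900.0

# The precursor date 2021-02-03 corresponds to JD* = 349 in the text (348.5 at 00:00 UT);
# the Gaia detection date 2020-10-30 gives 252.5 (the text quotes 252.7 for the exact epoch).
print(jd_star(2021, 2, 3))   # 348.5
print(jd_star(2020, 10, 30)) # 252.5
```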
The first ASAS-SN solid detections of ASASSN-21au were three points at g ∼ 13 mags on 2021-02-05 (JD* = 351), followed by several other points 2-3 days later at g ∼ 16.4 − 17.4 mags, which indicate the fading of the precursor. Afterwards the rise to the plateau phase started. The maximum of the precursor was similar to the maximum of the plateau (∼ 7.5 mags in g above its ZTF quiescent level). While precursors are common in DNe of the type SU UMa and they have been recently confirmed in multiple AM CVns (Pichardo Marcano et al. 2021; Duffy et al. 2021), they are relatively difficult to detect using ground-based images due to their short duration. But continuous coverage of AM CVns' superoutbursts with space telescopes now suggests that precursors are a relatively common characteristic (Pichardo Marcano et al. 2021). In the case of the precursor of ASASSN-21au, it is also remarkable that its luminosity drops by 3 magnitudes right before the superoutburst begins, much more than in any other system (Pichardo Marcano et al. 2021; Duffy et al. 2021).
In the bluest UV band (UVW2), there was a decrease of 3 orders of magnitude in flux between the plateau of the superoutburst and the post-outburst cooling phase around JD * = 373.3, going from 1.60 × 10 −13 erg s −1 cm −2 to 3.36 × 10 −16 erg s −1 cm −2, while in the V band the flux decreased by only 2 orders of magnitude, showing that the dominant emission components during superoutbursts were the accretion disk, the boundary layer and the accreting WD.
Figure 3. Color evolution for ASASSN-21au during quiescence and the increased levels 1 and 2 as shown in Figure 1, using ZTF data. The binary is redder during the period of increased brightness.

The superoutburst lasted for 15 days, excluding the precursor, and ended on 2021-02-23 (JD * = 369), as indicated by the data presented in Figure 2. This superoutburst was followed by an echo outburst, occurring more than a week after the superoutburst ended; these are frequent in SU UMa systems that show only
superoutbursts, and have been interpreted by Hameury & Lasota (2021) as the manifestation of an increased mass-transfer rate, possibly as a result of the secondary irradiation by the heated accreting WD at the end of a superoutburst. This interpretation is strengthened by the fact that in ASASSN-21au, the system has not returned to full quiescence at the end of the superoutburst.
We now compare the behavior between the X-rays, optical and UV observations obtained by UVOT. As shown in other AM CVns, during the post-outburst cooling phase of ASASSN-21au there is X-ray emission indicating the presence of residual accretion. Interestingly, during the post-outburst cooling phase the spectrum was substantially redder than during the peak of the superoutburst. A possible explanation for such a behavior is that the inner parts of the disk may have evaporated in that cooling phase. Figure 2 also shows a correlation between the optical, UV and X-rays. During the optical and UV maxima, the X-ray emission also peaked, and during the optical and UV decay to the post-outburst cooling phase the X-ray flux decreased by a factor of 3. This behavior is different from that observed in the AM CVns KL Dra (Ramsay et al. 2012b) and SDSS J141118+481257 (henceforth SDSS 1411; Rivera Sandoval & Maccarone 2019), where an anticorrelation between the optical/UV and the X-rays during superoutburst was observed. The same anticorrelation has been observed in many CVs (e.g. Wheatley et al. 2003; Byckling et al. 2009; Fertig et al. 2011), where it has been explained as due to changes in the optical depth of the boundary layer.
According to the standard theory (Patterson & Raymond 1985a,b), increases in the X-ray flux with accretion rate are expected so long as the boundary layer remains optically thin; once the boundary layer becomes optically thick, the majority of the flux shifts to the far-UV and the X-ray flux drops. Three nearby CVs, SS Cyg, U Gem and GW Lib, have shown a clear correlation between X-ray and optical/UV flux. SS Cyg showed this during the early, and late, parts of its outburst, with an anticorrelation at the highest optical fluxes (and, presumably, mass-transfer rates; Wheatley et al. 2003). Fertig et al. (2011) summarize observations of 6 well-studied DN outbursts, in which 4 showed suppression of the X-rays with increased mass-transfer at some point. The peak X-ray luminosities of these CVs (in this paradigm, reflecting the critical accretion rate for the boundary layer to turn optically thick) mostly lie between L X = 1 × 10 32 and 1 × 10 33 erg/s, though VW Hyi showed X-ray suppression from a peak of only 6 × 10 30 erg/s. However, the critical accretion rate for the boundary layer turning optically thick may be different for He vs. H accretion. The peak L X values for the two AM CVn systems with well-studied outbursts where an anticorrelation has been observed are 5 × 10 30 erg/s (KL Dra, Ramsay et al. 2012a), using the Gaia distance of 948 pc (Bailer-Jones et al. 2021), and 2 × 10 31 erg/s (SDSS 1411, Rivera Sandoval & Maccarone 2019). These are substantially smaller than the typical values for H-accreting CVs, suggesting that the critical accretion rate is systematically lower for He accretion. Assuming that ASASSN-21au reaches a peak L X below the peak L X of the other two AM CVn systems (to account for its X-ray/optical correlation), its inferred distance should be below 400 pc. However, since we do not understand the origin of the large scatter in peak L X values we cannot be too confident of this estimate.
It is also possible that the X-ray emitting region is still optically thin if the accretion disk does not extend to the accreting WD surface, but is, even at large accretion rates, truncated by, e.g., a strong magnetic field.
We do not see clear changes in the X-ray spectral index of ASASSN-21au between the plateau and the post-outburst cooling phase, despite the changes in flux. Photon indices of Γ = 2.2 ± 0.28 and Γ = 2.08 ± 0.56 were determined for each phase, consistent within the errors.
ASASSN-21au and the Dichotomy in the Outburst Behavior of AM CVns
Outburst Duration: Disk Instabilities versus Enhanced Mass-Transfer
The light curves of ASASSN-21au reveal several interesting characteristics. First, the initial scaled brightness increases up to day JD * = 349 are slow, of low amplitude and with a so-called red color evolution (Figure 3 and §3.4), reaching ZTF g − r > 0.5. Those are the same characteristics observed in the outburst light curves of the AM CVns SDSS J080710+485259 (henceforth SDSS 0807) and SDSS J113732+405458 (henceforth SDSS 1137; Rivera Sandoval et al. 2020; Sunny Wong et al. 2021), which have P orb close to 53 and 60 min, respectively.
Second, on JD * = 349 ASASSN-21au developed a short duration superoutburst, which contrasts with the longer than a year duration outbursts of SDSS 0807 and SDSS 1137, suggesting that there is more than one mechanism at work in the outbursts of long-period AM CVns. On one hand a superoutburst lasting for 19 days (including the precursor), with a short rise time, and plateau phase followed by an abrupt cut-off, as observed in ASASSN-21au, is fully consistent with expectations from the disk instability model (DIM; Cannizzo & Nelemans 2015; Cannizzo & Ramsay 2019). On the other hand, outbursts lasting for more than a year cannot be explained by the DIM, for the simple reason that the viscous time (t visc; Frank et al. 2002) is far too short. The DIM predicts that, during the outburst plateau, the disk is in a quasi-steady state and evolves exponentially on this timescale:

t visc ∼ 3 × 10^5 α^(−4/5) Ṁ16^(−3/10) M1^(1/4) r10^(5/4) s,
where α ∼ 0.2 is the Shakura-Sunyaev parameter in the hot state, Ṁ16 is the mass accretion rate in units of 10^16 g s^−1, M1 the primary mass in solar units and r10 is the outer disk radius in units of 10^10 cm, with r10 ∼ 1−1.5 for an orbital period close to 1 hr. t visc is much smaller than the observed duration of the whole outburst. An increase of the mass-transfer due to irradiation of the secondary does increase the outburst duration, but under the DIM with enhanced mass-transfer, for the systems with outbursts longer than a year one would need fine tuning to change the outburst duration by almost two orders of magnitude. There are indications from ATLAS data that SDSS 1137's outburst lasted ∼ 650 days. If the duration of SDSS 1137's outburst was in fact longer than previously reported (Sunny Wong et al. 2021), the difference in duration with ASASSN-21au would be even more remarkable.

Figure 4. Orbital period (P orb) vs outburst duration (τ dur) for AM CVns. The duration of the superoutburst (SO) of ASASSN-21au is indicated by a blue star. A red star indicates the duration of the superoutburst including the initial brightenings; the upper error for that red symbol has been overestimated, as it was determined from the last quiescent ZTF point to the first Gaia detection with increased brightness. The long-period AM CVn SDSS 0807, which had an outburst with a duration longer than a year, is marked in yellow. In the case of SDSS 1137 we have plotted the duration as reported in Rivera Sandoval et al., but there are indications that the event lasted around 650 days. Red points are other AM CVns as reported by Cannizzo & Ramsay (2019) and references therein. The cyan points are upper limits. The orange line indicates the relation between P orb and τ dur obtained from the DIM by Cannizzo & Ramsay (2019) and the blue line is the empirical relation derived by Levitan et al. (2015). The duration of the SO in ASASSN-21au is consistent with expectations from the DIM, but it is several tens of times shorter than the one of SDSS 0807 and SDSS 1137, clearly indicating the existence of a dichotomy. The red dashed line indicates the period of the AM CVn with the longest P orb detected so far (Green et al. 2020). To date, no outbursts have been detected from that binary.

Another interesting difference among these long-period AM CVn systems is that the U emission in SDSS 1137 seems to have been suppressed, as there was no evidence of an increase in that band, while in the case of ASASSN-21au there is a clear increase in U during superoutburst (Figure 2).
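As a rough numerical check of the viscous-time argument (our own illustration, using the standard Shakura-Sunyaev hot-state viscous time t visc ∼ 3 × 10^5 α^(−4/5) Ṁ16^(−3/10) M1^(1/4) r10^(5/4) s from Frank, King & Raine 2002; treat the prefactor as approximate):

```python
SECONDS_PER_DAY = 86400.0

def t_visc_days(alpha, mdot16, m1, r10):
    """Hot-state viscous timescale in days, Shakura-Sunyaev scaling
    t_visc ~ 3e5 * alpha^-4/5 * Mdot16^-3/10 * M1^1/4 * r10^5/4 s."""
    t_sec = 3e5 * alpha**-0.8 * mdot16**-0.3 * m1**0.25 * r10**1.25
    return t_sec / SECONDS_PER_DAY

# alpha = 0.2, Mdot ~ 1.2e17 g/s (Mdot16 = 12), M1 = 1 Msun, r10 = 1.3
t = t_visc_days(0.2, 12.0, 1.0, 1.3)
print(round(t, 1))  # ≈8.3 days: days, not the >1 yr seen in SDSS 0807/1137
```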
In Figure 4 we compare the duration of the superoutburst of ASASSN-21au to the ones of other AM CVns. We have also plotted the duration of the superoutburst of ASASSN-21au when we include the period of increased brightness previous to the superoutburst. This figure shows that there is a clear dichotomy in the outburst duration of long-period AM CVns. Based on the behavior of ASASSN-21au up to day JD * = 349, and the similarities with SDSS 0807 and SDSS 1137, it seems that the mechanism that caused the brightness increases in ASASSN-21au favors an enhanced mass-transfer scenario as well. However, the subsequent development of a DIM outburst in ASASSN-21au contrasts with the outburst evolution of SDSS 0807 and SDSS 1137. We stress that inclination effects do not explain such a dichotomy, given that eclipsing AM CVns (ZTFJ0407-00, YZ LMi, PTF1J1919+4815, Gaia14aae) show blue superoutbursts of short duration and large amplitude (e.g. van Roestel et al. 2021b; Duffy et al. 2021; Campbell et al. 2015).
Outburst Decay Times
For comparison purposes of the decay times, we have fitted the time evolution of the magnitude with a straight line and determined its slope n for ASASSN-21au, SDSS 0807 and SDSS 1137 during the plateau's decline and post-outburst cooling phase using the ZTF data for consistency (Table 1). However, for the case of ASASSN-21au we also used the AAVSO clear data, which had a better coverage in time for the decay from the plateau (367 < JD * < 369) and early times of the cooling phase (369 < JD * < 376.5, n = (21.22 ± 0.84) × 10 −3 mag/d). For this binary we also fitted the decay from the echo outburst (377.8 < JD * < 381) using the ASAS-SN g observations and obtained n = (1179.80 ± 46.05) × 10 −3 mag/d. The latter value is consistent with that obtained when fitting the decay part of the superoutburst using the AAVSO data in the clear filter (Table 1). Note that in the case of SDSS 0807 there is substantial scatter in both ZTF bands (r and g) and thus the fit is affected by that. From this analysis one sees that the decay of ASASSN-21au is ∼ 3 orders of magnitude steeper than in the other AM CVn systems during the fast decay from the plateau in both ZTF bands, stressing its different behavior compared to the other systems.
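A slope fit of this kind is an ordinary least-squares line fit; a minimal sketch on synthetic data (the magnitudes below are made up for illustration, not the actual light curve):

```python
import numpy as np

def decay_slope(jd, mag):
    """Least-squares slope n (mag/day) of a linear model mag = n*t + b."""
    n, b = np.polyfit(jd, mag, 1)
    return n

# Synthetic decay: a source fading at 1.0 mag/day over 4 days
t = np.linspace(0.0, 4.0, 20)
m = 16.0 + 1.0 * t
print(round(decay_slope(t, m), 3))  # 1.0 mag/day
```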
Mass-transfer Rate and Luminosity Estimates
Since no distance is known to ASASSN-21au, we are unable to determine the accretion rate to compare it directly to expectations from the DIM. However, rough calculations can be made in order to estimate that parameter. Considering the model by Bildsten et al. (2006) for an AM CVn of period 58 min and a massive (∼ 1 M☉) WD, which seems to be appropriate (considering the case of SDSS 1137 with P orb ∼ 60 min and recent estimates of masses in AM CVns; van Roestel et al. 2021b), the absolute V mag of ASASSN-21au should be ∼ 13. As there are no V measurements during quiescence, we instead use the g value from Pan-STARRS (20.96 ± 0.06 mags, very similar to Gaia G). This allows us to estimate a distance of 400 pc (consistent with our estimate in §3.1, based on peak L X). By scaling the luminosities to this value one obtains the relations L X,outburst = 1.4 × 10^31 d_400^2 erg/s during outburst and L X,post = 7.3 × 10^30 d_400^2 erg/s during the early post-outburst cooling phase, where d_400 = d/400 pc. On the other hand, the NUV (UVW2, UVM2, UVW1) luminosity is L NUV,outburst = 1.6 × 10^34 d_400^2 erg/s and 7.4 × 10^31 d_400^2 erg/s, respectively, in each state. Assuming that the NUV luminosity during outburst is due to accretion on the WD with R = 0.008 R☉ (similar to Sirius B, and appropriate for a 1 M☉ WD), we obtain Ṁ NUV,outburst = 6.7 × 10^16 R_008 d_400^2 M_1^−1 g/s, where R_008 = R/0.008 R☉ and M_1 is the mass of the accreting WD in solar masses. The value of Ṁ would be even larger if we consider the bolometric luminosity. In fact, if the extreme-UV luminosity is larger than the NUV one (as expected), Ṁ would be larger than Ṁ+_crit(R out) = 1.2 × 10^17 g/s (considering the expressions given in A2 of Kotko et al. 2012, and assuming R_out,10 = 1.3), as would occur if the full disk is in the hot state. An upper limit on the accretion rate in full quiescence can be determined from the X-ray luminosity during the post-outburst cooling phase 6 .
We use the X-ray flux for the upper limit, since in the cooling phase most of the UV flux presumably originates from the hot WD, with some contribution from the illuminated inner accretion disk, while the accretion flow is thought to radiate optically thin bremsstrahlung, placing most of the output radiative flux into the X-rays. We find, using the same parameters, that Ṁ_qui < 3 × 10^13 R_in,008 d_400^2 M_1^−1 g/s, which is of the same order as Ṁ−_crit(R in) if R in > 10^9 d_400^1.2 M_1^−0.08 cm, again as expected if the DIM holds 7 .
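The estimates above rest on two standard relations: the distance modulus m − M = 5 log10(d/10 pc) and the accretion luminosity L_acc ≈ G M Ṁ / R. A minimal sketch in CGS units (our own illustration; the inputs are the values quoted in the text, with the absolute magnitude of ∼ 13 assumed from the Bildsten et al. model):

```python
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33   # solar mass, g
RSUN = 6.957e10   # solar radius, cm

def distance_pc(m_app, m_abs):
    """Distance from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10.0 ** (1.0 + (m_app - m_abs) / 5.0)

def mdot_from_lum(lum_erg_s, m_wd_sun, r_wd_sun):
    """Accretion rate (g/s) from L_acc = G * M * Mdot / R."""
    return lum_erg_s * (r_wd_sun * RSUN) / (G * m_wd_sun * MSUN)

# g = 20.96 in quiescence with assumed M ~ 13
print(round(distance_pc(20.96, 13.0)))             # 391 pc, i.e. the ~400 pc adopted above
# NUV luminosity 1.6e34 erg/s accreted onto a 1 Msun, 0.008 Rsun WD
print(f"{mdot_from_lum(1.6e34, 1.0, 0.008):.1e}")  # ~6.7e16 g/s, as quoted in the text
```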
The Spectral Energy Distribution of ASASSN-21au in Quiescence
6. In full quiescence the X-ray luminosity must be lower than the observed one during the post-outburst cooling phase.
7. Note that, contrary to R out, the value of R in is not fixed by the orbital parameters.

Figure 5. Spectral energy distribution of ASASSN-21au during quiescence using archival data. The blue line denotes the best blackbody fit, with a temperature of 14300 ± 1600 K. The z-band point is at 7σ above the fit, possibly due to an excess. Additional IR data are required to confirm this.

Using archival data obtained during full quiescence we have carried out a spectral energy distribution analysis, fitting a blackbody to constrain the temperature of ASASSN-21au. We used the GALEX, Pan-STARRS and Gaia DR3 data, together with E(g−r) = 0.05 (from the 3D dust map of Green et al. 2019), and obtained T = 14300 ± 1600 K, which is roughly consistent with
expectations from Bildsten et al. (2006) for an AM CVn of P orb ∼ 58 min and M ∼ 1 M☉ WD. It is possible that there is an IR excess, as the Pan-STARRS z data point is 7σ above the blackbody fit line (Figure 5). However, given that we have only one IR data point, a fit with 2 blackbodies (accreting WD and disk/donor) is very poorly constrained, leading to practically meaningless results. New, deeper IR observations are needed to confirm this excess. As shown in Bildsten et al. (2006), the contribution of the accretion disk in quiescence should be minimal in optical, and the hot WD should dominate the optical light in quiescence. Note that a comparison of the WD temperature based on the spectral energy distribution during the post-outburst cooling phase is not really possible considering the small amount of data obtained in this period. Also, during that phase a contribution from the accretion disk is still expected, which would be difficult to disentangle from the WD emission, especially considering the limited data points available.
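A blackbody temperature fit of this kind can be sketched with a simple grid search over T, solving for the best flux normalization analytically at each step. The code below is illustrative only: it fits synthetic photometry generated at 14300 K, whereas the actual analysis used the dereddened GALEX/Pan-STARRS/Gaia fluxes.

```python
import numpy as np

H, C, K = 6.626e-27, 2.998e10, 1.381e-16  # Planck, light speed, Boltzmann (CGS)

def planck_nu(nu, T):
    """Planck spectrum B_nu(T) in CGS."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def fit_blackbody_temp(nu, flux, t_grid):
    """Grid-search T; per T, the least-squares normalization is analytic."""
    best_t, best_chi2 = None, np.inf
    for t in t_grid:
        b = planck_nu(nu, t)
        s = np.dot(flux, b) / np.dot(b, b)   # best-fit scale for this T
        chi2 = np.sum((flux - s * b) ** 2)
        if chi2 < best_chi2:
            best_t, best_chi2 = t, chi2
    return best_t

# Synthetic NUV-to-z photometry drawn from a 14300 K blackbody
wav_angstrom = np.array([2300.0, 4800.0, 6200.0, 7500.0, 8700.0])
nu = C / (wav_angstrom * 1e-8)             # Angstrom -> cm -> Hz
flux = 1e-23 * planck_nu(nu, 14300.0)
print(fit_blackbody_temp(nu, flux, np.arange(5000.0, 30000.0, 100.0)))  # 14300.0
```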
The Varying Color Evolution of ASASSN-21au: the Binary Reveals a Blue Superoutburst
Recently, it has been shown that outbursts in long-period AM CVns have a color evolution that is not compatible with the one expected from the DIM (Rivera Sandoval et al. 2020). In SDSS 0807 and SDSS 1137 the AM CVns become redder and brighter as they reach the maximum of the outburst. That color evolution, together with the duration and amplitude of the outbursts, suggests that the outbursts are confined to the outer parts of the disk, perhaps due to enhanced mass transfer from the donor.

Table 1. Values of the slope n when fitting a linear model (nτ + b) to the magnitude evolution during the decay from the plateau (n-Plateau's decay, mag/d) and the post-outburst cooling phase (n-Cooling phase, mag/d) of ASASSN-21au, SDSS 0807 and SDSS 1137, with the corresponding dates (JD) and slopes in the ZTF g, ZTF r and Clear bands (×10^-3).
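The slopes collected in Table 1 come from fitting the linear model nτ + b to the magnitudes of each decay segment. A minimal version of that fit, run on a synthetic decay rather than the actual ZTF photometry (slope, intercept and noise level are made up), is:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical decay segment (days, magnitudes); NOT the actual ZTF data.
t = np.linspace(0.0, 10.0, 25)        # days since the start of the decay
n_true, b_true = 0.30, 16.5           # slope (mag/day) and intercept, invented
mag = n_true * t + b_true + rng.normal(0.0, 0.02, t.size)

# Least-squares fit of the linear model n*tau + b used for Table 1
n_fit, b_fit = np.polyfit(t, mag, 1)
print(f"decay slope n = {n_fit:.3f} mag/day")
```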
In Figure 3 we present the color evolution of ASASSN-21au using the data from ZTF g and r before JD* = 349 (when the precursor started). We see that the binary becomes slightly redder compared to quiescence as it increases its brightness, reaching ZTF g − r > 0.5. Note that the measurements have large error bars because the binary is faint and close to the limiting magnitude of ZTF. However, despite the large scatter, the color behavior of ASASSN-21au is fully compatible with the color evolution observed in SDSS 0807 and SDSS 1137, suggesting that this part of the light curve of ASASSN-21au can also be explained under an enhanced mass-transfer scenario. Note that during quiescence the binary is blue because the WD dominates the emission.
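Because the ZTF g and r epochs are not simultaneous, a g − r curve like the one in Figure 3 requires interpolating one band onto the epochs of the other. A toy version with synthetic light curves (the cadence, magnitudes and the reddening trend are all invented) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-simultaneous ZTF-like epochs over 30 days (values invented)
t_g = np.sort(rng.uniform(0.0, 30.0, 40))
t_r = np.sort(rng.uniform(0.0, 30.0, 40))
g = 20.5 - 0.02 * t_g + rng.normal(0.0, 0.05, t_g.size)  # slowly brightening
r = 20.3 - 0.05 * t_r + rng.normal(0.0, 0.05, t_r.size)  # brightening faster

# Interpolate r onto the g epochs and form the g-r color there
r_at_g = np.interp(t_g, t_r, r)
color = g - r_at_g
print(f"g-r range: {color.min():.2f} to {color.max():.2f}")
```

In this toy setup r brightens faster than g, so the system reddens (g − r grows) as it brightens, qualitatively like the precursor phase described above; `np.interp` requires the r epochs to be sorted in time.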
On the other hand, in Figure 6 we present the color evolution of ASASSN-21au using the simultaneous UV and optical data obtained by Swift during the superoutburst and the post-outburst cooling phase. We chose the bluest (UVW2) and reddest (V) UVOT filters to trace the behavior. Figure 6 shows that ASASSN-21au follows a pattern opposite to that observed in SDSS 0807 and SDSS 1137. In the case of ASASSN-21au, the binary becomes bluer and brighter as it gets closer to the peak, and as it cools down it becomes redder. Note that in Figure 6 there is a second turn towards bluer colors at a V magnitude of 17.1, which is due to a measurement obtained at the end of the echo outburst, when the accretion disk and accreting WD were still hot. After that, the binary continues becoming redder and fainter. The last blue turn in that diagram corresponds to the accreting WD becoming the dominant emitting source, being much brighter in the UV than in the optical, where only upper limits were obtained in V. A ZTF color evolution of ASASSN-21au during outburst in g and r is also displayed in Figure 7 of the Appendix.
The color evolution of ASASSN-21au during superoutburst is similar to that of the binary systems SDSS 1411 (Rivera Sandoval et al.), PTF 0719+4858 and SDSS J1043+5632 (Pichardo Marcano et al. 2021). It is also compatible with expectations from the DIM. This strengthens our conclusion that the main mechanism responsible for the superoutbursts (after JD* = 349) in ASASSN-21au is a disk instability.

Figure 6. Color evolution for the AM CVn ASASSN-21au with an orbital period ∼58 min during superoutburst, as indicated by the UVOT data in the NUV band UVW2 and the optical filter V. The dominant sources of emission are marked with numbers in the plot: 1, 3 = disk + WD + boundary layer; 2, 4 = disk; 5 = WD. The point at UVW2−V = 1.38 is bluer than the preceding ones because it was taken at the end of the echo outburst; afterwards the binary reddens again. The general color evolution of ASASSN-21au is compatible with expectations from the DIM. However, the pattern is opposite to the one followed by SDSS 0807 and SDSS 1137 (Rivera Sandoval et al.), indicating that the mechanism that drives the outbursts in these long-period systems is different.
DISCUSSION
From the theoretical point of view, two parameters play a very important role in the DIM: the mass-transfer rate and the truncation radius of the inner disk. The disk is unstable if the mass-transfer rate Ṁ is in the range Ṁ^-_crit(R_in) < Ṁ < Ṁ^+_crit(R_out), where Ṁ^-_crit and Ṁ^+_crit are the critical mass-transfer rates for being on the cold and hot branches, estimated at the inner and outer disk radii, respectively. In the case of CVs, the actual mass-transfer rate is observed to vary significantly between systems at a given orbital period, meaning that it deviates from its secular mean, which, in principle, depends mainly on the orbital parameters (e.g. Knigge et al. 2011; Dubus et al. 2018). It is then not difficult to believe that something similar occurs in some AM CVns.
Furthermore, disk truncation in CVs has also been demonstrated to exist. For example, observational estimates of the inner disk radius during quiescence have been reported to be larger than the WD radius (Balman & Revnivtsev 2012). Disk truncation occurs if the WD's magnetic field is strong enough to alter the accretion flow, or if the accretion flow becomes optically thin and geometrically thick close to the WD. There are no strong observational constraints on this, but there is some indication that the latter mechanism might be preferred in some systems (see e.g. Hameury & Lasota 2021). Disk truncation is important for cold and stable accretion disks in X-ray binaries and CVs (e.g. Dubus et al. 2018). In AM CVns, however, it is not a necessary condition for the binary to have a cold and stable disk, because extremely small mass-transfer rates are expected in long-period AM CVns due to their evolution (e.g. Bildsten et al. 2006; Deloye et al. 2007), which would be sufficient to meet the condition Ṁ^-_crit(R_in) > Ṁ and so to have cold and stable disks (e.g. Kotko et al. 2012). It is also important to point out that these extremely low mass-transfer rates could make the accretion disks of AM CVns somewhat different from those in CVs. For example, the optically thick assumption might not be valid. Furthermore, an additional parameter beyond those mentioned above could be at play in AM CVns: the metallicity, which ultimately translates into the kind of donor and would also affect the optical thickness of the disk.
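The stability logic above reduces to comparing Ṁ against the two critical rates evaluated at the inner and outer radii. The helper below encodes only that comparison; the power-law coefficients and exponents are placeholders, not the published helium-disk values (for those see, e.g., Kotko et al. 2012).

```python
def mdot_crit_minus(r_in_cm, m1_msun, coeff=1e13, r_exp=2.6, m_exp=-0.8):
    """Critical rate (g/s) to remain on the cold branch at the inner radius.
    Power-law form only; coefficient and exponents are illustrative placeholders."""
    return coeff * (r_in_cm / 1e9) ** r_exp * m1_msun ** m_exp

def mdot_crit_plus(r_out_cm, m1_msun, coeff=1e16, r_exp=2.6, m_exp=-0.8):
    """Critical rate (g/s) to remain on the hot branch at the outer radius."""
    return coeff * (r_out_cm / 1e10) ** r_exp * m1_msun ** m_exp

def disk_state(mdot, r_in_cm, r_out_cm, m1_msun):
    """Classify the disk: stable cold, unstable (outbursting), or stable hot."""
    if mdot < mdot_crit_minus(r_in_cm, m1_msun):
        return "stable cold"
    if mdot > mdot_crit_plus(r_out_cm, m1_msun):
        return "stable hot"
    return "unstable"

# Very low quiescent transfer rate: below the cold-branch limit at R_in
print(disk_state(3e12, r_in_cm=1e9, r_out_cm=2e10, m1_msun=1.0))
# Higher rate after a mass-transfer enhancement: between the two limits
print(disk_state(5e14, r_in_cm=1e9, r_out_cm=2e10, m1_msun=1.0))
```

With the physical critical-rate prescriptions substituted in, this kind of check is how one tests whether an enhanced Ṁ pushes a previously cold, stable disk into the unstable regime.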
The characteristics of the light curve of ASASSN-21au and its similarities to and differences from the light curves of SDSS 0807 and SDSS 1137 are compatible with the following scenario: enhanced mass transfer could be the mechanism that triggers the brightness increases in the three AM CVns, explaining their low amplitude, slow brightness increase and red color evolution. The red color is in agreement with the outer parts of the disks being the dominant sources of emission, perhaps due to a larger contribution from the hot spot.
It is important to note that the long outbursts observed in SDSS 0807 and SDSS 1137 appear to be rather unique in the zoo of compact binaries. (i) Even though long-term brightness increases before a DIM outburst have been reported in other binaries, they are quite different. For example, CVs of the Z Cam type can show long standstills between (short) normal outbursts due to an increase in the accretion rate, but the behavior and timescales (e.g. much faster rise and decay) of a Z Cam star are different from the ones observed in SDSS 1137 and SDSS 0807 (see e.g. Oppenheimer et al. 1998, for a long-term light curve of Z Cam). Low-mass X-ray binaries have also shown years-long standstills between outbursts; however, that mechanism is analogous to the one in Z Cam stars (e.g. Swift J1753.5-0127, Shaw et al. 2019). (ii) SDSS 1137 and SDSS 0807 show clearly distinguishable outbursts instead of erratic transitions between high and low states as observed in other CVs. These include systems with high mass-transfer rates, such as the VY Scl stars and some intermediate polars like FO Aqr (see e.g. Kennedy et al. 2020) and AM Her (which shows low states). These contrasts should, however, not come as a surprise, given the large difference between the secondaries of AM CVns and CVs. It is then clear that the outbursts in the long-period AM CVns SDSS 1137 and SDSS 0807 do not have an origin in disk instabilities and so far have no counterpart among CVs or X-ray binaries.
If there are mass-transfer enhancements in ASASSN-21au, they could even occur at different rates, considering the two different brightness levels observed prior to the superoutburst. At a later stage, mass enhancement could have produced disk instabilities in ASASSN-21au, either due to a further increase in the mass-transfer rate or because the accretion disk accumulated enough mass at the same increased mass-transfer rate before the superoutburst. This would explain the fast, large-amplitude and blue superoutburst observed in ASASSN-21au, which is consistent with expectations from the DIM. The presence of mass enhancement would also explain the increased magnitude after the superoutburst in ASASSN-21au and the presence of an echo outburst at later times (see §3.1). The observations of ASASSN-21au during superoutburst clearly contrast with what has been observed in other AM CVn systems with similarly long orbital periods, thus revealing a dichotomy in their behavior.
It is unclear why instabilities were not observed in SDSS 0807^8 and SDSS 1137, but under an enhanced mass-transfer scenario the mass-transfer rate could have been low enough that the condition Ṁ^-_crit(R_in) > Ṁ was never violated. As mentioned before, brightness increases previous to DIM outbursts have been observed in other accreting WDs (e.g. SS Cyg) and X-ray binaries (e.g. Bernardini et al. 2016; Goodwin et al. 2020; Kimura et al. 2021). However, we point out that one should be cautious when comparing such phenomena to the event presented in this manuscript, given the differences in mass-transfer rates as well as in the secondary's structure and composition. For example, SS Cyg has a mass-transfer rate in quiescence which is ∼4 orders of magnitude larger (Miller-Jones et al. 2013) than that expected for ASASSN-21au, SDSS 0807 or SDSS 1137; for CVs, this places SS Cyg well within the expected unstable regime of the DIM, producing results consistent with that model. On the other hand, AM CVn systems with long orbital periods must have a mass-transfer rate higher than the secular mean for a disk instability to be triggered, and, as discussed previously, when the value of the mass-transfer rate is extremely low it can lead to different disk behavior.
CONCLUSION
In this paper we have shown observational evidence that a dichotomy exists in the outburst properties of long-period AM CVns. Short outbursts are most likely caused by a disk instability, whereas outbursts lasting for a year or more cannot be explained by the DIM and are probably due to a long-lasting mass-transfer event. The whole behavior of ASASSN-21au is consistent with a scenario in which initial mass enhancement produced disk instabilities at later times, contrary to what has been observed in other long-period AM CVns. Additional studies of similar systems are needed to fully confirm this scenario. At present, the origin and characteristics of mass-transfer events remain a mystery; they have also been postulated in systems such as CVs close to the period minimum that exhibit rare superoutbursts (the WZ Sge systems). The causes of such phenomena need to be further investigated through high-quality and multiwavelength observations of AM CVns both during outbursts and in quiescence.
APPENDIX
A. THE COLOR EVOLUTION OF ASASSN-21AU DURING SUPEROUTBURST

Figure 7 shows the color evolution of ASASSN-21au using the ZTF data.

Figure 7. Color evolution based on ZTF data of ASASSN-21au with an orbital period ∼58 min, up to the end of the plateau phase of the superoutburst. The binary is initially blue due to the WD dominating the emission during quiescence. During the increased brightness levels 1 and 2 (see §3.1) the binary increases its brightness and becomes redder. When the precursor of the superoutburst starts, ASASSN-21au becomes blue. The same behavior is observed when the rise of the plateau occurs. The binary becomes redder during the decay of the precursor and the decay of the plateau.
Figure 1. Optical light curve of ASASSN-21au during quiescence and the first indications of brightness increases. The blue vertical line denotes the beginning of the precursor of the superoutburst.
Figure 2. Multiwavelength light curve of ASASSN-21au during superoutburst. The beginning was determined using ZTF data and is marked with a blue solid line. The blue dotted line indicates the end of the superoutburst and the beginning of the post-outburst cooling phase. The dashed red line marks the end of the precursor, and the yellow band denotes the period where an echo outburst occurred, between JD* 378 and 381. A): XRT light curve in the energy range 0.3−10 keV. X-ray detections during the post-outburst cooling phase are signatures of residual accretion. B): Multiband UVOT observations, simultaneous with the X-ray data. The GALEX NUV quiescent level is indicated with a black dashed line. C): Fast (1 min) cadence AAVSO data. Periodic oscillations due to superhumps were detected in the superoutburst. D): Light curves with data from the ASAS-SN and ATLAS surveys. E): Gaia and ZTF light curves. The beginning of the precursor was detected on JD* = 349. The colors of the horizontal lines have the same meaning as those in Figure 1.
Figure 3. Color evolution for ASASSN-21au during quiescence and the increased levels 1 and 2 shown in Figure 1, using ZTF data. The binary is redder during the period of increased brightness.
We have chosen to use an average superhump period of 58.4 min, as obtained from a Lomb-Scargle (Lomb 1976; Scargle 1982) analysis.
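A Lomb-Scargle period search like the one behind that superhump period can be sketched with `scipy.signal.lombscargle` on an irregularly sampled synthetic signal; the sampling, amplitude and noise below are invented, and only the injected 58.4 min period matches the text.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

p_true = 58.4 / (24.0 * 60.0)             # superhump-like period, in days
t = np.sort(rng.uniform(0.0, 2.0, 500))   # irregular sampling over 2 days
y = 0.1 * np.sin(2.0 * np.pi * t / p_true) + rng.normal(0.0, 0.02, t.size)

# Trial periods from 30 to 90 minutes, converted to angular frequencies
periods = np.linspace(30.0, 90.0, 4000) / (24.0 * 60.0)  # minutes -> days
omega = 2.0 * np.pi / periods

power = lombscargle(t, y - y.mean(), omega)
best = periods[np.argmax(power)] * 24.0 * 60.0
print(f"best period = {best:.1f} min")
```

The frequency resolution of the periodogram is set by the time baseline, so with only two days of data the recovered period carries an uncertainty of roughly a minute.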
https://www.swift.ac.uk/analysis/uvot/
Note that the plateau phase does not mean that there is no evolution, but that the emission evolves much more slowly than during the rise and decline phases.
^5 T. Kupfer and J. van Roestel, private communication.
Given the slightly blue color evolution of SDSS 0807 near the peak of the outburst, Rivera Sandoval et al. discussed the possibility that disk instabilities could have developed during that phase. However, although that scenario cannot be discarded, it is not very likely considering the observed values of g-r in that phase and the timescales expected from the DIM as discussed in this paper, which were not observed in the case of SDSS 0807.
ACKNOWLEDGEMENTS

We thank the referee for her/his comments which improved the manuscript. LERS was supported by an Avadh Bhatia Fellowship at the University of Alberta and a Gruber-IAU Fellowship during the realization of this work. CH is supported by NSERC Discovery Grant RGPIN-2016-04602. The authors acknowledge the Swift team for scheduling the target of opportunity requests. The authors also acknowledge the ATLAS, ASAS-SN, Pan-STARRS and VizieR data bases for providing part of the data presented in this manuscript. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. We acknowledge the Photometric Science Alerts Team (http://gsaweb.ast.cam.ac.uk/alerts). Based on observations obtained with the Samuel Oschin 48-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grant No. AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
REFERENCES

Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17
Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Demleitner, M., & Andrae, R. 2021, AJ, 161, 147, doi: 10.3847/1538-3881/abd806
Balman, Ş., & Revnivtsev, M. 2012, A&A, 546, A112, doi: 10.1051/0004-6361/201219469
Bernardini, F., Russell, D. M., Shaw, A. W., et al. 2016, ApJL, 818, L5, doi: 10.3847/2041-8205/818/1/L5
Bianchi, L., Herald, J., Efremova, B., et al. 2011, Ap&SS, 335, 161, doi: 10.1007/s10509-010-0581-x
Bildsten, L., Townsley, D. M., Deloye, C. J., & Nelemans, G. 2006, ApJ, 640, 466, doi: 10.1086/500080
Breivik, K., Kremer, K., Bueno, M., et al. 2018, ApJL, 854, L1, doi: 10.3847/2041-8213/aaaa23
Byckling, K., Osborne, J. P., Wheatley, P. J., et al. 2009, MNRAS, 399, 1576, doi: 10.1111/j.1365-2966.2009.15378.x
Campbell, H. C., Marsh, T. R., Fraser, M., et al. 2015, MNRAS, 452, 1060, doi: 10.1093/mnras/stv1224
Cannizzo, J. K., & Nelemans, G. 2015, ApJ, 803, 19, doi: 10.1088/0004-637X/803/1/19
Cannizzo, J. K., & Ramsay, G. 2019, AJ, 157, 130, doi: 10.3847/1538-3881/ab04ac
Cash, W. 1979, ApJ, 228, 939, doi: 10.1086/156922
Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, arXiv e-prints, arXiv:1612.05560
Deloye, C. J., Taam, R. E., Winisdoerffer, C., & Chabrier, G. 2007, MNRAS, 381, 525, doi: 10.1111/j.1365-2966.2007.12262.x
Drake, A. J., Djorgovski, S. G., Mahabal, A., et al. 2009, ApJ, 696, 870, doi: 10.1088/0004-637X/696/1/870
Dubus, G., Otulakowska-Hypka, M., & Lasota, J.-P. 2018, A&A, 617, A26, doi: 10.1051/0004-6361/201833372
Duffy, C., Ramsay, G., Steeghs, D., et al. 2021, MNRAS, doi: 10.1093/mnras/stab389
Fertig, D., Mukai, K., Nelson, T., & Cannizzo, J. K. 2011, PASP, 123, 1054, doi: 10.1086/661949
Frank, J., King, A., & Raine, D. J. 2002, Accretion Power in Astrophysics: Third Edition (Cambridge: Cambridge University Press)
Gaia Collaboration. 2020, VizieR Online Data Catalog, I/350
Goodwin, A. J., Russell, D. M., Galloway, D. K., et al. 2020, MNRAS, 498, 3429, doi: 10.1093/mnras/staa2588
Green, G. M., Schlafly, E., Zucker, C., Speagle, J. S., & Finkbeiner, D. 2019, ApJ, 887, 93, doi: 10.3847/1538-4357/ab5362
Green, M. J., Marsh, T. R., Carter, P. J., et al. 2020, MNRAS, 496, 1243, doi: 10.1093/mnras/staa1509
Grindlay, J., Tang, S., Simcoe, R., et al. 2009, in Astronomical Society of the Pacific Conference Series, Vol. 410, Preserving Astronomy's Photographic Legacy: Current State and the Future of North American Astronomical Plates, ed. W. Osborn & L. Robbins, 101
Hameury, J. M. 2020, Advances in Space Research, 66, 1004, doi: 10.1016/j.asr.2019.10.022
Hameury, J. M., Knigge, C., Lasota, J. P., Hambsch, F. J., & James, R. 2020, A&A, 636, A1, doi: 10.1051/0004-6361/202037631
Hameury, J. M., & Lasota, J. P. 2021, A&A, 650, A114, doi: 10.1051/0004-6361/202140548
Iben, I., Jr., & Tutukov, A. V. 1991, ApJ, 370, 615, doi: 10.1086/169848
Isogai, K., Tampo, Y., Kojiguchi, N., et al. 2021, The Astronomer's Telegram, 14390, 1
Kafka, S. 2021, Observations from the AAVSO International Database, https://www.aavso.org
Kennedy, M. R., Garnavich, P. M., Littlefield, C., et al. 2020, MNRAS, 495, 4445, doi: 10.1093/mnras/staa1415
Kimura, M., Yamada, S., Nakaniwa, N., et al. 2021, PASJ, 73, 1262, doi: 10.1093/pasj/psab073
Knigge, C., Baraffe, I., & Patterson, J. 2011, ApJS, 194, 28, doi: 10.1088/0067-0049/194/2/28
Kochanek, C. S., Shappee, B. J., Stanek, K. Z., et al. 2017, PASP, 129, 104502, doi: 10.1088/1538-3873/aa80d9
Kotko, I., Lasota, J. P., Dubus, G., & Hameury, J. M. 2012, A&A, 544, A13, doi: 10.1051/0004-6361/201219156
Levitan, D., Groot, P. J., Prince, T. A., et al. 2015, MNRAS, 446, 391, doi: 10.1093/mnras/stu2105
Levitan, D., Kupfer, T., Groot, P. J., et al. 2013, MNRAS, 430, 996, doi: 10.1093/mnras/sts672
Liu, W.-M., Jiang, L., & Chen, W.-C. 2021, ApJ, 910, 22, doi: 10.3847/1538-4357/abdfc7
Lomb, N. R. 1976, Ap&SS, 39, 447, doi: 10.1007/BF00648343
Masci, F. J., Laher, R. R., Rusholme, B., et al. 2019, PASP, 131, 018003, doi: 10.1088/1538-3873/aae8ac
Miller-Jones, J. C. A., Sivakoff, G. R., Knigge, C., et al. 2013, Science, 340, 950, doi: 10.1126/science.1237145
Nelemans, G., Yungelson, L. R., van der Sluys, M. V., & Tout, C. A. 2010, MNRAS, 401, 1347, doi: 10.1111/j.1365-2966.2009.15731.x
Nelson, L. A., Rappaport, S. A., & Joss, P. C. 1986, ApJ, 311, 226, doi: 10.1086/164767
Oppenheimer, B. D., Kenyon, S. J., & Mattei, J. A. 1998, AJ, 115, 1175, doi: 10.1086/300250
Patterson, J., & Raymond, J. C. 1985a, ApJ, 292, 535, doi: 10.1086/163187
Patterson, J., & Raymond, J. C. 1985b, ApJ, 292, 550, doi: 10.1086/163188
Patterson, J., Kemp, J., Harvey, D. A., et al. 2005, PASP, 117, 1204, doi: 10.1086/447771
Pichardo Marcano, M., Rivera Sandoval, L. E., Maccarone, T. J., & Scaringi, S. 2021, MNRAS, 508, 3275, doi: 10.1093/mnras/stab2685
Pringle, J. E., & Webbink, R. F. 1975, MNRAS, 172, 493, doi: 10.1093/mnras/172.3.493
Ramsay, G., Barclay, T., Steeghs, D., et al. 2012a, MNRAS, 419, 2836, doi: 10.1111/j.1365-2966.2011.19924.x
Ramsay, G., Wheatley, P. J., Rosen, S., Barclay, T., & Steeghs, D. 2012b, MNRAS, 425, 1486, doi: 10.1111/j.1365-2966.2012.21660.x
Ramsay, G., Green, M. J., Marsh, T. R., et al. 2018, A&A, 620, A141, doi: 10.1051/0004-6361/201834261
Rivera Sandoval, L. E., & Maccarone, T. J. 2019, MNRAS, 483, L6, doi: 10.1093/mnrasl/sly205
Rivera Sandoval, L. E., Maccarone, T. J., Cavecchi, Y., Britt, C., & Zurek, D. 2021, MNRAS, 505, 215, doi: 10.1093/mnras/stab1246
Rivera Sandoval, L. E., Maccarone, T. J., & Pichardo Marcano, M. 2020, ApJL, 900, L37, doi: 10.3847/2041-8213/abb130
Savonije, G. J., de Kool, M., & van den Heuvel, E. P. J. 1986, A&A, 155, 51
Scargle, J. D. 1982, ApJ, 263, 835, doi: 10.1086/160554
Shappee, B. J., Prieto, J. L., Grupe, D., et al. 2014, ApJ, 788, 48, doi: 10.1088/0004-637X/788/1/48
Shaw, A. W., Tetarenko, B. E., Dubus, G., et al. 2019, MNRAS, 482, 1840, doi: 10.1093/mnras/sty2787
Wong, T. L. S., van Roestel, J., Kupfer, T., & Bildsten, L. 2021, Research Notes of the American Astronomical Society, 5, 3, doi: 10.3847/2515-5172/abd7fa
Tonry, J. L., Denneau, L., Heinze, A. N., et al. 2018, PASP, 130, 064505, doi: 10.1088/1538-3873/aabadf
Tutukov, A. V., Fedorova, A. V., Ergma, E. V., & Yungelson, L. R. 1985, Soviet Astronomy Letters, 11, 52
Tutukov, A. V., & Yungelson, L. R. 1979, AcA, 29, 665
van Roestel, J., Creter, L., Kupfer, T., et al. 2021a, AJ, 162, 113, doi: 10.3847/1538-3881/ac0622
van Roestel, J., Kupfer, T., Green, M. J., et al. 2021b, MNRAS, in press, doi: 10.1093/mnras/stab2421
Wheatley, P. J., Mauche, C. W., & Mattei, J. A. 2003, MNRAS, 345, 49, doi: 10.1046/j.1365-8711.2003.06936.x
Yungelson, L. R. 2008, Astronomy Letters, 34, 620, doi: 10.1134/S1063773708090053
arXiv:1308.2974, doi:10.1088/0004-637X/778/2/93, https://arxiv.org/pdf/1308.2974v1.pdf
EVOLUTION OF THE STELLAR-TO-DARK MATTER RELATION: SEPARATING STAR-FORMING AND PASSIVE GALAXIES FROM Z = 1 TO 0
Draft version May 11, 2014
Preprint typeset using LaTeX style emulateapj v. 4/12/04

Jeremy L. Tinker, Alexie Leauthaud, Kevin Bundy, Matthew R. George, Peter Behroozi, Richard Massey, Jason Rhodes, Risa H. Wechsler

Subject headings: cosmology: observations - galaxies: evolution - galaxies: halos
ABSTRACT

We use measurements of the stellar mass function, galaxy clustering, and galaxy-galaxy lensing within the COSMOS survey to constrain the stellar-to-halo mass relation (SHMR) of star-forming and quiescent galaxies over the redshift range z = [0.2, 1.0]. For massive galaxies, M_* ≳ 10^10.6 M_⊙, our results indicate that star-forming galaxies grow proportionately as fast as their dark matter halos while quiescent galaxies are outpaced by dark matter growth. At lower masses, there is minimal difference in the SHMRs, implying that the majority of low-mass quiescent galaxies have only recently been quenched of their star formation. Our analysis also affords a breakdown of all COSMOS galaxies into the relative numbers of central and satellite galaxies for both populations. At z = 1, satellite galaxies dominate the red sequence below the knee in the stellar mass function. But the number of quiescent satellites exhibits minimal redshift evolution; all evolution in the red sequence is due to low-mass central galaxies being quenched of their star formation. At M_* ∼ 10^10 M_⊙, the fraction of central galaxies on the red sequence increases by a factor of ten over our redshift baseline, while the fraction of quenched satellite galaxies at that mass is constant with redshift. We define a "migration rate" to the red sequence as the time derivative of the passive galaxy abundances. We find that the migration rate of central galaxies to the red sequence increases by nearly an order of magnitude from z = 1 to z = 0. These results imply that the efficiency of quenching star formation for centrals is increasing with cosmic time, while the mechanisms that quench the star formation of satellite galaxies in groups and clusters are losing efficiency.

^9 The passive galaxy completeness limit essentially cuts part-way through the lowest stellar mass bin at each redshift interval. We compare measurements of the clustering of all passive galaxies in this bin to those that are above the limit, finding that the results are consistent with one another, but the higher number of galaxies in the full bin yields better error bars, especially at small scales.
INTRODUCTION
One of the defining characteristics of the z = 0 galaxy distribution is its bimodality. Galaxies can be roughly categorized into the star-forming sequence of blue, disky, gas-rich galaxies, and the quiescent, ellipsoidal galaxies with old stellar populations and red colors (Strateva et al. 2001; Blanton et al. 2003; Kauffmann et al. 2003; Madgwick et al. 2003). This bimodality is firmly in place at z = 1 (Bell et al. 2004; Cooper et al. 2006; Willmer et al. 2006) and extends out to z = 2 and possibly beyond (Kriek et al. 2008; Williams et al. 2009). The physical processes that drive the creation and evolution of the red sequence are not fully understood. There are many possible routes to the red sequence, but the relative efficiency of each is unquantified. In this paper we use measurements of the stellar mass function, galaxy clustering, and galaxy-galaxy lensing from the COSMOS survey (Scoville et al. 2007) to disentangle the various processes that attenuate star formation in galaxies. This paper is an extension of Leauthaud et al. (2011, 2012) (hereafter, L11 and L12). In L11 we presented our theoretical framework; in L12 we applied this framework to stellar mass defined samples in COSMOS; in this paper we extend this framework to samples defined by both stellar mass and star formation activity, and apply it once again to COSMOS data.
The proposed mechanisms for quenching star formation can be grouped into two broad categories: processes that affect galaxies that sit at the center of the potential well of their host dark matter halo, and processes that affect galaxies that orbit as satellites within a larger dark matter potential. Central galaxy processes include mergers, AGN feedback (triggered either by mergers or by disk instabilities), and shock heating of infalling gas at a critical halo mass scale or galaxy mass scale (e.g., Croton et al. 2006; Bower et al. 2006; Dekel & Birnboim 2006; Cattaneo et al. 2006; Hopkins et al. 2008). Satellite galaxy processes do include some AGN and merging activity, but are likely dominated by tidal effects from the host dark matter halo, harassment by other galaxies within the group, strangulation of the cold gas supply, and ram pressure stripping of gas by interaction with the host halo's hot gas (e.g., Gunn & Gott 1972; Moore et al. 1998; Balogh et al. 2000).
In this work we define a galaxy group as a set of galaxies that share a common dark matter halo. Close pairs of halos certainly exist in the field (e.g., the Milky Way-Andromeda pair), but by our definition these are not galaxy groups. This definition matches up to the division of processes that quench galaxies defined in the previous paragraph: ram pressure, tidal stripping, and strangulation do not significantly affect galaxies until they have crossed the virial radius of a larger halo 8. This definition also fits seamlessly with our theoretical framework for analyzing the clustering and lensing of galaxies.

FIG. 1.—The COSMOS stellar mass function, measured in our redshift bins, compared to other stellar mass functions at comparable redshifts. The blue squares and red circles represent our COSMOS measurements for SF and passive galaxies, respectively. The gray and pink shaded bands show the COSMOS measurements from Drory et al. (2009). Because Drory measured the SMF in different redshift bins, the bands show the range of SMF values for the two bins that overlap with each redshift bin used here. The orange squares and green triangles represent the measurements from PRIMUS (Moustakas et al. 2013) for passive and SF galaxies. In each panel, PRIMUS results are shown for all redshift bins whose median redshift is contained within the given COSMOS redshift bin (shown at the top of each panel). PRIMUS results are based on spectroscopy, and thus do not go as faint as the two COSMOS results.
To disentangle the relative numbers of central and satellite galaxies, we use the framework of the Halo Occupation Distribution (HOD; see, e.g., Peacock & Smith 2000; Seljak 2000; Scoccimarro et al. 2001; Cooray & Sheth 2002; Berlind & Weinberg 2002 for early works, and Zheng et al. 2007; van den Bosch et al. 2007; Tinker et al. 2010b for examples of more recent implementations of the framework). In brief, the HOD provides a statistical framework for the probability distribution function of galaxies within halos. Traditionally, HOD models parameterize P(N|M_h), the probability that a halo of mass M_h contains N galaxies in a pre-defined sample. The HOD for a given galaxy mass, M_*, is based on two characteristic halo mass scales: the mean halo mass for central galaxies and a larger halo mass where there is (on average) one satellite galaxy of mass M ≥ M_*. Here we use an extended model that parameterizes this probability as a function of galaxy mass, P(N|M_h, M_*), rather than for a specified threshold. The specific model we implement is described in detail in L11; it begins with a parameterization of the stellar mass-to-halo mass relation for central galaxies (SHMR). This function specifies the mean mass of a central galaxy as a function of halo mass. The halo mass scale for satellite galaxies is motivated by previous HOD analyses that find a tight relation between these two halo mass scales.
8 [...] explained by accounting for galaxies that are satellites in nearby groups, as well as galaxies in the cluster infall region that have orbited within the virial radius of the cluster but whose orbital apocenter is outside R_vir (Wetzel et al. 2013b).

The benefit of the COSMOS survey for this work is that it provides a consistent set of observations and the same definition of stellar mass at various redshifts. Additionally, the broad wavelength coverage of COSMOS is highly efficient at differentiating dusty star-forming galaxies from truly passive objects. Although data exist at multiple epochs from various surveys, the clustering of galaxies depends sensitively on survey selection (Sánchez & Cole 2008), and stellar mass estimates depend on both survey parameters and on assumptions in the stellar mass modeling (Conroy & Gunn 2010). Constraints on the redshift evolution of the SHMR are significantly weakened when incorporating such uncertainties into the analysis (Behroozi et al. 2010), thus the COSMOS data set is crucial for identifying true redshift trends. In this paper we will interchangeably use the terms "quiescent", "quenched", and "passive" to refer to galaxies that have little to no star formation and are intrinsically located on the red sequence. Galaxies that appear red due to dust contamination of broadband colors are included in the star-forming sequence. We will discuss this further in §2. We will refer to the "red sequence" to mean the set of galaxies that are intrinsically red. The complement of the red sequence is the set of star-forming (SF) galaxies. We will use the knee in the stellar mass function, approximately 10^10.6 M_⊙ at all redshifts considered (Drory et al. 2009; Marchesini et al. 2009), as the reference point between "high-mass" and "low-mass" galaxy samples.
FIG. 2.—Quenched fractions compared with PRIMUS (Moustakas et al. 2013), zCOSMOS (Knobel et al. 2013), and SDSS. Circles with errors are the COSMOS data in this paper from Figure 3, which use a NUV − R − J color diagram to isolate passive galaxies. Squares are from a volume-limited SDSS sample that uses Dn4000 to determine f_q. This sample will be used later in the paper and is discussed in §4.3. The dashed lines show the results from the Drory et al. (2009) COSMOS measurements, which are from the same raw data but use different methods to determine stellar mass. The PRIMUS results use a different estimate of stellar mass and use SED fitting to determine the delineation between passive and active galaxies. The zCOSMOS results use a single U − B color cut to determine the set of passive galaxies.

Our reference point for small and large distance scales is 1 Mpc (comoving; ∼110 arcsec at z = 0.5), which is at the center of the transition in the galaxy correlation function from pair counts being dominated by galaxies in two distinct halos to pairs that arise from two galaxies occupying a common halo. We will frequently refer to the fraction of galaxies that are satellites, f_sat, the fraction of galaxies that are quenched, f_q, and combinations of the two. For clarity, the fraction of satellites that are quenched is referenced as f_q(sat), while the fraction of quenched galaxies that are satellites is referenced as f_sat(q); i.e., the subsample for which the fraction is determined is referenced parenthetically, while the quantity by which the fraction is determined is listed in the subscript.
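The f_q(sat) versus f_sat(q) bookkeeping can be made concrete with a toy example; the counts below are invented for illustration and are not COSMOS numbers:

```python
# Toy bookkeeping for the fraction notation; counts are invented.
n = {("cen", "q"): 300, ("cen", "sf"): 500,
     ("sat", "q"): 150, ("sat", "sf"): 50}

n_sat = n[("sat", "q")] + n[("sat", "sf")]   # all satellites: 200
n_q = n[("cen", "q")] + n[("sat", "q")]      # all quenched: 450

f_q_sat = n[("sat", "q")] / n_sat  # fraction of satellites that are quenched: 0.75
f_sat_q = n[("sat", "q")] / n_q    # fraction of quenched galaxies that are satellites: 1/3
```

The two fractions share a numerator but answer different questions, which is why the parenthetical notation matters.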
In all theoretical modeling we assume a flat ΛCDM cosmological model with (Ω_m, σ_8, Ω_b, n_s, h_0) = (0.272, 0.807, 0.0438, 0.963, 0.72).
We define a dark matter halo as a spherical, virialized object with a mean interior density of Δ ≡ 3M_h/(4π Ω_m ρ_crit R_h^3) = 200. All halo statistics used in this paper are calibrated from numerical simulations that match this halo definition.
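The halo definition can be inverted to give a radius for any mass. A minimal sketch (not code from this work), assuming the paper's Ω_m = 0.272 and the standard critical density in h-units:

```python
import math

# Invert Delta = 3 M_h / (4 pi Omega_m rho_crit R_h^3) = 200 for R_h.
RHO_CRIT = 2.775e11  # critical density in h^2 Msun / Mpc^3
OMEGA_M = 0.272      # the paper's cosmology

def halo_radius(m_h, delta=200.0, omega_m=OMEGA_M):
    """Comoving halo radius in Mpc/h for a halo mass m_h in Msun/h."""
    rho_mean = omega_m * RHO_CRIT
    return (3.0 * m_h / (4.0 * math.pi * delta * rho_mean)) ** (1.0 / 3.0)
```

For a 10^12 Msun/h halo this convention gives a radius of roughly 0.25 Mpc/h.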
DATA
Details of the COSMOS survey can be found in Scoville et al. (2007). Details of the measurement techniques and methods for the stellar mass functions (SMFs), angular galaxy clustering (w_θ), and galaxy-galaxy lensing (ΔΣ) can be found in L12. All w_θ measurements are taken from the Subaru catalog (2.3 deg²) while lensing and SMF measurements are restricted to the HST ACS catalog (1.64 deg²). The sample selection is also identical to L12. Here we repeat all these measurements, now broken into two subsamples of star-forming (blue) and passive (red) objects. Intrinsically passive galaxies are identified in a specific region of the (NUV − R) − (R − J) color-color space in the same manner as Bundy et al. (2010). The addition of near-IR data breaks the degeneracy between dusty and star-forming objects (Pozzetti & Mannucci 2000; Labbé et al. 2005; Williams et al. 2009; Zhu et al. 2011). Photometric redshifts are obtained from Ilbert et al. (2009), versions v1.7 and v1.8. These photo-z estimates have negligible differences at z < 1, but v1.8 has improved accuracy relative to spectroscopic redshifts. The v1.7 photo-z's are used for the SMF and w_θ, and v1.8 for the lensing catalog. Later versions of the photometric redshifts, which were not available during much of the present work, focus on z ∼ 2; we have confirmed that there are negligible changes to z < 1 results. Stellar masses are estimated using the Bayesian code of Bundy et al. (2006) and assume a Chabrier (2003) IMF.
We restrict our analysis to stellar masses above the 80% stellar mass completeness limit, as in L12. Due to their lower intrinsic luminosities at fixed stellar mass, passive galaxies have a higher completeness limit at fixed redshift by roughly ∼0.2 dex. For the stellar mass function and the galaxy-galaxy lensing measurements, we restrict the measurements of the passive population to be above that limit (although we will compare our lensing fits to the data for these bins in the presentation of our results). For the clustering measurements, we find that including passive galaxies down to the star-forming stellar mass limit does not bias the clustering of those samples 9, thus we incorporate these galaxies in the clustering bins. See Figure 2 in L12 for a plot of the completeness limits as a function of redshift. Stellar mass limits are also given in Table 1.
In addition to the SMFs, we also incorporate the ratio of the passive and SF SMFs into our analysis. The ratio takes into account that the amplitudes of the passive and SF SMFs are correlated to some degree, given that they are measured from the same sample of galaxies. All measurements are made in three redshift bins that span a range of z = [0.22, 1.00]. The median redshifts are z = 0.36, 0.66, and 0.88. We will present our measurements in §4 when discussing our best-fit models.

FIG. 3.—Upper panels: The stellar mass functions in each redshift bin broken down by star formation activity. Points with errors represent COSMOS measurements; curves represent best-fit HOD models. Red circles represent passive galaxies while blue squares represent SF galaxies. Error bars are obtained from mock galaxy samples discussed in §2. Red solid curves represent the HOD model for passive galaxies; dashed blue curves represent the HOD model for SF galaxies. The thin dashed and solid curves in each panel represent the abundance of satellite galaxies only for each subsample. Lower panels: The red-to-blue ratio of the SMFs. This quantity contains complementary information to the individual SMFs because the amplitudes of the passive and SF SMFs are correlated. Points with errors represent the COSMOS measurements, while the black curve is the HOD model.
We also incorporate information from the COSMOS X-ray group catalog of George et al. (2011). The central galaxy in each group is determined with high probability, yielding a measurement of the red fraction of central galaxies at M h ∼ 10 13.5 M ⊙ in each redshift bin. The error in this quantity is determined by bootstrap resampling of the group catalog. The purpose of including these data is to prevent unphysical divergent behavior of the models, e.g., models in which the red fraction of central galaxies turns over and approaches zero at high halo masses where the constraints from the three galaxy measures are weak. In practice, the inclusion of the group data does not significantly affect the results.
As described in L11 and L12, we use a large-volume, high-resolution N-body simulation to create mock galaxy distributions with the same angular size and comoving depth as each slice of the COSMOS survey. We use the "Consuelo" simulation, which is part of the LasDamas simulation suite (C. McBride, in preparation). This simulation is 420 h⁻¹ Mpc on a side and contains 1400³ particles. These mocks are then used to estimate the covariance matrices of each data set. Within the simulation, we are able to create 409, 179, and 105 mocks for the z = 0.36, 0.66, and 0.88 redshift bins, respectively. We populate the halos in the simulation with galaxies using a preliminary HOD fit to the measurements, yielding a preliminary estimate of the covariances. We then repeat this procedure with HOD fits that utilize the first covariance matrices to produce the final errors used in the results presented here. These covariance matrices are used as the full errors on the SMF and w_θ measurements. Because these two statistics involve simple counting of galaxies and their pairs, the N-body simulations encompass both the sample variance from large-scale structure and the shot noise from small number statistics. For the lensing measurements, the statistical errors arising from the ellipticity measurements of the background sources are added to the covariance matrices from the mocks, which estimate the sample variance. In most cases, the statistical errors dominate the uncertainty in ΔΣ.
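The covariance estimation across mock realizations can be sketched as follows; the mock vectors here are synthetic stand-ins for, e.g., w_θ measured in 8 angular bins over the 409 z = 0.36 mocks:

```python
import numpy as np

# Synthetic stand-in: 409 mock realizations of an 8-bin measurement.
rng = np.random.default_rng(0)
n_mock, n_bin = 409, 8
mocks = rng.normal(loc=1.0, scale=0.1, size=(n_mock, n_bin))

# Unbiased sample covariance across the mock realizations.
mean = mocks.mean(axis=0)
diff = mocks - mean
cov = diff.T @ diff / (n_mock - 1)  # shape (n_bin, n_bin)
```

The diagonal of `cov` gives the per-bin variances; the off-diagonal terms capture the bin-to-bin correlations induced by large-scale structure.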
THEORY
In L11, we outlined an HOD-based model that can be used to analytically predict the SMF, g-g lensing, and clustering signals. A key component of this model is the SHMR, which is modeled as a mean-log relation, M_* = f_SHMR(M_h), with a log-normal scatter 10, noted σ_logM*. Here we give a brief review of the model and of the minor modifications used to adapt it to passive and SF subsamples of galaxies.

10 Scatter is quoted as the standard deviation of the logarithm base 10 of the stellar mass at fixed halo mass.

FIG. 4.—Angular clustering of COSMOS galaxies in stellar mass bins. From left to right, columns represent measurements at z = 0.36, z = 0.66, and z = 0.88. Points with error bars are measurements while curves indicate best-fit HOD models. Colors and point types are the same as in Figure 3. Only angular bins with more than 10 pairs are used in the analysis, thus data for passive galaxies often do not extend to the minimum angular separation. The volume of each redshift bin depends strongly on the median redshift, as indicated in Table 1; thus, the z = 0.36 measurements have the largest error bars because they are taken from the smallest volume. For mass bins at log M_* ≤ 10.3, the enhanced clustering of passive galaxies is driven by the high fraction of satellite galaxies that are quenched (cf. Figure 11).

3.1. The stellar-to-halo mass relation for central galaxies

Following Behroozi et al. (2010), f_SHMR(M_h) is mathematically defined via its inverse function:

log10(M_h) = log10(M_1) + β log10(M_*/M_*,0) + (M_*/M_*,0)^δ / [1 + (M_*/M_*,0)^(−γ)] − 1/2,   (1)
where M_1 is a characteristic halo mass, M_*,0 is a characteristic stellar mass, β is the low-mass slope, and δ and γ control the massive-end slope. We note that equation 1 is only relevant for central galaxies. We use equation 1 to parameterize the SHMR of both passive and SF central galaxies, but each subsample will have a separate f_SHMR.
Eq. 1 specifies the mean halo mass as a function of M_*. We assume that the distribution of central galaxy mass at fixed halo mass, Φ_c(M_*|M_h), follows a log-normal distribution with scatter σ_logM*. We will discuss halo occupation of central galaxies at fixed halo mass presently. Previous work suggests that σ_logM* is independent of halo mass. More et al. (2011) find a scatter in M_* at fixed halo mass of 0.17 ± 0.04 dex. Moster et al. (2010) are able to fit the SDSS galaxy clustering measurements assuming constant σ_logM*. In L12 we found that a halo mass-varying scatter produced no better fit than a model with constant scatter. We adopt a constant σ_logM* here as well, but allow the scatter for passive and SF central galaxies to be independent.
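A minimal numerical sketch of this parameterization (the parameter values below are illustrative, not the paper's fits): Eq. (1) gives log M_h as a function of log M_*, and f_SHMR(M_h) itself is recovered by numerical inversion, here by bisection since the relation is monotonic:

```python
import math

# Eq. (1): log10 M_h as a function of log10 M_*.
# Parameter values are illustrative placeholders, not best-fit values.
def log_mh_from_mstar(log_mstar, log_m1=12.5, log_ms0=10.9,
                      beta=0.45, delta=0.6, gamma=1.5):
    x = 10.0 ** (log_mstar - log_ms0)  # M* / M*,0
    return (log_m1 + beta * math.log10(x)
            + x ** delta / (1.0 + x ** (-gamma)) - 0.5)

def f_shmr_log(log_mh, tol=1e-8):
    """log10 f_SHMR(M_h): invert Eq. (1) by bisection (it is monotonic)."""
    lo, hi = 6.0, 13.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if log_mh_from_mstar(mid) < log_mh:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is used rather than a closed form because Eq. (1) has no analytic inverse.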
3.2. Accounting for passive and star-forming subsamples
We are bound by the requirement that each halo contains one and only one central galaxy. The mass of that galaxy may be too small to be counted in any COSMOS sample, but formally we require that
∫ [ f_q(M_h) Φ_cen^q(M_*|M_h) + (1 − f_q(M_h)) Φ_cen^SF(M_*|M_h) ] dM_* = 1,   (2)
where f_q(M_h) is a function specifying the fraction of times that a halo of mass M_h contains a quenched central galaxy (independent of galaxy mass), and Φ_cen^x(M_*|M_h) is the conditional stellar mass function for central quenched or SF galaxies, each normalized to unity. Parameterizing the quenching of central galaxies by halo mass as opposed to stellar mass (or the ratio between the two) makes an implicit choice about the mechanisms that quench star formation in central galaxies (see the discussions in Hopkins et al. 2008 and …). Given the small scatter between stellar mass and halo mass, this choice is not likely to bias the results we focus on here, e.g., the fraction of centrals that are red. This choice is also beneficial for its ease of implementation in our halo occupation framework.
We do not choose a parametric form for f_q(M_h). Rather, we choose five halo mass points at which to specify f_q(M_h) and smoothly interpolate between them. The five masses are evenly spaced in log M_h from 10.8 to 14.0.
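A sketch of this non-parametric treatment, with linear interpolation standing in for the paper's smooth interpolation and made-up node values:

```python
import numpy as np

# Five nodes evenly spaced in log M_h from 10.8 to 14.0; the node
# values (free parameters in the fit) are invented here.
log_mh_nodes = np.linspace(10.8, 14.0, 5)          # 10.8, 11.6, ..., 14.0
f_q_nodes = np.array([0.05, 0.2, 0.5, 0.75, 0.9])

def f_q(log_mh):
    """Quenched central fraction; flat beyond the outermost nodes."""
    return float(np.interp(log_mh, log_mh_nodes, f_q_nodes))
```

Specifying node values rather than a functional form lets the data dictate the shape of f_q(M_h) with only five parameters.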
3.3. Calculating halo occupation of centrals and satellites

In order to avoid explicit dependence of our HOD parameters on our bin size, we define all HODs as threshold quantities. Having halo occupation parameterized for threshold samples yields maximal flexibility for taking the same HOD parameters and calculating N(M_h) for a bin of arbitrary size. For a sample of galaxies above a threshold stellar mass, the central occupation function N_cen(M_h) is expressed as
N_cen(M_h | > M_*) = (1/2) [ 1 − erf( (log10(M_*) − log10(f_SHMR(M_h))) / (√2 σ_logM*) ) ].   (3)
As discussed in L11, equation (3) correctly captures the behavior of N_cen(M_h) for massive galaxy samples, as opposed to the common parameterization in which scatter is specified at fixed stellar mass rather than at fixed halo mass. Eq. 3 is valid for both SF and passive central galaxies, but the parameters of f_SHMR are independent for each subsample. Eq. 3 assumes that there is one central galaxy per halo; in the case of our subsamples, this is not explicitly true. For red central galaxies, Eq. 3 is multiplied by f_q(M_h), and by 1 − f_q(M_h) for SF central galaxies.
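Eq. (3) is straightforward to evaluate. The sketch below uses an illustrative scatter and takes log10 f_SHMR(M_h) as an input rather than computing it:

```python
import math

# Eq. (3): mean central occupation above a stellar-mass threshold.
# log_fshmr is log10 f_SHMR(M_h); sigma is an illustrative scatter.
def n_cen(log_mstar_thresh, log_fshmr, sigma=0.2):
    arg = (log_mstar_thresh - log_fshmr) / (math.sqrt(2.0) * sigma)
    return 0.5 * (1.0 - math.erf(arg))
```

A halo whose mean central mass equals the threshold hosts such a central half the time; for halos well above the threshold the occupation saturates at one, and well below it falls to zero.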
The occupation of satellite galaxies as a function of halo mass, N_sat(M_h), is

N_sat(M_h | > M_*) = (M_h / M_sat)^α_sat exp( −(M_cut + f_SHMR^(−1)(M_*)) / M_h ),   (4)
where M_sat is the halo mass scale for satellite galaxies, M_cut is a cutoff scale, and α_sat controls how the number of satellites scales with halo mass. We treat the satellite occupation of the passive and SF subsamples independently; unlike central galaxies, there is no integral constraint on the total number of satellite galaxies a halo can have. Equation (4) is a minor modification of L11 (Eq. 12 therein); in L11, N_sat(M_h) is proportional to N_cen(M_h), which guarantees that satellite occupation fully cuts off at the same halo mass scale as central galaxies of the same mass. However, in our new red/blue parameterization this would correlate N_cen(M_h) with f_q(M_h). We circumvent this problem by adding f_SHMR^(−1)(M_*) to the numerator of the exponential cutoff, producing a similar cutoff scale.
HOD modeling of luminosity-dependent galaxy clustering has shown that M_sat is roughly 20 times f_SHMR^(−1), varying weakly with luminosity (e.g., Zehavi et al. 2005, 2011; Zheng et al. 2007, 2009). We thus parameterize M_sat and M_cut as

M_sat / 10^12 M_⊙ = B_sat ( f_SHMR^(−1)(M_*) / 10^12 M_⊙ )^β_sat,   (5)

and

M_cut / 10^12 M_⊙ = B_cut ( f_SHMR^(−1)(M_*) / 10^12 M_⊙ )^β_cut.   (6)
NOTE.—The value in each column is the χ² value divided by the number of data points minus the number of free parameters.

In L12 we set α_sat = 1, in agreement with many previous results. However, the fraction of satellites that are star forming depends on halo mass (Wetzel et al. 2012), thus we allow α_sat to be free for both passive and SF subsamples. Equations (3) and (4) give the number of galaxies above a mass threshold as a function of halo mass. Our data are measured in stellar mass bins. To determine the halo occupation in a given bin, we simply take the difference between N_sat (or N_cen) at the low- and high-mass edges of the bin.
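Eqs. (4)-(6), together with the bin-differencing step just described, can be sketched as follows (parameter values are illustrative, and log10 f_SHMR^(−1)(M_*) is taken as an input):

```python
import math

# Eqs. (4)-(6) with illustrative parameters.
# log_finv is log10 of f_SHMR^(-1)(M*) in Msun.
def n_sat(log_mh, log_finv, b_sat=10.0, beta_sat=0.9,
          b_cut=1.5, beta_cut=-0.1, alpha_sat=1.0):
    m_h = 10.0 ** log_mh
    m_inv = 10.0 ** log_finv
    m_sat = 1e12 * b_sat * (m_inv / 1e12) ** beta_sat   # Eq. (5)
    m_cut = 1e12 * b_cut * (m_inv / 1e12) ** beta_cut   # Eq. (6)
    return (m_h / m_sat) ** alpha_sat * math.exp(-(m_cut + m_inv) / m_h)

def n_sat_bin(log_mh, log_finv_lo, log_finv_hi):
    """Occupation in a stellar mass bin = difference of two thresholds."""
    return n_sat(log_mh, log_finv_lo) - n_sat(log_mh, log_finv_hi)
```

Because the occupation is defined for thresholds, the same parameters serve any binning of the data.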
The model has 27 free parameters. To model the halo occupation of a given subsample requires 11 free parameters. The SHMR has 5 free parameters (M_1, M_*,0, β, δ, γ), with one additional parameter for the scatter, σ_logM*. The satellite occupation requires 5 more parameters (B_sat, β_sat, B_cut, β_cut, α_sat).
To determine the fraction of central galaxies that are red at each halo mass requires 5 more parameters for a total of 27. Each set of 27 parameters describes the galaxy-halo relation at a given redshift. For each of our three redshift bins, we fit the parameters separately. We use the halo mass function of Tinker et al. (2008a), the halo bias relation of Tinker et al. (2010a), and the concentration-mass relation for dark matter halos of Muñoz-Cuartas et al. (2011), assuming that satellite galaxies follow the dark matter within a halo with an NFW profile (Navarro et al. 1997). We refer the reader to L11 for a complete description of how to take the halo occupation parameters and calculate the SMFs, clustering, and lensing signals.
RESULTS
We use Markov Chain Monte Carlo (MCMC) analysis to find both the best-fit model and the uncertainties in model parameters. We analyze each redshift bin separately. For each trial model in the MCMC chain, we calculate a separate χ 2 for the SMF, for each mass bin in w θ , and each mass bin in ∆Σ, for passive and SF subsamples, and the red fraction of central galaxies within the X-ray groups. The total χ 2 is then
χ²_tot = Σ_(q,SF) [ χ²_smf + Σ_(i=1..N_w) χ²_(w,i) + Σ_(j=1..N_ΔΣ) χ²_(ΔΣ,j) ] + χ²_(f_red) + χ²_ratio.   (7)
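Each term in Eq. (7) is a standard Gaussian χ² of the form (d − m)ᵀ C⁻¹ (d − m), with C the covariance matrix estimated from the mocks. A minimal sketch with synthetic numbers:

```python
import numpy as np

# One Gaussian chi^2 term: (d - m)^T C^{-1} (d - m). The data, model,
# and (diagonal) covariance below are synthetic placeholders.
def chi2_term(data, model, cov):
    resid = data - model
    return float(resid @ np.linalg.solve(cov, resid))

data = np.array([1.0, 2.0, 3.0])
model = np.array([1.1, 1.9, 3.2])
cov = np.diag([0.01, 0.01, 0.04])
chi2 = chi2_term(data, model, cov)  # = 1 + 1 + 1 = 3 for these numbers
```

In the full fit, Eq. (7) simply sums such terms over the SMF, clustering, and lensing data vectors for both subsamples, plus the red-central-fraction and SMF-ratio terms.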
The last two terms in the above equation represent the χ 2 for the red central fraction from the X-ray group catalog and the χ 2 for the ratio of the passive (q) and SF SMFs, respectively. We use a covariance matrix for each individual χ 2 calculation, with the exception of χ 2 f red . Parameter values and errors from the MCMC chains are in Table 2. The total χ 2 for each best-fit model is listed in Table 3. Figure 1 shows our measurements of the passive and SF SMFs in COSMOS. Data are shown down to the stellar mass completeness limits for each subtype. The stellar mass functions show limited evolution across our redshift range with the exception of low-mass passive galaxies: the abundance of these galaxies increases by a factor of 2-3 depending on stellar mass. This trend has been shown in a number of papers as a component of the "downsizing" of galaxy formation. Brinchmann & Ellis (2000) detected this trend in morphologically-selected samples, and Bundy et al. (2006) found similar results in the abundances of SF and passive galaxies in DEEP2. In our measurements, the z = 0.36 passive SMF shows a minimum at mgal ∼ 10 9.5 M ⊙ , with a subsequent upturn at lower masses, as shown by Drory et al. (2009) for COSMOS data and confirmed in PRIMUS by Moustakas et al. (2013).
Stellar Mass Functions and the Quenched Fraction of Galaxies
In Figure 1 we compare our measurements to those from Drory et al. (2009) and Moustakas et al. (2013). The Drory et al. (2009) measurements are also taken from COSMOS, but with two main differences. First, they are measured in different redshift bins. Due to the small footprint of COSMOS, the sample variance from different binning is a non-negligible effect. Second, there are differences in the stellar mass calculations themselves: Drory et al. fit the mass-to-light ratio (M/L) from all photometric bands, while the method of Bundy et al. uses multi-band colors to constrain the M/L ratio in the observed K-band, with mass estimates derived from application to the K-band luminosity only. There are also minor differences in the stellar population templates used. Last, in this figure we plot the fitting function results rather than the measurements themselves. The Drory et al. measurements lie slightly above their fits at the massive end, so the agreement with our data is somewhat better than implied in this figure. Even so, there are minimal differences between the SMFs.
The SMFs from Moustakas et al. (2013) are measured from PRIMUS (their Fig 11; tabulated data kindly provided by J. Moustakas), which covers a larger area but does not go as deep as COSMOS due to the use of low-resolution spectroscopy to obtain galaxy redshifts. The abundance of passive galaxies is somewhat higher in the PRIMUS results, but the conclusion of Moustakas et al. (2013) agrees with our measurements here: the only significant change in abundance is in the low-mass passive population.
Because the focus of this paper is on the growth of the red sequence, we compare our measurements of the redshift evolution of the overall quenched fraction to recent measurements from PRIMUS (Moustakas et al. 2013) and to the analysis of zCOSMOS by Knobel et al. (2013). We define the quenched fraction as the density of passive galaxies relative to the total number. PRIMUS contains within it the COSMOS field, but there are differences in both the stellar mass assignment and in the determination of which galaxies are passive. Moustakas et al. (2013) use SED fitting to estimate the star formation rates of PRIMUS objects and then divide the sample based upon this distribution. Figure 2 compares the quenched fractions for four different stellar masses between the two surveys. At all masses, the PRIMUS f_q is slightly higher than the COSMOS value(s). An important comparison, however, is the slope of f_q with redshift. For each bin in M_*, the rate of change appears consistent between the two surveys. For zCOSMOS, the flux limit makes it difficult to achieve a long redshift baseline for anything but the most massive galaxies. But the quenched fractions in Knobel et al. (2013) are significantly higher than either PRIMUS or this work. Knobel et al. (2013) use a single U − B color cut to define their sample of passive galaxies, which may be susceptible to dust contamination. In their paper they compare their quenched fractions to those derived from a NUV − R − J color-color diagram (similar to the approach used here), finding very good agreement. In contrast to their results (their Figure 1), the single-color cut is not consistent with the NUV − R − J color selection, and it yields a decreasing f_q with decreasing z for the most massive galaxies, which is at odds with the other two results. We will make further comparisons with the results of Knobel et al. (2013) in §4.5.

FIG. 5 (caption fragment).— ... Table 1. A breakdown of the components of the fits for four examples can be found in Figure 6.
In this figure we have included data from the SDSS group catalog of Tinker et al. (2011); these are the data points at z = 0.05. In this figure we are presenting f_Q for the overall galaxy population, but the group finder is applied to volume-limited samples derived from the SDSS Main sample, yielding a full central-satellite decomposition of all galaxies in the sample. This group catalog is ∼95% complete in finding central galaxies and ∼90% pure in its sample of satellite galaxies.
Quenched fractions in sub-populations of the group catalog are corrected for impurity and completeness statistically (see further details in Tinker et al. 2011). We will make significant use of this catalog later in the paper. The differences in stellar mass estimates between COSMOS and SDSS make comparisons of absolute abundances problematic, but fractions are more robust. To facilitate a more robust comparison of the SDSS data with our COSMOS results, we have added 0.2 dex to the stellar mass estimates and added 0.2 dex of scatter. The former represents the 0.2 dex shift in the SMFs between SDSS (Li & White 2009) and COSMOS once deconvolved to a common scatter value. The latter is meant to mock up the increased uncertainties between SDSS spectroscopic redshifts and COSMOS photometric redshifts. Because both adjustments push galaxies to lower f_Q, the values used here should be considered lower limits on the quenched fraction of SDSS galaxies.

FIG. 6.—The top row shows the most massive galaxy bin for passive and SF galaxies. The bottom row shows the [9.8, 10.3] stellar mass bin. The solid curve (black) shows the overall fit, which is the sum of the other curves. The dotted curve (green) is the lensing profile of the dark matter halo around central galaxies. The short-dash curve (red) shows the lensing profile of the halos around satellite galaxies. The long-dash (yellow) curve represents the central point source, i.e., the central galaxy itself. The dash-dot (gray) curve is the lensing contribution from nearby halos, i.e., the two-halo term. For both mass bins, the passive galaxies have a higher fraction of satellites, evinced by the higher amplitude of the satellite lensing signal. At fixed M_*, the halos that host the central galaxies are roughly equal in mass between passive and SF galaxies (this does not mean that f_SHMR is the same; we will discuss the differences between M_*|M_h and M_h|M_* in the following section).
Comparison of the Measurements to the Best Model Fits

Figure 3 compares the stellar mass functions to the best-fit halo occupation models from the MCMC chains. The overall model SMF is shown with the thick solid curves, and the contribution to the SMFs from satellite galaxies is shown with the thin curves. The lower panels show the abundance ratio of SF and passive galaxies. In these panels, the growth of the red sequence at low mass is more evident. Figure 4 shows the clustering measurements for the passive and SF galaxies. Consistent with previous measurements at other redshifts and in other surveys, the passive galaxies have equal or higher clustering than the SF galaxies in every bin of stellar mass. For low-mass galaxies, the enhanced clustering of passive galaxies is due to the high fraction of such galaxies being satellites in high-mass halos (e.g., Zehavi et al. 2005; Tinker et al. 2008b; van den Bosch et al. 2003; Skibba & Sheth 2009; Weinmann et al. 2006; Tinker & Wetzel 2010; Wetzel et al. 2012). This effect gives rise to the well-known color-density relation. At high masses, log M_* ≥ 10.8, the large-scale bias appears roughly independent of color, while the small-scale clustering of passive galaxies is slightly enhanced. As we will see when inspecting the constraints on the SHMR in Figure 7, massive SF galaxies live in higher mass halos than their red counterparts when binned by halo mass; this is true of both the SHMR results presented here and the group catalog results from Tinker et al. (2012) (hereafter T12). However, when binned by galaxy mass, scatter minimizes the difference in the mean halo mass and thus the large-scale bias. Massive SF galaxies have nearly negligible satellite fractions in comparison to massive passive galaxies (at least at z ≥ 0.48), yielding a higher amplitude for the passive galaxy subsample at small scales. Figure 5 rounds out our presentation of the data and model fits.
Because lower-mass star-forming galaxies primarily live as central galaxies in lower-mass halos, their lensing signal is weaker than that of passive galaxies and thus has larger statistical errors. This is reflected in the large error bars for the lower-mass SF measurements. To better understand the information that the lensing signal affords, Figure 6 shows a breakdown of the constituent parts of the lensing fit for high-mass and low-mass galaxies. Ignoring the contribution to the lensing signal between two halos¹¹, the lensing signal has three parts: the halo profile around central galaxies, the halo profile around satellite galaxies, and the central point source (i.e., the galaxy itself). For red galaxies, the higher amplitude of the ΔΣ measurements at scales R ≳ 100 kpc is indicative of the higher satellite fractions, as this scale probes the mass profile of the dark matter halo in its outskirts. Interior to this scale, the lensing signal is a measure of the mass of dark matter halos around central galaxies. For both bins in M_* shown, the mean halo mass of centrals appears roughly consistent between passive and SF subsamples. The differences are driven primarily by the fraction of galaxies that are satellites.
The Stellar-to-Halo Mass Ratios and their Evolution
The left-hand panels in Figure 7 show the SHMR for red and SF galaxies at each redshift bin. The curves show the best-fit model for each sample, while the shaded regions indicate the range that contains the inner 68% of the models. At low masses, the SHMR becomes shallow and stellar mass increases much more rapidly than halo mass: M_* ∼ M_h^{1/β} ∼ M_h^2. As galaxy mass increases, however, the relation reaches a pivot point at which central galaxies increase in mass more slowly than their halos and the SHMR becomes steep. This is now accepted as a generic result of the abundance matching paradigm (Wang et al. 2007; Conroy & Wechsler 2009; Moster et al. 2010; Behroozi et al. 2010; Yang et al. 2011). In L12 we defined the pivot point quantitatively as the location in the M_*-M_h relation where the M_*/M_h ratio is maximal, usually around M_* ∼ 10^10 M_⊙ and M_h ∼ 10^12 M_⊙.
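The pivot point can be located numerically from any parameterization of the SHMR. Below is a minimal sketch with a toy broken power law (the parameter values are hypothetical, chosen only so that the low-mass slope gives M_* ∼ M_h^2; this is not the f_SHMR form used in the chains):

```python
import numpy as np

def toy_shmr(log_mh, log_m1=12.0, log_ms0=10.6, beta=0.5, delta=0.6):
    """Toy mean log10 M* at fixed log10 M_h: slope 1/beta below the
    break (so M* ~ M_h^2 for beta = 0.5) and a shallow slope above."""
    x = log_mh - log_m1
    return log_ms0 + np.where(x < 0, x / beta, delta * x)

log_mh = np.linspace(10.5, 14.5, 2001)
log_ms = toy_shmr(log_mh)
log_ratio = log_ms - log_mh        # log10 of the M*/M_h ratio
i_piv = np.argmax(log_ratio)       # pivot = where M*/M_h is maximal

print(f"pivot halo mass:    10^{log_mh[i_piv]:.2f} Msun")
print(f"pivot stellar mass: 10^{log_ms[i_piv]:.2f} Msun")
```

With these toy numbers the ratio peaks at the break, giving pivot scales of the same order as those quoted above (M_h ∼ 10^12 M_⊙, M_* ∼ 10^10.6 M_⊙).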
At all redshifts, the qualitative behavior of the SHMR for SF and passive galaxies is quite similar; both subsamples show a pivot point. The pivot halo mass is roughly 10^12 M_⊙ and the pivot stellar mass is roughly 10^10.6 M_⊙. We will present a more detailed comparison presently, but broadly speaking, there are few major differences in the results. When comparing the results at low masses, however, it is important to remember that these results do not reflect the fraction of halos occupied by red central galaxies. For the z = 0.66 and z = 0.88 redshift bins, the fraction of halos below 10^12 M_⊙ that have red central galaxies is vanishingly small. Only for z = 0.36 does the red central fraction become significant at these halo mass scales.
At masses above the pivot point, however, the behavior of the SHMR is quantifiably different. At z ∼ 0.88, massive SF galaxies occupy larger halos at fixed stellar mass. In each panel, the point with horizontal error bars shows the mean stellar mass within the X-ray group sample from George et al. (2011). Although the red central fraction from the groups is used within the MCMC chains, the mean stellar mass is not. At z ∼ 0.66, massive SF galaxies still reside in more massive halos than their quiescent counterparts, but now the mean relations are much closer together. At z ∼ 0.36, the mean SHMRs for red and SF galaxies have crossed; massive passive galaxies occupy slightly more massive halos than similar SF galaxies. The sample variance for the low-z bin is significant, but an evolutionary trend can be seen across the full COSMOS sample. In T12 we compare these results to the central galaxies found in the group catalog, finding quantitative agreement. This figure also compares the new color-dependent results to the SHMR from L12. At low masses, the SF SHMR tracks the all-galaxy SHMR nearly exactly; this is expected given that SF galaxies dominate the population at these masses. At high masses, the all-galaxy SHMR is intermediate between the SF and passive SHMRs.
The origin of the differential evolution at the massive end comes from our specific combination of data. The stellar mass functions clearly indicate that there are more passive galaxies than SF galaxies at the massive end of the spectrum. The clustering and lensing, however, indicate that the large-scale bias and halo masses of the SF and passive subsamples are consistent. Recall that the left-hand panels show the mean stellar mass as a function of halo mass, even though we have plotted the observable, log M_*, on the x-axis. At fixed M_*, scatter becomes very important at the massive end. The right-hand panels in Figure 7 show the mean halo mass at fixed stellar mass. In this plot, the differences between the red and SF subsamples are almost entirely gone; thus, in bins of M_* where satellite galaxies are negligible (i.e., at stellar masses significantly above the knee in the stellar mass function), one would expect the clustering and lensing of SF and passive galaxies to be consistent. The difference in the SHMRs is driven by the larger values of σ_logM* for SF galaxies than for passive galaxies. For SF galaxies at z = 0.88, σ_logM* = 0.25 ± 0.01, while for passive galaxies σ_logM* = 0.18 ± 0.05. By z = 0.36, the passive galaxies have the smaller scatter and the steeper SHMR at the massive end. Although our functional form for f_SHMR is meant to have a high degree of flexibility at high halo masses, we cannot rule out a possible bias due to our parametric form for f_SHMR. Additionally, the assumption of symmetric, lognormal scatter may come into play in this regime where the scatter is important. With the current data we are unable to test alternative models for scatter.
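The effect of σ_logM* on the mean halo mass at fixed M_* can be illustrated with a toy Monte Carlo (all numbers here, including the halo mass range, the mass-function slope, and the linear SHMR, are hypothetical stand-ins rather than the fitted model): because low-mass halos are far more abundant, a larger scatter pulls more of them into a fixed high-M_* bin and lowers the mean halo mass.

```python
import numpy as np

rng = np.random.default_rng(42)

# toy halo population: flat sampling in log M_h, weighted by a mass
# function falling one dex in abundance per dex in mass
log_mh = 11.5 + 3.0 * rng.random(2_000_000)
weights = 10.0 ** (-(log_mh - 11.5))

def mean_mh_in_bin(sigma, lo=11.0, hi=11.3):
    """Weighted mean log M_h for galaxies landing in a massive M* bin,
    given lognormal scatter sigma (dex) about a linear toy SHMR."""
    log_ms = 10.6 + 0.6 * (log_mh - 12.0) + sigma * rng.standard_normal(log_mh.size)
    sel = (log_ms > lo) & (log_ms < hi)
    return np.average(log_mh[sel], weights=weights[sel])

mh_tight = mean_mh_in_bin(sigma=0.18)   # passive-like scatter
mh_broad = mean_mh_in_bin(sigma=0.25)   # SF-like scatter
print(f"<log M_h | M* bin>, sigma = 0.18: {mh_tight:.2f}")
print(f"<log M_h | M* bin>, sigma = 0.25: {mh_broad:.2f}")
```

Without scatter, this bin would map onto log M_h ≈ 12.7-13.2; the larger scatter lowers the mean further, which is why SF and passive galaxies can share similar ⟨M_h|M_*⟩ despite different f_SHMR.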
To determine the origin of the constraints on the high-mass end of the SHMR, we ran a series of chains removing different data sets. Figure 8 shows highlights from this series for the z = 0.88 redshift bin. Intriguingly, the constraints when using the SMFs alone already show a clear separation between the SHMRs of SF and passive galaxies, although the difference is not as large as in the final result. Adding just the most massive clustering bin increases the separation between passive and SF SHMR values into rough agreement with the full data. Similar results are found when removing the most massive clustering bin and incorporating all others; constraints on σ_logM* come from a range of stellar masses, provided the halos occupied are in the regime where halo bias is monotonically increasing with halo mass (roughly M_h ∼ 2×10^11 M_⊙ at this redshift). Because the halo bias function is highly non-linear, the mean halo mass is not the same as the bias-weighted halo mass. In this respect, the clustering has more constraining power on σ_logM* than the lensing data. The top panel in Figure 8 demonstrates that our final results are not sensitive to the data derived from the X-ray groups.
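The extra leverage of the clustering can be seen with a toy convex bias relation (the functional form and coefficients below are invented for illustration): because b(M_h) curves upward, the mean bias of a sample, which sets its large-scale clustering amplitude, exceeds the bias at the sample's mean halo mass, by an amount that grows with the scatter.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_bias(log_mh):
    # hypothetical monotonic, convex bias-mass relation
    return 0.8 + 0.5 * 10.0 ** (0.6 * (log_mh - 13.0))

results = {}
for sigma in (0.18, 0.25):
    # halos hosting a fixed-M* sample: lognormal spread in log M_h with
    # width sigma/0.6 (scatter in M* divided by the local SHMR slope)
    log_mh = 12.7 + (sigma / 0.6) * rng.standard_normal(500_000)
    b_sample = toy_bias(log_mh).mean()    # what clustering measures
    b_at_mean = toy_bias(log_mh.mean())   # bias at the mean halo mass
    results[sigma] = (b_sample, b_at_mean)
    print(f"sigma = {sigma}: <b> = {b_sample:.3f}, b(<M_h>) = {b_at_mean:.3f}")
```

Two samples with the same mean halo mass but different σ_logM* therefore cluster differently, which is how w_θ constrains the scatter.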
Results when removing the lensing data are similar. Figure 9 shows the 68% ranges of f_q(M_h) from the MCMC chains for each redshift bin. At z = 0.66 and z = 0.88, f_q(M_h) has a sharp cutoff between M_h = 10^11.5 and 10^12.0 M_⊙. Although the median value of the cutoff evolves to somewhat lower mass between z = 0.88 and z = 0.66, the results from the two redshift bins are also consistent with no evolution. At z = 0.36, f_q(M_h) is higher at all halo masses, most notably at M_h ≲ 10^11.5 M_⊙; rather than a sharp cutoff in the quenched central fraction, there is a long tail toward lower masses where f_q(M_h) is 3-10%. This is driven by all three sets of data: a higher abundance of low-mass passive galaxies in the SMF, a lower clustering amplitude for low-mass samples in w_θ, and a lower satellite fraction in the ΔΣ measurements. We will explore this in detail in subsequent sections. Figure 9 also shows results from the z = 0 SDSS group catalog of Tinker et al. (2011). The shape of f_q(M_h) from the groups is similar to our non-parametric fit in COSMOS, but the amplitude is higher by ∼ 0.1-0.2 dex. This may reflect evolution, given that the time elapsed between z = 0.36 and z = 0.05 is 3.3 Gyr, equal to the time elapsed from z = 0.88 to z = 0.36. It may also reflect differences in the definition of "quenched": in Tinker et al. (2011), a 4000-Å break below 1.6 is used to denote quenched, as opposed to the NUV-optical-NIR color cuts used on the COSMOS data. Although this definition is less sensitive to dust than the traditional g − r color, D_n(4000) may suffer from aperture bias for more massive galaxies. The results for COSMOS groups are plotted as well, one datum per redshift bin, color-coordinated with the MCMC results.
Central Red Fraction vs Halo Mass
Tinker & Wetzel (2010) constrained the halo occupation of color-selected samples using clustering from DEEP2 and COMBO17, concluding that there was not a strong cutoff in f_q(M_h) (additional data from the UKIDSS-UDS were inconclusive). Those clustering samples were created using a single color cut without any NIR data, contaminating the red sequence with dust-reddened star-forming galaxies. From Figure 9, many of these galaxies are centrals in low-mass halos, making f_q(M_h) appear flatter and without any strong cutoff. Zhu et al. (2011) find that ∼ 25% of sub-L* galaxies with red colors are star-forming, with specific star formation rates of ∼ 10^−10 yr^−1.
We note again that the detailed constraints on f_q(M_h) depend on our assumption that quenching of central galaxies is a function of halo mass, independent of stellar mass. Because the mean galaxy mass at fixed halo mass is similar between passive and SF subsamples, a parameterization of f_q that depends on stellar mass rather than halo mass will likely yield consistent results.
Central-Satellite Decomposition of the Stellar Mass Functions
Figure 10 shows the SMFs for the SF and passive subsamples, broken down into the separate abundances of central and satellite galaxies. For SF galaxies, there is a modest increase in the number of both central and satellite galaxies in time. The abundance of red satellite galaxies exhibits little redshift evolution at low masses. There is actually a deficit of massive red satellites at z = 0.36. It is unclear whether this represents physical evolution versus sample variance, an issue we will discuss in this subsection.
FIG. 7.—Stellar-to-halo mass ratios for passive and SF central galaxies. Shaded regions indicate the 68% range in each quantity from the MCMC chains. The left-hand panels show f_SHMR for each redshift bin, equivalent to the mean M_* at fixed M_h. The points with horizontal error bars represent the mean halo masses of the X-ray groups with passive and SF central galaxies, taken from T12. Long-dashed curves show the SHMR for all galaxies, taken from L12. The right-hand panels show ⟨M_h|M_*⟩. The larger scatter for SF galaxies creates more Eddington bias; thus, when binned in M_*, the mean halo mass is significantly smaller than f_SHMR. Thus the lensing signals for massive galaxies are similar between passive and SF samples.
The only subsample that exhibits significant redshift evolution is red central galaxies. At the massive end there is minimal evolution, consistent with results from larger surveys on the evolution of the luminosity function of Luminous Red Galaxies (LRGs; Cool et al. 2008; Wake et al. 2006). However, at M_* ≲ 10^11 M_⊙, the number of red central galaxies increases rapidly from z = 0.88 to z = 0.36. At M_* = 10^10 M_⊙, this abundance increases by 1.2 dex. At this same mass scale, the change in the number of satellite galaxies is negligible.
This result is more clearly expressed by looking at the fraction of galaxies that are red, and how this fraction depends on categorization as a central or a satellite galaxy. Figure 11 shows f_q as a function of redshift for five values of M_* over the range log M_* = [9.7, 11.2]. In this figure we have included data from the SDSS group catalog of Tinker et al. (2011). The differences in stellar mass estimates between the two surveys make comparisons of absolute abundances problematic, but fractions are more robust. To create the SDSS data in this figure, we have added 0.2 dex to the stellar mass estimates and added 0.2 dex of scatter. The former represents the 0.2 dex shift in the SMFs between SDSS (Li & White 2009) and COSMOS once deconvolved to a common scatter value. The latter is meant to mock up the increased uncertainties of COSMOS photometric redshifts relative to SDSS spectroscopic redshifts (see the discussion of Figure 14 in L12). Both of these changes lower f_Q by 0.1 to 0.2, with the shift in mass scale dominating the effect. The upper error bars show the original SDSS values before shifting and adding scatter. Because both alterations to the SDSS data lower f_Q, the values used here should be considered lower limits on the quenched fraction of SDSS galaxies. Figure 11a shows f_q for all galaxies in each stellar mass bin. The rate of change in f_q with redshift monotonically decreases with increasing stellar mass. For massive galaxies, f_q is roughly constant. At log M_* = 9.7, f_q increases by a factor of five. Figure 11b shows the same quantity, but now for satellite galaxies only. Aside from the lowest mass bin, f_q(sat) in all bins is consistent with no redshift evolution. Central galaxies, on the other hand, show significant evolution; at log M_* ≲ 10, f_q(cen) increases by an order of magnitude. Even at log M_* = 10.5, f_q(cen) increases by a factor of 5 over our redshift baseline. Knobel et al.
(2013) use group catalogs in the zCOSMOS survey to measure the redshift evolution of centrals and satellites as well. Due to the flux limit of the zCOSMOS target selection, they only achieve a redshift baseline for galaxies with M_* ≳ 10^10.3 M_⊙. They also find little to no evolution in the red fraction of satellites. For central galaxies, however, they find weaker evolution of the red fraction. In Figure 12 we compare f_q for centrals and satellites between the two methods. An objective comparison is obstructed by the overall offset in f_q between the two definitions of quenched (see Figure 2). Both approaches yield a small decrease in f_q(sat) for massive galaxies as z decreases, but the Knobel et al. (2013) groups yield a quenched fraction at M_* = 10^11 M_⊙ that is nearly unity. In this panel we plot the results from the COSMOS X-ray groups of George et al. (2011), which use the same definition of quenched as this work. The f_q(sat) values are mostly consistent with those from our SHMR analysis. For central galaxies, the Knobel et al. (2013) groups yield contrasting results above and below M_* = 10^10.5 M_⊙. Below this limit, the zCOSMOS central galaxies show a moderate increase in f_q(cen), but above this limit the zCOSMOS central galaxies exhibit significantly decreasing f_q(cen) with decreasing redshift. The quenched fraction of M_* = 10^11 M_⊙ centrals decreases from 90% to 60% over their redshift baseline.
FIG. 8.—Stellar-to-halo mass ratios for passive and SF central galaxies when using only subsets of the available data. In each panel, the shaded region is the 68% confidence interval for the SHMR from the MCMC chains. The lines indicate the same quantity from the original chains using all data (cf. Figure 7). All panels show results from the z = 0.88 redshift bin. Bottom panel: chains that incorporate only the stellar mass functions and their ratio. Middle panel: the stellar mass functions, the SMF ratio, and the most massive clustering bin: galaxies with log M_* = [11.1, 11.6]. Top panel: chains using all data except f_q(M_h) from the X-ray group catalog.
The U − B color cut used in zCOSMOS may be susceptible to dust contamination, which may be stronger at higher redshifts where star formation rates are also higher. Additionally, there may be differences driven by the two methods: halo occupation and group finding. Misclassification of which galaxy in a group is the central is a major source of bias for group catalogs. Given that the quenched fraction of satellites exhibits no redshift evolution, this type of bias will only weaken the true trend in f_q(cen). Moreover, Knobel et al. (2013) use a probabilistic scheme to select subsamples of central and satellite galaxies that have purity near 80%, forcing them to assume that these subsets are representative of the overall populations. Halo occupation methods do not suffer from these biases, as central and satellite populations are constrained only in a statistical fashion, and not on an object-by-object basis. We also note that the central galaxies in the X-ray group catalog used here are a much cleaner sample of central galaxies, given that the group center can be verified with the X-ray brightness profile. Figure 13 shows a complementary statistic: the fraction of galaxies that are satellites, f_sat, for the same stellar mass bins and redshift range. For all galaxies, f_sat is between 0.25 and 0.35, consistent with previous analyses of z = 0 luminosity-dependent clustering (e.g., Zehavi et al. 2005; van den Bosch et al. 2007; Zheng et al. 2007; Zehavi et al. 2011). Halo occupation analysis of z ∼ 1 luminosity-dependent clustering indicates a somewhat smaller f_sat than at z = 0 (Zheng et al. 2007; Abbas et al. 2010). However, recent analysis of stellar-mass-dependent clustering at z = 1-2 by Wake et al. (2011) finds f_sat values consistent with those in COSMOS. Because satellite galaxies are predominantly red, they are fainter than SF galaxies at the same stellar mass, lowering the satellite fraction at a given mass. For SF galaxies, star formation rates increase with redshift, increasing the difference between luminosity-defined and stellar-mass-defined samples.
The satellite fractions of star-forming galaxies are lower than for the full sample, generally near ∼ 0.2, with minimal redshift evolution. Satellites dominate the population of low-mass passive galaxies at z ∼ 1. Even at log M_* = 10.8, f_sat = 0.55. By z = 0, satellites represent less than half of passive galaxies at log M_* > 9.7. The change in f_sat for passive galaxies is non-monotonic when incorporating the SDSS data, yielding a "dip" in f_sat at z = 0.36. The small volume of this redshift slice raises the possibility that the galaxy distribution around z = 0.36 within COSMOS is a significant outlier with respect to the cosmic mean. We note that while the trend of f_sat(red) with redshift is non-monotonic, the trend in f_q(cen) is monotonic. Thus, if the z = 0.36 redshift slice is simply removed from consideration, the results in Figures 11 and 13 are still consistent with the scenario in which the only population to undergo significant evolution since z = 1 is red central galaxies.
Signature of the Evolving Red Central Population in the Data
Figure 14 demonstrates where our constraints on the evolving population of red centrals derive from. If we assume that the fraction of halos with red centrals is fixed at its z = 0.88 value, the abundance and clustering of the overall red population at z = 0.36 are markedly different from the data. Figure 14 shows results from the HOD model at z = 0.36, but with the five parameters of the non-parametric f_q(M_h) function replaced by the best-fit values at z = 0.88. In this model, the abundance of low-mass passive galaxies is low by a factor of ∼ 2.5 relative to the data, while the clustering is too high by an order of magnitude or more. The increased clustering amplitude is attributable to the higher satellite fraction of passive galaxies in this model. It is possible to construct a model with the z = 0.88 f_q(M_h) that relieves the tension with the SMF, but this requires making up the difference by increasing the number of satellite galaxies, which only increases the tension with the clustering. In short, the only way to match both the SMF and w_θ measurements at z = 0.36 is to increase the frequency of quenched central galaxies relative to z = 0.88.
FIG. 9.—… results from the COSMOS X-ray groups (George et al. 2011) that are used in the MCMC modeling. At z ≥ 0.66, there is a sharp cutoff in f_q(M_h), implying that nearly all central galaxies at M_* ≲ 10^10.5 M_⊙ are star forming at these redshifts. At lower redshifts, this cutoff moves to lower halo masses and there is a non-negligible contribution to the red sequence from low-mass central galaxies.
DISCUSSION
In an upcoming paper we will present a detailed analysis of the halo occupation results presented here, comparing them to the growth histories, merging rates, and subhalo accretion and evolution in high-resolution N-body simulations. But it is already possible to make a significant qualitative assessment of our breakdown of the red sequence into central and satellite galaxy components.
Evolution of the SHMR for high-mass galaxies. Observations indicate that the red sequence begins with massive galaxies at z ≳ 2 (Kriek et al. 2008; Williams et al. 2009). Thus it is not surprising that massive passive and SF galaxies have substantially different SHMRs at z = 1. Both the halo occupation analysis presented here and the X-ray groups analyzed in T12 indicate that, above the group halo mass scale (≳ 10^13 M_⊙), star-forming central galaxies are less massive than their red counterparts at fixed M_h. The substantial difference between M_* for SF and passive subsamples implies that star formation is not a stochastic process in these objects: if massive central galaxies underwent periodic episodes of star formation followed by longer-term quiescence, the galaxies at fixed halo mass would have the same stellar mass. The results also imply that massive quenched galaxies formed their stars very rapidly at high redshift, essentially getting 'ahead of the growth curve' relative to central galaxies that would still be forming stars by z = 1. At high redshift, central galaxies essentially "knew" they would be quenched by z = 1 (see the discussion in T12).
From z = 1 to z = 0, the SHMRs evolve quite differently depending on star formation activity. By z = 0.36, the mean relations have crossed, and red central galaxies live in higher mass halos than SF central galaxies at fixed mass. This inversion is also consistent with results from z = 0 studies (Mandelbaum et al. 2006; More et al. 2011). Star-forming galaxies of mass 10^11 M_⊙ grow by a factor of ∼ 1.6 from z = 0.88 to z = 0.36 using the star formation rates of Noeske et al. (2007). Host halos for these galaxies (M_h ∼ 10^13 M_⊙) grow by a factor of ∼ 1.8 over the same redshift interval (Wechsler et al. 2002); thus star-forming central galaxies grow almost as fast as their host halos. For quenched galaxies, growth rates are significantly slower than those of their host halos, causing the inversion of the SHMR seen in Figure 7. Although halos will accrete substantial stellar mass from smaller galaxies, most of this mass does not merge with the central galaxy; this is implied by the evolution of the luminosity function of massive passive galaxies (Wake et al. 2006; Cool et al. 2008). This mass instead contributes to the buildup of the intracluster light (Conroy et al. 2007; Purcell et al. 2007). The results here will put strong constraints on the growth of massive passive galaxies in our follow-up paper.
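The growth comparison above is simple arithmetic; as a sketch, using only the growth factors quoted in the text:

```python
import math

# growth factors quoted above for z = 0.88 -> 0.36: a factor of ~1.6 in
# stellar mass (integrating Noeske et al. 2007 SFRs) and ~1.8 in halo
# mass (Wechsler et al. 2002) for ~10^11 Msun star-forming centrals
stellar_growth = 1.6
halo_growth = 1.8

# net change in the stellar-to-halo mass ratio, in dex
dlog_ratio = math.log10(stellar_growth / halo_growth)
print(f"SHMR shift for SF centrals: {dlog_ratio:+.3f} dex")
```

The shift is only ∼ -0.05 dex, i.e., SF centrals nearly track their halos, whereas the much slower growth of quenched centrals produces the inversion of the SHMR.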
Evolution of the SHMR for low-mass galaxies. In contrast to the massive end of the galaxy population, low-mass galaxies show little evolution in the SHMR, as well as very little difference in the SHMR between passive and SF subsamples. Due to the low abundance of low-mass red central galaxies, the errors on the SHMR below the pivot point are much higher for passive galaxies than for SF galaxies. For each redshift bin, the red SHMR is slightly below the SF relation (at fixed M_*), but they are consistent within the error bars. From Figures 10 and 11, the abundance of red centrals is nearly negligible at z = 1 and increases rapidly relative to other constituents of the full galaxy population. Thus, most low-mass quiescent central galaxies will be recent additions to the red sequence, and the halo masses of red galaxies will be similar to those of SF galaxies. Low-mass galaxies have significant gas content, with M_gas ≳ M_* at M_* ≲ 10^9.5 M_⊙ at z = 0 (Baldry et al. 2008). The difference in stellar mass at fixed halo mass should be indicative of the amount of this gas that has been converted into stars during the quenching process. This proposition assumes no increase in the halo mass; i.e., that the quenching mechanism is not major mergers.
FIG. 10.—The stellar mass functions of passive and SF COSMOS galaxies broken down into the contributions from central and satellite galaxies. Panels (a) and (b) show results for quenched satellite and central galaxies, respectively. Panels (c) and (d) show results for star-forming satellite and central galaxies, respectively. The shaded regions represent the 68% range of values within the MCMC chains. For all four subsamples, there is little redshift evolution at the high-mass end (M_* ≳ 10^11 M_⊙). There is a dearth of high-mass quenched satellites at z = 0.36, but this is likely a statistical outlier. At low masses, the only subsample that shows significant evolution is passive central galaxies; at M_* = 10^10 M_⊙, the abundance of red centrals increases by more than an order of magnitude across our redshift baseline. Figure 11 shows that this growth in the fraction of quenched central galaxies continues to increase to z = 0.
The migration rate of central galaxies to the red sequence. Our results are in good agreement with recent measurements from PRIMUS by Moustakas et al. (2013), which demonstrate that the growth of the red sequence from z = 1 to 0 is primarily due to low-mass galaxies being quenched of their star formation. Our SHMR analysis further indicates that this growth is happening in the low-mass central population, as opposed to satellites in groups and clusters. Figure 15 shows the rate at which central galaxies are added to the red sequence. The x-axis is stellar mass and the y-axis is the difference in the abundance of red centrals between adjacent redshift bins, divided by the time elapsed between redshift bins (in units of number/volume/dex/Gyr). The shaded regions show the 1σ range in rates given the uncertainties in the abundances of red centrals at each redshift. Within 1σ, there is evidence for an accelerated migration rate over the COSMOS redshift baseline, although the migration rates are consistent within their 2σ uncertainties. Figure 15 also shows the migration rate of central galaxies to the red sequence using the SDSS group catalog of Tinker et al. (2011). This group catalog allows us to isolate only the abundance of central quenched galaxies (quenched by the criterion D_n(4000) > 1.6). Although the redshift baseline within the SDSS Main galaxy sample is small, the overall number of galaxies is very large, and it is possible to detect changes in the abundance of red central galaxies within the volume-limited group catalogs of Tinker et al. (2011) (see their Table 1). As discussed earlier, direct comparison of the SDSS stellar masses with COSMOS stellar masses is not possible, but given the lack of a significant slope of the migration rate with stellar mass, the difference in M_* estimator is less relevant. The SDSS results yield a migration rate nearly an order of magnitude higher than the z = 0.88 → 0.66 COSMOS results.
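The quantity plotted in Figure 15 is straightforward to reconstruct; a sketch follows, where the red-central abundances are hypothetical placeholders and the flat ΛCDM parameters (Ω_m = 0.27, H_0 = 70) are assumed rather than taken from the text:

```python
import numpy as np

OM, H0 = 0.27, 70.0   # assumed cosmology; H0 in km/s/Mpc

def lookback_time_gyr(z, n=20_000):
    """Lookback time t(z) = (1/H0) * int_0^z dz' / [(1+z') E(z')]."""
    zz = np.linspace(0.0, z, n)
    f = 1.0 / ((1 + zz) * np.sqrt(OM * (1 + zz) ** 3 + (1 - OM)))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz))  # trapezoid rule
    return (977.8 / H0) * integral   # 977.8 converts 1/H0 to Gyr

# time between adjacent redshift bins (this cosmology also gives ~3.3 Gyr
# from z = 0.88 to z = 0.36, matching the interval quoted in the text)
dt = lookback_time_gyr(0.88) - lookback_time_gyr(0.66)

# hypothetical abundances of red centrals at fixed M* [Mpc^-3 dex^-1]
phi_z088, phi_z066 = 2.0e-4, 5.0e-4
rate = (phi_z066 - phi_z088) / dt
print(f"elapsed time: {dt:.2f} Gyr; migration rate: {rate:.2e} Mpc^-3 dex^-1 Gyr^-1")
```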
Previous studies have also detected an acceleration of the migration rate onto the red sequence with cosmic time. Both the PRIMUS results and the zCOSMOS results of Pozzetti et al. (2010) find that the growth rate, in number and mass density, of objects on the red sequence increases with decreasing redshift at M_* ≲ 10^10.6 M_⊙. These studies find significantly less evolution in the growth rate than found in this work, which is a natural consequence of analyzing the overall galaxy population as opposed to focusing on central galaxies.
The results here make strong predictions for the minimum M_* that can be quenched in the field. Geha et al. (2012) find that there are no isolated field galaxies below 10^9 M_⊙ that are passively evolving in the low-redshift NASA-Sloan Atlas. Our models predict that f_q(cen) drops below 1% at M_* = 3 × 10^9 M_⊙ at z = 0.66 and M_* = 6 × 10^9 M_⊙ at z = 0.88. Extending the search for the minimum-mass quenched field galaxy can confirm and strengthen the constraints from the SHMR analysis.
FIG. 11.—The red (quenched) fraction of galaxies as a function of redshift for various stellar mass bins. Panel (a) shows f_q for all galaxies. Panel (b) shows f_q for satellite galaxies. Panel (c) shows f_q for central galaxies. Error bars on the COSMOS measurements represent the 68% range within the MCMC chains. Data points at z = 0.05 are from the SDSS group catalog of Tinker et al. (2011). The SDSS stellar masses have been modified to afford better comparison to COSMOS stellar masses, but these changes yield little to no change in the values on the y-axis; see text for details. Although the z = 0.36 redshift bin is somewhat anomalous in its statistics, it is consistent with the overall trends in this figure: namely, the monotonic growth of the quenched fraction of galaxies at all masses, the near-constant quenched fraction of satellite galaxies, and the rapid growth of a population of quenched central galaxies, especially at low masses.
The quenching timescale for satellite galaxies. The probability that a satellite is quenched increases monotonically with the time that has passed since it was accreted (Wetzel et al. 2013a); older satellites are more likely to be quenched of their star formation. Thus we can compare our constraints on f_q(sat) with our theoretical knowledge of the accretion and destruction of subhalos in N-body simulations. At high z, the mean age of a subhalo (i.e., the time that has elapsed since it was accreted) is significantly smaller than the mean age of subhalos at z = 0. Dynamical friction is more efficient at higher redshifts because the mean density of dark matter halos increases as (1 + z)^3. Making the ansatz that the oldest subhalos host the satellites with the lowest star formation rates allows us to infer the timescale that must elapse for galaxies that are accreted as star-forming to migrate to the red sequence; e.g., if 50% of subhalos are older than 4 Gyr and 50% of satellite galaxies are red, it takes approximately 4 Gyr for satellite galaxies to be quenched of their star formation (Tinker et al. 2010b; Tinker & Wetzel 2010; Wetzel et al. 2013a). Figure 16 shows the estimated quenching time for M_* = 10^10.5 M_⊙ satellite galaxies. Here we use the simulation results from Tinker & Wetzel (2010). The two values represent the upper and lower bounds on the quenching timescale, based on assumptions about the fraction of satellite galaxies that were quenched prior to accretion: either that f_q(cen) = 0 or that f_q(cen) equals the value at the redshift of the measurement. In reality, f_q(cen) will be nonzero but lower than at the redshift of the measurement, because the galaxies were accreted at higher redshift. We compare these results to those for M_* = 10^10.5 M_⊙ galaxies at z = 0 (Wetzel et al. 2013a); this estimate takes into account the evolution in f_q(cen). We also show results from Tinker et al. (2010b) and Tinker & Wetzel (2010) at higher redshift.
These latter papers analyze clustering for different luminosity-defined samples, so this is not an apples-to-apples comparison. But in general these results are consistent with a scenario in which the quenching timescale of satellite galaxies varies with the evolving dynamical timescale of the host halos: t_Q ∼ (1 + z)^{-3/2}. Peng et al. (2010, 2012) investigate the quenched fraction of galaxies as a function of local density, stellar mass, and redshift. They parameterize galaxy quenching as "mass quenching" and "environment quenching", demonstrating that the effects of these disparate mechanisms are fully separable. "Mass quenching" can be compared to central galaxy processes, while "environment quenching" is tightly associated with satellite processes. The fundamental difference between the approach of Peng et al. and this work is that the fundamental parameter in our approach is the mass of the host halo (and of the subhalo if the galaxy is a satellite), while Peng et al. consider the stellar mass to be fundamental for central galaxies and the local density of galaxies to be fundamental for satellites. For central galaxies, due to the small scatter between stellar mass and halo mass, it may not be possible to distinguish between these two approaches. Further work is required to see if a model in which central galaxy quenching is determined by galaxy mass fits the data as well as the model we have presented here. It is worth noting that Peng et al. (2010) find their 'mass-quenching efficiency' to increase with cosmic time, in agreement with our results.
For satellite galaxies, Peng et al. (2012) find that local density correlates better with quenching than either the stellar mass of the satellite galaxy or the host halo mass. This is at odds with our conclusions, as well as with the model presented in Wetzel et al. (2012, 2013a), in which the observed correlation between host halo mass and the quenched fraction of satellite galaxies is driven by the time that has elapsed since the satellites were accreted. More massive halos have older subhalo populations, and thus contain satellites that are more often quenched of their star formation. In the next paper in this series (A. Wetzel, et al., in preparation), we model various physical mechanisms in detail, scrutinizing the local density model as a driver of satellite evolution. Peng et al. (2010) find no evolution with redshift in their environment-quenching efficiency, which is in stark contrast to the results in Figure 16 and our conclusion that satellite quenching efficiency is much higher in the past.

FIG. 12.-The evolution of the quenched fraction from our SHMR analysis compared to that from the zCOSMOS groups catalog of Knobel et al. (2013). The color scheme is the same as in previous figures, but here we make the comparison in the stellar mass bins used in Knobel et al. (2013). The top panel shows f_Q for satellites while the bottom panel shows f_Q for centrals. In the top panel, we also include results from the COSMOS X-ray group catalog of George et al. (2013). There is an overall shift in the total quenched fractions for the COSMOS and zCOSMOS samples (cf. Figure 2) such that the zCOSMOS sample has a higher fraction of quenched galaxies. For massive objects, the zCOSMOS sample has a decreasing f_q with decreasing z, which is driven by the decrease in f_q(cen) for bins at M_* > 10^10.5 M_⊙. The photometric COSMOS sample used here has a monotonically increasing (or constant) overall red fraction, also driven by the behavior of the central galaxies.
The actual quenched fraction of satellite galaxies is nearly independent of redshift (cf. Figure 10 and Tinker & Wetzel 2010), but Peng et al. (2010) do not take into account the redshift dependence of satellite dynamics discussed above, i.e., the fact that satellites at z = 1 survive as satellites for ∼1/3 of the time that z = 0 satellites do. The Peng et al. (2010) results imply that the fraction of satellites that are quenched after accretion is time independent, and thus their results are consistent with an efficiency (or timescale) that varies with the dynamical time of dark matter halos.
What is the mechanism responsible for the growth of the red sequence? The constant f_q(sat) with redshift implies that the rates of creation and destruction of red satellite galaxies roughly balance. So although the mechanisms that quench star formation in groups and clusters (ram pressure, strangulation, harassment, etc.) are constantly acting on star-forming satellites, they have minimal impact on the change in the number of objects on the red sequence from z = 1 to z = 0.¹² The conclusion of Wetzel et al. (2013a) is that roughly 1/3 of z = 0 quenched galaxies with M_* ≥ 10^9.7 M_⊙ were put on the red sequence by satellite-driven processes (their Figure 6). This is true whether averaging by number of galaxies or by total stellar mass. At z = 1, this fraction was higher, but the overall number of objects on the red sequence was somewhat smaller. For central galaxies, the primary mechanisms proposed to quench star formation are AGN and major mergers, or perhaps a combination of the two, as the latter may drive the former. To be in agreement with the results here, the mechanism for star formation quenching in central galaxies must satisfy two requirements:
(1) become more efficient with time (i.e., as z → 0) and (2) be roughly independent of stellar mass.
Let us take AGN and mergers as uncorrelated mechanisms. For mergers, Hopkins et al. (2010) find a general agreement among theoretical predictions and observational estimates, in which the merger rate is ∼0.1 Gyr^-1 at z = 1 and rapidly decreases by a factor of ∼5 from z = 1 to z = 0 for galaxies in the range 10^10 M_⊙ < M_* < 10^11 M_⊙. There is also a strong stellar mass dependence of the major merger rate (Maller 2008; Stewart et al. 2009). Which mergers actually put galaxies on the red sequence is not fully quantified, given that merger simulations with gas-rich progenitors can yield star-forming disk galaxy remnants (Robertson et al. 2006; Hopkins et al. 2009). Regarding AGN: although theoretical models focus on AGN as a method to halt star formation in massive galaxies, observed stellar mass functions of X-ray-AGN-hosting galaxies show little to no dependence on stellar mass (Bundy et al. 2008; Georgakakis et al. 2011). There is general consensus that AGN activity peaks at z ≈ 2 and monotonically decreases toward z = 0, but when quantified as a stellar mass function of AGN hosts, the picture is less clear. Bundy et al. (2008) show no redshift evolution in the X-ray AGN host SMF over z = [0.4, 1.4], while Georgakakis et al. (2011) find a lower amplitude of this quantity at z ≈ 0 relative to z = 1. These results rely on the pencil-thin 0.5 deg^2 AEGIS field, so sample variance may be significant. As with galaxy mergers, connecting AGN to quenching requires knowledge of which AGN matter: is there an X-ray luminosity threshold for quenching? If so, does it depend on stellar mass, gas mass, or redshift?
Another possibility is simply a lack of fuel for star formation. Behroozi et al. (2013) demonstrate that the overall mass accretion rate monotonically declines for all dark matter halos as z → 0. If baryonic accretion falls accordingly, star-forming central galaxies may not have a high enough surface density to continue forming stars. Galaxy morphology affords an extra lever-arm in constraining power that we have not utilized in this paper. Bundy et al. (2010) find a population of passive disks at z ∼ 0.6 but a paucity of such objects at lower z (see George et al. 2013 for an investigation of such galaxies within groups). At low stellar masses, where we find the most significant increase in the red sequence, the morphological type with the highest fractional increase is ellipticals/S0, implying that the path to the red sequence for low-mass central galaxies is accompanied by morphological change as well.

FIG. 13.-The satellite fraction of galaxies as a function of redshift for various stellar mass bins. Panel (a) shows f_sat for all galaxies. Panel (b) shows f_sat for red galaxies. Panel (c) shows f_sat for star-forming galaxies. Error bars on the COSMOS measurements represent the 68% range within the MCMC chains. Data points at z = 0.05 are from the SDSS groups catalog of Tinker et al. (2011). The SDSS stellar masses have been modified to afford better comparison to COSMOS stellar masses, but these changes yield little to no change in the values on the y-axis. See text for details.

FIG. 14.-Results of a model in which f_q(M_h) is held fixed to the best-fit value from z = 0.88. In all panels, the data are measurements from the z = 0.36 redshift bin. The left panel shows the SMF from z = 0.36 along with the original best fit (dotted curve, taken from Figure 3). The solid curve shows the results where all parameters are fixed to the best-fit values except for f_q(M_h), which is taken from the z = 0.88 fit. In this model, f_q(M_h) has a sharp cutoff at M_h = 10^12 M_⊙, thus suppressing the abundance of red central galaxies and lowering the overall SMF. The right panels show the effect on the clustering of passive galaxies. Reducing the abundance of quenched central galaxies increases the fraction of quenched galaxies that are satellites. The increased f_sat enhances the clustering at all scales. Note that a better fit to the SMF can be obtained by increasing the number of red satellites, but this will only increase the clustering of passive galaxies. Thus, the solid curves should be considered a lower limit on w_θ for models in which f_q(M_h) does not evolve from z = 1 to z = 0.2.
SUMMARY
We have constrained the stellar-to-halo mass relations for passive and star-forming galaxies over the redshift range z = [0.2, 1.0] in the COSMOS field. These constraints are derived from measurements of the stellar mass function, the angular correlation function, and galaxy-galaxy lensing for multiple stellar mass bins within each redshift bin. For massive galaxies, M_* ≳ 10^10.6 M_⊙, the SHMRs for passive and SF samples exhibit significant differential evolution, with passive galaxies growing much more slowly than their halos while SF galaxies grow roughly at the same rate as their host halos. At lower masses, there is little difference, implying that most faint passive galaxies are recent additions to the red sequence.
Our analysis affords a breakdown of the COSMOS galaxy population into central and satellite galaxies. With this breakdown, we demonstrate that the number of passive satellite galaxies shows little to no evolution with time, thus the change in the red sequence is driven by quenching of central galaxies, primarily at low masses. The overall migration rate of central galaxies to the red sequence is increasing with cosmic time, with the rate at z = 0.05 being nearly a factor of 10 higher than that derived at z = 0.78. Over the same redshift span, the quenching efficiency of satellite galaxies is decreasing with cosmic time. At z = 0.05, the timescale for quenching is ∼ 2.5 times longer than the quenching timescale for satellites at z = 0.88.
We parameterize the quenching of central galaxies as being a function of their host halo mass. At z = 0.88, we find a sharp cutoff in quenched central galaxies at M_h ∼ 10^12 M_⊙, a cutoff that shifts down by 0.2-0.4 dex by z = 0.66. These results are reminiscent of recent theoretical work demonstrating a critical halo mass scale for shock-heating of infalling gas: the cold-mode/hot-mode accretion scenario (e.g., Birnboim & Dekel 2003; Kereš et al. 2005, 2009; Dekel & Birnboim 2006). This shift continues to z = 0.36, but at this redshift there is also a tail of quenched central galaxies that extends to M_h ∼ 10^11 M_⊙, the lowest halo mass scale for which we can probe halo occupation. The z = 0.36 bin does exhibit unusual clustering and abundances that indicate sample variance is playing some role, but the redshift trends found in both the quenched central and satellite galaxy populations are consistent with those found from SDSS results. Simply removing the z = 0.36 results from consideration does not change any of the conclusions of this paper.
We thank the referee for many helpful comments and suggestions that have improved this work. This work was supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. The HST COSMOS Treasury program was supported through NASA grant HST-GO-09822. We wish to thank Tony Roman, Denise Taylor, and David Soderblom for their assistance in planning and scheduling of the extensive COSMOS observations. We gratefully acknowledge the contributions of the entire COSMOS collaboration, consisting of more than 70 scientists.

FIG. 15.-The rate at which central galaxies migrate to the red sequence as a function of stellar mass. The y-axis, ∆n/∆t, represents the difference in the red central stellar mass functions between redshift bins, normalized by the time between each redshift. With three bins we are able to measure two values for ∆n/∆t. The shaded regions show the 68% confidence intervals for this quantity after combining the uncertainties for each redshift bin. The points with errors represent the same quantity but using the sample of central galaxies from the SDSS group catalog. The red points use a volume-limited sample of groups complete to M_* = 10^10.1 M_⊙, with an upper redshift limit of 0.064. The blue points use a volume-limited sample of groups complete to M_* = 10^9.7 M_⊙, with a redshift limit of z = 0.04. Error bars on both sets of points are Poisson. These results imply that the quenching efficiency for central galaxies at M_* ≲ 10^10.5 M_⊙ is increasing rapidly from z = 1 to z = 0.
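The ∆n/∆t statistic described in the figure caption above is a simple finite difference of red-central stellar mass functions between redshift bins. A minimal sketch, with invented number densities and cosmic times (not values from this paper), might look like:

```python
import numpy as np

def migration_rate(phi_low_z, phi_high_z, t_low_z, t_high_z):
    """Net rate at which central galaxies join the red sequence:
    the difference of the red-central stellar mass functions between
    two redshift bins, divided by the cosmic time elapsed between them.
    phi_* are number densities per dex (e.g., Mpc^-3 dex^-1); t_* are
    ages of the universe (Gyr) at each redshift, with t_low_z > t_high_z."""
    dphi = np.asarray(phi_low_z) - np.asarray(phi_high_z)
    return dphi / (t_low_z - t_high_z)

# Invented example: red-central number densities in two stellar mass
# bins at a lower and a higher redshift, 2 Gyr apart in cosmic time.
phi_late = [2.0e-3, 1.0e-3]    # lower redshift (later cosmic time)
phi_early = [1.2e-3, 0.9e-3]   # higher redshift (earlier cosmic time)
rate = migration_rate(phi_late, phi_early, t_low_z=9.8, t_high_z=7.8)
print(rate)   # net migration rate per Gyr in each mass bin
```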
More information on the COSMOS survey is available at http://cosmos.astro.caltech.edu/. It is a pleasure to acknowledge the excellent services provided by the NASA IPAC/IRSA staff (Anastasia Laity, Anastasia Alexov, Bruce Berriman, and John Good) in providing online archive and server capabilities for the COSMOS data-sets.

FIG. 16.-The quenching timescale of satellite galaxies as a function of redshift. The purple and orange filled circles show results for M_* = 10^10.5 M_⊙ galaxies in COSMOS. The "v1" method assumes that all galaxies were star-forming when accreted. The "v2" method uses f_q(cen) from the redshift of the measurement to obtain t_Q; these models bracket the physical range of models. The red triangle at z = 0.05 is from the analysis of SDSS groups in Wetzel et al. (2013a), which models the evolution of the red central fraction explicitly. The green squares are taken from the clustering analysis of Tinker & Wetzel (2010). In order of increasing redshift, these data points represent COMBO-17 (Phleps et al. 2006), DEEP2 M_B < −19.5, DEEP2 M_B < −20.5, and UKIDSS-UDS (Williams et al. 2009). The yellow pentagon at z = 2.3 is from Tinker et al. (2010b), analyzing the clustering of DRGs from Quadri et al. (2008). The shaded band shows t_Q ∼ (1 + z)^(-3/2), normalized by the datum from the SDSS groups data. This power-law dependence on z represents the change in the dynamical friction timescale as the mean density of halos changes proportionately with the mean density of the universe. The observations in COSMOS, as well as the other samples plotted above, indicate that the fraction of red satellites is constant with redshift. Because satellite lifetimes decrease with increasing redshift, the quenching of satellite galaxies must be more efficient in the past.
FIG. 2.-A comparison of the evolving quenched fractions in COSMOS (from this paper), COSMOS (from Drory et al. 2009), PRIMUS
log10(f_SHMR^(-1)(M_*)) = log10(M_1) + β log10(M_*/M_{*,0}) + (M_*/M_{*,0})^δ / [1 + (M_*/M_{*,0})^(-γ)] − 1/2
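The SHMR parameterization above follows Behroozi et al. (2010), and can be evaluated directly. As an illustrative sketch (the functional form is completed here following that paper), the parameter values below are the z_1 best fits for the active (star-forming) sample from Table 2:

```python
import numpy as np

def log_mhalo(log_mstar, log_m1, log_ms0, beta, delta, gamma):
    """log10 of f_SHMR^-1(M_*): the halo mass at a given stellar mass
    in the Behroozi et al. (2010) parameterization. A power law of
    slope beta at low mass transitions to a much steeper relation
    (controlled by delta and gamma) above the pivot mass M_*,0."""
    x = 10.0 ** (np.asarray(log_mstar) - log_ms0)   # M_* / M_*,0
    return (log_m1 + beta * np.log10(x)
            + x ** delta / (1.0 + x ** -gamma) - 0.5)

# z_1 best-fit parameters for active galaxies, taken from Table 2:
lm = log_mhalo(10.5, log_m1=12.56, log_ms0=10.96,
               beta=0.44, delta=0.52, gamma=1.48)
print(lm)   # ~11.96: a 10^10.5 Msun SF galaxy sits in a ~10^12 Msun halo
```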
et al. (2013) use a U − B color cut to select their

FIG. 5.-Galaxy-galaxy lensing of COSMOS galaxies in stellar mass bins. Points with error bars are measurements while curves indicate best-fit HOD models. Colors and point types are the same as in Figure 3. Stellar mass bins for the lensing measurements can be found in
14 in L12). Both of these changes lower f_Q by 0.1 to 0.2, with the shift in mass scale dominating the effect. The upper error bars show the original SDSS values before shifting and adding scatter. Because both alterations to the SDSS data

FIG. 6.-A breakdown of the lensing fits for two stellar mass bins in the z = 0.36 redshift bin. The top row (panels [a] and [c])
FIG. 9.-The fraction of central galaxies that are red as a function of halo mass, f_q(M_h), for all three redshift slices. The shaded regions show the 68% range of values within the MCMC chains. The filled circles show the same quantity for the SDSS groups catalog of Tinker et al. (2011). The filled squares show the quenched fraction of central galaxies in COSMOS groups
TABLE 1
BINNING SCHEME FOR GALAXIES IN log10(M_*)

∆Σ:
           bin1    bin2    bin3    bin4    bin5    bin6    bin7
z_1, min   11.12   10.89   10.64   10.3    9.82    9.2     8.7
z_1, max   12.0    11.12   10.89   10.64   10.3    9.8     9.2
z_2, min   11.29   11.05   10.88   10.65   10.3    9.8     9.3
z_2, max   12.0    11.29   11.05   10.88   10.65   10.3    9.8
z_3, min   11.35   11.16   10.97   10.74   10.39   9.8     -
z_3, max   12.0    11.35   11.16   10.97   10.74   10.39   -

w_θ:
           bin1    bin2    bin3    bin4    bin5    bin6
z_1, min   8.8     9.3     9.8     10.3    10.8    -
z_1, max   9.3     9.8     10.3    10.8    11.3    -
z_2, min   -       9.3     9.8     10.3    10.8    11.1
z_2, max   -       9.8     10.3    10.8    11.3    11.6
z_3, min   -       -       9.8     10.3    10.8    11.1
z_3, max   -       -       10.3    10.8    11.3    11.6
TABLE 2
HOD VALUES FROM MCMC

Parameter       z_1              z_2              z_3
active galaxies
log M_1         12.56 ± 0.05     12.77 ± 0.05     12.69 ± 0.04
log M_*,0       10.96 ± 0.06     10.98 ± 0.03     10.97 ± 0.02
β               0.44 ± 0.02      0.46 ± 0.03      0.43 ± 0.02
δ               0.52 ± 0.29      1.15 ± 0.31      0.73 ± 0.25
γ               1.48 ± 0.43      2.15 ± 0.51      4.71 ± 0.56
σ_logM*         0.21 ± 0.06      0.24 ± 0.02      0.25 ± 0.01
B_cut           0.28 ± 1.91      0.22 ± 1.09      0.18 ± 1.10
B_sat           33.96 ± 19.61    24.55 ± 21.29    112.70 ± 26.81
β_cut           0.77 ± 1.80      0.62 ± 0.96      1.00 ± 1.42
β_sat           1.05 ± 0.44      1.16 ± 0.52      2.65 ± 0.39
α_sat           0.99 ± 0.20      0.96 ± 0.18      0.84 ± 0.14
passive galaxies
log M_1         12.08 ± 0.20     12.18 ± 0.23     12.21 ± 0.17
log M_*,0       10.70 ± 0.10     10.78 ± 0.13     10.83 ± 0.10
β               0.32 ± 0.09      0.13 ± 0.07      0.02 ± 0.04
δ               0.93 ± 0.25      0.81 ± 0.18      0.44 ± 0.11
γ               0.81 ± 0.58      0.09 ± 0.41      0.81 ± 0.23
σ_logM*         0.28 ± 0.03      0.21 ± 0.03      0.18 ± 0.05
B_cut           21.42 ± 10.34    0.01 ± 0.01      0.21 ± 1.42
B_sat           17.90 ± 22.99    21.35 ± 9.50     13.16 ± 3.83
β_cut           −0.12 ± 0.46     −1.55 ± 1.53     0.46 ± 0.80
β_sat           0.62 ± 0.52      0.58 ± 0.14      0.77 ± 0.22
α_sat           1.08 ± 0.26      1.15 ± 0.10      0.98 ± 0.12
passive central fraction
log f_q(M_1)    −1.28 ± 0.20     −7.32 ± 2.32     −6.89 ± 2.18
log f_q(M_2)    −0.85 ± 0.10     −1.17 ± 1.04     −1.23 ± 1.47
f_q(M_3)        0.54 ± 0.07      0.47 ± 0.10      0.43 ± 0.09
f_q(M_4)        0.63 ± 0.05      0.68 ± 0.07      0.59 ± 0.08
f_q(M_5)        0.77 ± 1.36      0.81 ± 0.20      0.76 ± 0.17
TABLE 3
χ² VALUES FOR BEST-FIT MODELS

z = [0.22, 0.48]    z = [0.48, 0.74]    z = [0.74, 1.00]
218.5/(247 − 27)    273.0/(241 − 27)    220.5/(207 − 27)
1 Center for Cosmology and Particle Physics, Department of Physics, New York University
2 Kavli Institute for the Physics and Mathematics of the Universe, Todai Institutes for Advanced Study, the University of Tokyo, Kashiwa, Japan 277-8583 (Kavli IPMU, WPI)
3 Department of Astronomy, University of California, and Lawrence
Some studies have found an increased fraction of quenched galaxies extending several virial radii outside of clusters (e.g., Balogh et al. 2000; Hansen et al. 2009; von der Linden et al. 2010), but these results are easily
The two-halo term is included in all modeling, but it has minimal impact on our results because we do not measure ∆Σ out past 1 Mpc.
We note that the mass of these 'destroyed' satellites is not lost, but it is likely that much of it goes into ICL and is not accounted for by a simple mass-weighted integral over the red-galaxy stellar mass function.
Abbas, U., et al. 2010, MNRAS, 406, 1306
Baldry, I. K., Glazebrook, K., & Driver, S. P. 2008, MNRAS, 388, 945
Balogh, M. L., Navarro, J. F., & Morris, S. L. 2000, ApJ, 540, 113
Behroozi, P. S., Conroy, C., & Wechsler, R. H. 2010, ApJ, 717, 379
Behroozi, P. S., Wechsler, R. H., & Conroy, C. 2013, ApJ, 762, L31
Bell, E. F., et al. 2004, ApJ, 608, 752
Berlind, A. A., & Weinberg, D. H. 2002, ApJ, 575, 587
Birnboim, Y., & Dekel, A. 2003, MNRAS, 345, 349
Blanton, M. R., et al. 2003, ApJ, 594, 186
Bower, R. G., et al. 2006, MNRAS, 370, 645
Brinchmann, J., & Ellis, R. S. 2000, ApJ, 536, L77
Bundy, K., et al. 2006, ApJ, 651, 120
Bundy, K., et al. 2008, ApJ, 681, 931
Bundy, K., et al. 2010, ApJ, 719, 1969
Cattaneo, A., Dekel, A., Devriendt, J., Guiderdoni, B., & Blaizot, J. 2006, MNRAS, 370, 1651
Chabrier, G. 2003, PASP, 115, 763
Coil, A. L., et al. 2008, ApJ, 672, 153
Conroy, C., & Gunn, J. E. 2010, ApJ, 712, 833
Conroy, C., Gunn, J. E., & White, M. 2009, ApJ, 699, 486
Conroy, C., & Wechsler, R. H. 2009, ApJ, 696, 620
Conroy, C., Wechsler, R. H., & Kravtsov, A. V. 2006, ApJ, 647, 201
Conroy, C., Wechsler, R. H., & Kravtsov, A. V. 2007, ApJ, 668, 826
Cool, R. J., et al. 2008, ApJ, 682, 919
Cooper, M. C., et al. 2006, MNRAS, 370, 198
Cooray, A., & Sheth, R. 2002, Phys. Rep., 372, 1
Croton, D. J., et al. 2006, MNRAS, 365, 11
Dekel, A., & Birnboim, Y. 2006, MNRAS, 368, 2
Drory, N., et al. 2009, ApJ, 707, 1595
Geha, M., Blanton, M. R., Yan, R., & Tinker, J. L. 2012, ApJ, 757, 85
Georgakakis, A., et al. 2011, MNRAS, 418, 2590
George, M. R., et al. 2011, ApJ, 742, 125
George, M. R., et al. 2013, ApJ, 770, 113
Gunn, J. E., & Gott, J. R. I. 1972, ApJ, 176, 1
Hansen, S. M., Sheldon, E. S., Wechsler, R. H., & Koester, B. P. 2009, ApJ, 699, 1333
Hopkins, P. F., Cox, T. J., Kereš, D., & Hernquist, L. 2008, ApJS, 175, 390
Hopkins, P. F., Cox, T. J., Younger, J. D., & Hernquist, L. 2009, ApJ, 691, 1168
Hopkins, P. F., et al. 2010, ApJ, 724, 915
Ilbert, O., et al. 2009, ApJ, 690, 1236
Kauffmann, G., et al. 2003, MNRAS, 341, 54
Kereš, D., Katz, N., Fardal, M., Davé, R., & Weinberg, D. H. 2009, MNRAS, 395, 160
Kereš, D., Katz, N., Weinberg, D. H., & Davé, R. 2005, MNRAS, 363, 2
Knobel, C., et al. 2013, ApJ, 769, 24
Kriek, M., van der Wel, A., van Dokkum, P. G., Franx, M., & Illingworth, G. D. 2008, ApJ, 682, 896
Labbé, I., et al. 2005, ApJ, 624, L81
Leauthaud, A., Tinker, J., Behroozi, P. S., Busha, M. T., & Wechsler, R. H. 2011, ApJ, 738, 45
Leauthaud, A., et al. 2012, ApJ, 744, 159
Li, C., & White, S. D. M. 2009, MNRAS, 398, 2177
Madgwick, D. S., Somerville, R., Lahav, O., & Ellis, R. 2003, MNRAS, 343, 871
Maller, A. H. 2008, in Astronomical Society of the Pacific Conference Series, Vol. 396, ed. J. G. Funes & E. M. Corsini, 251
Mandelbaum, R., Seljak, U., Kauffmann, G., Hirata, C. M., & Brinkmann, J. 2006, MNRAS, 368, 715
Marchesini, D., van Dokkum, P. G., Förster Schreiber, N. M., Franx, M., Labbé, I., & Wuyts, S. 2009, ApJ, 701, 1765
Moore, B., Lake, G., & Katz, N. 1998, ApJ, 495, 139
More, S., van den Bosch, F. C., Cacciato, M., Skibba, R., Mo, H. J., & Yang, X. 2011, MNRAS, 410, 210
Moster, B. P., et al. 2010, ApJ, 710, 903
Moustakas, J., et al. 2013, ApJ, 767, 50
Muñoz-Cuartas, J. C., Macciò, A. V., Gottlöber, S., & Dutton, A. A. 2011, MNRAS, 411, 584
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
Noeske, K. G., et al. 2007, ApJ, 660, L47
Peacock, J. A., & Smith, R. E. 2000, MNRAS, 318, 1144
Peng, Y.-j., et al. 2010, ApJ, 721, 193
. Y Peng, S J Lilly, A Renzini, M Carollo, ApJ. 7574Peng, Y.-j., Lilly, S. J., Renzini, A., & Carollo, M. 2012, ApJ, 757, 4
. S Phleps, J A Peacock, K Meisenheimer, C Wolf, A&A. 457145Phleps, S., Peacock, J. A., Meisenheimer, K., & Wolf, C. 2006, A&A, 457, 145
. L Pozzetti, M Bolzonella, E Zucca, G Zamorani, S Lilly, A Renzini, M Moresco, M Mignoli, P Cassata, L Tasca, F Lamareille, C Maier, B Meneux, C Halliday, P Oesch, D Vergani, K Caputi, K Kovač, A Cimatti, O Cucciati, A Iovino, Y Peng, M Carollo, T Contini, J.-P Kneib, O Le Févre, V Mainieri, M Scodeggio, S Bardelli, A Bongiorno, G Coppa, S De La Torre, L De Ravel, P Franzetti, B Garilli, P Kampczyk, C Knobel, J.-F Le Borgne, V Le Brun, R Pellò, E Perez Montero, E Ricciardelli, J D Silverman, M Tanaka, L Tresse, U Abbas, D Bottini, A Cappi, L Guzzo, A M Koekemoer, A Leauthaud, D Maccagni, C Marinoni, H J Mccracken, P Memeo, C Porciani, R Scaramella, C Scarlata, N Scoville, A&A. 52313Pozzetti, L., Bolzonella, M., Zucca, E., Zamorani, G., Lilly, S., Renzini, A., Moresco, M., Mignoli, M., Cassata, P., Tasca, L., Lamareille, F., Maier, C., Meneux, B., Halliday, C., Oesch, P., Vergani, D., Caputi, K., Kovač, K., Cimatti, A., Cucciati, O., Iovino, A., Peng, Y., Carollo, M., Contini, T., Kneib, J.-P., Le Févre, O., Mainieri, V., Scodeggio, M., Bardelli, S., Bongiorno, A., Coppa, G., de la Torre, S., de Ravel, L., Franzetti, P., Garilli, B., Kampczyk, P., Knobel, C., Le Borgne, J.-F., Le Brun, V., Pellò, R., Perez Montero, E., Ricciardelli, E., Silverman, J. D., Tanaka, M., Tresse, L., Abbas, U., Bottini, D., Cappi, A., Guzzo, L., Koekemoer, A. M., Leauthaud, A., Maccagni, D., Marinoni, C., McCracken, H. J., Memeo, P., Porciani, C., Scaramella, R., Scarlata, C., & Scoville, N. 2010, A&A, 523, A13
. L Pozzetti, F Mannucci, MNRAS. 31717Pozzetti, L. & Mannucci, F. 2000, MNRAS, 317, L17
. C W Purcell, J S Bullock, A R Zentner, ApJ. 66620Purcell, C. W., Bullock, J. S., & Zentner, A. R. 2007, ApJ, 666, 20
. R F Quadri, R J Williams, K.-S Lee, M Franx, P Van Dokkum, G B Brammer, ApJ. 6851Quadri, R. F., Williams, R. J., Lee, K.-S., Franx, M., van Dokkum, P., & Brammer, G. B. 2008, ApJ, 685, L1
. B Robertson, J S Bullock, T J Cox, T Di Matteo, L Hernquist, V Springel, N Yoshida, ApJ. 645986Robertson, B., Bullock, J. S., Cox, T. J., Di Matteo, T., Hernquist, L., Springel, V., & Yoshida, N. 2006, ApJ, 645, 986
. A G Sánchez, S Cole, MNRAS. 385830Sánchez, A. G. & Cole, S. 2008, MNRAS, 385, 830
. R Scoccimarro, R K Sheth, L Hui, B Jain, ApJ. 54620Scoccimarro, R., Sheth, R. K., Hui, L., & Jain, B. 2001, ApJ, 546, 20
. N Scoville, H Aussel, M Brusa, P Capak, C M Carollo, M Elvis, M Giavalisco, L Guzzo, G Hasinger, C Impey, J.-P Kneib, O Lefevre, S J Lilly, B Mobasher, A Renzini, R M Rich, D B Sanders, E Schinnerer, D Schminovich, P Shopbell, Y Taniguchi, N D Tyson, ApJS. 1721Scoville, N., Aussel, H., Brusa, M., Capak, P., Carollo, C. M., Elvis, M., Giavalisco, M., Guzzo, L., Hasinger, G., Impey, C., Kneib, J.-P., LeFevre, O., Lilly, S. J., Mobasher, B., Renzini, A., Rich, R. M., Sanders, D. B., Schinnerer, E., Schminovich, D., Shopbell, P., Taniguchi, Y., & Tyson, N. D. 2007, ApJS, 172, 1
. U Seljak, MNRAS. 318203Seljak, U. 2000, MNRAS, 318, 203
. R A Skibba, R K Sheth, MNRAS. 3921080Skibba, R. A. & Sheth, R. K. 2009, MNRAS, 392, 1080
. R A Skibba, Van Den, F C Bosch, X Yang, S More, H Mo, F Fontanot, MNRAS. 410417Skibba, R. A., van den Bosch, F. C., Yang, X., More, S., Mo, H., & Fontanot, F. 2011, MNRAS, 410, 417
. K R Stewart, J S Bullock, E J Barton, R H Wechsler, ApJ. 7021005Stewart, K. R., Bullock, J. S., Barton, E. J., & Wechsler, R. H. 2009, ApJ, 702, 1005
. I Strateva, Ž Ivezić, G R Knapp, V K Narayanan, M A Strauss, J E Gunn, R H Lupton, D Schlegel, N A Bahcall, J Brinkmann, R J Brunner, T Budavári, I Csabai, F J Castander, M Doi, M Fukugita, Z Győry, M Hamabe, G Hennessy, T Ichikawa, P Z Kunszt, D Q Lamb, T A Mckay, S Okamura, J Racusin, M Sekiguchi, D P Schneider, K Shimasaku, D York, AJ. 1221861Strateva, I., Ivezić, Ž., Knapp, G. R., Narayanan, V. K., Strauss, M. A., Gunn, J. E., Lupton, R. H., Schlegel, D., Bahcall, N. A., Brinkmann, J., Brunner, R. J., Budavári, T., Csabai, I., Castander, F. J., Doi, M., Fukugita, M., Győry, Z., Hamabe, M., Hennessy, G., Ichikawa, T., Kunszt, P. Z., Lamb, D. Q., McKay, T. A., Okamura, S., Racusin, J., Sekiguchi, M., Schneider, D. P., Shimasaku, K., & York, D. 2001, AJ, 122, 1861
. J Tinker, A V Kravtsov, A Klypin, K Abazajian, M Warren, G Yepes, S Gottlöber, D E Holz, ApJ. 688709Tinker, J., Kravtsov, A. V., Klypin, A., Abazajian, K., Warren, M., Yepes, G., Gottlöber, S., & Holz, D. E. 2008a, ApJ, 688, 709
. J Tinker, A Wetzel, C Conroy, ArXiv:1107.5046MNRAS, submitted. Tinker, J., Wetzel, A., & Conroy, C. 2011, MNRAS, submitted, ArXiv:1107.5046
. J L Tinker, C Conroy, P Norberg, S G Patiri, D H Weinberg, M S Warren, ApJ. 68653Tinker, J. L., Conroy, C., Norberg, P., Patiri, S. G., Weinberg, D. H., & Warren, M. S. 2008b, ApJ, 686, 53
. J L Tinker, M R George, A Leauthaud, K Bundy, A Finoguenov, R Massey, J Rhodes, R H Wechsler, ApJ. 7555Tinker, J. L., George, M. R., Leauthaud, A., Bundy, K., Finoguenov, A., Massey, R., Rhodes, J., & Wechsler, R. H. 2012, ApJ, 755, L5
. J L Tinker, P Norberg, D H Weinberg, M S Warren, ApJ. 659877Tinker, J. L., Norberg, P., Weinberg, D. H., & Warren, M. S. 2007, ApJ, 659, 877
. J L Tinker, B E Robertson, A V Kravtsov, A Klypin, M S Warren, G Yepes, S Gottlöber, ApJ. 724878Tinker, J. L., Robertson, B. E., Kravtsov, A. V., Klypin, A., Warren, M. S., Yepes, G., & Gottlöber, S. 2010a, ApJ, 724, 878
. J L Tinker, R H Wechsler, Z Zheng, ApJ. 70967Tinker, J. L., Wechsler, R. H., & Zheng, Z. 2010b, ApJ, 709, 67
. J L Tinker, A R Wetzel, Van Den, F C Bosch, X Yang, H J Mo, Van Den, F C Bosch, X Yang, H J Mo, S M Weinmann, A V Macciò, S More, M Cacciato, R Skibba, X Kang, MNRAS. 719841MNRASTinker, J. L. & Wetzel, A. R. 2010, ApJ, 719, 88 van den Bosch, F. C., Yang, X., & Mo, H. J. 2003, MNRAS, 340, 771 van den Bosch, F. C., Yang, X., Mo, H. J., Weinmann, S. M., Macciò, A. V., More, S., Cacciato, M., Skibba, R., & Kang, X. 2007, MNRAS, 376, 841
. A Von Der Linden, V Wild, G Kauffmann, S D M White, S Weinmann, MNRAS. 4041231von der Linden, A., Wild, V., Kauffmann, G., White, S. D. M., & Weinmann, S. 2010, MNRAS, 404, 1231
. D A Wake, R C Nichol, D J Eisenstein, J Loveday, A C Edge, R Cannon, I Smail, D P Schneider, R Scranton, D Carson, N P Ross, R J Brunner, M Colless, W J Couch, S M Croom, S P Driver, J Da Ângela, S Jester, R De Propris, M J Drinkwater, J Bland-Hawthorn, K A Pimbblet, I G Roseboom, T Shanks, R G Sharp, J Brinkmann, MNRAS. 372537Wake, D. A., Nichol, R. C., Eisenstein, D. J., Loveday, J., Edge, A. C., Cannon, R., Smail, I., Schneider, D. P., Scranton, R., Carson, D., Ross, N. P., Brunner, R. J., Colless, M., Couch, W. J., Croom, S. M., Driver, S. P., da Ângela, J., Jester, S., de Propris, R., Drinkwater, M. J., Bland- Hawthorn, J., Pimbblet, K. A., Roseboom, I. G., Shanks, T., Sharp, R. G., & Brinkmann, J. 2006, MNRAS, 372, 537
. D A Wake, K E Whitaker, I Labbé, P G Van Dokkum, M Franx, R Quadri, G Brammer, M Kriek, B F Lundgren, D Marchesini, A Muzzin, ApJ. 72846Wake, D. A., Whitaker, K. E., Labbé, I., van Dokkum, P. G., Franx, M., Quadri, R., Brammer, G., Kriek, M., Lundgren, B. F., Marchesini, D., & Muzzin, A. 2011, ApJ, 728, 46
. L Wang, C Li, G Kauffmann, G De Lucia, MNRAS. 3771419Wang, L., Li, C., Kauffmann, G., & De Lucia, G. 2007, MNRAS, 377, 1419
. R H Wechsler, J S Bullock, J R Primack, A V Kravtsov, A Dekel, ApJ. 56852Wechsler, R. H., Bullock, J. S., Primack, J. R., Kravtsov, A. V., & Dekel, A. 2002, ApJ, 568, 52
. S M Weinmann, Van Den, F C Bosch, X Yang, H J Mo, MNRAS. 3662Weinmann, S. M., van den Bosch, F. C., Yang, X., & Mo, H. J. 2006, MNRAS, 366, 2
. A R Wetzel, J L Tinker, C Conroy, MNRAS. 424232Wetzel, A. R., Tinker, J. L., & Conroy, C. 2012, MNRAS, 424, 232
A R Wetzel, J L Tinker, C Conroy, Van Den, F C Bosch, MNRAS -. 2013b. Wetzel, A. R., Tinker, J. L., Conroy, C., & van den Bosch, F. C. 2013a, MNRAS -. 2013b, ArXiv e-prints
. R J Williams, R F Quadri, M Franx, P Van Dokkum, I Labbé, ApJ. 6911879Williams, R. J., Quadri, R. F., Franx, M., van Dokkum, P., & Labbé, I. 2009, ApJ, 691, 1879
. C N A Willmer, S M Faber, D C Koo, B J Weiner, J A Newman, A L Coil, A J Connolly, C Conroy, M C Cooper, M Davis, D P Finkbeiner, B F Gerke, P Guhathakurta, J Harker, N Kaiser, S Kassin, N P Konidaris, L Lin, G Luppino, D S Madgwick, K G Noeske, A C Phillips, R Yan, ApJ. 647853Willmer, C. N. A., Faber, S. M., Koo, D. C., Weiner, B. J., Newman, J. A., Coil, A. L., Connolly, A. J., Conroy, C., Cooper, M. C., Davis, M., Finkbeiner, D. P., Gerke, B. F., Guhathakurta, P., Harker, J., Kaiser, N., Kassin, S., Konidaris, N. P., Lin, L., Luppino, G., Madgwick, D. S., Noeske, K. G., Phillips, A. C., & Yan, R. 2006, ApJ, 647, 853
. X Yang, H J Mo, Van Den, F C Bosch, Y Zhang, J Han, ArXiv:(1110.1420MNRAS, submitted. Yang, X., Mo, H. J., van den Bosch, F. C., Zhang, Y., & Han, J. 2011, MNRAS, submitted, ArXiv:(1110.1420)
. I Zehavi, Z Zheng, D H Weinberg, M R Blanton, N A Bahcall, A A Berlind, J Brinkmann, J A Frieman, J E Gunn, R H Lupton, R C Nichol, W J Percival, D P Schneider, R A Skibba, M A Strauss, M Tegmark, D G York, ApJ. 73659Zehavi, I., Zheng, Z., Weinberg, D. H., Blanton, M. R., Bahcall, N. A., Berlind, A. A., Brinkmann, J., Frieman, J. A., Gunn, J. E., Lupton, R. H., Nichol, R. C., Percival, W. J., Schneider, D. P., Skibba, R. A., Strauss, M. A., Tegmark, M., & York, D. G. 2011, ApJ, 736, 59
. I Zehavi, Z Zheng, D H Weinberg, J A Frieman, A A Berlind, M R Blanton, R Scoccimarro, R K Sheth, M A Strauss, I Kayo, Y Suto, M Fukugita, O Nakamura, N A Bahcall, J Brinkmann, J E Gunn, G S Hennessy, Ž Ivezić, G R Knapp, J Loveday, A Meiksin, D J Schlegel, D P Schneider, I Szapudi, M Tegmark, M S Vogeley, D G York, ApJ. 6301Zehavi, I., Zheng, Z., Weinberg, D. H., Frieman, J. A., Berlind, A. A., Blanton, M. R., Scoccimarro, R., Sheth, R. K., Strauss, M. A., Kayo, I., Suto, Y., Fukugita, M., Nakamura, O., Bahcall, N. A., Brinkmann, J., Gunn, J. E., Hennessy, G. S., Ivezić, Ž., Knapp, G. R., Loveday, J., Meiksin, A., Schlegel, D. J., Schneider, D. P., Szapudi, I., Tegmark, M., Vogeley, M. S., & York, D. G. 2005, ApJ, 630, 1
. Z Zheng, A L Coil, I Zehavi, ApJ. 667760Zheng, Z., Coil, A. L., & Zehavi, I. 2007, ApJ, 667, 760
. Z Zheng, I Zehavi, D J Eisenstein, D H Weinberg, Y P Jing, ApJ. 707554Zheng, Z., Zehavi, I., Eisenstein, D. J., Weinberg, D. H., & Jing, Y. P. 2009, ApJ, 707, 554
. G Zhu, M R Blanton, S M Burles, A L Coil, R J Cool, D J Eisenstein, J Moustakas, K C Wong, J Aird, ApJ. 726110Zhu, G., Blanton, M. R., Burles, S. M., Coil, A. L., Cool, R. J., Eisenstein, D. J., Moustakas, J., Wong, K. C., & Aird, J. 2011, ApJ, 726, 110
| [] |
Escaping orbits are also rare in the almost periodic Fermi-Ulam ping-pong

Henrik Schließauf ([email protected])
Mathematisches Institut, Universität zu Köln, Weyertal 86-90, 50931 Köln, Germany

August 6, 2019 (arXiv:1908.02529)

Abstract. We study the one-dimensional Fermi-Ulam ping-pong problem with a Bohr almost periodic forcing function and show that the set of initial conditions leading to escaping orbits typically has Lebesgue measure zero.
1 Introduction
The Fermi-Ulam ping-pong is a model describing how charged particles bounce off magnetic mirrors and thus gain energy. They undergo the so-called Fermi acceleration, and one central question is whether the particles' velocities can get close to the speed of light that way. The model was introduced by Fermi [Fer49] in order to explain the origin of high energy cosmic radiation. A common one-dimensional mathematical formulation of this problem is as follows: The point particle bounces completely elastically between two vertical plates of infinite mass, one fixed at x = 0 and one moving in time as x = p(t) for some forcing function p = p(t) > 0. The particle alternately hits the walls and experiences no external force in between the collisions. The motion can be described by the successor map f : (t 0 , v 0 ) → (t 1 , v 1 ), mapping the time t 0 ∈ R of an impact at the left plate x = 0 and the corresponding velocity v 0 > 0 right after the collision to (t 1 , v 1 ), representing the subsequent impact at x = 0. Since one is interested in the long-term behavior, we study the forward iterates (t n , v n ) = f n (t 0 , v 0 ) for n ∈ N and in particular the 'escaping set'
E = {(t 0 , v 0 ) : lim n→∞ v n = ∞},
consisting of initial data which lead to infinitely fast particles. The most studied case is that of a periodic forcing p(t). Ulam [Ula61] conjectured an increase in energy with time on average. Based on numerical simulations, he however realized that rather large fluctuations and no clear gain in energy seemed to be the typical behavior.
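No closed formula for the successor map f is available in general, since the impact times are defined only implicitly. Still, f is easy to evaluate numerically. The following sketch (an illustration, not from the paper) uses the hypothetical forcing p(t) = 2 + 0.1 sin t and locates the impact with the moving wall by bracketing and bisection.

```python
import math

def p(t):      # position of the moving wall (assumed forcing)
    return 2.0 + 0.1 * math.sin(t)

def dp(t):     # wall velocity
    return 0.1 * math.cos(t)

def successor(t0, v0, step=1e-3):
    """One iteration of the ping-pong map: impact at x = 0 at time t0
    with outgoing speed v0 > 0; returns the next such impact (t1, v1)."""
    # the particle leaves x = 0 and meets the wall when v0*(t - t0) = p(t)
    g = lambda t: v0 * (t - t0) - p(t)
    b = t0 + step
    while g(b) < 0.0:          # bracket the impact time
        b += step
    a = b - step
    for _ in range(60):        # bisection
        m = 0.5 * (a + b)
        if g(m) < 0.0:
            a = m
        else:
            b = m
    ts = 0.5 * (a + b)
    w = 2.0 * dp(ts) - v0      # elastic reflection off the moving wall;
    v1 = -w                    # w < 0 as long as v0 exceeds twice the
    t1 = ts + p(ts) / v1       # wall speed, so the particle travels back
    return t1, v1              # and hits x = 0 with speed v1 = -w

t, v = 0.0, 5.0
for _ in range(10):
    t, v = successor(t, v)
```

Each bounce changes the speed by at most 2 max|ṗ| = 0.2 here, so starting from v0 = 5 the velocity only fluctuates mildly over a few iterations; for smooth periodic forcings the KAM results discussed below guarantee that it in fact stays bounded forever.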
Two decades later, the development of KAM theory made it possible to prove that the conjecture is indeed false. If the forcing p is sufficiently smooth, all orbits stay bounded in the phase space, since the existence of invariant curves prevents the orbits from escaping [LL91, Pus83]. The proofs are based on Moser's twist theorem [Mos62], which relies on the higher regularity. And indeed, Zharnitsky [Zha98] showed the existence of escaping orbits if only continuity is imposed on p. In the non-periodic case, one can even find C ∞ -forcings with this behavior [KO10]. More recently, Dolgopyat and De Simoi developed a new approach. They consider the periodic case and study some maps which are basically approximations of the successor map f . This way they could prove several results regarding the Lebesgue measure of the escaping set E [Dol08b, Dol08a, dSD12, Sim13]. Finally, Zharnitsky [Zha00] investigated the case of a quasi-periodic forcing function whose frequencies satisfy a Diophantine inequality. Again, using an invariant curve theorem, he was able to show that the velocity of every particle is uniformly bounded in time. Since no such theorem is available if the Diophantine condition is dropped, a different approach is necessary in this case. This was done by Kunze and Ortega in [KO18]. They apply a refined version of the Poincaré recurrence theorem due to Dolgopyat [Dol] to the set of initial conditions leading to unbounded orbits, and thereby show that most orbits are recurrent. Thus, typically the escaping set E will have Lebesgue measure zero. Now, in this work we will give an affirmative answer to the question raised in [KO18] whether this result can be generalized to the almost periodic case. Indeed, most of their arguments translate naturally into the language of Bohr almost periodic functions. Our main theorem (Theorem 5.1) states that the escaping set E is most likely to have measure zero, provided the almost periodic forcing p is sufficiently smooth.
In order to explain more precisely what we mean by 'most likely', we first need to introduce some properties and notation regarding almost periodic functions. This is done in section 2. Subsequently we will study measure-preserving successor maps of a certain type and their iterations. We end this part by stating Theorem 3.1, a slightly generalized version of a theorem by Kunze and Ortega [KO18], which describes conditions under which the escaping set typically will have measure zero. This will be the most important tool and its proof will be given in the following section. Then, in the last section we discuss the ping-pong model in more detail and finally state and prove the main theorem.
2 Almost periodic functions and their representation
2.1 Compact topological groups and minimal flows
Let Ω be a commutative topological group, which is metrizable and compact. We will consider the group operation to be additive. Moreover, suppose there is a continuous homomorphism ψ : R → Ω, such that the image ψ(R) is dense in Ω. This function ψ induces a canonical flow on Ω, namely
Ω × R → Ω, ω · t = ω + ψ(t).
This flow is minimal, since the orbit closure satisfies cl(ω · R) = cl(ω + ψ(R)) = ω + cl(ψ(R)) = Ω for every ω ∈ Ω. Let us also note that in general ψ can be nontrivial and periodic, but this happens if and only if Ω ≅ S 1 [OT06]. Now consider the unit circle S 1 = {z ∈ C : |z| = 1} and a continuous homomorphism ϕ : Ω → S 1 . Such functions ϕ are called characters, and together with the pointwise product they form a group, the so-called dual group Ω * . Its trivial element is the constant map with value 1. It is a well-known fact that nontrivial characters exist whenever Ω is nontrivial [Pon66]. Also non-compact groups admit a dual group. Crucial to us will be the fact that
R * = {t → e iαt : α ∈ R}.
Now, for a nontrivial character ϕ ∈ Ω * we define
Σ = ker ϕ = {ω ∈ Ω : ϕ(ω) = 1}.
Then Σ is a compact subgroup of Ω. If in addition Ω ≇ S 1 , it can be shown that Σ is perfect [OT06]. This subgroup will act as a global cross section to the flow on Ω. Concerning this, note that since ϕ ◦ ψ describes a nontrivial character of R, there is a unique α ≠ 0 such that ϕ(ψ(t)) = e iαt for all t ∈ R. Therefore, the minimal period of this function,
S = 2π/|α|,
can be seen as a returning time on Σ in the following sense. If we denote by τ(ω) the unique number in [0, S ) such that ϕ(ω) = e iατ(ω) , then one has ϕ(ω · t) = ϕ(ω + ψ(t)) = ϕ(ω)ϕ(ψ(t)) = e iατ(ω) e iαt and thus ω · t ∈ Σ ⇔ t ∈ −τ(ω) + S Z.
Also τ as defined above is a function τ : Ω → [0, S ) that is continuous where τ(ω) ≠ 0, i.e. on Ω \ Σ. From this we can derive that the restricted flow
Φ : Σ × [0, S ) → Ω, Φ(σ, t) = σ · t,
is a continuous bijection. Like τ(ω), its inverse
Φ −1 (ω) = (ω · (−τ(ω)), τ(ω))
is continuous only on Ω \ Σ. Therefore, Φ describes a homeomorphism from Σ × (0, S ) to Ω \ Σ.
Example 2.1. One important example for such a group Ω is the N-Torus T N , where T = R/Z. We will denote classes in T by θ̄ = θ + Z. Then, the image of the homomorphism
ψ(t) = (ν 1 t, . . . , ν N t)
winds densely around the torus T N , whenever the frequency vector ν = (ν 1 , . . . , ν N ) ∈ R N is nonresonant, i.e. rationally independent. It is easy to verify that the dual group of T N is given by (T N ) * = {(θ 1 , . . . ,θ N ) → e 2πi(k 1 θ 1 +...+k N θ N ) : k ∈ Z N }.
Therefore, one possible choice for the cross section would be
Σ = {(θ 1 , . . . ,θ N ) ∈ T N : e 2πiθ 1 = 1} = {0} × T N−1 ,
so ϕ(θ̄ 1 , . . . , θ̄ N ) = e 2πiθ 1 . In this case, consecutive intersections of the flow with Σ are separated by time intervals of length 1/ν 1 .
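In this toy setting all of the abstract objects above are explicitly computable. The following sketch (assuming N = 2 and ν = (1, √2), so that ϕ(θ̄ 1 , θ̄ 2 ) = e 2πiθ 1 , α = 2π, S = 1 and τ(ω) = θ 1 ) checks that flowing backwards by τ(ω) really lands on the cross section Σ = {θ̄ 1 = 0}.

```python
import math

NU = (1.0, math.sqrt(2.0))

def psi(t):                      # the dense winding ψ(t) = (ν₁t, ν₂t) mod 1
    return (NU[0] * t % 1.0, NU[1] * t % 1.0)

def flow(omega, t):              # ω · t = ω + ψ(t)
    pt = psi(t)
    return ((omega[0] + pt[0]) % 1.0, (omega[1] + pt[1]) % 1.0)

def tau(omega):                  # unique τ(ω) ∈ [0, S) with ϕ(ω) = e^{iατ(ω)};
    return omega[0]              # here ϕ = e^{2πiθ₁}, α = 2π, S = 1/ν₁ = 1

omega = flow((0.0, 0.0), 7.3)    # some point on the orbit of 0
sigma = flow(omega, -tau(omega)) # ω · (−τ(ω)) should lie in Σ = {θ̄₁ = 0}
```

Flowing forward again by τ(ω) recovers ω, which is exactly the bijectivity of Φ on Σ × [0, S).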
2.2 Almost periodic functions
The notion of almost periodic functions was introduced by H. Bohr as a generalization of strictly periodic functions [Boh25]. A function u ∈ C(R) is called (Bohr) almost periodic, if for any ǫ > 0 there is a relatively dense set of ǫ-almost-periods of this function. By this we mean that for any ǫ > 0 there exists L = L(ǫ) such that any interval of length L contains at least one number T such that
|u(t + T ) − u(t)| < ǫ ∀t ∈ R.
Later, Bochner [Boc27] gave an alternative but equivalent definition of this property: For a continuous function u, denote by u τ (t) the translated function u(t + τ). Then u is (Bohr) almost periodic if and only if every sequence (u τ n ) n∈N of translations of u has a subsequence that converges uniformly. There are several other characterizations of almost periodicity, as well as generalizations due to Stepanov [Ste26], Weyl [Wey27] and Besicovitch [Bes26]. In this work we will only consider the notion depicted above and therefore call the corresponding functions just almost periodic (a.p.). We will however introduce one more way to describe a.p. functions using the framework of the previous section: Consider (Ω, ψ) as above and a function U ∈ C(Ω). Then, the function defined by
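As a concrete illustration (not taken from the paper), consider u(t) = sin t + sin(√2 t). Since 140√2 = 197.9899… is close to the even integer 198, the number T = 140π is nearly a common period of both summands, and the sketch below verifies on a grid that T is an ε-almost-period for ε = 0.05.

```python
import math

def u(t):
    return math.sin(t) + math.sin(math.sqrt(2.0) * t)

def sup_shift_error(T, t_lo=-500.0, t_hi=500.0, n=100001):
    """Grid approximation of sup_t |u(t + T) - u(t)|."""
    h = (t_hi - t_lo) / (n - 1)
    return max(abs(u(t_lo + i * h + T) - u(t_lo + i * h)) for i in range(n))

T = 140.0 * math.pi        # sin(t + 140π) = sin t exactly, and
err = sup_shift_error(T)   # sin(√2(t + T)) differs from sin(√2 t) by at most
                           # |140√2 - 198|π ≈ 0.032, since sin is 1-Lipschitz
```

By the same continued-fraction mechanism, such T occur relatively densely, which is exactly Bohr's requirement.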
u(t) = U(ψ(t)) (2.1)
is almost periodic. This can be verified easily with the alternative definition due to Bochner. Since U ∈ C(Ω), any sequence (u τ n ) n∈N will be uniformly bounded and equicontinuous. Hence the Arzelà-Ascoli theorem guarantees the existence of a uniformly convergent subsequence. We will call any function obtainable in this manner representable over (Ω, ψ). Since the image of ψ is assumed to be dense, it is clear that the function U ∈ C(Ω) is uniquely determined by this relation. As an example take Ω ≅ S 1 ; then ψ is periodic. Thus (2.1) gives rise to periodic functions. Conversely, it is true that any almost periodic function can be constructed this way.
H u = {u τ : τ ∈ R},
where the closure is taken with respect to uniform convergence on the whole real line. Therefore if u is a.p., then H u is a compact metric space. If one uses the continuous extension of the rule u τ * u s = u τ+s ∀τ, s ∈ R onto all of H u as the group operation, then the hull becomes a commutative topological group with neutral element u.
(For v, w ∈ H u with v = lim n→∞ u τ v n and w = lim n→∞ u τ w n we have v * w = lim n→∞ u τ v n +τ w n , −v = lim n→∞ u −τ v n .
These limits exist by Lemma 6.1 from the appendix. The continuity of both operations can be shown by a similar argument.) If we further define the flow ψ u (τ) = u τ , then the pair (H u , ψ u ) matches perfectly the setup of the previous section. Now, the representation formula (2.1) holds for U ∈ C(H u ) defined by
U(w) = w(0) ∀w ∈ H u .
This function is sometimes called the 'extension by continuity' of the almost periodic function u(t) to its hull H u . This construction is standard in the theory of a.p. functions and we refer the reader to [NS60] for a more detailed discussion.
For a function U : Ω → R let us introduce the derivative along the flow by
∂ ψ U(ω) = lim t→0 U(ω + ψ(t)) − U(ω) t .
Let C 1 ψ (Ω) be the space of continuous functions U : Ω → R such that ∂ ψ U exists for all ω ∈ Ω and ∂ ψ U ∈ C(Ω). The spaces C k ψ (Ω) for k ≥ 2 are defined accordingly. Let us also introduce the norm
U C k ψ (Ω) = U ∞ + k n=1 ∂ (n) ψ U ∞ . Now consider U ∈ C(Ω) and assume the almost periodic function u(t) = U(ψ(t)) is continuously differentiable. Then ∂ ψ U exists on ψ(R) and we have u ′ (t) = ∂ ψ U (ψ(t)) for all t ∈ R.
Lemma 2.2. Let U ∈ C(Ω) and u ∈ C(R) be such that u(t) = U(ψ(t)). Then we have u ∈ C 1 (R) and u ′ (t) is a.p. if and only if U ∈ C 1 ψ (Ω). One part of the equivalence is trivial. The proof of the other part can be found in [OT06,Lemma 13]. We also note that the derivative u ′ (t) of an almost periodic function is itself a.p. if and only if it is uniformly continuous. This, and many other interesting properties of a.p. functions are demonstrated in [Bes26].
Example 2.3. Let us continue Example 2.1, where Ω = T N . For U ∈ C(T N ) consider the function u(t) = U(ψ(t)) = U(ν 1 t, . . . , ν N t).
Such functions are called quasi-periodic. In this case, ∂ ψ is just the derivative in the direction of ν ∈ R N . So if U is in the space C 1 (T N ) of functions in C 1 (R N ), which are 1-periodic in each argument, then
∂ ψ U = N i=1 ν i ∂ θ i U.
Note however, that in general C 1 ψ (T N ) is a proper subspace of C 1 (T N ).
Haar measure and decomposition along the flow
It is a well known fact, that for every compact commutative topological group Ω there is a unique Borel probability measure µ Ω , which is invariant under the group operation, i.e.
µ Ω (D + ω) = µ Ω (D) holds for every Borel set D ⊂ Ω and every ω ∈ Ω. This measure is called the Haar measure of Ω. (This follows from the existence of the invariant Haar integral of Ω and the Riesz representation theorem. Proofs can be found in [Pon66] and [HR79], respectively.) For Example if Ω = S 1 we have
µ S 1 (B) = 1 2π λ{t ∈ [0, 2π) : e it ∈ B},
where λ is the Lebesgue measure on R. Let ψ, Σ and Φ be as in section 2.1. Then Φ defines a decomposition Ω Σ × [0, S ) along the flow. Since Σ is a subgroup, it has a Haar measure µ Σ itself. Also the interval [0, S ) naturally inherits the probability measure
µ [0,S ) (I) = 1 S λ(I).
As shown in [CT13], the restricted flow Φ : Σ × [0, S ) → Ω, Φ(σ, t) = σ · t also allows for a decomposition of the Haar measure µ Ω along the flow.
Lemma 2.4. The map Φ is an isomorphism of measure spaces, i.e.
µ Ω (B) = (1/S ) (µ Σ ⊗ λ)(Φ −1 (B)) (2.2)
holds for every Borel set B ⊂ Ω.
Before we prove this lemma, let us begin with some preliminaries. Consider the function χ :
Σ × [0, ∞) → Σ × [0, S ) defined by χ(σ, t) = Φ −1 (σ · t) = Φ −1 (σ + ψ(t)). (2.3)
Since Φ is just the restricted flow, we have χ = id on Σ × [0, S ). This yields
χ(σ, t) = Φ −1 (σ + ψ(t)) = Φ −1 ( σ + ψ( ⌊t/S⌋ S ) + ψ( t − ⌊t/S⌋ S ) ) = ( σ + ψ( ⌊t/S⌋ S ), t − ⌊t/S⌋ S )

Figure 2: Let χ(σ, t) = (σ̄, s). The map χ 'divides out' every complete period of ϕ ◦ ψ, i.e. s = t mod S , while preserving the relation σ̄ · s = ω = σ · t.
for every (σ, t) ∈ Σ × R, where ⌊·⌋ indicates the floor function. This representation shows that χ is measure-preserving on every strip Σ × [t, t + S ) of width S , since µ Σ and λ are invariant under translations in Σ and R, respectively. Moreover, the equality
χ(Φ −1 (ω) + Φ −1 (ω̃)) = Φ −1 (ω + ω̃) ∀ω, ω̃ ∈ Ω (2.4)
follows directly from the definition of χ.
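On the torus of Example 2.1 with ν = (1, √2) (so S = 1 and, identifying Σ with its second coordinate, Φ −1 (ω) = ((ω 2 − √2 ω 1 ) mod 1, ω 1 )), identity (2.4) can be verified numerically; a sketch:

```python
import math

SQRT2 = math.sqrt(2.0)

def Phi_inv(w):                         # Φ⁻¹(ω) = (ω·(−τ(ω)), τ(ω)), τ(ω) = ω₁
    return ((w[1] - SQRT2 * w[0]) % 1.0, w[0])

def chi(sigma, t):                      # χ divides out whole periods of length S = 1
    k = math.floor(t)                   # ψ(k) = (0, √2·k mod 1) lies in Σ
    return ((sigma + SQRT2 * k) % 1.0, t - k)

def add_omega(w, v):                    # addition in Ω = T²
    return ((w[0] + v[0]) % 1.0, (w[1] + v[1]) % 1.0)

def cdist(a, b):                        # distance on the circle T
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

w, v = (0.37, 0.82), (0.91, 0.25)
s1, t1 = Phi_inv(w)
s2, t2 = Phi_inv(v)
lhs = chi((s1 + s2) % 1.0, t1 + t2)     # χ(Φ⁻¹(ω) + Φ⁻¹(ω̃))
rhs = Phi_inv(add_omega(w, v))          # Φ⁻¹(ω + ω̃)
```

Both sides agree up to floating-point rounding, as (2.4) asserts.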
Proof of Lemma 2.4. First we show that Φ −1 is Borel measurable. To prove this, it suffices to show that the image Φ(A × I) of every open rectangle A × I ⊂ Σ × [0, S ) is a Borel set. If 0 ∉ I this image is open in Ω \ Σ, since Φ −1 is continuous. But if 0 ∈ I, again Φ(A × (I \ {0})) is open and Φ(A × {0}) = A is a Borel set as well. Now, consider the measure µ Φ on Ω defined by µ Φ (B) = (1/S ) (µ Σ ⊗ λ)(Φ −1 (B)). (2.5)
Since µ Φ (Ω) = 1, this is a Borel probability measure. We will show that µ Φ is also invariant under addition in the group. For this purpose, let B ⊂ Ω be a Borel set and let ω 0 ∈ Ω. Then, by (2.4) we have
µ Φ (B + ω 0 ) = (1/S ) (µ Σ ⊗ λ)(Φ −1 (B + ω 0 )) (2.6)
= (1/S ) (µ Σ ⊗ λ)( χ(Φ −1 (B) + Φ −1 (ω 0 )) ). (2.7)
Denoting Φ −1 (ω 0 ) = (σ 0 , s 0 ), we get Φ −1 (B) + Φ −1 (ω 0 ) ⊂ Σ × [s 0 , s 0 + S ). So it is contained in a strip of width S and therefore
(1/S ) (µ Σ ⊗ λ)( χ(Φ −1 (B) + (σ 0 , s 0 )) ) = (1/S ) (µ Σ ⊗ λ)( Φ −1 (B) + (σ 0 , s 0 ) ).
But the product measure µ Σ ⊗ λ is invariant under translations in Σ × R. Thus, in total we have
µ Φ (B + ω 0 ) = (1/S ) (µ Σ ⊗ λ)( Φ −1 (B) ) = µ Φ (B). (2.8)
Therefore, µ Φ is a Borel probability measure on Ω which is invariant under group action.
Since the Haar measure is unique, it follows µ Ω = µ Φ .
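Formula (2.2) can also be sanity-checked numerically. In the setting of Example 2.1 with ν = (1, √2) one has S = 1 and Φ(σ, t) = (t̄, σ + √2 t); pushing a uniform grid on Σ × [0, S ) through Φ should then distribute mass uniformly over the torus. The sketch below estimates the Haar measure of the square [0, 1/2) × [0, 1/2) this way.

```python
import math

SQRT2 = math.sqrt(2.0)

def Phi(sigma, t):                 # restricted flow Σ × [0, S) → Ω, here S = 1
    return (t % 1.0, (sigma + SQRT2 * t) % 1.0)

n = 800                            # uniform n×n grid on Σ × [0, S)
hits = 0
for i in range(n):                 # σ = i/n
    for j in range(n):             # t = j/n
        w1, w2 = Phi(i / n, j / n)
        if w1 < 0.5 and w2 < 0.5:  # the square [0, 1/2) × [0, 1/2)
            hits += 1
frac = hits / n**2                 # should approximate its Haar measure 1/4
```

Indeed, for each fixed t the σ-grid is only rotated by √2 t, so each half-open interval of length 1/2 catches exactly half of it, and the estimate matches the area 1/4 up to boundary effects of size O(1/n).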
3 A theorem about escaping sets

3.1 Measure-preserving embeddings

From now on we will consider functions
f : D ⊂ Ω × (0, ∞) → Ω × (0, ∞),
where D is an open set. We will call such a function measure-preserving embedding, if f is continuous, injective and furthermore
(µ Ω ⊗ λ)( f (B)) = (µ Ω ⊗ λ)(B)
holds for all Borel sets B ⊂ D, where λ denotes the Lebesgue measure of R. It is easy to show that under these conditions, f : D →D is a homeomorphism, whereD = f (D).
Since we want to use the iterations of f , we have to carefully construct a suitable domain on which these forward iterations are well-defined. We initialize
D 1 = D, f 1 = f and set D n+1 = f −1 (D n ), f n+1 = f n • f for n ∈ N.
This way f n is well-defined on D n . Clearly, f n is a measure-preserving embedding as well. Also inductively it can be shown that
D n+1 = {(ω, r) ∈ D : f (ω, r), .
. . , f n (ω, r) ∈ D} and therefore D n+1 ⊂ D n ⊂ D for all n ∈ N. Initial conditions in the set
D ∞ = ∞ n=1 D n ⊂ Ω × (0, ∞)
correspond to complete forward orbits, i.e. if (ω 0 , r 0 ) ∈ D ∞ , then
(ω n , r n ) = f n (ω 0 , r 0 )
is defined for all n ∈ N. It could however happen that D ∞ = ∅ or even D n = ∅ for some n ≥ 2. The set of initial data leading to unbounded orbits is denoted by
U = {(ω 0 , r 0 ) ∈ D ∞ : lim sup n→∞ r n = ∞}. (3.1)
Complete orbits such that lim n→∞ r n = ∞ will be called escaping orbits. The corresponding set of initial data is
E = {(ω 0 , r 0 ) ∈ D ∞ : lim n→∞ r n = ∞}.
Almost periodic successor maps
Now, consider a measure-preserving embedding f : D ⊂ Ω × (0, ∞) → Ω × (0, ∞), which has the special structure f (ω, r) = (ω + ψ (F(ω, r)), r + G(ω, r)),
(3.2)
where F, G : D → R are continuous. For ω ∈ Ω we introduce the notation ψ ω (t) = ω + ψ(t) = ω · t and define
D ω = (ψ ω × id) −1 (D) ⊂ R × (0, ∞).
On this open set, consider the map f ω :
D ω ⊂ R × (0, ∞) → R × (0, ∞) given by f ω (t, r) = (t + F(ψ ω (t), r), r + G(ψ ω (t), r)). (3.3)
Then f ω is continuous and meets the identity
f • (ψ ω × id) = (ψ ω × id) • f ω on D ω ,
i.e. the following diagram is commutative:
     D  ──f──▶  f (D) ⊂ Ω × (0, ∞)
     ▲                ▲
ψ ω × id │            │ ψ ω × id
     D ω ──f ω──▶ f ω (D ω ) ⊂ R × (0, ∞)
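The commutativity f ◦ (ψ ω × id) = (ψ ω × id) ◦ f ω follows directly from (3.2) and (3.3), and can be confirmed numerically; a sketch with Ω = T², ν = (1, √2) and hypothetical data F(ω, r) = 1/r, G(ω, r) = 0.1 sin(2πω 1 ):

```python
import math

NU = (1.0, math.sqrt(2.0))

def psi(t):
    return (NU[0] * t % 1.0, NU[1] * t % 1.0)

def add(w, v):
    return ((w[0] + v[0]) % 1.0, (w[1] + v[1]) % 1.0)

F = lambda w, r: 1.0 / r                          # assumed data, illustration only
G = lambda w, r: 0.1 * math.sin(2 * math.pi * w[0])

def f(w, r):                                      # the structure (3.2)
    return add(w, psi(F(w, r))), r + G(w, r)

def f_omega(omega, t, r):                         # the induced map (3.3)
    w = add(omega, psi(t))
    return t + F(w, r), r + G(w, r)

omega, t0, r0 = (0.2, 0.6), 1.7, 3.0
t1, r1 = f_omega(omega, t0, r0)
w_direct, r_direct = f(add(omega, psi(t0)), r0)   # f ∘ (ψ_ω × id)
w_conj = add(omega, psi(t1))                      # (ψ_ω × id) ∘ f_ω
```

Both routes around the diagram give the same point, up to wrap-around rounding on the torus.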
Therefore f ω is injective as well. Again we define D ω,1 = D ω and D ω,n+1 = f −1 ω (D ω,n ) to construct the set
D ω,∞ = ∞ n=1 D ω,n ⊂ R × (0, ∞),
where the forward iterates (t n , r n ) = f n ω (t 0 , r 0 ) are defined for all n ∈ N. Analogously, unbounded orbits are generated by initial conditions in the set U ω , and escaping orbits by initial conditions in E ω . These sets can also be obtained through the relations

D ω,∞ = (ψ ω × id) −1 (D ∞ ), U ω = (ψ ω × id) −1 (U), E ω = (ψ ω × id) −1 (E).

Theorem 3.1. Let f : D ⊂ Ω × (0, ∞) → Ω × (0, ∞) be a measure-preserving embedding of the form (3.2). Moreover, suppose there is a function W = W(ω, r) satisfying

W ∈ C 1 ψ (Ω × (0, ∞)), 0 < β ≤ ∂ r W(ω, r) ≤ δ for ω ∈ Ω, r ∈ (0, ∞), (3.4)

with some constants β, δ > 0, and furthermore
W( f (ω, r)) ≤ W(ω, r) + k(r) for (ω, r) ∈ D, (3.5)
where k : (0, ∞) → R is a decreasing and bounded function such that lim r→∞ k(r) = 0.
Then, for almost all ω ∈ Ω, the set E ω ⊂ R × (0, ∞) has Lebesgue measure zero.
Here, C 1 ψ (Ω × (0, ∞)) denotes the space of functions U(ω, r) such that U(·, r) ∈ C 1 ψ (Ω) and U(ω, ·) ∈ C 1 (0, ∞) for every (ω, r) ∈ Ω × R. The function W can be seen as a generalized adiabatic invariant, since any growth will be slow for large energies.
Proof of Theorem 3.1
The proof of Theorem 3.1 is based on the fact that almost all unbounded orbits of f are recurrent. In order to show this, we will apply the Poincaré recurrence theorem to the set U of unbounded orbits and the corresponding restricted map f|_U. We will use it in the following form [KO18, Lemma 4.2].

Lemma 4.1. Let (X, F, µ) be a measure space such that µ(X) < ∞. Suppose that there exists a measurable set Γ ⊂ X of measure zero and a map T : X \ Γ → X which is injective and so that the following holds:

(a) T is measurable, in the sense T(B), T^{-1}(B) ∈ F for B ∈ F, and
(b) T is measure-preserving, in the sense that µ(T(B)) = µ(B) for B ∈ F.

Then for every measurable set B ⊂ X almost all points of B visit B infinitely many times in the future (i.e. T is infinitely recurrent).

Since we cannot guarantee that U has finite measure, we will also need the following refined version of the recurrence theorem due to Dolgopyat [Dol, Lemma 4.3].
Lemma 4.2. Let (X, F , µ) be a measure space and suppose that the map T : X → X is injective and such that the following holds:
(a) T is measurable, in the sense T (B), T −1 (B) ∈ F for B ∈ F , (b) T is measure-preserving, in the sense that µ(T (B)) = µ(B) for B ∈ F , and
(c) there is a set A ∈ F such that µ(A) < ∞ with the property that almost all points from X visit A in the future.
Then for every measurable set B ⊂ X almost all points of B visit B infinitely many times in the future (i.e. T is infinitely recurrent).
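As a toy illustration of the recurrence phenomenon behind these lemmas (our own example, far simpler than the maps considered in this paper), one can watch an orbit of a measure-preserving circle rotation return to a set of positive measure again and again: the rotation is injective and preserves Lebesgue measure on [0, 1), a finite measure space.

```python
import math

# Toy illustration of Poincaré recurrence (our own example): the irrational
# rotation T(x) = x + α mod 1 preserves Lebesgue measure on [0, 1).
alpha = math.sqrt(2.0) - 1.0
B = (0.0, 0.1)           # a measurable set of positive measure

x = 0.05                 # start inside B
returns = []             # iteration indices at which the orbit revisits B
for n in range(1, 10000):
    x = (x + alpha) % 1.0
    if B[0] <= x < B[1]:
        returns.append(n)

# The orbit revisits B over and over: recurrence, not just a single return.
print(len(returns), returns[:3])
```

By equidistribution, roughly a fraction λ(B) of all iterates land in B, so the number of returns grows without bound.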
For the sake of completeness let us state the proof.
Proof of Lemma 4.2. Let Γ ⊂ X be measurable such that µ(Γ) = 0 and all points of X \ Γ visit A in the future. Thus, the first return time r(x) = min{k ∈ N : T^k(x) ∈ A} is well-defined for x ∈ X \ Γ. It induces a map S : X \ Γ → A defined by S(x) = T^{r(x)}(x). The restriction S|_{A\Γ} is injective: Assume S(x) = S(y) for distinct points x, y ∈ A \ Γ and suppose r(x) > r(y); then T^{r(x)−r(y)}(x) = y ∈ A is a contradiction to the minimality of r(x). It is also measure-preserving [EW11, cf. Lemma 2.43]. Now, consider a measurable set B ⊂ X and define B_j = {y ∈ B \ Γ : r(y) ≤ j} as well as
A_j = S(B_j) ⊂ ∪_{k=1}^{j} (T^k(B) ∩ A) ⊂ A  for all j ∈ N.
But since µ(A) < ∞ by assumption, the Poincaré recurrence theorem (Lemma 4.1) applies to A j . Thus we can find measurable sets Γ j ⊂ A j with measure zero, such that every point x ∈ A j \ Γ j returns to A j infinitely often (via S ). Now consider the set
F = B ∩ (Γ ∪ ∪_{j∈N} S^{-1}(Γ_j)).
Then µ(F) = 0 and every point y ∈ B \ F returns to B infinitely often in the future. To see this, select j ∈ N such that r(y) ≤ j, i.e. y ∈ B j . Then x = S (y) ∈ A j \ Γ j . Hence there exist infinitely many k ∈ N so that k ≥ j and S k (x) ∈ A j . Let us fix one of these k. Then S k (x) = S (z) for some z ∈ B j . So in total we have
T^{r(z)}(z) = S(z) = S^k(x) = S^{k+1}(y) = T^{∑_{j=0}^{k} r(S^j(y))}(y).
Now, since ∑_{j=0}^{k} r(S^j(y)) ≥ k + 1 > j ≥ r(z), this yields T^m(y) = z ∈ B_j ⊂ B, where m = ∑_{j=0}^{k} r(S^j(y)) − r(z) ∈ N.

One way to construct such a set A of finite measure is given by the next lemma [KO18]. It is based on the function W(ω, r) introduced in Theorem 3.1 and in fact is the only reason to assume the existence of W in the first place.

Lemma 4.3. Let f : D ⊂ Ω × (0, ∞) → Ω × (0, ∞) be a measure-preserving embedding and suppose that there is a function W = W(ω, r) satisfying W ∈ C^1_ψ(Ω × (0, ∞)), (3.4) and (3.5). Let (ǫ_j)_{j∈N} and (W_j)_{j∈N} be sequences of positive numbers with the properties

∑_{j=1}^{∞} ǫ_j < ∞,  lim_{j→∞} W_j = ∞  and  lim_{j→∞} ǫ_j^{-1} k((4δ)^{-1} W_j) = 0.

Denote

A = ∪_{j∈N} A_j,  A_j = {(ω, r) ∈ Ω × (0, ∞) : |W(ω, r) − W_j| ≤ ǫ_j}. (4.1)
Then A has finite measure and every unbounded orbit of f enters A. More precisely, if (ω 0 , r 0 ) ∈ U, where U is from (3.1), and if (ω n , r n ) n∈N denotes the forward orbit under f , then there is K ∈ N so that (ω K , r K ) ∈ A.
Proof. First let us show that A has finite measure. By Fubini's theorem,
(µ_Ω ⊗ λ)(A_j) = ∫_Ω λ(A_{j,ω}) dµ_Ω(ω)
holds for the sections A_{j,ω} = {r ∈ (0, ∞) : (ω, r) ∈ A_j}. Now, consider the diffeomorphism w_ω : r ↦ W(ω, r). Its inverse w_ω^{-1} is Lipschitz continuous with constant β^{-1}, due to (3.4). But then, A_{j,ω} = w_ω^{-1}((W_j − ǫ_j, W_j + ǫ_j)) implies λ(A_{j,ω}) ≤ 2β^{-1}ǫ_j. Thus in total we have

(µ_Ω ⊗ λ)(A) ≤ ∑_{j=1}^{∞} (µ_Ω ⊗ λ)(A_j) ≤ ∑_{j=1}^{∞} 2ǫ_j/β < ∞.
Next we will prove the recurrence property. To this end, let (ω_0, r_0) ∈ U be fixed and denote by (ω_n, r_n) the forward orbit under f. We will start with some preliminaries. Using (3.4) and the mean value theorem, we can find r̄ such that

β/2 ≤ W(ω, r)/r ≤ 2δ  for all (ω, r) ∈ Ω × (r̄, ∞). (4.2)

Furthermore, by assumption we can find an index j_0 ≥ 2 such that

W_{j_0} > max{W(ω_1, r_1), ‖k‖_∞ + max_{ω∈Ω} W(ω, r̄), 2‖k‖_∞}  and  k((4δ)^{-1} W_{j_0}) ≤ ǫ_{j_0}.
Moreover, we have lim sup_{n→∞} W(ω_n, r_n) = ∞: due to lim sup_{n→∞} r_n = ∞, (3.4) implies W(ω_n, r_n) ≥ β(r_n − r_1) + W(ω_n, r_1) for n sufficiently large. But then lim sup_{n→∞} W(ω_n, r_n) = ∞ follows from the compactness of Ω. Now, since W(ω_1, r_1) < W_{j_0} we can select the first index K ≥ 2 such that W(ω_K, r_K) > W_{j_0}. So in particular this means W(ω_{K−1}, r_{K−1}) ≤ W_{j_0}. Since (3.5) yields W(ω_K, r_K) ≤ W(ω_{K−1}, r_{K−1}) + k(r_{K−1}), we can derive the following inequality:

W(ω_{K−1}, r_{K−1}) ≥ W(ω_K, r_K) − ‖k‖_∞ > W_{j_0} − ‖k‖_∞ ≥ max_{ω∈Ω} W(ω, r̄) ≥ W(ω_{K−1}, r̄).

Then, the monotonicity of w_{ω_{K−1}} implies r_{K−1} > r̄. Hence we can combine (4.2) with the previous estimate to obtain

r_{K−1} ≥ (2δ)^{-1} W(ω_{K−1}, r_{K−1}) ≥ (2δ)^{-1} (W_{j_0} − ‖k‖_∞) ≥ (4δ)^{-1} W_{j_0}.

Finally, since k(r) is decreasing, W(ω_K, r_K) > W_{j_0} ≥ W(ω_{K−1}, r_{K−1}) yields

|W(ω_K, r_K) − W_{j_0}| ≤ W(ω_K, r_K) − W(ω_{K−1}, r_{K−1}) ≤ k(r_{K−1}) ≤ k((4δ)^{-1} W_{j_0}) ≤ ǫ_{j_0},

which implies (ω_K, r_K) ∈ A_{j_0}.
Now, we are ready to prove the theorem.

Proof of Theorem 3.1. Consider the set U = {(ω_0, r_0) ∈ D_∞ : lim sup_{n→∞} r_n = ∞}. We will assume that U ≠ ∅, since otherwise the assertion would be a direct consequence.
Step 1: Almost all unbounded orbits are recurrent. We will prove the existence of a set Z ⊂ U of measure zero such that if (ω 0 , r 0 ) ∈ U \ Z, then lim inf n→∞ r n < ∞.
In particular, we would have E ⊂ Z. To show this, we consider the restriction T = f U : U → U. This map is well-defined, injective and, like f , measure-preserving. We will distinguish three cases:
(i) (µ Ω ⊗ λ)(U) = 0, (ii) 0 < (µ Ω ⊗ λ)(U) < ∞, and (iii) (µ Ω ⊗ λ)(U) = ∞.
In the first case Z = U is a valid choice. In case (ii) we can apply the Poincaré recurrence theorem (Lemma 4.1), whereas in case (iii) the modified version of Dolgopyat (Lemma 4.2) is applicable due to Lemma 4.3. Now, let us cover Ω × (0, ∞) by the sets B_j = Ω × (j − 1, j + 1) for j ∈ N. Then, for B'_j = B_j ∩ U one can use the recurrence property to find sets Z_j ⊂ B'_j of measure zero such that every orbit (ω_n, r_n)_{n∈N} starting in B'_j \ Z_j returns to B'_j infinitely often. But this implies lim inf_{n→∞} r_n ≤ r_0 + 2 < ∞. Therefore, the set Z = ∪_{j∈N} Z_j ⊂ U has all the desired properties.
Step 2: The assertion is valid on the subgroup Σ ⊂ Ω. Since E ⊂ Z by construction, the inclusion
E_ω = (ψ_ω × id)^{-1}(E) ⊂ (ψ_ω × id)^{-1}(Z)
holds for all ω ∈ Ω. For j ∈ Z we can consider the restricted flow
Φ j : Σ × [ jS , ( j + 1)S ) → Ω, Φ j (σ, t) = σ · t = ψ σ (t).
It is easy to verify that just like Φ = Φ 0 of Lemma 2.4 those functions are isomorphisms of measure spaces. In other words, Φ j is bijective up to a set of measure zero, both Φ j and Φ −1 j are measurable, and for every Borel set B ⊂ Ω we have
µ Ω (B) = 1 S (µ Σ ⊗ λ)(Φ −1 j (B)). (4.3)
This clearly implies
(µ_Ω ⊗ λ)(B) = (1/S) (µ_Σ ⊗ λ²)((Φ_j^{-1} × id)(B)) (4.4)

for every Borel set B ⊂ Ω × (0, ∞). Let

C_j = {(σ, t, r) ∈ Σ × [jS, (j + 1)S) × (0, ∞) : (Φ_j(σ, t), r) ∈ Z} = (Φ_j^{-1} × id)(Z).

Since Z has measure zero, (4.4) yields (µ_Σ ⊗ λ²)(C_j) = 0. Next we consider the cross sections
C j,σ = {(t, r) ∈ [ jS , ( j + 1)S ) × (0, ∞) : (σ, t, r) ∈ C j }.
Then, λ²(C_{j,σ}) = 0 for µ_Σ-almost all σ ∈ Σ follows from Fubini's theorem. So for every j ∈ Z there is a set M_j ⊂ Σ with µ_Σ(M_j) = 0 such that λ²(C_{j,σ}) = 0 for all σ ∈ Σ \ M_j. Thus M = ∪_{j∈Z} M_j has measure zero as well and

λ²(∪_{j∈Z} C_{j,σ}) = 0  for all σ ∈ Σ \ M. But we have

∪_{j∈Z} C_{j,σ} = {(t, r) ∈ R × (0, ∞) : (ψ_σ(t), r) ∈ Z} = (ψ_σ × id)^{-1}(Z),
and recalling that E σ ⊂ (ψ σ ×id) −1 (Z), we therefore conclude λ 2 (E σ ) = 0 for all σ ∈ Σ\M.
Step 3: Concluding from Σ to Ω. If we denote by T s (t, r) = (t + s, r) the translation in time, then clearly
f ω·s = T −s • f ω • T s on D ω·s
holds for all ω ∈ Ω and s ∈ R. But this implies T s (E ω·s ) = E ω , since the identity above stays valid under iterations. In particular we have
λ 2 (E ω·s ) = λ 2 (E ω ), ∀ω ∈ Ω, s ∈ R.
Again, we consider the restricted flow Φ : Σ × [0, S) → Ω, Φ(σ, t) = σ · t. Using M ⊂ Σ of Step 2 we define Z* = Φ(M × [0, S)) ⊂ Ω. Then, (4.3) and µ_Σ(M) = 0 imply that also Z* has measure zero. Now let ω ∈ Ω \ Z* be fixed and let (σ, τ) = Φ^{-1}(ω). Then σ ∈ Σ \ M and σ · τ = ω. Therefore, Step 2 implies
λ 2 (E ω ) = λ 2 (E σ·τ ) = λ 2 (E σ ) = 0,
which proves the assertion.
Statement and proof of the main result
We start with a rigorous description of the ping-pong map. To this end, let p be a forcing such that
p ∈ C²(R),  0 < a ≤ p(t) ≤ b  ∀t ∈ R,  ‖p‖_{C²} = ‖p‖_∞ + ‖ṗ‖_∞ + ‖p̈‖_∞ < ∞. (5.1)

Now, we consider the map (t_0, v_0) → (t_1, v_1),
which sends a time t_0 of impact to the left plate x = 0 and the corresponding velocity v_0 > 0 immediately after the impact to their successors t_1 and v_1 describing the subsequent impact to x = 0. If we further denote by t̃ ∈ (t_0, t_1) the time of the particle's impact to the moving plate, then we can determine t̃ = t̃(t_0, v_0) implicitly through the equation

(t̃ − t_0) v_0 = p(t̃), (5.2)
since this relation describes the distance that the particle has to travel before hitting the moving plate. With that we derive a formula for the successor map:
t_1 = t̃ + p(t̃)/v_1,  v_1 = v_0 − 2ṗ(t̃). (5.3)
To ensure that this map is well defined, we will assume that

v_0 > v_* := 2 max{sup_{t∈R} ṗ(t), 0}. (5.4)

This condition guarantees that v_1 is positive and also implies that there is a unique solution t̃ = t̃(t_0, v_0) ∈ C¹(R × (v_*, ∞)) to (5.2). Thus we can take R × (v_*, ∞) as the domain of the ping-pong map (5.3). Now, we are finally ready to state the main theorem.
Theorem 5.1. Assume 0 < a < b and P ∈ C²_ψ(Ω) are such that

a ≤ P(ω) ≤ b  ∀ω ∈ Ω. (5.5)
Consider the family {p ω } ω∈Ω of almost periodic forcing functions defined by
p ω (t) = P(ω + ψ(t)), t ∈ R. (5.6)
Let v * = 2 max{max ̟∈Ω ∂ ψ P(̟), 0} and denote by
E ω = {(t 0 , v 0 ) ∈ R × (v * , ∞) : (t n , v n ) n∈N is well defined and lim n→∞ v n = ∞}
the escaping set for the ping-pong map with forcing function p(t) = p ω (t). Then, for almost all ω ∈ Ω, the set E ω ⊂ R 2 has Lebesgue measure zero.
Remark 5.2. The notation v * = 2 max{max ̟∈Ω ∂ ψ P(̟), 0} is consistent with (5.4), since for every ω ∈ Ω the set ω · R lies dense in Ω and thus sup t∈Rṗ ω (t) = sup t∈R ∂ ψ P(ω + ψ(t)) = max ̟∈Ω ∂ ψ P(̟).
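For a concrete quasi-periodic special case of (5.6), the successor map (5.2)-(5.3) can be iterated numerically. The forcing below and all numerical values are our own toy choices (not from the paper): p(t) = 2 + 0.05(sin t + sin √2 t), so a = 1.9 and b = 2.1; the implicit impact equation is solved by bisection, which works because (t − t_0)v_0 − p(t) is strictly increasing in t once v_0 > sup ṗ.

```python
import math

# Toy quasi-periodic forcing (our own choice): p(t) = 2 + 0.05(sin t + sin √2 t).
SQ2 = math.sqrt(2.0)

def p(t):
    return 2.0 + 0.05 * (math.sin(t) + math.sin(SQ2 * t))

def p_dot(t):
    return 0.05 * (math.cos(t) + SQ2 * math.cos(SQ2 * t))

V_STAR = 2.0 * 0.05 * (1.0 + SQ2)   # upper bound for 2 sup ṗ, hence ≥ v*

def ping_pong(t0, v0):
    """One step (t0, v0) -> (t1, v1) of the successor map (5.2)-(5.3)."""
    assert v0 > V_STAR
    # Solve (t - t0) v0 = p(t) by bisection: the difference is strictly
    # increasing (v0 > sup ṗ) and changes sign on [t0, t0 + 2b/v0], b = 2.1.
    lo, hi = t0, t0 + 2.0 * 2.1 / v0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if (mid - t0) * v0 - p(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    t_imp = 0.5 * (lo + hi)           # impact time on the moving plate
    v1 = v0 - 2.0 * p_dot(t_imp)      # velocity after the elastic reflection
    t1 = t_imp + p(t_imp) / v1        # free flight back to the plate x = 0
    return t1, v1

t, v = 0.0, 8.0
for _ in range(30):
    t, v = ping_pong(t, v)
print(round(t, 3), round(v, 3))
```

Since |2ṗ| ≤ V_STAR ≈ 0.24 here, a single bounce changes the velocity only slightly, and 30 bounces starting from v_0 = 8 provably keep the orbit inside the domain R × (v_*, ∞).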
We will give some further preliminaries before starting the actual proof. First we note that the ping-pong map (t_0, v_0) → (t_1, v_1) is not symplectic. To remedy this defect, we reformulate the model in terms of time t and energy E = v²/2. In these new coordinates the ping-pong map becomes

P : (t_0, E_0) → (t_1, E_1), (5.7)

t_1 = t̃ + p(t̃)/√(2E_1),  E_1 = E_0 − 2√(2E_0) ṗ(t̃) + 2 ṗ(t̃)² = (√(E_0) − √2 ṗ(t̃))², (5.8)

where t̃ = t̃(t_0, E_0) is determined implicitly through the relation t̃ = t_0 + p(t̃)/√(2E_0). This map is defined for (t_0, E_0) ∈ R × (v_*²/2, ∞). Since it has a generating function [KO10, Lemma 3.7], it is measure-preserving. Furthermore, from the inverse function theorem we can derive that P is locally injective. Note however, that in general P fails to be injective globally (see Appendix 6.2). Now, we will demonstrate that W(t_0, E_0) = p(t_0)² E_0 acts as an adiabatic invariant for the ping-pong map. For this purpose we will cite the following lemma [KO10, Lemma 5.1]:
Lemma 5.3. There is a constant C > 0, depending only upon p C 2 and a, b > 0 from (5.1), such that
|p(t_1)² E_1 − p(t_0)² E_0| ≤ C ∆(t_0, E_0)  for all (t_0, E_0) ∈ R × (v_*²/2, ∞),
where (t 1 , E 1 ) = P(t 0 , E 0 ) denotes the ping-pong map for the forcing p, and
∆(t_0, E_0) = E_0^{-1/2} + sup{|p̈(t) − p̈(s)| : t, s ∈ [t_0 − C, t_0 + C], |t − s| ≤ C E_0^{-1/2}}.
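The near-invariance of W = p(t)²E asserted by this lemma can be observed numerically. The smooth forcing below and all constants are our own toy choices (not from the paper), and the hand-rolled bisection solver only approximates the impact time; the one-step change of W visibly shrinks as the starting energy grows.

```python
import math

# Numerical check of the adiabatic invariant W = p(t)²E, E = v²/2, for a
# toy smooth forcing (our own choice): p(t) = 2 + 0.05(sin t + sin √2 t).
SQ2 = math.sqrt(2.0)
p = lambda t: 2.0 + 0.05 * (math.sin(t) + math.sin(SQ2 * t))
p_dot = lambda t: 0.05 * (math.cos(t) + SQ2 * math.cos(SQ2 * t))

def step(t0, v0):
    # Bisection for the impact time as in (5.2); the difference
    # (t - t0) v0 - p(t) is increasing in t because v0 > sup ṗ.
    lo, hi = t0, t0 + 2.0 * 2.1 / v0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if (mid - t0) * v0 - p(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    ti = 0.5 * (lo + hi)
    v1 = v0 - 2.0 * p_dot(ti)
    return ti + p(ti) / v1, v1

def dW(t0, v0):
    """One-step change of W = p(t)² v²/2 along the ping-pong map."""
    t1, v1 = step(t0, v0)
    return abs(p(t1) ** 2 * v1 ** 2 / 2.0 - p(t0) ** 2 * v0 ** 2 / 2.0)

changes = [dW(0.3, v0) for v0 in (10.0, 100.0, 1000.0)]
print([round(c, 5) for c in changes])
```

Although W itself grows like E, the leading-order terms of its one-step change cancel, leaving an error of order E^{-1/2}, in line with the bound of the lemma.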
So far we have depicted the case of a general forcing function p. Now we will replace p(t) by p ω (t) from (5.6) and study the resulting ping-pong map. First we note that due to P ∈ C 2 ψ (Ω) we have p ω ∈ C 2 (R). Also 0 < a ≤ p ω (t) ≤ b holds for all ω ∈ Ω by assumption. Furthermore, since ω · R lies dense in Ω it is
‖p_ω‖_∞ = ‖P‖_∞,  ‖ṗ_ω‖_∞ = ‖∂_ψ P‖_∞,  ‖p̈_ω‖_∞ = ‖∂²_ψ P‖_∞.
In particular this means ‖p_ω‖_{C²(R)} = ‖P‖_{C²_ψ(Ω)} for all ω ∈ Ω. Therefore all considerations above apply with uniform constants. As depicted in Remark 5.2, also the threshold v_* = 2 max{max_{̟∈Ω} ∂_ψ P(̟), 0} is uniform in ω. Finally, since p̈_ω(t) = ∂²_ψ P(ω + ψ(t)), the function ∆(t_0, E_0) can be uniformly bounded by

∆(E_0) = E_0^{-1/2} + sup{|∂²_ψ P(̟) − ∂²_ψ P(̟')| : ̟, ̟' ∈ Ω, ‖̟ − ̟'‖ ≤ C E_0^{-1/2}}.
Hence, from Lemma 5.3 we obtain
Lemma 5.4. There is a constant C > 0, uniform in ω ∈ Ω, such that
|p(t_1)² E_1 − p(t_0)² E_0| ≤ C ∆(E_0)  for all (t_0, E_0) ∈ R × (v_*²/2, ∞),
where (t 0 , E 0 ) → (t 1 , E 1 ) denotes the ping-pong map P for the forcing function p ω (t).
Consider the equation

τ = (2E_0)^{-1/2} P(ω_0 + ψ(τ)). (5.9)
Since P ∈ C¹_ψ(Ω) and 1 − (2E_0)^{-1/2} ∂_ψ P(ω_0 + ψ(τ)) ≥ 1/2 > 0 for E_0 > v_*²/2, equation (5.9) can be solved implicitly for τ = τ(ω_0, E_0) ∈ C(Ω × (v_*²/2, ∞)) (cf. [BGdS08] for a suitable implicit function theorem). For ω ∈ Ω and t_0 ∈ R one can consider (5.9) with ω_0 = ω + ψ(t_0). Then P ∈ C¹_ψ(Ω) and the classical implicit function theorem yield τ ∈ C¹_ψ(Ω × (v_*²/2, ∞)). Moreover, comparing this to the definition of t̃, we observe the following relation:

t̃(t_0, E_0) = t_0 + τ(ω + ψ(t_0), E_0). (5.10)
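Since the right-hand side of (5.9) is a contraction in τ once (2E_0)^{-1/2}|∂_ψ P| ≤ 1/2, the solution can also be computed by Banach fixed-point iteration. The following sketch uses a toy quasi-periodic P on the 2-torus with frequencies (1, √2) and parameter values of our own choosing (none of them from the paper):

```python
import math

# Solving (5.9) by fixed-point iteration: τ ↦ (2E₀)^(-1/2) P(ω₀ + ψ(τ))
# is a contraction for E₀ > v*²/2, so the iterates converge geometrically.
NU = (1.0, math.sqrt(2.0))   # rationally independent frequencies (toy choice)

def P(w1, w2):
    return 2.0 + 0.05 * (math.sin(2 * math.pi * w1) + math.sin(2 * math.pi * w2))

def g(tau, omega0, E0):
    # One application of the right-hand side of (5.9), on the 2-torus.
    w1 = (omega0[0] + NU[0] * tau) % 1.0
    w2 = (omega0[1] + NU[1] * tau) % 1.0
    return P(w1, w2) / math.sqrt(2.0 * E0)

omega0, E0 = (0.2, 0.7), 50.0
tau = 0.0
for _ in range(60):
    tau = g(tau, omega0, E0)

residual = abs(tau - g(tau, omega0, E0))
print(tau > 0.0, residual < 1e-12)
```

Here the contraction factor is about 0.1·π(1+√2)/√(2E_0) ≈ 0.08, so sixty iterations drive the residual far below machine precision.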
Now we will give the proof of the main theorem, in which we will link the ping-pong map corresponding to p ω (t) to the setup of Section 3.
Proof of Theorem 5.1. Let D = Ω × (E_*, ∞), where E_* = max{v_*²/2, E_**} and E_** will be determined below. Consider f : D ⊂ Ω × (0, ∞) → Ω × (0, ∞), f(ω_0, E_0) = (ω_1, E_1), given by

ω_1 = ω_0 + ψ(F(ω_0, E_0)),  E_1 = E_0 + G(ω_0, E_0),

where

F(ω_0, E_0) = ((2E_0)^{-1/2} + (2E_1)^{-1/2}) P(ω_0 + ψ(τ)),
G(ω_0, E_0) = −2√(2E_0) ∂_ψ P(ω_0 + ψ(τ)) + 2 (∂_ψ P(ω_0 + ψ(τ)))²,

for τ = τ(ω_0, E_0). Then f has the special form (3.2) and therefore we can study the family {f_ω}_{ω∈Ω} of planar maps defined by (3.3). But plugging (5.10) into the definition of P shows that f_ω is just the ping-pong map P in the case of the forcing p_ω(t). Independently of ω, these maps are defined on D_ω = (ψ_ω × id)^{-1}(D) = R × (E_*, ∞).
Let us show that f is injective on Ω × (E_**, ∞), if E_** is sufficiently large. Therefore suppose f(ω_0, E_0) = (ω_1, E_1) = f(ω̃_0, Ẽ_0). Since ω_0 + ψ(F(ω_0, E_0)) = ω̃_0 + ψ(F(ω̃_0, Ẽ_0)), there are ω ∈ Ω and t_0, t̃_0 ∈ R such that ω_0 = ω + ψ(t_0) and ω̃_0 = ω + ψ(t̃_0). Implicit differentiation yields ∂_{t_0} τ(ω + ψ(t_0), E_0) = O(E_0^{-1/2}) and ∂_{E_0} τ(ω + ψ(t_0), E_0) = O(E_0^{-3/2}). Moreover, E_1 = O(E_0) implies

Df_ω(t_0, E_0) =
( 1 + O(E_0^{-1/2})   O(E_0^{-3/2})    )
( O(E_0^{1/2})        1 + O(E_0^{-1/2}) )
for the Jacobian matrix of f_ω. Throughout this paragraph C will denote positive constants depending on E_** and ‖P‖_{C²_ψ(Ω)}, which will not be further specified. Without loss of generality we may assume E_0 ≤ Ẽ_0. Then, applying the mean value theorem yields

|t_0 − t̃_0| ≤ C E_0^{-1/2} |t_0 − t̃_0| + C E_0^{-3/2} |E_0 − Ẽ_0|  and  |E_0 − Ẽ_0| ≤ C Ẽ_0^{1/2} |t_0 − t̃_0| + C E_0^{-1/2} |E_0 − Ẽ_0|,

provided E_** is sufficiently big. Thus, for large E_** we get |t_0 − t̃_0| ≤ C E_0^{-3/2} |E_0 − Ẽ_0| and |E_0 − Ẽ_0| ≤ C Ẽ_0^{1/2} |t_0 − t̃_0|. Now, combining these inequalities gives us |t_0 − t̃_0| ≤ C E_0^{-3/2} Ẽ_0^{1/2} |t_0 − t̃_0|. But since E_1 = O(E_0) and also Ẽ_0 = O(E_1), we can conclude |t_0 − t̃_0| ≤ C E_0^{-1} |t_0 − t̃_0|.
In turn, this implies t 0 =t 0 and E 0 =Ẽ 0 for E * * sufficiently large, which proves the injectivity of f ω and f .
Next we want to show that f is also measure-preserving. To this end, consider the maps g : Σ × [0, S) × (E_*, ∞) → Σ × [0, ∞) × (0, ∞) defined by g(σ, s, E) = (σ, f_σ(s, E)) and χ : Σ × [0, ∞) → Σ × [0, S), χ(σ, t) = Φ^{-1}(σ · t) from (2.3). Then, the identity

f = (Φ × id) ∘ (χ × id) ∘ g ∘ (Φ^{-1} × id)

holds on D. This can be illustrated as follows:
(ω_0, E_0) ─────────────f─────────────▶ (ω_1, E_1)
    │ Φ^{-1}×id                            ▲ Φ×id
    ▼                                      │
(σ_0, s_0, E_0) ──g──▶ (σ_0, s_1, E_1) ──χ×id──▶ (σ_1, s'_1, E_1)
Recalling Lemma 2.4 and the fact that f ω has a generating function, it suffices to show that χ × id preserves the measure of any Borel set B ⊂ g (Φ −1 × id)(D) . Therefore, consider the sets
B k = B ∩ (Σ × [(k − 1)S , kS ) × (0, ∞)) , k ∈ N.
Then we have
(µ Σ ⊗ λ 2 ) ((χ × id)(B k )) = (µ Σ ⊗ λ 2 ) (B k ) ,
as depicted in Section 2.3. Moreover, the injectivity of f implies the injectivity of χ × id on B and thus the sets (χ × id)(B k ) are mutually disjoint.
Since B = ∪ k∈N B k , this yields (µ Σ ⊗ λ 2 ) ((χ × id)(B)) = (µ Σ ⊗ λ 2 ) (B).
Finally, we need to find a function W ∈ C 1 ψ (Ω × (0, ∞)) such that (3.4) and (3.5) are verified. For this define
W(ω 0 , E 0 ) = P(ω 0 ) 2 E 0 .
Condition (3.4) clearly holds if we take β = a² and δ = b² with a, b from (5.5). Moreover, the definition of f yields
W(f(ω_0, E_0)) − W(ω_0, E_0) = P(ω_1)² E_1 − P(ω_0)² E_0 = P(ω_0 + ψ(F(ω_0, E_0)))² E_1 − P(ω_0)² E_0 = p_{ω_0}(F(ω_0, E_0))² E_1 − p_{ω_0}(0)² E_0.
Now let t_0 = 0 and (t_1, E_1) = f_{ω_0}(t_0, E_0). Then t_1 = F(ω_0, E_0) and thus Lemma 5.4 yields

W(f(ω_0, E_0)) − W(ω_0, E_0) = p_{ω_0}(t_1)² E_1 − p_{ω_0}(t_0)² E_0 ≤ C ∆(E_0),
where C > 0 is uniform in ω_0. But then taking k(E_0) = C ∆(E_0) proves (3.5), since lim_{r→∞} ∆(r) = 0 follows from ∂²_ψ P ∈ C(Ω). Now we have validated all conditions of Theorem 3.1 for the map f : D → Ω × (0, ∞). Applying it yields λ²(Ê_ω) = 0 for almost all ω ∈ Ω, where Ê_ω = {(t_0, E_0) ∈ D̂_{ω,∞} : lim_{n→∞} E_n = ∞} and D̂_{ω,∞} is defined as in Section 3.2. This can be translated back to the original coordinates (t, v) = (t, √(2E)): Let us denote by g_ω the ping-pong map (t_0, v_0) → (t_1, v_1) from (5.3) for the forcing p(t) = p_ω(t) and let

D̃_ω = R × (√(2E_*), ∞),  D̃_{ω,1} = D̃_ω,  D̃_{ω,n+1} = g_ω^{-1}(D̃_{ω,n}),  D̃_{ω,∞} = ∩_{n=1}^{∞} D̃_{ω,n}.

Then λ²(Ẽ_ω) = 0 for almost all ω ∈ Ω, where Ẽ_ω = {(t_0, v_0) ∈ D̃_{ω,∞} : lim_{n→∞} v_n = ∞}. Now, consider the escaping set E_ω from the theorem and take (t_0, v_0) ∈ E_ω. Since lim_{n→∞} v_n = ∞, there is n_0 ∈ N such that v_n > √(2E_*) for all n ≥ n_0. But this just means (t_n, v_n) ∈ Ẽ_ω for n ≥ n_0. In particular, this implies E_ω ⊂ ∪_{n∈N} g_ω^{-n}(Ẽ_ω). Considering that g_ω is area-preserving, this proves the assertion: λ²(E_ω) = 0 for almost all ω ∈ Ω.
Remark 5.5. Let us also point out that the framework developed in the present paper can be applied to many other dynamical systems. A famous example of such a system is given by the so-called Littlewood boundedness problem. There, the question is whether solutions of an equation ẍ + G'(x) = p(t) stay bounded in the (x, ẋ)-phase space if the potential G satisfies some superlinearity condition. In [Sch19] it is shown that the associated escaping set E typically has Lebesgue measure zero for G'(x) = |x|^{α−1} x with α ≥ 3 and a quasi-periodic forcing function p(t). Indeed, this result can be improved to the almost periodic case in a way analogous to the one presented here (for the ping-pong problem).

Lemma. Let u ∈ C(R) be almost periodic. If the sequences {u_{τ_n}}, {u_{s_n}} are uniformly convergent, then {u_{τ_n − s_n}} is uniformly convergent as well.

Proof. Let ǫ > 0 be given. Since {u_{τ_n}}, {u_{s_n}} are Cauchy sequences, there exists N ∈ N such that for n, m ≥ N we have

|u_{τ_n}(−s_n + t) − u_{τ_m}(−s_n + t)| < ǫ/2  and  |u_{s_n}(τ_m − s_n − s_m + t) − u_{s_m}(τ_m − s_n − s_m + t)| < ǫ/2,

where t ∈ R is arbitrary. Together this yields

|u(τ_n − s_n + t) − u(τ_m − s_m + t)| < ǫ

for all n, m ≥ N and t ∈ R, and thus proves the assertion.
Ping-pong map
The map P from (5.7) can fail to be injective globally. For this, suppose there are t̂_1, t̂_2 ∈ R with t̂_1 < t̂_2 such that the derivative ṗ(t) reaches its maximum at both t̂_1 and t̂_2, and moreover p(t̂_1) > p(t̂_2). For the sake of simplicity, let us consider the original coordinates (t, v). Let v_1 > 0 be the unique number so that t̂_1 + p(t̂_1)/v_1 = t̂_2 + p(t̂_2)/v_1. Now, we define v_0 = v_1 + 2ṗ(t̂_1) = v_1 + 2ṗ(t̂_2) and t_{0,i} = t̂_i − p(t̂_i)/v_0 for i = 1, 2. From p(t̂_1) > p(t̂_2) we can derive t_{0,1} < t_{0,2}. But v_0 = v_1 + 2 sup_{t∈R} ṗ(t) > v_* implies that (t_{0,i}, v_0) are in the domain of P and furthermore P(t_{0,i}, v_0) = (t_1, v_1), where t_1 = t̂_1 + p(t̂_1)/v_1.
Figure 1: On the 2-torus T², intersections of Σ = {0} × T and the orbit of ψ(t) are separated by time intervals of length S = 1/ν_1.
A.S. Besicovitch. On generalized almost periodic functions. Proceedings of the London Mathematical Society, s2-25(1):495-512, 1926.

C. Biasi, C. Gutierrez, and E.L. dos Santos. The implicit function theorem for continuous functions. Topological Methods in Nonlinear Analysis, 32(1):177-185, 2008.

S. Bochner. Beiträge zur Theorie der fastperiodischen Funktionen. Mathematische Annalen, 96(1):119-147, 1927.

H. Bohr. Zur Theorie der fast periodischen Funktionen. Acta Mathematica, 45:29-127, 1925.

J. Campos and M. Tarallo. Nonmonotone equations with large almost periodic forcing terms. Journal of Differential Equations, 254(2):686-724, 2013.

D. Dolgopyat. Lectures on Bouncing Balls. https://www.math.umd.edu/~dolgop/BBNotes2.pdf. [Online; accessed 6-August-2019].

D. Dolgopyat. Bouncing balls in non-linear potentials. Discrete and Continuous Dynamical Systems, 22(1):165-182, 2008.

D. Dolgopyat. Geometric and Probabilistic Structures in Dynamics, chapter Fermi acceleration, pages 149-166. AMS, Providence/RI, 2008.

J. de Simoi and D. Dolgopyat. Dynamics of some piecewise smooth Fermi-Ulam models. Chaos: An Interdisciplinary Journal of Nonlinear Science, 22(2):026124, 2012.

M. Einsiedler and T. Ward. Ergodic Theory. Springer London, 2011.

E. Fermi. On the Origin of the Cosmic Radiation. Phys. Rev., 75:1169-1174, 1949.

E. Hewitt and K. Ross. Abstract Harmonic Analysis I. Springer New York, 1979.

M. Kunze and R. Ortega. Complete Orbits for Twist Maps on the Plane: Extensions and Applications. Journal of Dynamics and Differential Equations, 23(3):405-423, 2010.

M. Kunze and R. Ortega. Escaping orbits are rare in the quasi-periodic Fermi-Ulam ping-pong. Ergodic Theory and Dynamical Systems, pages 1-17, 2018.

S. Laederich and M. Levi. Invariant curves and time-dependent potentials. Ergodic Theory and Dynamical Systems, 11(02), 1991.

J. Moser. On invariant curves of area-preserving mappings of an annulus. Nachr. Akad. Wiss. Göttingen, II, pages 1-20, 1962.

V.V. Nemytskii and V.V. Stepanov. Qualitative Theory of Differential Equations. Princeton Univ. Press, 1960.

R. Ortega and M. Tarallo. Almost periodic linear differential equations with non-separated solutions. Journal of Functional Analysis, 237(2):402-426, 2006.

L.S. Pontryagin. Topological Groups. Gordon & Breach, 1966.

L.D. Pustyl'nikov. On Ulam's problem. Theoretical and Mathematical Physics, 57(1):1035-1038, 1983.

H. Schließauf. Escaping orbits are rare in the quasi-periodic Littlewood boundedness problem. Nonlinear Differential Equations and Applications NoDEA, 26, 2019.

J. De Simoi. Fermi acceleration in anti-integrable limits of the standard map. Communications in Mathematical Physics, 321(3):703-745, 2013.

W. Stepanoff (V.V. Stepanov). Über einige Verallgemeinerungen der fast periodischen Funktionen. Mathematische Annalen, 95(1):473-498, 1926.

S.M. Ulam. On Some Statistical Properties of Dynamical Systems. In Proc. of the Fourth Berkeley Symposium on Math. Statistics and Probability, Volume 3: Contributions to Astronomy, Meteorology, and Physics, pages 315-320, Berkeley, 1961. University of California Press.

H. Weyl. Integralgleichungen und fastperiodische Funktionen. Mathematische Annalen, 97(1):338-356, 1927.

V. Zharnitsky. Instability in Fermi-Ulam ping-pong problem. Nonlinearity, 11(6):1481-1487, 1998.

V. Zharnitsky. Invariant curve theorem for quasiperiodic twist mappings and stability of motion in Fermi-Ulam problem. Nonlinearity, 13(4):1123-1136, 2000.
"https://arxiv.org/pdf/2009.13269v1.pdf"
] | 221,970,332 | 2009.13269 | 6b8f1c25e47efd019210086d20318488e00c8897 |
Communicate to Learn at the Edge

28 Sep 2020

D. Gündüz, Department of Electrical and Electronic Engineering, Imperial College London
D. Burth Kurka, Department of Electrical and Electronic Engineering, Imperial College London
M. Jankowski, Department of Electrical and Electronic Engineering, Imperial College London
M. Mohammadi Amiri, Department of Electrical Engineering, Princeton University
E. Ozfatura, Department of Electrical and Electronic Engineering, Imperial College London
S. Sreekumar, Department of Electrical and Computer Engineering, Cornell University
Bringing the success of modern machine learning (ML) techniques to mobile devices can enable many new services and businesses, but also poses significant technical and research challenges. Two factors that are critical for the success of ML algorithms are massive amounts of data and processing power, both of which are plentiful, yet highly distributed at the network edge. Moreover, edge devices are connected through bandwidth- and power-limited wireless links that suffer from noise, time-variations, and interference. Information and coding theory have laid the foundations of reliable and efficient communications in the presence of channel imperfections, whose application in modern wireless networks has been a tremendous success. However, there is a clear disconnect between the current coding and communication schemes and the ML algorithms deployed at the network edge. In this paper, we challenge the current approach that treats these problems separately, and argue for a joint communication and learning paradigm for both the training and inference stages of edge learning.

I. MOTIVATION

Modern machine learning (ML) techniques have made tremendous advances in areas such as machine vision, robotics, and natural language processing. Novel ML applications emerge every day, ranging from autonomous driving and finance to marketing and healthcare; potential applications are limitless. In parallel, the fifth generation (5G) of mobile technology promises to connect billions of heterogeneous devices to the network edge, supporting new applications and verticals under the banner of the Internet of things (IoT). Edge devices will collect massive amounts of data, opening up new avenues for ML applications.
The prevalent approach for the implementation of ML solutions on edge devices is to amass all the relevant data at a cloud server, and train a powerful ML model using all the available data and processing power. However, such a 'centralized' solution is not applicable in many cases. It might violate the latency requirements of the underlying application, particularly in the inference stage, or result in the infringement of user privacy.

Figure 1: Distributed learning and inference at the wireless network edge.

Moreover, as data volumes increase, the limited bandwidth and energy resources of IoT devices will become a bottleneck. For example, an autonomous car generates 5 to 20 terabytes of data per day. This is a particular challenge when the 'information density' of the collected data is low, i.e., when large volumes of data carry only limited relevant information for the underlying learning task.

To meet the requirements of most IoT applications, the 'intelligence' should move from the centralized cloud to the network edge. However, both data and processing power, the essential constituents of machine intelligence, are highly distributed at the edge. As a result, communication becomes key to an intelligent network edge, and potential solutions must allow edge devices to share not only their data but also their computational resources in a seamless and efficient manner. We can argue that the current success of ML, enabled by the tremendous increase in computational power, is similar to the 'great leap forward' in human evolution, which led to the development of the human brain thanks to a favorable mutation. Continuing with this analogy, the next big revolution in ML is likely to arrive through the efficient orchestration of, and collaboration among, intelligent devices, similar to the impact of language in human history, which tremendously accelerated the advancement of our civilization by allowing humans to share information, experience, and intelligence.
A. The Communication Challenge

The communication bottleneck in ML has been acknowledged in the literature; yet, most current approaches treat communication links as rate-limited ideal bit pipes. However, wireless links introduce errors due to noise and channel fading, and error-free operation is either impossible or would result in significant delays. This is particularly prominent at the network edge, where bandwidth- and power-limited IoT devices share the same wireless medium, also creating interference for each other. Moreover, when information moves across a network, privacy and security concerns arise, exacerbated at the edge by the vulnerability of individual devices and the broadcast nature of wireless transmissions.

After decades of research, communication engineers have designed highly advanced coding and communication techniques that can mitigate channel imperfections and create reliable links among wireless devices. However, reducing the communication among edge devices to a network of ideal bit pipes has the following limitations: 1) the communication protocols that enable such reliable links introduce significant overheads and delays, which are not acceptable for many ML applications; 2) such levels of reliability at the link level may not be required for some ML applications, resulting in inefficient resource management; 3) most communication protocols are designed to reduce or remove interference, which may not be desired in some distributed ML applications. To overcome these limitations, we need to reconsider physical layer and networking solutions, taking into account the limitations and requirements of the underlying ML applications.

Information and coding theory have laid the foundations of reliable, efficient and secure communication in the presence of channel imperfections and interference, whose application in modern wireless networks has been a tremendous success.
While the fundamental information theoretic ideas and coding theoretic tools can play an important role in enabling fully distributed learning across heterogeneous edge devices, many of the existing concepts and techniques are not relevant for ML applications, whose communication requirements and constraints (latency, reliability, security, privacy, etc.) are fundamentally different from the type of traffic current networks are designed for. Moreover, as we will try to show in this paper, we cannot overcome these limitations by a simple 'cross-layer' approach, i.e., by tuning the parameters of existing communication protocols. There is a clear disconnect between the current coding and communication techniques and the ML algorithms and architectures that must be deployed at the network edge, and we need a fundamentally new paradigm of coding, communication and networking with ML applications in mind.
Next, we present the challenges in achieving a fully distributed edge intelligence across heterogeneous agents communicating over imperfect wireless channels. We will treat the inference and training phases of ML algorithms separately as they have distinct reliability and latency requirements.
II. DISTRIBUTED INFERENCE
Inference refers to applying a trained model on a new data sample to make a prediction.
Although inference tasks require far fewer computational resources than training, they typically impose stricter latency constraints. For example, in self-driving cars (see Fig. 1), immediate detection of obstacles is critical to avoid accidents. A powerful deep neural network (DNN) model can be pre-trained and deployed for this task. However, it is often not possible to carry out inference locally at a single device, as decisions may rely on data (e.g., background and terrain information) available at an edge server, or on signals from other cars; or the device gathering the data (e.g., a bike) may not have the necessary processing capability. Communication becomes indispensable in such scenarios, and we need to guarantee that inference can still be accomplished within the accuracy and latency constraints of the underlying application.
Fundamental limits. As a first step towards understanding the fundamental limits of statistical inference over noisy channels, a distributed binary hypothesis testing (HT) problem is studied in [1]. Consider two devices with their local observations. One of the devices (e.g., the car in Fig. 1), called the observer, conveys some information about its observations to the other one, called the decision maker (e.g., the edge server in Fig. 1), over a noisy channel. The decision maker has to make a decision on the joint distribution of the observations of the two devices. Since the observer has access only to its own observations, it cannot make a local decision no matter how much processing power it has; instead it must convey some features of its observations to help the decision maker to make the correct decision. The question here is whether the features and the channel code to transmit them can be designed separately. If the goal were to transmit the samples at the observer with the minimal average distortion (under any additive finite distortion measure), according to Shannon's separation theorem the compression and channel coding tasks can be carried out separately and without loss of optimality, in the limit of infinite blocklength. However, it is shown in [1] that the optimality of separation breaks down in the remote HT problem, as the goal here is to decide on the joint distribution with minimal error probability. While this result shows that communication and inference cannot be separated even in the asymptotic limit (without loss of optimality), how a joint scheme should be designed in practice is a vastly unexplored research direction with great potential in future edge inference applications. Next, we provide several practical examples of edge inference problems, and illustrate how jointly treating communication and inference can help improve both the speed and the accuracy of the inference task.
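The effect described above can be made concrete with a toy Monte Carlo experiment, not taken from [1]: an observer sends its binary observation over a binary symmetric channel, and the decision maker declares the hypotheses "correlated" when the received bit matches its own observation. All names and the test statistic are illustrative assumptions; the point is only that channel noise directly degrades the inference quality, so the feature encoding cannot be designed in isolation.

```python
import random

def simulate(hypothesis, crossover=0.1, trials=20000, seed=0):
    """Toy remote hypothesis test. Under H1 the two observations are equal
    (Y = X); under H0 they are independent fair bits. The observer's bit X
    is flipped with probability `crossover` by the channel, and the decision
    maker declares H1 when the received bit matches its local bit Y.
    Returns the empirical probability of declaring H1."""
    rng = random.Random(seed)
    decide_h1 = 0
    for _ in range(trials):
        x = rng.randint(0, 1)
        y = x if hypothesis == 1 else rng.randint(0, 1)  # H1: Y = X; H0: independent
        r = x ^ (rng.random() < crossover)               # BSC flips x w.p. crossover
        decide_h1 += (r == y)
    return decide_h1 / trials

# Under H1 the match rate is roughly 1 - crossover; under H0 it stays near 1/2,
# so a noisier channel shrinks the gap between the two hypotheses.
```

Running `simulate(1)` gives a detection probability near 0.9 for a 10% crossover channel, while `simulate(0)` stays near 0.5, illustrating how the channel quality bounds the achievable error exponents.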
Edge Inference with DNNs. DNNs achieve the state-of-the-art performance in most ML tasks. In distributed inference across mobile devices and edge servers, a common approach is to partition a pre-trained DNN baseline between the devices and the edge server depending on the former's computational capabilities (see Fig. 2) [2]. Conventional approaches abstract out the wireless channel as an error-free ideal bit-pipe, and focus only on the feature compression problem, ignoring the potential impacts of communication in terms of delay, complexity, and reliability. However, lossy transmission of feature vectors over a wireless channel is a joint source-channel coding (JSCC) problem, and separation is known to be suboptimal under strict latency constraints imposed by inference problems.
While JSCC has long been studied, mainly for image and video transmission, these works mostly took a model-driven approach exploiting particular properties of the underlying source and channel statistics. Recently, an alternative fully data-driven DNN-based scheme, called DeepJSCC, has been introduced in [3]. DeepJSCC not only beats digital alternatives for image transmission (e.g., BPG image compression + LDPC channel coding), but also provides 'graceful degradation' with channel quality, making it ideal for IoT applications, where accurate channel estimation is often not possible. DeepJSCC also reduces the coding/decoding delay compared to conventional digital schemes more than 5 times on a CPU, and more than 10 times on a GPU.
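The core mechanics of such an analog scheme can be sketched in a few lines. The snippet below is a deliberately simplified stand-in for DeepJSCC: the learned encoder/decoder DNNs of [3] are replaced by an identity mapping with average-power normalization, so only the power constraint and the graceful degradation with SNR are illustrated; function names and parameters are assumptions, not the paper's implementation.

```python
import math
import random

def awgn_jscc(features, snr_db, seed=0):
    """Map features directly to channel symbols (identity mapping plus
    average-power normalization), send them over an AWGN channel with the
    given SNR, and linearly rescale at the receiver."""
    rng = random.Random(seed)
    n = len(features)
    power = sum(f * f for f in features) / n
    scale = 1.0 / math.sqrt(power)        # enforce unit average transmit power
    noise_std = 10 ** (-snr_db / 20)      # unit signal power -> sigma from SNR
    received = [f * scale + rng.gauss(0, noise_std) for f in features]
    return [r / scale for r in received]  # receiver rescaling

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

x = [math.sin(0.1 * i) for i in range(256)]
# Reconstruction error shrinks smoothly as the channel SNR improves:
# no cliff effect, unlike separate compression + channel coding.
errors = {snr: mse(x, awgn_jscc(x, snr)) for snr in (0, 10, 20)}
```

Because the mapping is analog, the distortion degrades gracefully with SNR instead of collapsing below a threshold, which is exactly the property that makes DeepJSCC-style schemes attractive for IoT links without accurate channel estimation.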
As opposed to conventional digital schemes, DeepJSCC can easily adapt to specific information source or channel statistics through training, e.g., landscape images transmitted from a drone or a satellite. This makes DeepJSCC especially attractive for edge inference as we do not have compression codes designed for feature vectors, whose statistics would change from application to application.
A practical edge inference problem is studied in [4], where the image of a person captured by a remote camera is to be identified within a database available at an edge server, called the reidentification (re-ID) problem. Here, the camera cannot make a local decision as it does not have access to the database. In [4], two approaches are proposed, both employing DNNs for remote inference: a task-oriented DNN-based compression scheme for digital transmission and a DNNbased analog JSCC approach,à la DeepJSCC. These schemes are compared in Fig. 3 in terms of top-1 identification accuracy when only 128 real symbols are transmitted over an additive white Gaussian noise (AWGN) channel. We observe that the analog approach, which maps the feature vectors directly to channel inputs (no explicit compression or channel coding), performs significantly better, achieving the baseline performance around a channel signal-to-noise-ratio (SNR) of approximately 8 dB. We highlight that the conventional scheme of transmitting the query images with the best possible quality (ignoring the learning task), and then applying the re-ID baseline on the reconstructed image is not included as it would require much higher SNR values to achieve a comparable performance. This result shows that separating communication from inference at the edge can be highly suboptimal. While joint design can offer significant performance gains, it brings about new challenges and requires novel coding and communication paradigms, including the extension of the proposed edge inference approach to time-varying and/or non-Gaussian channels, and to multi-antenna and multi-user networks.
In the inference stage, the challenge is to convey the most relevant information about the data samples to the decision maker to achieve the desired level of accuracy within the constraints of the edge network. The results above show that the channel characteristics must be taken into account during the training stage, rather than being abstracted out, and effectively, we learn how to communicate and infer jointly. In this section, we have assumed that the DNNs are trained centrally, and then deployed at the edge devices, assuming the availability of sufficient training data and an accurate model of the wireless communication channels. We focus on the training stage in the next section.
III. DISTRIBUTED TRAINING
Training is particularly challenging at the network edge due to the distributed nature of both the data and the processing power. Below, we will first address the scenario in which an edge device with its own dataset employs the computational resources of multiple edge servers to speed up training (see Fig. 4a). Later, we will consider the scenario when data is also distributed (see Fig. 4b).
In the training stage of a standard ML problem, the goal is to optimize the model parameters over a training dataset with respect to an application specific empirical loss function.
This optimization problem is typically solved by stochastic gradient descent (SGD), iteratively updating the parameter vector along the estimated gradient descent direction. This algorithm is highly parallelizable, allowing distributed and parallel implementation. When the dataset is large, distributed SGD across multiple edge servers can be utilized to reduce the training time. The dataset can be divided into non-overlapping subsets, each given to a different server. At each iteration of the gradient descent algorithm, the user broadcasts the current model parameters to all the servers. Each server computes a partial gradient based only on its local dataset, and returns the result to the master. The master waits to receive partial gradients from all the servers in order to aggregate them and obtain the full gradient. In this implementation, however, due to synchronised updates, the completion time of each iteration is constrained by the straggling server(s), where the straggling may be due to failing hardware, contention in the network, or even channel outages if the training is carried out at the wireless edge.

Straggling servers can be treated as 'erasures', and using ideas from coding theory, redundant computations can be introduced to efficiently compensate for them [5], [6]. This can help reduce the recovery threshold, the minimum number of responsive servers required to complete the computation task, e.g., computing a sufficiently accurate gradient estimate. However, this may require coding the data before offloading it to the servers [5], or coding the results of the computations at each server [6], and eventually decoding these responses at the user, all of which introduce additional complexity and delays. Despite the significant research efforts in recent years, optimal coding schemes remain elusive, and there is no comprehensive analysis of end-to-end latency that takes into account the communication, coding, and computing delays.
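The synchronous iteration described above can be sketched as follows. This is a toy illustration with an assumed scalar least-squares model and exponential server latencies (neither is from the paper): each server computes a partial gradient on its shard, and the per-iteration time is the maximum of the server delays, which is what straggler mitigation targets.

```python
import random

def gradient(w, data):
    """Gradient of the squared loss 0.5*(w*x - y)^2 summed over samples."""
    return sum((w * x - y) * x for x, y in data)

def distributed_sgd_step(w, data, num_servers, lr, delays):
    """One synchronous iteration: each server computes a partial gradient on
    its own shard; the master waits for all of them, so the iteration time
    is set by the slowest (straggling) server."""
    shards = [data[i::num_servers] for i in range(num_servers)]
    partials = [gradient(w, shard) for shard in shards]
    iteration_time = max(delays)          # synchronous barrier
    return w - lr * sum(partials), iteration_time

random.seed(0)
data = [(x, 2.0 * x) for x in [0.1 * i for i in range(20)]]  # samples of y = 2x
w, total_time = 0.0, 0.0
for _ in range(50):
    delays = [random.expovariate(1.0) for _ in range(4)]     # toy server latencies
    w, t = distributed_sgd_step(w, data, 4, 0.05, delays)
    total_time += t
# w converges to the true slope 2.0, while total_time accumulates the
# max-of-delays at every iteration -- the cost of the synchronous barrier.
```

Even in this tiny example, the wall-clock time is governed by the slowest of the four servers at every round, which motivates the coded-computing schemes discussed next.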
Moreover, most of the existing techniques suffer from two main drawbacks: the recovery threshold can be reduced by increasing the redundancy; yet, the servers may end up executing more computations than required due to an inaccurate prediction of the straggling behaviour, resulting in over-computation. Also, most of the existing solutions are designed for persistent stragglers, and partial computations carried out by stragglers are discarded, resulting in underutilization of the computational resources. To overcome these limitations, each server can be allowed to send multiple messages during each training iteration [7], each corresponding to partial computations. This approach will provide additional flexibility for straggler mitigation, resulting in a trade-off between the amount of communication and computation. We highlight that the real performance indicator for these schemes is the average completion time of training, which requires the joint design of the underlying communication protocol and the coded computing scheme employed.
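The erasure-coding idea behind these schemes can be made concrete with the classic three-worker gradient-coding example in the spirit of [6]: three data partitions, each worker holds two of them and sends one linear combination, and the full gradient sum is recoverable from any two workers, i.e., one persistent straggler is tolerated. The encoding matrix and decoding coefficients below are one valid instance (scalar gradients for brevity), not the only construction.

```python
# Encoding matrix B: worker i transmits the inner product B[i] . (g1, g2, g3).
# Each row has exactly one zero, so each worker touches only two partitions.
B = [[0.5, 1.0, 0.0],   # worker 0 holds partitions 1 and 2
     [0.0, 1.0, -1.0],  # worker 1 holds partitions 2 and 3
     [0.5, 0.0, 1.0]]   # worker 2 holds partitions 1 and 3

# Decoding coefficients a for each surviving pair, chosen so that
# sum_i a_i * B[i] = (1, 1, 1), i.e. the full gradient g1 + g2 + g3.
DECODE = {(0, 1): (2.0, -1.0), (0, 2): (1.0, 1.0), (1, 2): (1.0, 2.0)}

def coded_sum(partial_gradients, survivors):
    """Recover g1 + g2 + g3 from the coded messages of any two workers."""
    messages = [sum(b * g for b, g in zip(row, partial_gradients)) for row in B]
    a = DECODE[tuple(sorted(survivors))]
    return sum(ai * messages[s] for ai, s in zip(a, sorted(survivors)))
```

With hypothetical partial gradients `g = [1.0, 2.0, 3.0]`, every choice of two survivors returns the full sum 6.0; the price is that each worker computes twice as many partial gradients as in the uncoded scheme, which is the redundancy/recovery-threshold trade-off discussed above.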
Private and secure distributed computation. Distributed training also introduces privacy and security challenges. Malicious servers can inject false data, while honest but curious servers can exploit user data for purposes beyond computation. Coded computing, in particular polynomial codes, can provide security and privacy guarantees in addition to straggler mitigation by delivering coded data samples to the computing servers [8], but the optimal trade-off between the required communication bandwidth between the user and the servers, and the privacy/ security guarantees (in terms of the number of colliding servers) remains an open challenge.
IV. FEDERATED EDGE LEARNING (FEEL)
When multiple edge devices with their own local datasets collaborate to train a joint model, the devices may not want to offload their data due to privacy concerns. Yet, unlike in distributed training, data samples at different devices cannot be coded to provide privacy. Federated learning (FL) has been introduced by Google to enable collaborative training without sharing local datasets [9], typically orchestrated by a parameter server (PS) (see Fig. 4b). In FL, the PS broadcasts a global model to the devices, and each device runs SGD locally using the current global model.

In FEEL, we assume that training takes place at the network edge across wireless devices within physical proximity; therefore, communication from the edge devices to the PS is limited by power and bandwidth constraints, interference among devices, and time-varying channel fading. When the model size is relatively small compared to the size of the dataset, exchanging model parameters rather than data provides another advantage of FEEL. Still, the allocation and optimization of channel resources among devices is essential to improve the learning performance. On the other hand, conventional solutions that maximize throughput do not necessarily translate into better accuracy or faster convergence in FEEL [11], [12]. Moreover, conventional measures based on the number of iterations may not be relevant in FEEL, as the wall-clock time depends hugely on the communication protocol [12]. Optimizing the communication protocols for FEEL poses many interesting research challenges; however, most current approaches, motivated by conventional communication systems, consider orthogonal resource allocation with the aim of minimizing interference.
Interference can be a bliss. In the uplink transmission from the devices, the PS is interested only in the average of the local models. Hence, rather than transmitting individual updates in an orthogonal fashion, the signal superposition property of the wireless medium can be exploited to directly convey the sum of the local parameters through over-the-air computation [13], [14]. This is achieved by all the devices synchronously transmitting their model updates in an uncoded 'analog' fashion, which are superposed by the channel.

Uplink transmission of local model updates in FEEL is a distributed computation problem, for which there is no separation theorem even when the sources are independent. Model updates at different devices are highly compressible, and are often correlated. Hence, when model updates are conveyed through digital communication, model compression can be used to adapt to the limited channel resources available to each device [10]. In analog transmission, however, even though all the devices transmit over the same channel resources, the required bandwidth can be fairly large. Some state-of-the-art models include tens of millions of parameters, whereas one LTE frame of 5 MHz bandwidth and 10 ms duration can carry only 6K complex symbols. In [13], sparsification of the model updates is proposed, followed by a linear projection with a pseudo-random Gaussian matrix. This novel approach serves as an analog compression technique, and reliable reconstruction can be achieved by approximate message passing at the PS. In Fig. 5, we compare digital and analog schemes for the MNIST classification task over a Gaussian multiple access channel. In the IID case, local datasets are chosen randomly from the whole training dataset;
whereas in the non-IID case each device has samples from only two classes. We see that over-the-air computation provides significant gains in both the final accuracy and the convergence speed. Over-the-air computation allows scheduling more devices within the same time constraint, which provides variance reduction in the updates, and better robustness against the channel noise [13]. This is yet another example where a joint design of the communication and learning algorithms is essential.
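One round of over-the-air aggregation can be sketched as below. This is a toy real-valued model of the Gaussian multiple access channel (no fading, perfect synchronization, unit channel gains, all names illustrative): all devices transmit their update vectors simultaneously, the receiver observes their noisy superposition, and dividing by the number of devices yields the average in a single channel use per coordinate, independent of how many devices participate.

```python
import random

def over_the_air_round(local_updates, noise_std, seed=0):
    """Simulate one round of analog over-the-air aggregation: the channel
    output is the coordinate-wise sum of all transmitted updates plus
    Gaussian noise, and the PS estimates the average by dividing by the
    number of devices."""
    rng = random.Random(seed)
    dim = len(local_updates[0])
    superposed = [sum(u[i] for u in local_updates) + rng.gauss(0, noise_std)
                  for i in range(dim)]
    return [s / len(local_updates) for s in superposed]

updates = [[1.0, 2.0], [3.0, 2.0], [2.0, 2.0]]    # hypothetical local updates
est = over_the_air_round(updates, noise_std=0.0)  # noiseless: exact average [2.0, 2.0]
```

Note that the effective noise on the average shrinks as more devices are scheduled (the signal sum grows while the noise does not), which is the variance-reduction effect observed in Fig. 5.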
We remark that over-the-air computation assumes symbol-level synchronization among the participating devices. In practice, this can be achieved through a synchronization channel, e.g., timing advance in LTE systems, resulting in a trade-off between the overall performance and the resources dedicated to synchronization, which is an interesting research direction to fully evaluate the potential benefits of over-the-air computation for FEEL.
Privacy in FEEL. Although FL has been introduced as a privacy-aware solution for collaborative learning, it is known to be vulnerable to membership as well as reconstruction attacks solely using the gradient information [15]. Although differential privacy can be achieved by introducing noise into the gradients transmitted by the devices, this typically requires adding a significant amount of noise, making the model hard to converge.
Figure 2: DNNs at the network edge.
Figure 3: Accuracy vs. channel SNR for remote person re-ID over an AWGN channel.
Figure 4: Distributed training at the edge. (a) Distributed training with centralized data. (b) FEEL with distributed data.
Device updates are aggregated at the PS, and used to update the global model. Communication, again, is a major challenge due to the bandwidth and power limitations of devices. To reduce the communication load, random subsets of devices are selected at each round, and local models are communicated after several local SGD updates. Another approach is to reduce the size of the messages communicated between the devices and the PS through compression. This is yet another research challenge where the extensive knowledge in information and coding theory for data compression can make an impact. While initial works have focused on rather simple scalar quantization and sparsification techniques[10], more advanced vector quantization and temporal coding tools exploiting correlations across gradient dimensions or multiple iterations can further reduce the communication load. But, the complexity of such tools must be carefully balanced with the potential gains.
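A minimal example of the compression techniques mentioned above is top-k sparsification: keep only the k largest-magnitude coordinates of a model update and transmit them as (index, value) pairs. This is a generic sketch of the idea, not the specific scheme of [10]; all names are illustrative.

```python
def sparsify_top_k(update, k):
    """Keep only the k largest-magnitude entries of a model update and
    return them as (index, value) pairs sorted by index; the PS scatters
    the pairs back into a zero vector."""
    ranked = sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)
    return [(i, update[i]) for i in sorted(ranked[:k])]

def densify(pairs, dim):
    """Reconstruct the (lossy) dense update at the receiver."""
    out = [0.0] * dim
    for i, v in pairs:
        out[i] = v
    return out

u = [0.02, -1.3, 0.0, 0.7, -0.1, 2.4]
pairs = sparsify_top_k(u, 2)           # [(1, -1.3), (5, 2.4)]
recovered = densify(pairs, len(u))     # [0.0, -1.3, 0.0, 0.0, 0.0, 2.4]
```

In practice the discarded coordinates are accumulated locally in an error-feedback buffer so the information is not lost, and the surviving values can additionally be scalar-quantized; both refinements trade accuracy per round against uplink bandwidth.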
Figure 5: Test accuracy of FEEL for MNIST classification with IID and non-IID data distributions (federated averaging, analog over-the-air averaging, and digital transmission with quantization).
On the other hand, in FEEL there is inherent noise and interference in the channel, which can be exploited to increase the security and privacy of the system through purely physical layer techniques. This opens up a new type of physical layer security/privacy framework for FEEL applications.

V. DISCUSSION AND CONCLUSIONS

Communication will play an essential role in employing ML tools at the network edge. Current approaches to communication-efficient distributed ML ignore the physical layer, and assume error- and delay-free ideal links. This approach presumes a communication protocol, designed independently of the learning task, that takes care of channel imperfections. In this paper, we have argued, through references to recent theoretical results and practical implementations, that such a separate architecture can be highly suboptimal, and that a novel joint communication and learning framework is essential in approaching the fundamental limits of distributed learning. This calls for a new research paradigm integrating coding and communication theoretic ideas into the design of ML algorithms at the network edge. We have shown that the benefits of such a joint design paradigm can be significant for edge inference, both to boost the final performance and to meet stringent delay constraints. Training is more computation intensive than inference; hence, computation and communication delays in training must be optimized jointly. Furthermore, the heterogeneity of edge servers may result in additional bottlenecks due to stragglers. Coding can be used both to reduce the computation delays and to mitigate stragglers.

Moreover, each iteration of the training process can be considered as a distributed computation problem, which renders throughput-maximizing conventional communication protocols obsolete, and requires the design of novel communication protocols and coding schemes. Since training is carried out over many (imperfect) iterations, we can relax some of the constraints of traditional coding and communication schemes (reliability, synchronization, power control, etc.), resulting in novel communication problems. Finally, taking into account the physical layer channel characteristics can allow exploiting coding and communication theoretic tools to provide fundamental information theoretic privacy and security guarantees for both inference and training at the edge. Each of these perspectives and challenges opens up new research problems in this exciting area exploring the connections between communication and learning.
Distributed hypothesis testing over discrete memoryless channels. S Sreekumar, D Gündüz, IEEE Transactions on Information Theory. 664S. Sreekumar and D. Gündüz, "Distributed hypothesis testing over discrete memoryless channels," IEEE Transactions on Information Theory, vol. 66, no. 4, pp. 2044-2066, 2020.
JointDNN: An efficient training and inference engine for intelligent mobile cloud computing services. A Eshratifar, M Abrishami, M Pedram, IEEE Trans. on Mobile Computing. A. Eshratifar, M. Abrishami, and M. Pedram, "JointDNN: An efficient training and inference engine for intelligent mobile cloud computing services," IEEE Trans. on Mobile Computing, 2019.
Deep joint source-channel coding for wireless image transmission. E Bourtsoulatze, D Kurka, D Gündüz, IEEE Trans. on Cogn. Comms. and Networking. 53E. Bourtsoulatze, D. Kurka, and D. Gündüz, "Deep joint source-channel coding for wireless image transmission," IEEE Trans. on Cogn. Comms. and Networking, vol. 5, no. 3, pp. 567-579, Sep. 2019.
Deep joint source-channel coding for wireless image retrieval. M Jankowski, D Gündüz, K Mikolajczyk, IEEE ICASSP, 2020. M. Jankowski, D. Gündüz, and K. Mikolajczyk, "Deep joint source-channel coding for wireless image retrieval," in IEEE ICASSP, 2020, pp. 5070-5074.
Speeding up distributed machine learning using codes. K Lee, M Lam, R Pedarsani, D Papailiopoulos, K Ramchandran, IEEE Trans. Inf. Theory. 643K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, "Speeding up distributed machine learning using codes," IEEE Trans. Inf. Theory, vol. 64, no. 3, pp. 1514-1529, Mar. 2018.
Gradient coding: Avoiding stragglers in distributed learning. R Tandon, Q Lei, A G Dimakis, N Karampatziakis, Int'l Conf. on Machine Learning. R. Tandon, Q. Lei, A.G. Dimakis, and N. Karampatziakis, "Gradient coding: Avoiding stragglers in distributed learning," in Int'l Conf. on Machine Learning, Aug. 2017, pp. 3368-3376.
Straggler-aware distributed learning: Communication-computation latency tradeoff. E Ozfatura, S Ulukus, D Gündüz, Entropy: Special Issue on The Interplay Between Storage, Computing, and Communications from An Information-Theoretic Perspective. 22E. Ozfatura, S. Ulukus, and D. Gündüz, "Straggler-aware distributed learning: Communication-computation latency trade-off," Entropy: Special Issue on The Interplay Between Storage, Computing, and Communications from An Information-Theoretic Perspective, vol. 22, no. 5, May 2020.
Lagrange coded computing: Optimal design for resiliency, security, and privacy. Q Yu, S Li, N Raviv, S Kalan, M Soltanolkotabi, S Avestimehr, Proc. of Machine Learning Research. of Machine Learning ResearchQ. Yu, S. Li, N. Raviv, S. Kalan, M. Soltanolkotabi, and S. Avestimehr, "Lagrange coded computing: Optimal design for resiliency, security, and privacy," in Proc. of Machine Learning Research, Apr. 2019.
Communication-efficient learning of deep networks from decentralized data. B Mcmahan, E Moore, D Ramage, S Hampson, B A Arcas, Proc. Int'l Conf. on Artificial Intelligence and Stat. Int'l Conf. on Artificial Intelligence and StatB. McMahan, E. Moore, D. Ramage, S. Hampson, and B.A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proc. Int'l Conf. on Artificial Intelligence and Stat., Apr. 2017, pp. 1273-1282.
QSGD: Communication-efficient SGD via gradient quantization and encoding. D Alistarh, D Grubic, J Li, R Tomioka, M Vojnovic, Advances in Neural Inform. Proc. Systems. D. Alistarh, D. Grubic, J. Li, R. Tomioka, and M. Vojnovic, "QSGD: Communication-efficient SGD via gradient quantization and encoding," in Advances in Neural Inform. Proc. Systems, 2017.
Scheduling policies for federated learning in wireless networks. H H Yang, Z Liu, T Quek, V Poor, IEEE Trans. on Comms. 681H.H. Yang, Z. Liu, T. Quek, and V. Poor, "Scheduling policies for federated learning in wireless networks," IEEE Trans. on Comms., vol. 68, no. 1, pp. 317-333, Jan 2020.
Federated learning over wireless networks: Optimization model design and analysis. N H Tran, W Bao, A Zomaya, M N H Nguyen, C S Hong, IEEE INFOCOM Conference on Computer Communications. N.H. Tran, W. Bao, A. Zomaya, M.N.H. Nguyen, and C.S. Hong, "Federated learning over wireless networks: Optimization model design and analysis," in IEEE INFOCOM Conference on Computer Communications, 2019, pp. 1387-1395.
Federated learning over wireless fading channels. M Mohammadi Amiri, D Gündüz, IEEE Transactions on Wireless Communications. 195M. Mohammadi Amiri and D. Gündüz, "Federated learning over wireless fading channels," IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3546-3557, 2020.
Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air. IEEE Transactions on Signal Processing. 68--, "Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air," IEEE Transactions on Signal Processing, vol. 68, pp. 2155-2169, 2020.
Deep leakage from gradients. L Zhu, Z Liu, S Han, Advances in Neural Information Processing Systems. 32L. Zhu, Z. Liu, and S. Han, "Deep leakage from gradients," in Advances in Neural Information Processing Systems 32, 2019, pp. 14 774-14 784.
| [] |
[
"Electron-hole coherence in core-shell nanowires with partial proximity induced superconductivity"
] | [
"Kristjan Ottar Klausen \nDepartment of Engineering\nReykjavik University\nMenntavegur 1, IS-101 Reykjavik, Iceland\n",
"Anna Sitek \nDepartment of Theoretical Physics\nWroclaw University of Science and Technology\nWybrzeże Wyspiańskiego 27, 50-370 Wroclaw, Poland\n",
"Sigurdur I Erlingsson \nDepartment of Engineering\nReykjavik University\nMenntavegur 1, IS-101 Reykjavik, Iceland\n",
"Andrei Manolescu \nDepartment of Engineering\nReykjavik University\nMenntavegur 1, IS-101 Reykjavik, Iceland\n"
] | [
"Department of Engineering\nReykjavik University\nMenntavegur 1, IS-101 Reykjavik, Iceland",
"Department of Theoretical Physics\nWroclaw University of Science and Technology\nWybrzeże Wyspiańskiego 27, 50-370 Wroclaw, Poland",
"Department of Engineering\nReykjavik University\nMenntavegur 1, IS-101 Reykjavik, Iceland",
"Department of Engineering\nReykjavik University\nMenntavegur 1, IS-101 Reykjavik, Iceland"
] | [] | By solving the Bogoliubov-de Gennes Hamiltonian, the electron-hole coherence within a partially proximitized n-doped semiconductor shell of a core-shell nanowire heterostructure is investigated numerically and compared with the Andreev reflection interpretation of proximity induced superconductivity. Partial proximitization is considered to quantify the effects of a reduced coherence length. Three cases of partial proximitization of the shell are explored: radial, angular and longitudinal. For the radial case, it is found that the boundary conditions impose localization probability maxima in the center of the shell in spite of off-center radial proximitization. The induced superconductivity gap is calculated as a function of the ratio between the semiconducting and superconducting parts and the result is found to be independent of the shell thickness. In the angular case, the lowest energy state of a hexagonal wire with a single proximitized side is found to display the essence of Andreev reflection, only by lengthwise summation of the localization probability. In the longitudinal case, a clear correspondence with Andreev reflection is seen in the localization probability as a function of length along a half proximitized wire. | 10.1103/physrevb.107.035423 | [
"https://export.arxiv.org/pdf/2206.04830v2.pdf"
] | 249,605,822 | 2206.04830 | 45257a83c8ce01e8ea5b7934d5ded9dc129d8880 |
Electron-hole coherence in core-shell nanowires with partial proximity induced superconductivity
5 Jul 2022
Kristjan Ottar Klausen
Department of Engineering
Reykjavik University
Menntavegur 1, IS-101 Reykjavik, Iceland
Anna Sitek
Department of Theoretical Physics
Wroclaw University of Science and Technology
Wybrzeże Wyspiańskiego 27, 50-370 Wroclaw, Poland
Sigurdur I Erlingsson
Department of Engineering
Reykjavik University
Menntavegur 1, IS-101 Reykjavik, Iceland
Andrei Manolescu
Department of Engineering
Reykjavik University
Menntavegur 1, IS-101 Reykjavik, Iceland
By solving the Bogoliubov-de Gennes Hamiltonian, the electron-hole coherence within a partially proximitized n-doped semiconductor shell of a core-shell nanowire heterostructure is investigated numerically and compared with the Andreev reflection interpretation of proximity induced superconductivity. Partial proximitization is considered to quantify the effects of a reduced coherence length. Three cases of partial proximitization of the shell are explored: radial, angular and longitudinal. For the radial case, it is found that the boundary conditions impose localization probability maxima in the center of the shell in spite of off-center radial proximitization. The induced superconductivity gap is calculated as a function of the ratio between the semiconducting and superconducting parts and the result is found to be independent of the shell thickness. In the angular case, the lowest energy state of a hexagonal wire with a single proximitized side is found to display the essence of Andreev reflection, only by lengthwise summation of the localization probability. In the longitudinal case, a clear correspondence with Andreev reflection is seen in the localization probability as a function of length along a half proximitized wire.
I. INTRODUCTION
Semiconductor nanowires with proximity induced superconductivity have emerged as key elements in various platforms proposed to realize qubits and other emerging technologies at the quantum scale 1-3 . The proximity effect is generally hypothesized to stem from electron-hole coherence, brought on by Andreev reflection at the superconductor interface 4 . The superconducting proximity effect has resurfaced time after time in the past decades as a hot research topic, owing its relevance to the leading research questions of each decade [5][6][7][8][9][10][11][12] , most recently the search for Majorana zero modes in nanostructures [13][14][15] . These zero modes are expected to be hosted in synthetic topological superconductors, where p-wave superconductivity can be engineered using spin-orbit coupling in conjunction with Zeeman splitting and proximitized superconductivity in semiconductors [16][17][18] .
Core-shell nanowires are radial heterojunctions consisting of a core wrapped by one or more layers of different material. Due to their crystallographic structure they usually have polygonal cross sections [19][20][21][22][23][24][25][26][27][28][29][30][31][32][33] , and thus the shells become prismatic nanotubes, but circular systems have also been obtained 34 . The sharp corners of the cross section induce non-uniform electron localization along the circumference of the tube; in particular, low-energy electrons accumulate in the vicinity of sharp edges, while carriers of higher energy are shifted to the facets [35][36][37][38] . If the shell is very thin, the low-energy electrons are depleted from the facets and the shell becomes a multiple-channel system consisting of well-separated 1D electron channels situated along the edges. Due to their unique localization and a variety of other interesting properties, core-shell nanowires have been extensively investigated in the last two decades 39,40 , showing promise in multiple applications such as lasers 41 , energy harvesting devices 42,43 and photovoltaics 44 . By n-doping, the chemical potential can be moved into the conduction band such that electrons become the only charge carriers and the material behaves effectively as a metal with the effective mass of the host semiconductor. Earlier investigations have indicated that, due to the 1D electron channels along the sharp edges of prismatic tubes, multiple Majorana zero modes can be hosted in a single core-shell nanowire 45,46 . However, only if the electron-hole coherence length is larger than the whole structure can the shell be considered fully proximitized and the electron-hole coherence be expected to be uniform.
In this paper the electron-hole coherence is investigated in an n-doped semiconductor core-shell nanowire with proximity induced superconductivity. Electron-hole coherence of the lowest energy states is compared with the Andreev reflection picture of proximitized superconductivity in the radial, angular and longitudinal interfaces arising within a single nanowire.
II. ELECTRON-HOLE COHERENCE AND THE PROXIMITY EFFECT
One of the earlier theoretical descriptions of the spatial dependence of the order parameter in the superconducting proximity effect was given by McMillan in 1968 47 , using a Green's function approach based on the Gor'kov equations 48 to describe a normal metal-superconductor (NS) junction. In this method, the BCS potential for a quasi-1D problem is written in terms of the pairing interaction V (x) and the order parameter F (x),
∆(x) = V (x)F (x)(1)
where
F(x) = ⟨Ψ^†(x) Ψ^†(x)⟩ .    (2)
McMillan called the problem of the NS junction possibly the simplest one in space-dependent superconductivity and proposed, in his own words, "a very nearly complete solution" of the problem for the case of infinite length of both metals, evaluating

F(x) = (1/π) ∫_0^{E_c0} Im[G_12(E, x, x)] dE ,    (3)
where G 12 is the upper off-diagonal component of the Green's function in the Nambu spinor formalism,
Ψ(x) = ( ψ(x), ψ^†(x) )^T .    (4)
F (x) can also be seen as the anomalous Green function 49 . Another fundamental reference in the field is a book chapter written by Deutscher and de Gennes 50 , published in 1969. There, the distinction between a clean and a dirty junction is made, and the following simplified results are presented for the spatial dependence of the order parameter. For a clean metal, where the mean free path l_N is large compared to the coherence length, l_N > ξ_N, the order parameter has the asymptotic form
F(x) = φ(x) exp( −2π k_B T |x| / (ℏ v_F) ) ,    (5)
where φ(x) is some slowly varying function. For the limiting case of the temperature being close to zero a result by Falk from 1963 51 is cited,
F(x) ∼ 1/|x| .    (6)
Falk's paper 51 takes a similar Green's-function approach to the Gor'kov equations as McMillan 47 and precedes McMillan's work by five years.
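The clean-limit decay in Eq. (5) can be checked numerically: the exponent corresponds to a normal-metal coherence length ξ_N = ℏ v_F /(2π k_B T). The following minimal sketch evaluates this length and the resulting envelope; the Fermi velocity and temperature used are illustrative values, not taken from the text.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def xi_clean(v_fermi: float, temperature: float) -> float:
    """Clean-limit normal-metal coherence length xi_N = hbar*v_F/(2*pi*kB*T)."""
    return HBAR * v_fermi / (2.0 * math.pi * KB * temperature)

def order_parameter_envelope(x: float, v_fermi: float, temperature: float) -> float:
    """Exponential envelope exp(-|x|/xi_N) of F(x) in the clean limit, Eq. (5)."""
    return math.exp(-abs(x) / xi_clean(v_fermi, temperature))

# Example: v_F = 1e6 m/s and T = 1 K give xi_N on the micron scale,
# and the envelope has dropped to 1/e at |x| = xi_N.
xi = xi_clean(1e6, 1.0)
print(xi)                                     # ~1.2e-6 m
print(order_parameter_envelope(xi, 1e6, 1.0))  # ~0.368
```

At lower temperatures ξ_N grows as 1/T, consistent with the crossover to the power-law behavior of Eq. (6) as T → 0.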
At present, the effect of Andreev reflection is considered to be the mechanism behind the superconducting proximity effect 52,53 . Described by Andreev in 1964 to explain the thermal resistance of the intermediate state in superconductors, Andreev reflection refers to the conjugate retro-reflection of electrons and holes at a metal-superconductor boundary 54 , Fig. 1. Retro-reflection means that an incoming electron from the normal-metal side is reflected such that it traces back the incident trajectory. In order for an incident electron at the normal-metal side with energy below the gap parameter ∆ to be transferred across the boundary, the formation of a Cooper pair in the superconductor requires another electron with equal and opposite momentum, which can be seen as a reflected hole. Blonder, Tinkham and Klapwijk (BTK) refined the scattering approach to the problem using the Bogoliubov equations and further computed I-V curves along with transmission and reflection coefficients for all cases of energy relative to the superconducting gap, including a delta-barrier at the interface 55 . This work has since become seminal for Andreev reflection and is fundamental to most tunneling spectroscopy experiments on superconducting junctions 52 . The scattering formalism has the advantage of being readily interpreted and familiar from the standard educational problem in quantum mechanics of scattering from a potential barrier. Klapwijk 7 later noted that the following self-consistency equation was ignored in the original BTK approach, due to the simplified geometry of the problem. The self-consistency equation can be written as

∆(r) = V(r)F(r) = V(r) Σ_E u_E(r) v_E^†(r) [1 − 2f(E)] ,    (7)

where u(r) and v(r) are the electron and hole components of the quasiparticle wavefunction, respectively, and f(E) is the Fermi distribution function,

f(E) = [1 + exp((E − µ)/(k_B T))]^{−1} .    (8)
Even if the pairing interaction V (r) is zero in the normal metal, F (r) can be non-zero, stemming from electron-hole coherence, which can be interpreted as superconductivity leaking into the normal metal 7 . The self-consistency equation determines the variation of ∆(r) at the interface, but the general features of Andreev reflection are independent of it 56 . Self-consistency has been shown to be of great importance for interfaces of d-wave superconductors 57 . Considerable work has been done in the past decade on the many subtleties of the superconducting gap parameter in hybrid semiconductor-superconductor systems in relation to the quest for experimental realization of Majorana zero modes 58-61 .
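The thermal weight 1 − 2f(E) entering the self-consistency sum of Eq. (7) reduces to tanh((E − µ)/(2k_B T)), which makes explicit how the sum is dominated by states near the chemical potential at low temperature. A quick numerical check of this identity (a sketch in dimensionless energy units, not part of the paper's own code):

```python
import math

def fermi(e: float, mu: float, kT: float) -> float:
    """Fermi-Dirac distribution f(E) of Eq. (8)."""
    return 1.0 / (1.0 + math.exp((e - mu) / kT))

def thermal_factor(e: float, mu: float, kT: float) -> float:
    """The weight 1 - 2 f(E) appearing in the self-consistency sum, Eq. (7)."""
    return 1.0 - 2.0 * fermi(e, mu, kT)

# Identity check: 1 - 2 f(E) = tanh((E - mu) / (2 kT)) for arbitrary E.
for e in (-0.3, 0.0, 0.1, 0.7):
    assert abs(thermal_factor(e, 0.05, 0.02) - math.tanh((e - 0.05) / 0.04)) < 1e-12
```

At T → 0 the factor approaches sign(E − µ), so only the quasiparticle amplitudes u and v, not the occupations, control the pair amplitude F(r).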
III. MODEL AND METHODS
A three-dimensional finite core-shell nanowire with a proximitized shell, in an external magnetic field along the wire, is modeled using cylindrical coordinates, where the z-axis is defined along the wire growth direction. The Bogoliubov-de Gennes (BdG) Hamiltonian 62,63 is solved by numerical diagonalization. The matrix elements are written in the composite basis |q⟩ consisting of the transverse modes |a⟩, longitudinal modes |n⟩, spin |σ⟩ and particle-hole eigenstates |η⟩, such that
|q⟩ = |ηanσ⟩ ,    (9)
where |anσ⟩ are the eigenstates of the Hamiltonian for the wire without proximity induced superconductivity,
H w = H t + H l + H Z .(10)
The transverse and longitudinal components of the Hamiltonian are written as
H_t + H_l = (p_φ + eA_φ)^2 / (2m_e) − (ℏ^2 / (2m_e r)) ∂/∂r ( r ∂/∂r ) + p_z^2 / (2m_e) ,    (11)
where A φ = 1 2 Br is the vector potential in the symmetric gauge. The transverse eigenstates are expanded in terms of the lattice sites
|a⟩ = Σ_κ c_κ^a |r_κ φ_κ⟩ ,    (12)
whilst the longitudinal ones are written in a sine basis,
|n⟩ = (2/L_z)^{1/2} sin[ nπ ( z/L_z + 1/2 ) ] .    (13)
The length of the wire is L_z; the origin is defined at the nanowire center, so that the wire spans the interval [−L_z/2, L_z/2] along the z axis. The external magnetic field B gives rise to the Zeeman term
H Z = −g * µ B σB,(14)
where g^* is the effective Landé g-factor and µ_B the Bohr magneton. The matrix elements of the BdG Hamiltonian are then obtained as follows: for η = η′,
⟨anση|H_BdG|a′n′σ′η′⟩ = η[ Re⟨anσ|H_w|a′n′σ′⟩ + iη Im⟨anσ|H_w|a′n′σ′⟩ − µ δ_(anσ)(a′n′σ′) ] ,    (15)
and for η = η ′ ,
⟨anση|H_BdG|a′n′σ′η′⟩ = ησ δ_σ,−σ′ δ_aa′ δ_nn′ ∆_s .    (16)
Partial proximitization is implemented in the superconducting gap parameter ∆_s(r, φ, z) by step functions of position. The chemical potential µ is set to correspond to an n-doped semiconductor, such that electrons are the main carriers of the system. The numerical simulations were performed for single-shell nanowires with a cross-section diameter of 100 nm and a shell thickness of d = 10 nm; the length was set to 10 µm in Sections IV and V, with a shorter wire of 1 µm and an infinite one explored in Sections VI and IV, respectively.
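The construction above — a single-particle Hamiltonian doubled into electron-hole space with a step-function ∆ — can be illustrated with a much smaller toy model. The sketch below builds the BdG matrix for a 1D tight-binding chain (a stand-in for one shell channel; all parameter values are illustrative, not those of the paper) with pairing on half of the sites, and diagonalizes it exactly:

```python
import numpy as np

def bdg_spectrum(n_sites=120, t=1.0, mu=-1.5, delta_s=0.3, sc_fraction=0.5):
    """Toy 1D analogue of a partially proximitized shell channel: a real
    tight-binding chain in electron-hole space, with the pairing switched on
    as a step function over the first sc_fraction of the sites."""
    h = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        h[i, i + 1] = h[i + 1, i] = -t      # nearest-neighbour hopping
    np.fill_diagonal(h, -mu)                # on-site energy measured from mu
    delta = np.zeros((n_sites, n_sites))
    n_sc = int(sc_fraction * n_sites)
    np.fill_diagonal(delta[:n_sc, :n_sc], delta_s)  # step-function Delta(z)
    # Bogoliubov-de Gennes block structure: [[h, Delta], [Delta^T, -h^T]]
    h_bdg = np.block([[h, delta], [delta.T, -h.T]])
    return np.linalg.eigvalsh(h_bdg)

energies = bdg_spectrum()
# Lowest positive quasiparticle energy: the "induced gap" of the hybrid chain,
# pushed below delta_s by the subgap (Andreev-like) states of the normal half.
print(np.min(np.abs(energies)))
```

The resulting spectrum is symmetric about zero energy, and its lowest positive eigenvalue plays the role of the induced gap ∆_i discussed in Sections IV and V; the full calculation differs only in the basis of Eqs. (9)-(16) and the dimensionality.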
IV. PARTIAL RADIAL PROXIMITIZATION
A core-shell nanowire fully coated with a superconducting layer acquires a radially symmetric proximitization of the shell. Full-shell nanowire systems allow for additional control of the superconducting energy gap due to the Little-Parks effect [64][65][66] . Partial radial proximitization of the shell would occur if the effective superconducting coherence length were suppressed in the sample 67 . A cylindrical shell is considered, to isolate the effect of a partial radial proximitization, Fig. 2(a). As the shell thickness d of the studied systems is much smaller than the typical coherence length, this is arguably a rare instance for clean interfaces. Accordingly, the results are found to be independent of the shell thickness for the given diameter. Partial proximitization of a semiconductor wire with a superconductor having a gap ∆_s = 0.5 meV results in an induced gap of ∆_i = 0.15 meV for the whole system, Figs. 2(b,c). The wavefunction acquires the angular symmetry of the shell; the amplitude peak, however, is centralized in the shell, irrespective of the radial asymmetry of proximitization. This follows from the boundary conditions, which require the wavefunction to vanish at both the inner and outer boundary of the shell in the continuous limit. Note that, according to Eq. (13), the probability amplitude oscillates along the wire length. The oscillation wavelength is determined by the chemical potential, Fig. 2(d), as a higher energy level increases the frequency. Figs. 2(e,f) show the longitudinal summation of probability amplitudes for the first five positive and negative energy states. In this case there is no straightforward correlation to retro-reflection of a hole component at the semiconducting-superconducting interface, since the boundary conditions force the induced hole component to be equally localized over both the proximitized and non-proximitized parts of the wire shell.
In Fig. 3 the induced superconducting gap is shown as a function of the radius of proximitization and of the area ratio between the non-proximitized and proximitized parts of the shell. The results are found to be independent of the shell thickness, for the given diameter of the wire. The superconducting gap parameter of the proximitized part is set to ∆_s = 0.5 meV. The induced superconducting gap of the fully proximitized system is lowered by 10% by the Little-Parks effect, due to an applied external magnetic field, |B| = 65.8 mT, included in the simulation to lift spin degeneracy.
V. PARTIAL ANGULAR PROXIMITIZATION
Systems where nanowires are proximitized by fabrication of the wire on top of a superconducting slab are common experimental platforms for Majorana physics [67][68][69] . A hexagonal core-shell structure is considered to model such a system where the effective coherence length is smaller than the diameter of the nanowire, such that only a single side can be considered fully proximitized, Fig. 4(a). In the fully proximitized part, the gap parameter is set at ∆_s = 1 meV and the induced gap obtained is ∆_i = 0.05 meV, Figs. 4(b,c). For the first excited positive-energy quasiparticle state, in the case of the chemical potential including only the lowest energy band, Fig. 4(d), the Andreev picture of the proximity effect is uncovered. However, this holds only for the lengthwise summation of localization probability, such that the total localization probability is projected onto the wire cross-section, Figs. 4(e,f). The fully proximitized part of the semiconductor shell has a hole component localized within it, by definition of the BdG quasiparticle spectrum. Reminiscent of Andreev reflection, the hole component spreads out to the normal-conducting part of the shell, Fig. 4(e). The electron amplitude is lowered within the superconducting shell, Fig. 4(f), in accordance with the view that the superconductor incorporates an electron to form a Cooper pair and reflects a hole in the process 55 . The electron-hole coherence results in a nearly uniformly spread-out BdG localization probability over the shell, with amplitude peaks in the corners. The first negative-energy state has the opposite electron-hole localization probability from Figs. 4(e,f), as expected from the electron-hole symmetry of the system. Along the length of the wire, the wavefunction localization probability of each state oscillates, Sect. VI, and the symmetry of Figs. 4(e,f) can be inverted at specific sites.
This also happens for the adjacent higher-energy states, in which the particle-hole coherence is inverted, since for a given energy value of the BdG spectrum slightly above ∆ there are four states in each band, two electron-dominant and two hole-dominant.
VI. PARTIAL LONGITUDINAL PROXIMITIZATION
Another possibility of partial proximitization is partial covering of a nanowire longitudinally with a superconductor 70,71 . A half-proximitized wire is considered, such that the superconducting gap is uniform in the whole shell up to half the length of the wire, with ∆_S = 0.5 meV. Fig. 5 shows the electron-hole coherence at a single corner site, for the case of no external magnetic field, of a long hexagonal nanowire with L = 200 R, where the diameter of the wire is 100 nm. An exponential decay of the composite BdG localization probability into the non-proximitized part is obtained, Fig. 5(a). This stems from the coherence of electron and hole tunneling tails into the non-proximitized half of the wire, Fig. 5(b). The diminishing of the BdG localization probability in the proximitized half of the wire is caused by a phase difference between the electron and hole wavefunction components; the −π/2 phase difference is characteristic of Andreev reflection 72,73 .
If an external magnetic field is applied, Zeeman splitting gives rise to a difference of the k-vectors between the spin-split states, resulting in an additional phase difference between the electron and hole components, Fig. 6(b). This phase difference leads to a beating pattern of the BdG localization probability, Fig. 6(a). In the case of a weaker superconducting gap parameter, ∆_S = 50 µeV, Fig. 7, the exponential decay length into the semiconducting part is enlarged compared with Fig. 5. The gap parameter can thus be seen as an effective potential barrier for the electron-hole coherence. For a shorter wire with L = 1 µm and ∆_S = 0.5 meV, Fig. 8, the exponential decay is less pronounced compared with Fig. 5. The coherence length is the same, but the electron component at the interface is near a minimum in phase, rather than at a maximum as in the case of the longer wire. The wavelength of the wavefunction depends on the Fermi level, and so the length can influence the phase of the electron and hole components at the interface. In both cases correspondence with Andreev reflection is obtained, per site of the shell, where the leakage of the BdG wavefunction results from electron-hole coherence.
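The beating pattern can be estimated from the wavevector difference between the interfering components: two waves with wavevectors k ± δk/2 produce a probability density whose slow envelope has spatial period 2π/δk. A rough sketch, where the linearization δk ≈ E_Z/(ℏ v_F) near the Fermi level and all numbers are illustrative assumptions, not values quoted in the text:

```python
import math

def zeeman_wavevector_splitting(e_zeeman: float, hbar_vf: float) -> float:
    """Linearized wavevector splitting delta_k ~ E_Z / (hbar * v_F) between
    the spin-split branches near the Fermi level (illustrative estimate)."""
    return e_zeeman / hbar_vf

def beat_period(delta_k: float) -> float:
    """Spatial period 2*pi/delta_k of the slow envelope produced by two
    components with wavevectors k +/- delta_k/2."""
    return 2.0 * math.pi / delta_k

# Example in dimensionless units: a splitting of 5% of the Fermi wavevector
# gives an envelope period much longer than the fast oscillation 2*pi/k.
k, dk = 1.0, 0.05
print(beat_period(dk) / (2.0 * math.pi / k))  # ratio of slow to fast period
```

Within this estimate, the beat period shrinks as the applied field (and hence E_Z) grows, consistent with the field-dependent modulation of the localization probability in Fig. 6(a).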
VII. CONCLUSIONS
The low-energy physics of the radial, angular and longitudinal superconductor interfaces of proximitized core-shell nanowires has been explored using the Bogoliubov-de Gennes equations. Partial proximitization is considered to quantify the effects of a reduced coherence length and to investigate the Andreev reflection interpretation of proximitized superconductivity. For a thin shell, boundary conditions are found to impose symmetry of the localization probability in spite of partial radial proximitization. In the case of a hexagonal wire with a single proximitized side, it is only by lengthwise summation of the localization probability that the essence of Andreev reflection can be seen. For a longitudinally half-proximitized wire, electron-hole coherence is shown to cause the leakage of the BdG wavefunction into the non-proximitized part of the wire. Correspondence with Andreev reflection is obtained per site of the shell, from considering the localization probability as a function of length in the core-shell nanowire system.
FIG. 1. Simplified sketch of the proximity effect and Andreev reflection. An electron with energy E < ∆ at an N-S boundary will be retro-reflected as a hole whilst forming a Cooper pair within the superconductor 55 .
FIG. 2. (a) Partial radial proximitization of the nanowire shell: half-proximitized shell with ∆s = 0.5 meV in the outer half (purple) of the shell. (b) Finite-wire BdG spectrum of the nanowire system, showing the induced gap ∆i = 0.15 meV. (c) Infinite-wire BdG energy spectra. (d) Energy dispersion and chemical potential (red) of the infinite wire. (e) Longitudinal summation of probability amplitudes on interior sites for the hole component |v|² of the lowest positive and negative energy states. (f) Longitudinal summation of probability amplitudes for the corresponding electronic component |u|².
FIG. 3. Induced superconducting energy gap of a partially proximitized nanowire as a function of the ratio between the semiconducting (SM) and superconducting (∆) parts, both for their areas (A) and radii (R).
FIG. 4. (a) Partial angular proximitization of a hexagonal nanowire shell: single-side proximitization with ∆s = 1 meV. (b) Finite-wire BdG spectrum of the whole system, showing the induced gap ∆i = 0.05 meV. (c) Infinite-wire BdG spectra. (d) Dispersion and chemical potential (red) of the infinite-wire Hamiltonian. (e) Longitudinal summation of probability amplitudes for the hole component |v|² of the lowest positive energy state; brightness denotes higher localization probability. (f) Corresponding electron component |u|².
FIG. 5. (a) Single-corner-site localization probability from the composite BdG wavefunction of the lowest energy state of a half-proximitized hexagonal wire with no external magnetic field. (b) Corresponding electron and hole components, |u|² and |v|² respectively, showing the characteristic −π/2 Andreev reflection phase difference.
FIG. 6. (a) Single-corner-site localization probability from the composite BdG wavefunction of the lowest energy state of a half-proximitized hexagonal wire, for the case of an external magnetic field |B| = 65.8 mT. (b) Corresponding electron and hole components, |u|² and |v|² respectively.
FIG. 7. (a) Single-corner-site localization probability from the composite BdG wavefunction of the lowest energy state of a half-proximitized hexagonal wire, ∆S = 50 µeV, with no external magnetic field. (b) Corresponding electron and hole components, |u|² and |v|² respectively.
FIG. 8. Localization probability of the composite BdG wavefunction and its corresponding electron |u|² and hole |v|² components of the lowest positive energy state. Wire length is L = 1 µm.
ACKNOWLEDGMENTS
This research was supported by the Reykjavik University Research Fund, project no. 218043, and the Icelandic Research Fund, grant no. 206568-051. We are grateful to Vidar Gudmundsson for discussions.

. M Hays, V Fatemi, D Bouman, J Cerrillo, S Diamond, K Serniak, T Connolly, P Krogstrup, J Nygård, A L Yeyati, A Geresdi, M H Devoret, 10.1126/science.abf0345Science. 373M. Hays, V. Fatemi, D. Bouman, J. Cerrillo, S. Diamond, K. Serniak, T. Connolly, P. Krogstrup, J. Nygård, A. L. Yeyati, A. Geresdi, and M. H. Devoret, Science 373, 430 (2021),
https://www.science.org/doi/pdf/10.1126/science.abf0345.
. R Aguado, Applied Physics Letters. 117240501R. Aguado, Applied Physics Letters 117, 240501 (2020), https://doi.org/10.1063/5.0024124.
. M Benito, G Burkard, Applied Physics Letters. 116190502M. Benito and G. Burkard, Applied Physics Letters 116, 190502 (2020).
. B Pannetier, H Courtois, Journal of Low Temperature Physics. 118599B. Pannetier and H. Courtois, Journal of Low Temperature Physics 118, 599 (2000).
. F Setiawan, C.-T Wu, K Levin, 10.1103/PhysRevB.99.174511Phys. Rev. B. 99174511F. Setiawan, C.-T. Wu, and K. Levin, Phys. Rev. B 99, 174511 (2019).
. T D Stanescu, R M Lutchyn, S. Das Sarma, 10.1103/PhysRevB.90.085302Phys. Rev. B. 9085302T. D. Stanescu, R. M. Lutchyn, and S. Das Sarma, Phys. Rev. B 90, 085302 (2014).
. T M Klapwijk, 10.1007/s10948-004-0773-0Journal of Superconductivity. 17593T. M. Klapwijk, Journal of Superconductivity 17, 593 (2004).
. A Jacobs, R Kümmel, H Plehn, 10.1006/spmi.1999.0718Superlattices and Microstructures. 25669A. Jacobs, R. Kümmel, and H. Plehn, Superlattices and Microstructures 25, 669 (1999).
. A I D'yachenko, I V Kochergin, 10.1007/BF00683608J. Low Temp. Phys. 84A. I. D'yachenko and I. V. Kochergin, J. Low Temp. Phys. 84:3-4 (1991), 10.1007/BF00683608.
. C Van Haesendonck, L V Dries, Y Bruynseraede, A Gilabert, Journal of Physics F: Metal Physics. 112381C. van Haesendonck, L. V. den Dries, Y. Bruynseraede, and A. Gilabert, Journal of Physics F: Metal Physics 11, 2381 (1981).
. H Meissner, Stevens Institute of TechnologyH. Meissner, Stevens Institute of Technology (1971).
. J Clarke, 10.1051/jphyscol:1968201Journal de Physique Colloques. 292J. Clarke, Journal de Physique Colloques 29, C2 (1968).
. E Prada, P San-Jose, M W A De Moor, A Geresdi, E J H Lee, J Klinovaja, D Loss, J Nygård, R Aguado, L P Kouwenhoven, 10.1038/s42254-020-0228-yNature Reviews Physics. 2575E. Prada, P. San-Jose, M. W. A. de Moor, A. Geresdi, E. J. H. Lee, J. Klinovaja, D. Loss, J. Nygård, R. Aguado, and L. P. Kouwenhoven, Nature Reviews Physics 2, 575 (2020).
. C Jünger, A Baumgartner, R Delagrange, D Chevallier, S Lehmann, M Nilsson, K A Dick, C Thelander, C Schönenberger, 10.1038/s42005-019-0162-4Communications Physics. 276C. Jünger, A. Baumgartner, R. Delagrange, D. Chevallier, S. Lehmann, M. Nilsson, K. A. Dick, C. Thelander, and C. Schönenberger, Communications Physics 2, 76 (2019).
. R Aguado, 10.1393/ncr/i2017-10141-9La Rivista del Nuovo Cimento. 523R. Aguado, La Rivista del Nuovo Cimento , 523 (2017).
T D Stanescu, Introduction to Topological Quantum Matter & Quantum Computation. LondonCRC PressT. D. Stanescu, Introduction to Topological Quantum Mat- ter & Quantum Computation (CRC Press: London., 2017).
. J Alicea, 10.1103/PhysRevB.81.125318Phys. Rev. B. 81125318J. Alicea, Phys. Rev. B 81, 125318 (2010).
. J D Sau, R M Lutchyn, S Tewari, S. Das Sarma, 10.1103/PhysRevLett.104.040502Phys. Rev. Lett. 10440502J. D. Sau, R. M. Lutchyn, S. Tewari, and S. Das Sarma, Phys. Rev. Lett. 104, 040502 (2010).
. C Blömers, T Rieger, P Zellekens, F Haas, M I Lepsa, H Hardtdegen, Ö Gül, N Demarina, D Grützmacher, H Lüth, T Schäpers, Nanotechnology. 2435203C. Blömers, T. Rieger, P. Zellekens, F. Haas, M. I. Lepsa, H. Hardtdegen,Ö. Gül, N. Demarina, D. Grützmacher, H. Lüth, and T. Schäpers, Nanotechnology 24, 035203 (2013).
. T Rieger, M Luysberg, T Schäpers, D Grützmacher, M I Lepsa, Nano Letters. 125559T. Rieger, M. Luysberg, T. Schäpers, D. Grützmacher, and M. I. Lepsa, Nano Letters 12, 5559 (2012).
. F Haas, K Sladek, A Winden, M Der Ahe, T E Weirich, T Rieger, H Lüth, D Grützmacher, T Schäpers, H Hardtdegen, Nanotechnology. 2485603F. Haas, K. Sladek, A. Winden, M. von der Ahe, T. E. Weirich, T. Rieger, H. Lüth, D. Grützmacher, T. Schäpers, and H. Hardtdegen, Nanotechnology 24, 085603 (2013).
. S Funk, M Royo, I Zardo, D Rudolph, S Morkötter, B Mayer, J Becker, A Bechtold, S Matich, M Döblinger, M Bichler, G Koblmüller, J J Finley, A Bertoni, G Goldoni, G Abstreiter, Nano Letters. 136189S. Funk, M. Royo, I. Zardo, D. Rudolph, S. Morkötter, B. Mayer, J. Becker, A. Bechtold, S. Matich, M. Döblinger, M. Bichler, G. Koblmüller, J. J. Finley, A. Bertoni, G. Goldoni, and G. Abstreiter, Nano Letters 13, 6189 (2013).
. N Erhard, S Zenger, S Morkötter, D Rudolph, M Weiss, H J Krenner, H Karl, G Abstreiter, J J Finley, G Koblmüller, A W Holleitner, Nano Letters. 156869N. Erhard, S. Zenger, S. Morkötter, D. Rudolph, M. Weiss, H. J. Krenner, H. Karl, G. Abstreiter, J. J. Finley, G. Koblmüller, and A. W. Holleitner, Nano Letters 15, 6869 (2015).
. M Weiß, J B Kinzel, F J R Schülein, M Heigl, D Rudolph, S Morkötter, M Döblinger, M Bichler, G Abstreiter, J J Finley, G Koblmüller, A Wixforth, H J Krenner, Nano Letters. 142256M. Weiß, J. B. Kinzel, F. J. R. Schülein, M. Heigl, D. Rudolph, S. Morkötter, M. Döblinger, M. Bichler, G. Abstreiter, J. J. Finley, G. Koblmüller, A. Wixforth, and H. J. Krenner, Nano Letters 14, 2256 (2014).
. J Jadczak, P Plochocka, A Mitioglu, I Breslavetz, M Royo, A Bertoni, G Goldoni, T Smolenski, P Kossacki, A Kretinin, H Shtrikman, D K Maude, Nano Letters. 142807J. Jadczak, P. Plochocka, A. Mitioglu, I. Breslavetz, M. Royo, A. Bertoni, G. Goldoni, T. Smolenski, P. Kos- sacki, A. Kretinin, H. Shtrikman, and D. K. Maude, Nano Letters 14, 2807 (2014).
. F Qian, Y Li, S Gradečak, D Wang, C J Barrelet, C M Lieber, Nano Letters. 41975F. Qian, Y. Li, S. Gradečak, D. Wang, C. J. Barrelet, and C. M. Lieber, Nano Letters 4, 1975 (2004).
. F Qian, S Gradečak, Y Li, C.-Y. Wen, C M Lieber, Nano Letters. 52287F. Qian, S. Gradečak, Y. Li, C.-Y. Wen, and C. M. Lieber, Nano Letters 5, 2287 (2005).
. L Baird, G Ang, C Low, N Haegel, A Talin, Q Li, G Wang, Physica B: Condensed Matter. 4044933L. Baird, G. Ang, C. Low, N. Haegel, A. Talin, Q. Li, and G. Wang, Physica B: Condensed Matter 404, 4933 (2009).
. M Heurlin, T Stankevič, S Mickevičius, S Yngman, D Lindgren, A Mikkelsen, R Feidenhans'l, M T Borgstöm, L Samuelson, Nano Letters. 152462M. Heurlin, T. Stankevič, S. Mickevičius, S. Yngman, D. Lindgren, A. Mikkelsen, R. Feidenhans'l, M. T. Borgstöm, and L. Samuelson, Nano Letters 15, 2462 (2015).
. Y Dong, B Tian, T J Kempa, C M Lieber, Nano Letters. 92183Y. Dong, B. Tian, T. J. Kempa, and C. M. Lieber, Nano Letters 9, 2183 (2009).
. X Yuan, P Caroff, F Wang, Y Guo, Y Wang, H E Jackson, L M Smith, H H Tan, C Jagadish, Adv. Funct. Mater. 255300X. Yuan, P. Caroff, F. Wang, Y. Guo, Y. Wang, H. E. Jackson, L. M. Smith, H. H. Tan, and C. Jagadish, Adv. Funct. Mater. 25, 5300 (2015).
. D J O Göransson, M Heurlin, B Dalelkhan, S Abay, M E Messing, V F Maisi, M T Borgström, H Q Xu, 10.1063/1.5084222Applied Physics Letters. 11453108D. J. O. Göransson, M. Heurlin, B. Dalelkhan, S. Abay, M. E. Messing, V. F. Maisi, M. T. Borgström, and H. Q. Xu, Applied Physics Letters 114, 053108 (2019).
. T Rieger, D Grutzmacher, M I Lepsa, Nanoscale. 7356T. Rieger, D. Grutzmacher, and M. I. Lepsa, Nanoscale 7, 356 (2015).
. K.-H Kim, Y.-S No, 10.1186/s40580-017-0128-8Nano Convergence. 432K.-H. Kim and Y.-S. No, Nano Convergence 4, 32 (2017).
. G Ferrari, G Goldoni, A Bertoni, G Cuoghi, E Molinari, Nano Letters. 91631G. Ferrari, G. Goldoni, A. Bertoni, G. Cuoghi, and E. Molinari, Nano Letters 9, 1631 (2009).
. B M Wong, F Léonard, Q Li, G T Wang, Nano Letters. 113074B. M. Wong, F. Léonard, Q. Li, and G. T. Wang, Nano Letters 11, 3074 (2011).
. A Sitek, L Serra, V Gudmundsson, A Manolescu, Phys. Rev. B. 91235429A. Sitek, L. Serra, V. Gudmundsson, and A. Manolescu, Phys. Rev. B 91, 235429 (2015).
. A Sitek, G Thorgilsson, V Gudmundsson, A Manolescu, Nanotechnology. 27225202A. Sitek, G. Thorgilsson, V. Gudmundsson, and A. Manolescu, Nanotechnology 27, 225202 (2016).
. M Royo, M D Luca, R Rurali, I Zardo, 10.1088/1361-6463/aa5d8eJournal of Physics D: Applied Physics. 50143001M. Royo, M. D. Luca, R. Rurali, and I. Zardo, Journal of Physics D: Applied Physics 50, 143001 (2017).
. G Shen, D Chen, Nanoscale Research Letters. 4779G. Shen and D. Chen, Nanoscale Research Letters 4, 779 (2009).
. G Koblmüller, B Mayer, T Stettner, G Abstreiter, J J Finley, Semiconductor Science and Technology. 3253001G. Koblmüller, B. Mayer, T. Stettner, G. Abstreiter, and J. J. Finley, Semiconductor Science and Technology 32, 053001 (2017).
. C Florica, A Costas, N Preda, M Beregoi, A Kuncser, N Apostol, C Popa, G Socol, V Diculescu, I Enculescu, 10.1038/s41598-019-53873-0Scientific Reports. 917268C. Florica, A. Costas, N. Preda, M. Beregoi, A. Kuncser, N. Apostol, C. Popa, G. Socol, V. Diculescu, and I. En- culescu, Scientific Reports 9, 17268 (2019).
. M A Hassan, M A Johar, A Waseem, I V Bagal, J.-S Ha, S.-W Ryu, 10.1364/OE.27.00A184Opt. Express. 27184M. A. Hassan, M. A. Johar, A. Waseem, I. V. Bagal, J.-S. Ha, and S.-W. Ryu, Opt. Express 27, A184 (2019).
. S Z Oener, S A Mann, B Sciacca, C Sfiligoj, J Hoang, E C Garnett, http:/arxiv.org/abs/https:/doi.org/10.1063/1.4905652Applied Physics Letters. 10623501S. Z. Oener, S. A. Mann, B. Sciacca, C. Sfiligoj, J. Hoang, and E. C. Gar- nett, Applied Physics Letters 106, 023501 (2015), https://doi.org/10.1063/1.4905652.
. A Manolescu, A Sitek, J Osca, L Serra, V Gudmundsson, T D Stanescu, Phys. Rev. B. 96125435A. Manolescu, A. Sitek, J. Osca, L. Serra, V. Gudmunds- son, and T. D. Stanescu, Phys. Rev. B 96, 125435 (2017).
. K O Klausen, A Sitek, S I Erlingsson, A Manolescu, 10.1088/1361-6528/ab932eNanotechnology. 31354001K. O. Klausen, A. Sitek, S. I. Erlingsson, and A. Manolescu, Nanotechnology 31, 354001 (2020).
. W L Mcmillan, 10.1103/PhysRev.175.559Phys. Rev. 175559W. L. McMillan, Phys. Rev. 175, 559 (1968).
. L Gor'kov, Sov. Phys. -JETP (Engl. Transl. 73L. Gor'kov, Sov. Phys. -JETP (Engl. Transl.); (United States) 7:3 (1958).
H Bruus, K Flensberg, Many-body quantum theory in condensed matter physics -an introduction. United StatesOxford University PressH. Bruus and K. Flensberg, Many-body quantum theory in condensed matter physics -an introduction (Oxford Uni- versity Press, United States, 2004).
G Deutscher, P De Gennes, pp 1005-34 of Superconductivity. Vols. 1 and 2. Parks, R. D.New YorkMarcel Dekker, IncG. Deutscher and P. de Gennes, pp 1005-34 of Supercon- ductivity. Vols. 1 and 2. Parks, R. D. (ed.). New York, Marcel Dekker, Inc., 1969. (1969).
. D S Falk, 10.1103/PhysRev.132.1576Phys. Rev. 1321576D. S. Falk, Phys. Rev. 132, 1576 (1963).
M Tinkham, Introduction to Superconductivity. New YorkDover Publications, Inc. MineolaM. Tinkham, Introduction to Superconductivity (Dover Publications, Inc. Mineola, New York., 2003).
T Schäpers, Superconductor/Semiconductor Junctions. Berlin HeidelbergSpringer-VerlagT. Schäpers, Superconductor/Semiconductor Junctions (Springer-Verlag Berlin Heidelberg, 2001).
. A Andreev, Journal of Experimental and Theoretical Physics. 461823A. Andreev, Journal of Experimental and Theoretical Physics 46, 1823 (1964).
. G E Blonder, M Tinkham, T M Klapwijk, Phys. Rev. B. 254515G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Phys. Rev. B 25, 4515 (1982).
H Van Houten, C Beenakker, 10.1016/0921-4526(91)90712-Nanalogies in Optics and Micro-Electronics. 175H. van Houten and C. Beenakker, Physica B: Condensed Matter 175, 187 (1991), analo- gies in Optics and Micro-Electronics.
. A Martin, J F Annett, 10.1006/spmi.1999.0709Superlattices and Microstructures. 251019A. Martin and J. F. Annett, Superlattices and Microstructures 25, 1019 (1999).
. J Ridderbos, M Brauns, F K De Vries, J Shen, A Li, S Kölling, M A Verheijen, A Brinkman, W G Van Der Wiel, E P A M Bakkers, F A Zwanenburg, 10.1021/acs.nanolett.9b03438Nano Letters. 20122J. Ridderbos, M. Brauns, F. K. de Vries, J. Shen, A. Li, S. Kölling, M. A. Verheijen, A. Brinkman, W. G. van der Wiel, E. P. A. M. Bakkers, and F. A. Zwanenburg, Nano Letters 20, 122 (2020).
. W S Cole, S Das Sarma, T D Stanescu, 10.1103/PhysRevB.92.174511Phys. Rev. B. 92174511W. S. Cole, S. Das Sarma, and T. D. Stanescu, Phys. Rev. B 92, 174511 (2015).
. J Klinovaja, D Loss, 10.1103/PhysRevB.86.085408Phys. Rev. B. 8685408J. Klinovaja and D. Loss, Phys. Rev. B 86, 085408 (2012).
. J Osca, L M C Serra, Phys. Rev. B. 88144512J. Osca and L. m. c. Serra, Phys. Rev. B 88, 144512 (2013).
. J.-X Zhu, Bogoliubov de Gennes Methods and its Applications. SpringerJ.-X. Zhu, Bogoliubov de Gennes Methods and its Applica- tions (Springer, 2016).
. N N Bogoliubov, 10.1007/BF02745585Nuovo Cim. 7794N. N. Bogoliubov, Nuovo Cim. 7, 794 (1958).
. S Vaitiekenas, G W Winkler, B Van Heck, T Karzig, M.-T Deng, K Flensberg, L I Glazman, C Nayak, P Krogstrup, R M Lutchyn, C M Marcus, 10.1126/science.aav3392Science. 367S. Vaitiekenas, G. W. Winkler, B. van Heck, T. Karzig, M.-T. Deng, K. Flensberg, L. I. Glazman, C. Nayak, P. Krogstrup, R. M. Lutchyn, and C. M. Marcus, Science 367 (2020), 10.1126/science.aav3392.
. F Peñaranda, R Aguado, P San-Jose, E Prada, arXiv:1911.06805arXiv:1911.06805arXiv e-printscond-mat.mes-hallF. Peñaranda, R. Aguado, P. San-Jose, and E. Prada, arXiv e-prints , arXiv:1911.06805 (2019), arXiv:1911.06805 [cond-mat.mes-hall].
. A Kringhøj, G W Winkler, T W Larsen, D Sabonis, O Erlandsson, P Krogstrup, B Van Heck, K D Petersson, C M Marcus, 10.1103/PhysRevLett.126.047701Phys. Rev. Lett. 12647701A. Kringhøj, G. W. Winkler, T. W. Larsen, D. Sabonis, O. Erlandsson, P. Krogstrup, B. van Heck, K. D. Petersson, and C. M. Marcus, Phys. Rev. Lett. 126, 047701 (2021).
. T D Stanescu, S Tewari, Journal of Physics: Condensed Matter. 25233201T. D. Stanescu and S. Tewari, Journal of Physics: Con- densed Matter 25, 233201 (2013).
. H Zhang, D E Liu, M Wimmer, L P Kouwenhoven, 10.1038/s41467-019-13133-1Nature Communications. 105128H. Zhang, D. E. Liu, M. Wimmer, and L. P. Kouwenhoven, Nature Communications 10, 5128 (2019).
. F Maier, J Klinovaja, D Loss, 10.1103/physrevb.90.195421Physical Review B. 90F. Maier, J. Klinovaja, and D. Loss, Physical Review B 90 (2014), 10.1103/physrevb.90.195421.
. H Zhang, D E Liu, M Wimmer, L P Kouwenhoven, 10.1038/s41467-019-13133-1Nature Communications. 105128H. Zhang, D. E. Liu, M. Wimmer, and L. P. Kouwenhoven, Nature Communications 10, 5128 (2019).
. 71ö, H Gül, J D S Zhang, M W A Bommer, D De Moor, S R Car, E P A M Plissard, A Bakkers, K Geresdi, T Watanabe, L P Taniguchi, Kouwenhoven, 10.1038/s41565-017-0032-8Nature Nanotechnology. 1319271Ö . Gül, H. Zhang, J. D. S. Bommer, M. W. A. de Moor, D. Car, S. R. Plissard, E. P. A. M. Bakkers, A. Geresdi, K. Watanabe, T. Taniguchi, and L. P. Kouwenhoven, Nature Nanotechnology 13, 192 (2018).
. O Millo, G Koren, Philos. Trans. Royal Soc. A. 37620140143O. Millo and G. Koren, Philos. Trans. Royal Soc. A 376, 20140143 (2018).
H Van Houten, C Beenakker, analogies in Optics and Micro-Electronics. 175H. van Houten and C. Beenakker, Physica B: Condensed Matter 175, 187 (1991), analogies in Optics and Micro- Electronics.
| [] |
APPROXIMATION OF THE SEMIGROUP GENERATED BY THE ROBIN LAPLACIAN IN TERMS OF THE GAUSSIAN SEMIGROUP

ROBIN NITTKA

Date: July 9, 2008.
2000 Mathematics Subject Classification: 47A58, 35K20.
Key words and phrases: Trotter approximation formula, Robin boundary conditions, extension operator.
arXiv:0807.1487; doi:10.1016/j.jfa.2009.05.009.

Abstract. For smooth bounded open sets in euclidean space, we construct corresponding contractive linear extension operators for the space of continuous functions which preserve regularity of functions in the domain of the Robin Laplacian. We also prove a Trotter-like approximation for the semigroup generated by the Laplacian subject to Robin boundary conditions in terms of these extension operators. The limiting case of Dirichlet boundary conditions is treated separately.

The author thanks the Graduate School Mathematical Analysis of Evolution, Information and Complexity for their support during the work on this article.
1. Introduction
Let Ω ⊂ R^N be a smooth bounded open set. Here and in the following, "smooth" means "of class C^∞", although the main results remain true under slightly milder regularity assumptions. On such a set we consider the (autonomous, homogeneous) diffusion equation

    u_t = ∆u                         on (0, ∞) × Ω,
    (∂u/∂ν)(t, z) = −β(z)u(t, z)     for t > 0 and z ∈ ∂Ω,        (1)
    u(0, x) = u_0(x)                 for x ∈ Ω,

subject to Robin boundary conditions. Here u_0 ∈ C(Ω) is an arbitrary initial function, β is a non-negative smooth function on ∂Ω which does not depend on t, and ∂u/∂ν denotes the directional derivative of u along the outwards pointing unit normal of Ω. We remark that this setting includes Neumann boundary conditions for β ≡ 0. Denote by ∆_R the Robin Laplacian on C(Ω), i.e., ∆_R u := ∆u on the domain D(∆_R) of those u ∈ C(Ω) with ∆u ∈ C(Ω) that satisfy the boundary condition

    (∂u/∂ν)(z) = −β(z)u(z)   for all z ∈ ∂Ω        (2)

in an appropriate weak sense. A mild solution of (1) is a function u ∈ C([0, ∞); C(Ω)) such that ∫_0^t u(s) ds ∈ D(∆_R) and u(t) = u_0 + ∆_R ∫_0^t u(s) ds for all t ≥ 0.

It is known that for every non-negative, bounded, measurable function β and every initial value u_0 ∈ C(Ω) there exists a unique mild solution to problem (1). In fact, Warma proved that under the above assumptions ∆_R generates a C_0-semigroup T_R(t) on C(Ω) [14], and it follows from the general theory of semigroups that then u(t) = T_R(t)u_0 is the unique mild solution of the corresponding homogeneous abstract Cauchy problem [4, Proposition II.6.4]; note that Warma's proof remains valid for arbitrary non-negative functions β ∈ L^∞(∂Ω).
If we want to calculate this solution numerically, a typical problem is how to handle the boundary conditions. To fix ideas, let Ω = (0, 1) and assume that we want to apply an explicit finite difference method. Then one replaces the derivatives u_t and u_xx by appropriate difference quotients and successively calculates approximations u(t_n, x_j) of the exact solution u(t, x) by the relation

    [u(t_{n+1}, x_j) − u(t_n, x_j)] / k = [u(t_n, x_{j+1}) − 2u(t_n, x_j) + u(t_n, x_{j−1})] / h²,
where t_n = n·k and x_j = j·h for given small numbers k, h > 0. Note, however, that this cannot be directly applied to calculate u(t_{n+1}, 0) and u(t_{n+1}, 1) because u(t_n, −h) and u(t_n, 1+h) are not defined. For Dirichlet boundary conditions, we can assign u(t_n, −h) := u(t_n, 1 + h) := 0. On the other hand, for Neumann boundary conditions the situation is not as simple. One common technique is to use

    u(t_n, −h) := u(t_n, h)   and   u(t_n, 1 + h) := u(t_n, 1 − h)        (3)
in the calculations, which comes from a second order accurate approximation of the derivative at the boundary, see [11,Section 8.3]. The aim of the article at hand is to justify the use of (3) for Neumann boundary conditions from a semigroup perspective, showing that the corresponding continuous method converges to the exact solution as k → 0, and to extend it to the more general case of Robin boundary conditions. More precisely, we construct an extension operator E β from C(Ω) to C 0 (R N ) which depends only on Ω and β (but not on t) and resembles a continuous version of (3) if β = 0, such that E β is a contraction and that E β u is sufficiently regular whenever u ∈ D(∆ R ); we refer to Corollary 12 for the precise statement. For operators E β satisfying these two assumptions, we prove the Trotter-like (compare to [12]) approximation result
    T_R(t)u = lim_{n→∞} [R G_0(t/n) E_β]^n u   in C(Ω) for every u ∈ C(Ω)        (4)
uniformly on [0, T ] for every T > 0, where G 0 (t) denotes the Gaussian semigroup on C 0 (R N ) and R : C 0 (R N ) → C(Ω) is the restriction operator Ru = u| Ω . This shows how Robin boundary conditions can be incorporated into a numerical solver such that the numerical solutions converge uniformly on [0, T ] × Ω, at least if error introduced by space discretization is neglected.
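To make the reflection rule (3) concrete, here is a minimal sketch of the resulting explicit scheme for the heat equation on Ω = (0, 1) with Neumann boundary conditions (β ≡ 0); the step sizes and the initial datum are illustrative choices, not taken from the paper.

```python
import numpy as np

# Explicit finite differences for u_t = u_xx on Omega = (0, 1) with
# Neumann boundary conditions, realized via the reflected ghost values
#   u(t, -h) := u(t, h),   u(t, 1 + h) := u(t, 1 - h)
# as in (3).  Stability of the explicit scheme requires k <= h^2 / 2.
h, k = 0.02, 1e-4                       # illustrative step sizes
x = np.linspace(0.0, 1.0, int(round(1.0 / h)) + 1)
u = 1.0 + np.cos(np.pi * x)             # illustrative initial datum u_0

w = np.full_like(x, h)                  # trapezoidal quadrature weights
w[0] = w[-1] = h / 2.0
mass0 = w @ u                           # total heat at t = 0

for _ in range(2000):                   # integrate up to t = 0.2
    g = np.empty(u.size + 2)            # grid padded by two ghost points
    g[1:-1] = u
    g[0], g[-1] = u[1], u[-2]           # the reflection rule (3)
    u = u + (k / h ** 2) * (g[2:] - 2.0 * g[1:-1] + g[:-2])

# For Neumann conditions this scheme conserves the total heat exactly
# (up to rounding), while u flattens towards its mean value 1 at the
# rate e^{-pi^2 t} of the lowest nonconstant mode.
print(abs(w @ u - mass0))               # rounding-level
print(float(np.max(np.abs(u - 1.0))))   # roughly e^{-pi^2 * 0.2} ≈ 0.14
```

With the reflected ghost values the trapezoidal total heat is conserved exactly, which is the discrete counterpart of the Neumann condition; assigning u(t_n, −h) := 0 instead would enforce Dirichlet conditions.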
The article is organized as follows. In Section 2 we show how to represent a neighborhood of ∂Ω in terms of the outwards pointing unit normal and recall some facts about the Laplacian. In Section 3 we construct the extension operator E β related to the Robin Laplacian and prove its aforementioned two properties which ensure (4) as we show in Section 4. Section 5 deals with the limiting case β → ∞ giving rise to Dirichlet boundary conditions. Finally, Section 6 summarizes the results.
2. Notation and Preliminary Results
It is well-known that for smooth boundary a neighborhood of ∂Ω can be parametrized by the outwards pointing unit normal ν. Because certain features of the parametrization are needed later on, we state this result in the formulation we want to use and prove it for the sake of completeness.
Proposition 1. Let δ > 0 and T : ∂Ω × (−δ, δ) → R N , (p, t) → p + tν(p).
For small δ > 0, T is a smooth diffeomorphism onto a neighborhood of ∂Ω in R N .
Remark 2.
(a) Let x 0 ∈ ∂Ω be arbitrary. By definition of "smooth boundary", Ω can locally at
x 0 be represented as the subgraph of a smooth function ϕ : U → R (U ⊂ R N −1 ) up to rotation. Assume for the moment that no rotation is needed. Then
    ∂Ω ∩ V = { (z, ϕ(z)) : z ∈ U }        (5)
for an open set V ⊂ R^N. Thus z ↦ (z, ϕ(z)) is a bijection of an open subset of R^{N−1} onto a neighborhood of x_0 in ∂Ω. Using these mappings for all x ∈ ∂Ω as charts we make ∂Ω into a smooth manifold built upon the subspace topology induced by R^N. Thus we can look at T as a mapping from a manifold to a euclidean space. As usual we say that T is smooth if the composition T* of T with a chart is smooth, i.e., if

    T* : U × (−δ, δ) → R^N,   (z, t) ↦ (z, ϕ(z)) + t ν((z, ϕ(z)))        (6)
is smooth as a mapping between euclidean spaces. (b) Using charts, the outwards pointing unit normal ν can be written as
    ν((z, ϕ(z))) = ± |(−∇ϕ(z), 1)^T|^{−1} (−∇ϕ(z), 1)^T.        (7)

To see this, note that for x ∈ ∂Ω the direction of ν(x) is uniquely described by the property that for every smooth curve ξ in ∂Ω satisfying ξ(0) = x = (z, ϕ(z)) the vectors ν(x) and ξ′(0) are orthogonal. Since by (5), locally ξ(t) = (ζ(t), ϕ(ζ(t))), where ζ(0) = z, we have ξ′(0) = (ζ′(0), ∇ϕ(ζ(0)) · ζ′(0))^T, and (7) follows from the identity

    (−∇ϕ(z), 1) · (ζ′(0), ∇ϕ(ζ(0)) · ζ′(0)) = −∇ϕ(z) · ζ′(0) + ∇ϕ(z) · ζ′(0) = 0.
Proof of Proposition 1. Let x ∈ ∂Ω be arbitrary. Working locally near x, for simplicity we may assume without loss of generality that ϕ is as in the previous remark, i.e., that no rotation is needed for Ω to be the subgraph of a smooth function. Then x = (z, ϕ(z)) for some z. Using (7), the derivative of T* is the block matrix

    T*′(z, 0) = ( I        −c∇ϕ(z)^T )
                ( ∇ϕ(z)    c         )        (8)
Here, c = ± |(−∇ϕ(z), 1)^T|^{−1} ≠ 0. In particular, we obtain

    det T*′(z, 0) = c · det ( I, −∇ϕ(z)^T ; ∇ϕ(z), 1 ) = c · det ( I, −∇ϕ(z)^T ; 0, 1 + |∇ϕ(z)|² ) = c (1 + |∇ϕ(z)|²) ≠ 0
by applying the Gauss-Jordan elimination algorithm. The inverse function theorem asserts that T * and hence T is locally a smooth diffeomorphism. Because x ∈ ∂Ω was arbitrary, all that remains to show is that T is injective if δ > 0 is small enough.
By the above argument for every x ∈ ∂Ω there exists an open neighborhood O_x of x in ∂Ω and δ_x > 0 such that T is a smooth diffeomorphism from O_x × (−δ_x, δ_x) to a neighborhood of x in R^N. By compactness of ∂Ω we can choose finitely many such O_{x_i} (i = 1, . . . , m) which already cover ∂Ω. It is easily proved by contradiction that we can find δ > 0 such that for every x ∈ ∂Ω there exists an index k(x) ∈ {1, . . . , m} with the property that
    B_{4δ}(x) ∩ ∂Ω ⊂ O_{x_{k(x)}},   where B_r(a)
denotes the open ball with center a and radius r. We pick δ such that δ < δ xi for all i = 1, . . . , m.
For this choice of δ, T is injective. To see this, let T (y 1 , t 1 ) = T (y 2 , t 2 ) where y 1 , y 2 ∈ ∂Ω and t 1 , t 2 ∈ (−δ, δ). We estimate 0 = |T (y 1 , t 1 ) − T (y 2 , t 2 )| ≥ |y 1 − y 2 | − (|t 1 ν(y 1 )| + |t 2 ν(y 2 )|) ≥ |y 1 − y 2 | − 2δ.
This shows |y_1 − y_2| ≤ 2δ and thus y_2 ∈ B_{4δ}(y_1), hence y_1, y_2 ∈ O_{x_k}, k = k(y_2). By construction, T is injective on O_{x_k} × (−δ, δ), hence y_1 = y_2 and t_1 = t_2, proving the claim.

Lemma 3. The set D := D(∆_R) ∩ C^∞(Ω) is an operator core for ∆_R, i.e., D is dense in D(∆_R) with respect to the graph norm.
Proof. As ∆_R is a generator, the space ∩_{n∈N} D(∆_R^n) is a core for ∆_R [4, Proposition II.1.8]. Moreover,

    R(1, ∆_R)(H^m(Ω) ∩ C(Ω)) ⊂ H^{m+2}(Ω)
for every m ∈ N 0 by the regularization properties of elliptic operators [6, Remark 2.5.1.2]. By a standard Sobolev embedding theorem [5, Section 5.6],
    D(∆_R^n) = R(1, ∆_R)^n C(Ω) ⊂ H^{2n}(Ω) ⊂ C^{2n−[N/2]−1}(Ω)
for all n > N/4. Letting n → ∞ we obtain the assertion.
We remark that for u ∈ C ∞ (Ω) the normal derivative exists in the classical sense. For these functions, u ∈ D(∆ R ) is equivalent to (2), and we will frequently use the boundary condition in this way.
Let G 2 (t) denote the C 0 -semigroup on L 2 (R N ) with generator
    D(∆_2) := { u ∈ L²(R^N) : ∆u ∈ L²(R^N) },   ∆_2 u := ∆u.
The semigroup G 2 (t) leaves the space C 0 (R N )∩L 2 (R N ) invariant, and its restriction extends continuously to a positive, contractive C 0 -semigroup on C 0 (R N ), denoted by G 0 (t). The generator of this semigroup is
    D(∆_0) := { u ∈ C_0(R^N) : ∆u ∈ C_0(R^N) },   ∆_0 u := ∆u.
We will refer to both semigroups as the Gaussian semigroup. For more details about the Gaussian semigroup, we refer to [1, Chapter 3.7].
3. Extension Operator
Given a smooth bounded open set Ω and a smooth function β : ∂Ω → R + , we construct an extension operator E β which satisfies the assumptions under which we will prove (4) in Section 4. For β = 0, the operator is similar to, but slightly simpler than the extension operator in [5,Section II.5.4]. However, the properties which we prove here may also be of independent interest.
For the whole section, let δ and T be as in Proposition 1. We start by fixing a "kinking function" ̺. First choose a function ̺ 1 having the following properties.
(a) ̺_1 ∈ C^∞([0, ∞) × [0, ∞));
(b) 0 ≤ ̺_1(γ, t) ≤ 1 for all γ, t ≥ 0;
(c) ̺_1(γ, t) = 0 for all t ≥ δ/2 and γ ≥ 0;
(d) ̺_1(γ, 0) = 1 for all γ ≥ 0;
(e) (∂/∂t)̺_1(γ, 0) = −2γ for all γ ≥ 0;
(f) (∂²/∂t²)̺_1(γ, 0) = 4γ² for all γ ≥ 0.
Here (∂/∂t)̺_1 denotes the partial derivative of ̺_1 with respect to the second argument. For example, we may choose ̺_1(γ, t) := exp(−2γt)χ(t), where χ is a smooth cut-off function such that χ ≡ 1 near 0. Now define ̺ : Ω^C → R to be
    ̺(x) := ̺_1(β(z), t)   if x = T(z, t), 0 ≤ t < δ,   and   ̺(x) := 0   otherwise.
Note that ̺ is well-defined since T is injective, and it is smooth by construction.
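For the particular choice ̺_1(γ, t) = exp(−2γt)χ(t) with χ ≡ 1 near 0, the defining properties (d)–(f) can be verified numerically via one-sided difference quotients at t = 0; the value of γ and the step size below are arbitrary illustrative choices.

```python
import numpy as np

gamma, h = 1.3, 1e-4
rho1 = lambda t: np.exp(-2.0 * gamma * t)   # chi ≡ 1 near t = 0

# one-sided difference quotients at t = 0 (rho1 lives on t >= 0)
d1 = (rho1(h) - rho1(0.0)) / h                      # ≈ -2*gamma, property (e)
d2 = (rho1(2 * h) - 2.0 * rho1(h) + rho1(0.0)) / h**2  # ≈ 4*gamma^2, property (f)

print(rho1(0.0))               # 1.0, property (d)
print(d1, -2.0 * gamma)        # agree up to O(h)
print(d2, 4.0 * gamma ** 2)    # agree up to O(h)
```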
Definition 4 (Reflection at the Boundary). Let x ∈ T(∂Ω × (−δ, δ)), x = T(z, t). We call x̃ := T(z, −t) the (orthogonal) reflection of x at the boundary ∂Ω. For a function u : Ω → R we define the reflected function

    ũ : T(∂Ω × (0, δ)) → R,   ũ(x) := u(x̃).
We define the extension operator E β belonging to β as
    E_β : C(Ω) → C_0(R^N),   E_β u := u on Ω,   E_β u := ̺ũ on Ω^C.        (9)
Here ̺ũ is understood to be 0 outside T (∂Ω × (0, δ)) because ̺ equals 0 in that region.
Lemma 5. The operator E β is well-defined, linear, positive, contractive and an extension operator, i.e., RE β = I, where R :
C 0 (R N ) → C(Ω), u → u| Ω .
Proof. Let u ∈ C(Ω). By property (d), the function E β u is continuous on R N . Since it has compact support, E β u ∈ C 0 (R N ). Positivity and contractivity follow from property (b). The other two properties are obvious from (9).
We now turn towards a more interesting property of E_β: we prove that it maps D as defined in Lemma 3 into D(∆_0). This extensive calculation is split into several lemmata. Most calculations will be carried out in local coordinates, i.e., locally at
x 0 = T * (z 0 , 0) ∈ ∂Ω,
where we represent all functions with respect to the charts as follows. Here T * is defined as in (6).
    u*(z, t) := u(T*(z, t)),
    ũ*(z, t) := ũ(T*(z, t)),
    β*(z) := β(T*(z, 0)),
    ̺*(z, t) := ̺(T*(z, t)) = ̺_1(β*(z), t).
In the following we will adhere to the usual notation for normal derivatives, i.e., ∂g ∂ν denotes the directional derivative of g along the outwards pointing unit normal with respect to the domain of g. Note that for functions defined on Ω C this means that ∂g ∂ν = −∇g · ν, where ν always denotes the outwards pointing unit normal of Ω.
Lemma 6. Let u ∈ D. Then E_β u ∈ D(∆_2) and ∆_2(E_β u)|_Ω = ∆_R u.
Proof. The continuous function E_β u has compact support, hence E_β u ∈ L²(R^N). Moreover, E_β u is smooth away from ∂Ω, being the composition of smooth functions. Thus the measurable function

    f := ∆u on Ω,   f := ∆(̺ũ) on Ω^C,
is defined outside ∂Ω which is a set of measure zero. As u and ̺ũ are smooth up to ∂Ω, f is bounded. Note that f has compact support, hence f ∈ L 2 (R N ). For the assertion of the lemma, it remains to show that f = ∆(E β u) in the sense of distributions. For this we calculate the (classical) normal derivative of ̺ũ using that u satisfies (2). For z ∈ ∂Ω we have
    (∂ũ/∂ν)(z) = −lim_{h→0} [ũ(z + hν(z)) − ũ(z)]/h = −lim_{h→0} [u(z − hν(z)) − u(z)]/h = −β(z)u(z)

and

    (∂̺/∂ν)(z) = −lim_{h→0} [̺(z + hν(z)) − ̺(z)]/h = −lim_{h→0} [̺_1(β(z), h) − ̺_1(β(z), 0)]/h = 2β(z).
This implies
    (∂(̺ũ)/∂ν)(z) = ̺(z)(∂ũ/∂ν)(z) + (∂̺/∂ν)(z)ũ(z) = (∂u/∂ν)(z) + (∂̺/∂ν)(z)u(z) = β(z)u(z).
Now let ϕ ∈ D(R N ) be an arbitrary test function. From the above calculations, the classical Green formula [2, Section II.1.3] and ̺ũ| ∂Ω = u| ∂Ω , we obtain
    ∫_{R^N} (E_β u)∆ϕ = ∫_Ω u∆ϕ + ∫_{Ω^C} ̺ũ∆ϕ
    = ∫_Ω ∆u ϕ + ∫_{(∂Ω)^+} ( u ∂ϕ/∂ν − (∂u/∂ν)ϕ ) dσ + ∫_{Ω^C} ∆(̺ũ)ϕ + ∫_{(∂Ω)^−} ( ̺ũ ∂ϕ/∂ν − (∂(̺ũ)/∂ν)ϕ ) dσ
    = ∫_Ω ∆u ϕ + ∫_{Ω^C} ∆(̺ũ)ϕ = ∫_{R^N} f ϕ,
where (∂Ω) + is understood as the (oriented) boundary of Ω, whereas (∂Ω) − denotes the (oriented) boundary of Ω C . This shows ∆(E β u) = f in the sense of distributions.
Remark 7. The above lemma tells us that ∆(E β u) is a function. To see that
E β u ∈ D(∆ 0 ), it remains to show that ∆(E β u) ∈ C 0 (R N )
. We already know that ∆(E β u) has compact support and is smooth on R N \ ∂Ω. Thus it suffices to show that the function can continuously be extended to ∂Ω. This is a local property. In fact, since we already know that the limits from the inside and the outside both exist, it suffices to show that ∆u(
x 0 ) = ∆(̺ũ)(x 0 ) for every x 0 ∈ ∂Ω.
Let x_0 ∈ ∂Ω be fixed. To simplify notation, we may assume that ν(x_0) = e_N without loss of generality, exploiting the rotational invariance of the Laplacian. Here and in the following, e_n denotes the n-th unit vector in R^N. Moreover, since we treat the problem locally, we may work in local coordinates, x_0 = T*(z_0, 0). We start by calculating the partial derivatives of T*^{−1}, where T* is defined as in (6).
Lemma 8. For n ∈ {1, . . . , N},

    (∂/∂x_n) T*^{−1}(x_0) = e_n,
    (∂²/∂x_n²) T*^{−1}(x_0) = −(∂²/∂z_n²)ϕ(z_0) e_N   if n ≠ N,   and   = 0   if n = N.
Proof. The assumption ν(z 0 ) = e N implies ∇ϕ(z 0 ) = 0 due to (7). As in (8), this shows T * ′ (z 0 , 0) = I. By the inverse function theorem,
    (T*^{−1})′(x) = [ T*′(T*^{−1}(x)) ]^{−1}.
For the partial derivatives at x 0 this means ∂ ∂x n T * −1 (x 0 ) = T * ′ (z 0 , 0) −1 e n = Ie n = e n .
To calculate the second derivatives, we employ a differentiation rule for matrices,
    (d/dt) A(t)^{−1} = −A^{−1}(t) A′(t) A^{−1}(t).

This yields

    (∂²/∂x_n²) T*^{−1}(x) = (∂/∂x_n) [ T*′(T*^{−1}(x))^{−1} e_n ] = −T*′(T*^{−1}(x))^{−1} [ (∂/∂x_n) T*′(T*^{−1}(x)) ] T*′(T*^{−1}(x))^{−1} e_n.
If we denote the entries of T * ′ by t ij (i, j = 1, . . . , N ), we can proceed as follows.
    (∂/∂x_n) t_{ij}(T*^{−1}(x)) = ∇t_{ij}(T*^{−1}(x)) · (∂/∂x_n) T*^{−1}(x).

For x = x_0 this yields

    (∂/∂x_n) t_{ij}(T*^{−1}(x_0)) = ∇t_{ij}(z_0, 0) e_n = (∂/∂z_n) t_{ij}(z_0, 0),
where for notational simplicity we use z N as an alias for the variable t. Inserting this expression into the above identity, we arrive at
    (∂²/∂x_n²) T*^{−1}(x_0) = −[ (∂/∂z_n) t_{ij}(z_0, 0) ]_{i,j=1,...,N} e_n = −[ (∂/∂z_n) t_{in}(z_0, 0) ]_{i=1,...,N} = −(∂²/∂z_n²) T*(z_0, 0).
In combination with formula (6) this finishes the proof.
Having the derivatives of the charts at hand, we are able to calculate all derivatives in local coordinates.
Lemma 9. Let f and f * be functions such that locally f * (z, t) = f (T (z, t)). Then
    ∇f(x_0) = ∇f*(z_0, 0),
    ∆f(x_0) = Σ_{n=1}^{N−1} (∂²/∂z_n²) f*(z_0, 0) + (∂²/∂t²) f*(z_0, 0) − (∂/∂t) f*(z_0, 0) Σ_{n=1}^{N−1} (∂²/∂z_n²)ϕ(z_0).
In particular,

    ∇̺(x_0) = (0, . . . , 0, −2β(x_0))^T = −2β(x_0) e_N,
    ∆̺(x_0) = 4β(x_0)² + 2β(x_0) Σ_{n=1}^{N−1} (∂²/∂z_n²)ϕ(z_0).

Proof. Differentiating f(x) = f*(T*^{−1}(x)) we obtain

    (∂/∂x_n) f(x) = (∇f*)(T*^{−1}(x)) · (∂/∂x_n) T*^{−1}(x),
    (∂²/∂x_n²) f(x) = ((∂/∂x_n) T*^{−1}(x))^T H_{f*}(T*^{−1}(x)) (∂/∂x_n) T*^{−1}(x) + (∇f*)(T*^{−1}(x)) · (∂²/∂x_n²) T*^{−1}(x),

where H_{f*} denotes the Hessian matrix of f*. Summing over n and inserting the derivatives of T*^{−1} at x_0 from Lemma 8 yields the general formulae.

Concerning ̺ we remark that ̺*(z, 0) = ̺_1(β*(z), 0) = 1 implies (∂/∂z_n)̺*(z_0, 0) = 0 (n = 1, . . . , N − 1). On the other hand, the derivatives with respect to t equal
    (∂/∂t)̺*(z_0, 0) = (∂/∂t)̺_1(β*(z_0), 0) = −2β*(z_0) = −2β(x_0),
    (∂²/∂t²)̺*(z_0, 0) = (∂²/∂t²)̺_1(β*(z_0), 0) = 4β*(z_0)² = 4β(x_0)².
With this information, the formulae for ̺ follow from the general formulae.
Finally, it is easy to calculate the relation between the derivatives of the function and its reflection at the boundary in local coordinates. It suffices to observe that
    ũ*(z, t) = ũ(T(z, t)) = u(T(z, −t)) = u*(z, −t).
From this we deduce the following formulae.
    ũ*(z, t) = u*(z, −t),
    (∂/∂z_n)ũ*(z, t) = (∂/∂z_n)u*(z, −t),
    (∂/∂t)ũ*(z, t) = −(∂/∂t)u*(z, −t),
    (∂²/∂z_n²)ũ*(z, t) = (∂²/∂z_n²)u*(z, −t),
    (∂²/∂t²)ũ*(z, t) = (∂²/∂t²)u*(z, −t).
Now we are ready to prove continuity of ∆(E β u) at x 0 .
Proposition 10. For every u ∈ D, ∆u(x_0) = ∆(̺ũ)(x_0).

Proof. Note that

    (∂/∂t)ũ*(z_0, 0) = −(∂/∂t)u*(z_0, 0) = −(∂u/∂ν)(x_0) = β(x_0)u(x_0) = β(x_0)ũ(x_0).
We use the formulae of this section to obtain the desired identity.
    ∆(̺ũ)(x_0) = ∆̺(x_0)ũ(x_0) + 2∇̺(x_0) · ∇ũ(x_0) + ̺(x_0)∆ũ(x_0)
    = 4β(x_0)²ũ(x_0) + 2β(x_0)ũ(x_0) Σ_{n=1}^{N−1} (∂²/∂z_n²)ϕ(z_0) − 4β(x_0)(∂/∂t)ũ*(z_0, 0)
      + Σ_{n=1}^{N−1} (∂²/∂z_n²)ũ*(z_0, 0) + (∂²/∂t²)ũ*(z_0, 0) − (∂/∂t)ũ*(z_0, 0) Σ_{n=1}^{N−1} (∂²/∂z_n²)ϕ(z_0)
    = Σ_{n=1}^{N−1} (∂²/∂z_n²)u*(z_0, 0) + (∂²/∂t²)u*(z_0, 0) − (∂/∂t)u*(z_0, 0) Σ_{n=1}^{N−1} (∂²/∂z_n²)ϕ(z_0)
    = ∆u(x_0).
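Proposition 10 can be illustrated numerically in the one-dimensional situation Ω = (0, 1) near the boundary point 0, where ν(0) = −1, so that ̺(x) = e^{2βx} and ũ(x) = u(−x) for small x < 0 (with χ ≡ 1 there): for any u satisfying the Robin condition u′(0) = βu(0), the one-sided second derivatives of u and ̺ũ agree at 0. The concrete β, sample function and step size below are illustrative choices.

```python
import numpy as np

beta, h = 1.3, 1e-3
u = lambda x: np.exp(beta * x) * np.cos(x)      # satisfies u'(0) = beta*u(0)
ru = lambda x: np.exp(2.0 * beta * x) * u(-x)   # rho * ũ on x < 0 (chi ≡ 1 near 0)

# one-sided second difference quotients at the boundary point x = 0
d2_in = (u(2 * h) - 2.0 * u(h) + u(0.0)) / h**2        # limit from inside Omega
d2_out = (ru(-2 * h) - 2.0 * ru(-h) + ru(0.0)) / h**2  # limit from outside Omega

print(d2_in, d2_out)   # both ≈ u''(0) = beta^2 - 1
```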
The following theorem is the main result of this section. As explained in Remark 7, it follows by combining Lemma 6 and the last proposition. Even though Theorem 11 is also true for the usual extension operator for Lipschitz domains [10, VI. §3, Theorem 5], that operator fails to be contractive and thus is more difficult to handle for the application in Section 4.
Theorem 11. The operator E β maps D into D(∆ 0 ).
Corollary 12. The operator E_β maps D(∆_R) into D(∆₀).
Proof. There exists a constant C > 0 satisfying ‖E_β u‖_{D(∆₀)} ≤ C‖u‖_{D(∆_R)} for all u ∈ D. To see this, note that on Ω^C
‖∆(ũ̺)‖_∞ = ‖̺∆ũ + 2∇ũ · ∇̺ + ũ∆̺‖_∞ ≤ ‖∆ũ‖_∞‖̺‖_∞ + 2‖∇̺‖_∞(ε‖∆ũ‖_∞ + ‖ũ‖_∞) + ‖ũ‖_∞‖∆̺‖_∞
for every ε > 0. Similarly, ‖ũ‖_∞ ≤ ‖u‖_∞ and ‖∆ũ‖_∞ ≤ C′(‖∆u‖_∞ + ‖u‖_∞), using the definition of ũ as a composition of u and a function involving T, where C′ > 0 depends only on an estimate of the derivatives of T. Noting that ̺ and its derivatives are bounded by assumption, we see that there exists a C > 0 as claimed.
As D is a core of ∆ R , the above estimate shows that there exists a unique continuous extension of E β | D to D(∆ R ), and that this operator still takes values in D(∆ 0 ). Because D(∆ R ) is continuously embedded into C(Ω) and E β is continuous on C(Ω), this extension agrees with E β | D(∆R) . Thus the claim is proved.
Approximation Result
In this section, we prove that if E_β : C(Ω) → C₀(R^N) is a contractive extension operator mapping an operator core D for ∆_R into D(∆₀), then formula (4) holds. Note that the operator defined in (9) has these properties, as shown in the preceding section. The tool we use for the proof is the following approximation result for semigroups due to Chernoff. Then (A, D) is closable and A generates a bounded C₀-semigroup T(t), which is given by
T(t)x = lim_{n→∞} V(t/n)^n x
for every x ∈ X locally uniformly with respect to t ≥ 0.
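Chernoff's result contains the classical Lie-Trotter product formula as a special case, namely V(t) = e^{tA}e^{tB} for bounded operators A, B, whose derivative at t = 0 is A + B. As a quick numerical illustration (our addition, not part of the text; the truncated-series `expm` helper is our own stand-in for a matrix exponential), one can watch V(t/n)^n converge to e^{t(A+B)} for 2×2 matrices:

```python
import numpy as np

def expm(M, terms=60):
    # Truncated power series for the matrix exponential; adequate for
    # small matrices of modest norm (an illustrative stand-in only).
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Two non-commuting bounded generators; V(t) = e^{tA} e^{tB} satisfies
# V(0) = I and (V(h)x - x)/h -> (A + B)x, so V(t/n)^n -> e^{t(A+B)}.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
t = 1.0
exact = expm(t * (A + B))

def chernoff(n):
    return np.linalg.matrix_power(expm(t / n * A) @ expm(t / n * B), n)

errs = [np.linalg.norm(chernoff(n) - exact) for n in (1, 10, 100)]
```

The errors shrink roughly like O(1/n), consistent with the first-order nature of the product formula.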
We apply the theorem by setting
X := C(Ω),  V(t) := R G₀(t) E_β,  A := ∆_R.  (10)
As D is an operator core for ∆ R , the density conditions are fulfilled because λ − ∆ R is an isomorphism between D(∆ R ) with the graph norm and C(Ω) for every λ > 0.
Theorem 14. Let E β : C(Ω) → C 0 (R N ) be a contractive extension operator which maps an operator core D for ∆ R into D(∆ 0 ). Then formula (4) holds true.
Proof. We check the conditions of Chernoff's product formula with the choices made in (10). The fact V(0) = I is equivalent to E_β being an extension operator. Since all three factors are contractions, the operators V(t) are contractions for every t ≥ 0; thus ‖V(t)^m‖ ≤ 1, and in particular V(t) is a bounded operator for every t ≥ 0. The density assumptions on D are fulfilled because D is an operator core for ∆_R. Now let u ∈ D be arbitrary. By assumption, E_β u ∈ D(∆₀). By definition of the infinitesimal generator,
(V(h)u − u)/h = R · (G₀(h)(E_β u) − E_β u)/h → R∆₀E_β u  (h → 0).
Since the function E β u agrees with u on Ω, they represent the same distribution acting on the test functions D(Ω). This means that they have the same distributional derivatives, hence R∆ 0 E β u = ∆ R u. Having checked all the conditions of Theorem 13, we deduce that indeed (4) holds true.
Corollary 15. Formula (4) holds true for the operator E β defined in (9).
Remark 16. As a special case, we may choose β = 0. Then ∆ R = ∆ N is the Laplacian with Neumann boundary conditions. In this case, E 0 is the reflection without "kinking", corresponding to certain numeric schemes where Neumann boundary conditions are realized as in (3). A different extension operator for Neumann boundary conditions would be given by extending constantly along the outwards pointing unit normal and again multiplying by a cut-off function. This corresponds to a first-order accurate boundary condition approximation, see again [11,Section 8.3].
Although this might seem more natural at first, it is not obvious whether formula (4) is true for this choice of E β . Unfortunately, Chernoff's theorem cannot be applied again because ∆E β u fails to be continuous at ∂Ω as can easily be seen.
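Remark 16's reflection corresponds to the standard "ghost point" treatment of homogeneous Neumann conditions in explicit finite-difference schemes. The following 1-D sketch (our own illustration; the grid, names and parameters are not from the paper) mirrors the solution across each boundary node before every Euler step, which makes the centered difference of the normal derivative vanish, and checks that the trapezoidal mass is conserved, as it should be for the Neumann problem:

```python
import numpy as np

def heat_step_neumann(u, r):
    # One explicit Euler step for u_t = u_xx with mirror ("ghost point")
    # boundary values u[-1] := u[1] and u[N] := u[N-2]; this is the
    # reflection-without-kinking realization of Neumann conditions.
    padded = np.concatenate(([u[1]], u, [u[-2]]))
    return u + r * (padded[2:] - 2.0 * padded[1:-1] + padded[:-2])

def trapezoid_mass(v):
    # trapezoidal quadrature weights; exactly conserved by the scheme above
    return v[0] / 2 + v[1:-1].sum() + v[-1] / 2

x = np.linspace(0.0, 1.0, 51)
u = np.exp(-50.0 * (x - 0.3) ** 2)   # initial bump well inside the interval
mass0 = trapezoid_mass(u)
for _ in range(2000):                # r = dt/dx^2 = 0.25 keeps the scheme stable
    u = heat_step_neumann(u, r=0.25)
mass1 = trapezoid_mass(u)
```

With the mirror reflection, mass is conserved up to rounding and the profile relaxes towards its mean, the qualitative behavior of the Neumann semigroup; the constant extension mentioned above would instead correspond to a first-order accurate boundary treatment.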
Dirichlet Boundary Conditions
Next we treat the model problem of an elliptic operator on a bounded set, the Laplacian with Dirichlet boundary conditions. Typically, all results about elliptic operators are much simpler for this special case. Surprisingly, for the aim of this article there arise completely different problems than for Robin and Neumann boundary conditions. This is the reason why we consider it worthwhile to treat this operator in detail.
Formally, the boundary conditions (2) become the Dirichlet boundary conditions u = 0 on ∂Ω in the limit β → ∞. This observation can be made precise, cf. [13, Proposition 3.5.3]. As we want to prove an analogue of (4) for T_D(t), we have to define an appropriate extension operator E_∞ for β = ∞. Taking the limit in (9), we arrive at
E ∞ : C 0 (Ω) → C 0 (R N ), E ∞ u := u on Ω, 0 on Ω C .
Note that we had to replace C(Ω) by C 0 (Ω) as we require E ∞ u to be continuous. Unfortunately, we cannot simply replace E β by E ∞ in formula (4) because the iteration scheme does not remain in C 0 (Ω), hence leaving the domain of E ∞ . However, the analogue formula is well-defined (and true) in the L 2 -context. To see this, note that L 2 (Ω) is a closed subspace of L 2 (R N ) if we consider its functions to be extended by zero. Then the identity mapping takes the role of E ∞ , and the restriction becomes multiplication with 1 Ω . Thus, the analogue of formula (4) for Dirichlet boundary conditions reads
T_{D,2}(t)u = lim_{n→∞} (1_Ω G₂(t/n))^n u  in L²(Ω) for every u ∈ L²(Ω),  (11)
where T D,2 denotes the Dirichlet semigroup on L 2 (Ω) generated by the Laplacian on L 2 (Ω) with domain H 1 0 (Ω) ∩ H 2 (Ω). Indeed, formula (11) remains true even if Ω has merely Lipschitz regular boundary, cf. [8].
It is interesting to note that (11) cannot be proved using Chernoff's product formula in the way we did in Section 4. For this, a dense subspace of H 1 0 (Ω)∩H 2 (Ω) would have to be contained in H 2 (R N ), where both spaces carry the graph norm of the Laplacian. But then, continuity asserts H 1 0 (Ω) ∩ H 2 (Ω) ⊂ H 2 (R N ). However, this cannot be true. In fact, a function in C 0 (Ω) ∩ C ∞ (Ω) ⊂ H 1 0 (Ω) ∩ H 2 (Ω) whose normal derivative does not vanish is not an element of H 2 (R N ).
Despite those problems, it is possible to prove a similar result in the same spirit as in Section 4 even in C₀(Ω). For this, we need to replace 1_Ω by a sequence of smooth interior cut-off functions. But we have to assure that they exhaust Ω sufficiently fast compared to the decay of functions in a core for ∆_D. So we start with an investigation of that decay. The kernel k(x, y) of (I − ∆_D)^{−m} from the proof of Lemma 17 vanishes for x ∈ ∂Ω. Using compactness of Ω and ∂Ω we deduce that for any ε > 0 there exists a neighborhood S_ε of ∂Ω such that x ∈ S_ε implies k(x, y) ≤ ε for all y ∈ Ω. Define U_t := S_ε, where ε := t²/|Ω|. Now fix u ∈ D(∆_D^m) and define v := (I − ∆_D)^m u ∈ C₀(Ω). For x ∈ Ω ∩ U_t, i.e., x ∈ Ω ∩ S_ε, we obtain

|u(x)| ≤ ((I − ∆_D)^{−m}|v|)(x) = ∫_Ω k(x, y)|v(y)| dy ≤ ε|Ω|‖v‖_∞ = t²‖(I − ∆_D)^m u‖_∞.
This concludes the proof.
We have already explained why we cannot use E ∞ as extension operator. Instead, we choose
E_D : C₀(Ω) → C₀(R^N),  E_D u := u on Ω, −̺ũ on Ω^C,

similarly to (9). Here ̺ denotes a cut-off function that equals 1 near ∂Ω. Using the same ideas as in Section 3 it can be shown that E_D is a contractive extension operator that maps D(∆_D) ∩ C^∞(Ω) into D(∆₀). In fact, the main difference to Section 3 is that we know ∆u ∈ C₀(Ω) for u ∈ D(∆_D), which makes it easy to check the continuity of ∆(E_D u), significantly shortening the chain of arguments. Now let m be as in the above lemma, and choose a family (U_t)_{t>0} as in the lemma. For every t > 0 we fix a suitable cut-off function χ_t ∈ C₀(Ω) satisfying 0 ≤ χ_t ≤ 1 and χ_t(x) = 1 if x ∈ Ω \ U_t. Moreover, define χ₀ := 1_Ω. To simplify notation, we use the multiplication operator χ_t as an operator from C₀(R^N) to C₀(Ω) and χ₀ as the restriction from C₀(R^N) to C(Ω), whenever they are applied to functions in C₀(R^N).
We remark that in view of the kernel of T D (t) (t > 0) being strictly positive in the interior of Ω due to the strong maximum principle, it can be seen that for every compact set K ⊂ Ω there exists t 0 > 0 such that U t and K are disjoint whenever t < t 0 , implying that χ t → 1 Ω pointwise as t → 0. In this sense, the next result is another flavor of formula (11).
Theorem 18. Let m ∈ N and (χ t ) t≥0 be as above. Then
T_D(t)u = lim_{n→∞} (χ_{t/n} G₀(t/n) E_D)^n u  for every u ∈ C₀(Ω), uniformly on [0, T] for every T > 0.
Proof. We apply Theorem 13 to the operators
V (t) : C 0 (Ω) → C 0 (Ω), u → χ t G 0 (t)E D u.
The properties V (0) = I and V (t) n ≤ 1 for every t ≥ 0 and n ∈ N are obvious from the properties of E D and the Gaussian semigroup. Let D := D(∆ m D ) ∩ C ∞ (Ω), which is a core for ∆ D . This choice makes the density conditions automatic once we show that the limit operator is ∆ D .
It only remains to prove the convergence to ∆u on D. For this, let u ∈ D. In particular u ∈ D(∆ D ), thus ∆u ∈ C 0 (Ω). Note that
‖(V(t)u − u)/t − ∆u‖_∞ = ‖(χ_t G₀(t)E_D u − u)/t − ∆u‖_∞
 ≤ ‖χ_t((G₀(t)E_D u − E_D u)/t − ∆u)‖_∞ + ‖(χ_t E_D u − u)/t‖_∞ + ‖χ_t ∆u − ∆u‖_∞.
We estimate the three summands separately. After estimating χ_t by 1 in the first expression, convergence to zero follows from E_D u ∈ D(∆₀) and the fact that ∆₀(E_D u) = ∆u on Ω. The third summand can be estimated by 2 sup_{x∈U_t} |∆u(x)|, using that χ_t = 1 on Ω \ U_t. But since we assumed that U_t leaves any compact set K ⊂ Ω for small t, this expression becomes small as t → 0 because ∆u ∈ C₀(Ω). The second summand can be estimated with the help of Lemma 17. We obtain
‖(χ_t E_D u − u)/t‖_∞ = (1/t)‖χ_t u − u‖_∞ ≤ (2/t) sup_{x∈U_t} |u(x)| ≤ 2t‖(I − ∆_D)^m u‖_∞ → 0
as t → 0. Together, these three estimates show the convergence of the difference quotient to ∆u as t tends to zero. We have checked the assumptions of Chernoff's product formula, thus proving the claim of the theorem.
Conclusion
It is a direct consequence of (4) that T R (t) is a positive semigroup. Because the operators on the right are L ∞ -contractive, it is also clear that T R (t) is L ∞contractive, thus submarkovian. In the same way other properties of the limiting semigroup can be deduced by such an approximation formula, as long as they are preserved when taking limits in the strong operator topology. To obtain further properties, it might help to modify the formula a little bit.
So far, we have only considered the Gaussian semigroup as underlying tool. However, it can be seen from the proofs that actually we used only few properties of the Gaussian semigroup. More precisely, we only used that G 0 (t) is a contraction on C 0 (R N ) and that any continuous function u such that the support of u is contained in a given neighborhood of Ω and ∆u is continuous on R N is in the domain of the generator of G 0 (t). Thus we could replace G 0 (t) with other semigroups, for example with the semigroup generated by the Laplacian with Dirichlet or Neumann boundary conditions, on a larger bounded set Ω ′ ⊂ R N . Then (4) becomes an approximation formula where the approximating operators are compact. Note, however, that this does not imply that T R (t) is compact, as the limit is only in the strong operator topology.
Similarly, we can try to approximate T_R(t) only in terms of operators on C(Ω), i.e., without any extension to R^N, to obtain an intrinsic approximation. The most natural Trotter-like candidate of this kind would be S_n(t) := (T_D(αt/n) T_N(βt/n))^n,
where α and β are positive numbers such that α + β = 1 and T D (t) and T N (t) denote the Dirichlet and Neumann semigroups on C(Ω), respectively. It is known, however, that lim S n (t) = T D (t) in the strong operator topology on L 2 (Ω), whenever α > 0, see [7].
Recall that regarding the extension operator we used only two of its properties in Section 4, namely contractivity and some regularity of the extended function. By definition of the extension operator, contractivity came for free. This is due to the rather special definition of E β and the choice of spaces and is a very convenient prerequisite for the application of Theorem 13, although not a necessary one.
Assume that we replace E β by some other, non-contractive extension operator E. This is a natural consideration because most extension operators are noncontractive. In fact, it is easy to see that no operator extending C 1 (Ω) to C 1 (R N ) can be contractive. This also shows that it is a very special property for an extension operator to be contractive and to preserve the regularity of functions in D(∆ R ).
For such an extension operator E, it is considerably more difficult to check whether RG( t n )E n is uniformly bounded in operator norm with respect to n.
Because it is hard to control such iterated applications of the Gaussian semigroup, one could try to estimate each factor separately. Then one has to show that ‖RG₀(t)E‖ ≤ 1 + ct for some c > 0, leading to the upper bound e^{ct}. The short-time diffusion through the boundary, however, is of order O(√t), see [9]. This is why in general only estimates of the kind ‖RG₀(t)E‖ = 1 + O(√t) can be obtained.

Almost the same reasoning applies if C(Ω) is replaced by an L^p-space, for example by L¹(Ω). As Ω is bounded, uniform convergence already implies convergence in L¹(Ω), hence T_R(t)u = lim_{n→∞} (RG₀(t/n)E_β)^n u in L¹(Ω) for every u ∈ C(Ω) by what we have already shown. If we want to extend this result to u ∈ L¹(Ω), it suffices to show that the approximating operators (RG₀(t/n)E_β)^n remain bounded in the norm of operators on L¹(Ω). Here again, difficulties arise which are similar to those mentioned in the preceding paragraph, because no non-trivial extension operator from L¹(Ω) to L¹(R^N) is contractive. This shows that for our applications the space C(Ω) has significant advantages.

A related question is whether (4) remains true if the assumption β ≥ 0 is dropped. We mention that it can be seen that for any β ∈ L^∞(∂Ω) the Laplacian with Robin boundary conditions is the generator of a semigroup on L²(Ω), and thus this question makes sense. But if we define E_β as in Section 3, we do not obtain a contraction, even in C(Ω), if β(z) < 0 for a point z ∈ ∂Ω, causing the same problems again. Moreover, it is clear that the uniform bound on ‖(RG₀(t)E_β)^n‖ which is needed for Theorem 13 cannot be fulfilled, since the candidate limit semigroup T_R(t) will not be bounded. But the latter is merely a problem of rescaling, compare [4, Corollary III.5.3].
It should be possible to extend the results to smooth unbounded open sets Ω without difficulties because most arguments are local. However, the other calculations become even more technical. This is why we have restricted ourselves to bounded domains.
On the other hand, choosing a different (contractive) extension operator will usually change the situation completely. For example, Theorem 13 cannot be applied for the constant extension as in Remark 16, reflecting the fact that a worse numerical approximation of the normal derivative leads to worse convergence behavior. But that extension operator can be defined even for convex domains without any smoothness assumptions, which might provide an alternative approximation scheme for less smooth domains. It is easy to come up with various other extension operators when trying to find an approximation formula such as (4) for (not necessarily convex) sets with non-smooth boundary. This is ongoing work and might be the topic of a future publication.
every t ≥ 0. Note that we have incorporated the Robin boundary conditions

∂u/∂ν + βu = 0 on ∂Ω  (2)

into the domain

D(∆_R) := { u ∈ H¹(Ω) ∩ C(Ω) : ∆u ∈ C(Ω), ∫_Ω ∇u · ∇ϕ dx + ∫_Ω (∆u)ϕ dx + ∫_{∂Ω} uϕβ dσ = 0 for every ϕ ∈ H¹(Ω) },  ∆_R u := ∆u,

of the Laplacian on Ω subject to Robin boundary conditions.
Here H_{f*} = (∂²f*/(∂z_i ∂z_j))_{i,j=1,…,N} denotes the Hessian matrix of f*. By using Lemma 8 and summing up, we arrive at the desired formulae for x = x₀.
Theorem 13 ([4, Theorem III.5.2]). Let X be a Banach space. Consider a function V : [0, ∞) → L(X) satisfying V(0) = I and ‖V(t)^m‖ ≤ M for all t ≥ 0, m ∈ N and some M ≥ 1. Assume that Ax := lim_{h→0} (V(h)x − x)/h exists for all x ∈ D ⊂ X, where D and (λ₀ − A)D are dense subspaces of X for some λ₀ > 0.
Laplacian with Dirichlet boundary conditions, defined by D(∆_D) := {u ∈ C₀(Ω) : ∆u ∈ C₀(Ω)}, ∆_D u := ∆u, generates a positive, contractive C₀-semigroup T_D(t) on C₀(Ω) [1, Theorem 6.1.8].
Lemma 17. Given a Dirichlet regular bounded set Ω, there exists m ∈ N having the following property. Given t > 0, there exists a neighborhood U_t of ∂Ω such that the estimate |u(x)| ≤ t²‖(I − ∆_D)^m u‖_∞ holds for every x ∈ Ω ∩ U_t and every u ∈ D(∆_D^m).
Proof. It is well known that T_D(s) has a kernel representation with a continuous non-negative symmetric kernel k_s(x, y) which vanishes on ∂Ω and is dominated by the Gaussian kernel [3, Section 3.4]. Let m > N/2 be fixed. The integral formula for powers of the resolvent [4, Corollary 2.1.11] shows that (I − ∆_D)^{−m} is a positive kernel operator with the continuous non-negative symmetric kernel k(x, y) = (1/(m−1)!) ∫₀^∞ s^{m−1} e^{−s} k_s(x, y) ds.
References

[1] W. Arendt, C. Batty, M. Hieber, and F. Neubrander, Vector-Valued Laplace Transforms and Cauchy Problems, Birkhäuser, 2001.
[2] R. Dautray and J.-L. Lions, Mathematical Analysis and Numerical Methods for Science and Technology 1: Physical Origins and Classical Methods, Springer-Verlag, Berlin, 1990.
[3] E. B. Davies, Heat Kernels and Spectral Theory, Cambridge Tracts in Mathematics, vol. 92, 1989.
[4] K.-J. Engel and R. Nagel, One-Parameter Semigroups for Linear Evolution Equations, Springer, 2000.
[5] L. C. Evans, Partial Differential Equations, American Mathematical Society, 1998.
[6] P. Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman, Boston, 1985.
[7] T. Kato, Trotter's product formula for an arbitrary pair of self-adjoint contraction semigroups, in: Topics in Functional Analysis, Adv. Math. Suppl. Stud. 3 (1978), 185-195.
[8] M. Matolcsi and R. Shvidkoy, Trotter's product formula for projections, Archiv der Mathematik 81 (2003), no. 3, 309-317.
[9] M. Miranda Jr., D. Pallara, F. Paronetto, and M. Preunkert, Short-time heat flow and functions of bounded variation in R^N, Annales de la Faculté des Sciences de Toulouse, Mathématiques 16 (2007), no. 1, 125.
[10] E. M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, 1970.
[11] J. C. Strikwerda, Finite Difference Schemes and Partial Differential Equations, Society for Industrial and Applied Mathematics, 2004.
[12] H. F. Trotter, On the product of semi-groups of operators, Proceedings of the American Mathematical Society 10 (1959), no. 4, 545-551.
[13] M. Warma, The Laplacian with General Robin Boundary Conditions, Ph.D. thesis, University of Ulm, 2002.
[14] M. Warma, The Robin and Wentzell-Robin Laplacians on Lipschitz Domains, Semigroup Forum 73 (2006), no. 1, 10-30.
Rings with Polynomial Identity and Centrally Essential Rings

V. T. Markov ([email protected]) and A. A. Tuganbaev ([email protected])
Lomonosov Moscow State University; National Research University "MPEI"

17 Feb 2019. arXiv:1902.06287; DOI: 10.1007/s13366-019-00447-w
https://arxiv.org/pdf/1902.06287v1.pdf

Abstract. It is proved that for any prime integer p and each field F of characteristic p, there exists a centrally essential F-algebra which is not a PI-ring and is not algebraic over its center.

V. T. Markov is supported by the Russian Foundation for Basic Research, project 17-01-00895-A. A. A. Tuganbaev is supported by the Russian Scientific Foundation, project 16-11-10013.

Key words: centrally essential ring, PI ring, ring algebraic over its center, ring integral over its center.
MSC2010: 16R99; 16D10

¹ Centrally essential rings are studied, for example, in [1].
Introduction
All considered rings are associative and contain the non-zero identity element.
1.1. Centrally essential rings. A ring R with center C = C(R) is said to be centrally essential 1 if the module R C is an essential extension of the module C C , i.e., for any non-zero element r ∈ R, there exist non-zero central elements c, d ∈ C with rc = d.
It is clear that all commutative rings are centrally essential. If Z 2 is the field of order 2 and Q 8 is the quaternion group of order 8, then the group ring Z 2 [Q 8 ] is an example of a non-commutative, centrally essential, finite ring [1].
1.2. Rings with polynomial identity. Let X be a countable set and F = Z⟨X⟩ a free ring with the set of free generators X. A classical identity in the sense of Rowen is an identity with integral coefficients, i.e., an element of the free ring F which is contained in the kernel of every homomorphism from the ring F to the ring R. A classical identity is called a polynomial identity if it is multilinear and has 1 as one of its coefficients; a ring with a polynomial identity is called a PI ring².
The main result of this paper is Theorem 1.3.
1.3. Theorem. For any prime integer p and each field F of characteristic p, there exists a centrally essential F-algebra which is not a PI ring and is not algebraic over its center.
1.4. Rings which are algebraic or integral over their centers. Let R be an arbitrary ring with center C. An element r ∈ R is said to be algebraic (resp., integral) over the center if, for some n ∈ N, there exist elements c₀, …, c_n ∈ C such that c_n is a non-zero-divisor in R (resp., an invertible element in R) and

(1)  c_n r^n + c_{n−1} r^{n−1} + … + c₁ r + c₀ = 0.
We denote by n 1 (r) (resp., n 2 (r)) the least integer n which satisfies this condition. A ring R is said to be algebraic (resp., integral) over its center if any element r ∈ R is algebraic (resp., integral) over its center. We set m 1 (R) = max{n 1 (r) | r ∈ R} and m 2 (R) = max{n 2 (r) | r ∈ R}; it is possible that m 1 (R) = ∞, m 2 (R) = ∞.
Finite rings and finite-dimensional algebras over fields are examples of rings R such that m 1 (R) = m 2 (R) < ∞.
We give an example of a ring which is algebraic but not integral over its center. Let R be the ring of upper triangular 2 × 2 matrices ( a b ; 0 z ) with a, b ∈ Q and z ∈ Z.
It is clear that the center of the ring R is of the form ZE, where E is the identity matrix.
We note that the ring R is not integral over its center. Indeed, since Q is a homomorphic image of the ring R, the integrity of R over its center would imply that Q is integral over Z which is false.
On the other hand, if r ∈ R, then nr ∈ M 2 (Z) for some n ∈ N, and the matrix ring M 2 (Z) is a finitely generated Z-module and hence it is integral over Z.
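To make the distinction concrete, here is a worked instance (our addition, not in the original): the element r = diag(1/2, 0) of R is algebraic but not integral over the center ZE. Indeed,

```latex
r=\begin{pmatrix}1/2&0\\0&0\end{pmatrix}\in R,\qquad
2r^{2}-r=\begin{pmatrix}2\cdot\tfrac14-\tfrac12&0\\0&0\end{pmatrix}
=\begin{pmatrix}0&0\\0&0\end{pmatrix},
```

so r satisfies a relation of the form (1) with leading coefficient c₂ = 2E, a non-zero-divisor in R which is not invertible; on the other hand, a monic relation r^n + c_{n−1}r^{n−1} + … + c₀ = 0 with c_i ∈ ZE would make 1/2 integral over Z, which is impossible.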
We note that the classes of centrally essential rings, PI rings, and rings which are algebraic (or integral) over their centers properly contain the class of all commutative rings.
The Proof of Theorem 1.3
The proof of Theorem 1.3 uses the following two familiar results.
2.1. Theorem. Let F be a field of characteristic p > 0. If G is a finite p-group of nilpotence class 2, then the group algebra F G is a centrally essential ring.
We fix a prime integer p and a field F of characteristic p. We denote by Z(G) the center of the group G.
2.3. Lemma. There exists a series of finite p-groups G(n), n ∈ N, such that 1) for any n ∈ N, the group algebra F G(n) is a centrally essential ring; 2) for any d ∈ N there exists n = n(d) such that the ring F G(n) does not satisfy a polynomial identity of degree d.
Proof. For any positive integer n, we construct a group G = G(n) as follows. Let A = ⟨a⟩, B = ⟨b⟩, C = ⟨c⟩ be three cyclic groups such that |A| = |B| = |C| = p^n. We consider the automorphism α ∈ Aut(B × C) defined on the generators by the relations α(b) = bc and α(c) = c. It is clear that α^{p^n} is the identity automorphism; therefore, we have a homomorphism ϕ : A → Aut(B × C) with ϕ(a) = α. This homomorphism corresponds to the semidirect product G = (B × C) ⋊ A, which can be considered as the group generated by the elements a, b, c subject to the relations a^{p^n} = b^{p^n} = c^{p^n} = 1, bc = cb, ac = ca and aba^{−1} = bc. It follows from these relations that c ∈ Z(G). It is directly verified that for any integers x, y, z, x′, y′, z′, we have
(2)  [b^y c^z a^x, b^{y′} c^{z′} a^{x′}] = b^y a^x b^{y′} a^{x′} a^{−x} b^{−y} a^{−x′} b^{−y′} = b^y (a^x b^{y′} a^{−x})(a^{x′} b^{−y} a^{−x′}) b^{−y′} = b^y (b^{y′} c^{xy′})(b^{−y} c^{−yx′}) b^{−y′} = c^{xy′ − yx′}.
Thus, Z(G) = G ′ = c and G is a group of nilpotence class 2, so the first assertion follows from Theorem 2.1.
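As an independent sanity check (ours, not part of the original proof), the group G(n) can be modeled by upper unitriangular 3 × 3 matrices over Z/p^nZ, with a = I + E₁₂, b = I + E₂₃ and c = I + E₁₃; in this model b^y c^z a^x = I + xE₁₂ + yE₂₃ + zE₁₃, and the defining relation aba⁻¹ = bc together with the commutator identity (2) can be tested numerically:

```python
import numpy as np

p, n = 3, 2
q = p ** n                     # matrix entries live in Z/qZ

def mat(x, y, z):
    # the element b^y c^z a^x in the unitriangular model
    return np.array([[1, x, z], [0, 1, y], [0, 0, 1]], dtype=np.int64) % q

def mul(A, B):
    return (A @ B) % q

def inv(M):
    # inverse of a unitriangular matrix I + N: (I + N)^(-1) = I - N + N^2,
    # since the strictly upper triangular part N satisfies N^3 = 0
    I = np.eye(3, dtype=np.int64)
    N = M - I
    return (I - N + N @ N) % q

def comm(A, B):
    # group commutator A B A^(-1) B^(-1)
    return mul(mul(A, B), mul(inv(A), inv(B)))

a, b, c = mat(1, 0, 0), mat(0, 1, 0), mat(0, 0, 1)

# defining relation a b a^(-1) = b c
assert np.array_equal(mul(mul(a, b), inv(a)), mul(b, c))

# identity (2): [b^y c^z a^x, b^{y'} c^{z'} a^{x'}] = c^{x y' - y x'}
rng = np.random.default_rng(0)
ok = all(
    np.array_equal(comm(mat(x, y, z), mat(xp, yp, zp)),
                   mat(0, 0, (x * yp - y * xp) % q))
    for x, y, z, xp, yp, zp in rng.integers(0, q, size=(200, 6))
)
```

This confirms the computation (2) on a random sample of elements for p = 3, n = 2; the model also has the right exponent, since (I + E₁₂)^k = I + kE₁₂.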
Let H be an arbitrary subgroup of the group G. Then we claim that [G : H] · |H′| ≥ p^n; this is the inequality (3), and the reduction to the case H ⊇ Z(G) together with the integers k, l, m satisfying [Ḡ : H̄] = p^{m+k} is carried out below. If m + k ≥ n, then the inequality (3) holds. If m + k < n, then, by (2) and the property that the elements a^{p^k} b^l and b^{p^m} are contained in the subgroup H, we have [a^{p^k} b^l, b^{p^m}] = c^{p^{m+k}} ∈ H′; therefore, |H′| ≥ |⟨c^{p^{m+k}}⟩| = p^{n−m−k} and we have

[G : H] · |H′| ≥ p^{m+k} · p^{n−m−k} = p^n,
i.e., (3) also holds in this case.
Hence the second assertion follows from Theorem 2.2. Now we finish the proof of Theorem 1.3. It is sufficient to take the direct product of the group algebras F G(n), n ∈ N, satisfying the conditions of Lemma 2.3, as the ring R. We note that the direct product of any set of rings is centrally essential if and only if every factor is a centrally essential ring. Therefore, the ring R is centrally essential. However, if the algebra R satisfies some polynomial identity of degree d, then every ring F G(n) satisfies this polynomial identity, contrary to the second assertion of Lemma 2.3.
It remains to prove that the constructed ring is not algebraic over its center. We note that for any m ∈ N, there exists an integer n_m such that F G(n_m) does not satisfy any polynomial identity of degree d(m); moreover, we can choose the integers n₁, n₂, … to form an ascending sequence. By the definition of d(m), there exists an element r′_m ∈ F G(n_m) which does not satisfy any relation of the form (1) of degree m. Now we consider the element r = (r_n)_{n∈N} ∈ ∏_{n=1}^∞ F G(n),
where r_n ∈ F G(n), r_n = r′_m if n = n_m for some m ∈ N, and r_n = 0 otherwise. It is clear that if r satisfies some relation of the form (1) of degree m, then every element r_n satisfies a relation of the same degree; this is impossible by the choice of the element r′_m. 2.4. Remark. The reviewer suggested another example of a centrally essential ring which is not a PI ring, namely the group algebra F G where G is the direct sum³ of all the groups G(n) constructed in the proof of Lemma 2.3. It is easy to check that this algebra is a centrally essential ring. It is also clear that it is not a PI ring, since every ring F G(n) is its subring (as well as its homomorphic image). But this ring is evidently integral over its center, since it is locally finite over F.
2.5. Acknowledgment. The authors are sincerely grateful to the reviewer for valuable comments and suggestions.
2.2. Theorem (Passman, 1977; [2, Theorem 5.3.9(ii)]). Let F be a field of characteristic p > 0. If the group algebra F G satisfies a polynomial identity of degree d, then there exists a subgroup H of the group G such that [G : H] · |H′| < g(d), where g(d) is some fixed function of the integer d.
(3)  [G : H] · |H′| ≥ p^n.
We note that [G : HZ(G)] ≤ [G : H] and (HZ(G))′ = H′; consequently, it is sufficient to prove the inequality (3) in the case where H ⊇ Z(G). We set Ḡ = G/Z(G) and denote by ā, b̄, H̄ the images of a, b, H under the canonical homomorphism of G onto the group Ḡ. We also set B̄ = ⟨b̄⟩. We have [G : H] = [Ḡ : H̄]. It follows from the standard isomorphism (H̄B̄)/B̄ ≅ H̄/(H̄ ∩ B̄) that H̄/(H̄ ∩ B̄) is a cyclic group which is isomorphic to some subgroup of the group ⟨ā⟩. The group H̄ ∩ B̄ is also cyclic; consequently, the group H̄ is generated by two elements of the form b̄^{p^m} and ā^{p^k} b̄^l for some non-negative integers k, l, m. Thus, [Ḡ : H̄] = [Ḡ : H̄B̄] · [H̄B̄ : H̄] = [⟨ā⟩ : ⟨ā^{p^k}⟩] · [⟨b̄⟩ : ⟨b̄^{p^m}⟩] = p^k · p^m = p^{m+k}.
It is well known (e.g., see [3, Proposition 1.1.37] or [4, Lemma 5.2.6]) that if m₁(R) = m < ∞, then R satisfies a polynomial identity of degree d(m) = m(m + 1)² + m.
See [3, Definitions 1.1.12, 1.1.17]
The direct sum of the groups G(n) is the subgroup of ∏_{n∈N} G(n) consisting of those elements (g_n)_{n∈N} for which only finitely many of the g_n are non-identity.
References

[1] Markov, V. T.; Tuganbaev, A. A. Centrally essential group algebras. J. Algebra 518 (2018), 109-118.
[2] Passman, D. S. The Algebraic Structure of Group Rings. John Wiley and Sons, New York et al., 1977.
[3] Rowen, L. H. Polynomial Identities in Ring Theory. Academic Press, New York, 1980.
[4] Zhevlakov, K. A.; Slin'ko, A. M.; Shestakov, I. P.; Shirshov, A. I. Rings That Are Nearly Associative. Academic Press, New York-London, 1982. xi+371 pp.
[
"Phase Spaces for asymptotically de Sitter Cosmologies",
"Phase Spaces for asymptotically de Sitter Cosmologies"
] | [
"William R Kelly *[email protected]†[email protected] \nUniversity of California at Santa Barbara\n93106Santa BarbaraCAUSA\n",
"Donald Marolf \nUniversity of California at Santa Barbara\n93106Santa BarbaraCAUSA\n"
] | [
"University of California at Santa Barbara\n93106Santa BarbaraCAUSA",
"University of California at Santa Barbara\n93106Santa BarbaraCAUSA"
We construct two types of phase spaces for asymptotically de Sitter Einstein-Hilbert gravity in each spacetime dimension d ≥ 3. One type contains solutions asymptotic to the expanding spatially-flat (k = 0) cosmological patch of de Sitter space while the other is asymptotic to the expanding hyperbolic (k = −1) patch. Each phase space has a non-trivial asymptotic symmetry group (ASG) which includes the isometry group of the corresponding de Sitter patch. For d = 3 and k = −1 our ASG also contains additional generators and leads to a Virasoro algebra with vanishing central charge. Furthermore, we identify an interesting algebra (even larger than the ASG) containing two Virasoro algebras related by a reality condition and having imaginary central charges ±3iℓ/2G. On the appropriate phase spaces, our charges agree with those obtained previously using dS/CFT methods. Thus we provide a sense in which (some of) the dS/CFT charges act on a well-defined phase space. Along the way we show that, despite the lack of local degrees of freedom, the d = 3, k = −1 phase space is non-trivial even in pure Λ > 0 Einstein-Hilbert gravity due to the existence of a family of 'wormhole' solutions labeled by their angular momentum, a mass-like parameter θ_0, the topology of future infinity (I^+), and perhaps additional internal moduli. | 10.1088/0264-9381/29/20/205013 | [
"https://arxiv.org/pdf/1202.5347v2.pdf"
] | 119,273,959 | 1202.5347 | 90d731bbeedf5829a819602e29a86133604981d9 |
Phase Spaces for asymptotically de Sitter Cosmologies
23 Feb 2012
William R Kelly *[email protected]†[email protected]
University of California at Santa Barbara
93106Santa BarbaraCAUSA
Donald Marolf
University of California at Santa Barbara
93106Santa BarbaraCAUSA
Phase Spaces for asymptotically de Sitter Cosmologies
23 Feb 2012 (Dated: February 27, 2012)
We construct two types of phase spaces for asymptotically de Sitter Einstein-Hilbert gravity in each spacetime dimension d ≥ 3. One type contains solutions asymptotic to the expanding spatially-flat (k = 0) cosmological patch of de Sitter space while the other is asymptotic to the expanding hyperbolic (k = −1) patch. Each phase space has a non-trivial asymptotic symmetry group (ASG) which includes the isometry group of the corresponding de Sitter patch. For d = 3 and k = −1 our ASG also contains additional generators and leads to a Virasoro algebra with vanishing central charge. Furthermore, we identify an interesting algebra (even larger than the ASG) containing two Virasoro algebras related by a reality condition and having imaginary central charges ±3iℓ/2G. On the appropriate phase spaces, our charges agree with those obtained previously using dS/CFT methods. Thus we provide a sense in which (some of) the dS/CFT charges act on a well-defined phase space. Along the way we show that, despite the lack of local degrees of freedom, the d = 3, k = −1 phase space is non-trivial even in pure Λ > 0 Einstein-Hilbert gravity due to the existence of a family of 'wormhole' solutions labeled by their angular momentum, a mass-like parameter θ_0, the topology of future infinity (I^+), and perhaps additional internal moduli.
I. INTRODUCTION
Spacetimes that approximate de Sitter space (dS) form the basis of inflationary early universe cosmology and also give a rough description of our current universe. One expects this description to further improve in the future as the cosmological expansion dilutes the various forms of matter, and that in tens of Gyrs it will become quite good indeed. Yet certain classic issues in gravitational physics, such as the construction of phase spaces and conserved charges, are less well developed in the de Sitter context than for asymptotically flat or asymptotically AdS spacetimes; see e.g. [1][2][3][4][5][6]. While there have been many discussions of de Sitter charges (see [7][8][9][10][11][12][13]) over a broad span of time, most of these [7][8][9][10][13] do not construct a phase space in which the charges generate the associated diffeomorphisms while the remainder [11,12] define phase spaces in which many of the expected charges diverge.
This hole in the literature is presumably due, at least in part, to the fact that global de Sitter space admits a compact Cauchy surface of topology S d−1 . It is thus natural to define a phase space (which we call the k = +1 phase space, Γ(dS k=+1 )) which contains all solutions with an S d−1 Cauchy surface. Since S d−1 is compact, there is no need to impose further boundary conditions. The constraints then imply that all gravitational charges vanish identically. All diffeomorphisms are gauge symmetries and the asymptotic symmetry group is trivial.
On the other hand, it is natural in cosmological contexts to consider pieces of de Sitter space which may be foliated by either flat (k = 0) or hyperbolic (k = −1) Cauchy surfaces.
We call these patches dS k=0 and dS k=−1 respectively, see Figs. 1, 2. These Cauchy surfaces are non-compact, and boundary conditions are required in the resulting asymptotic regions.
The purpose of this paper is to construct associated phase spaces (for both k = 0, −1) of asymptotically de Sitter solutions in d ≥ 3 spacetime dimensions for which the expected charges are finite and conserved. As there are claims [14][15][16][17][18][19] in the literature that the so-called 'dilatation symmetry' of the k = 0 patch is broken at the quantum level (though see [20][21][22][23][24][25][26][27][28]), it is particularly important to verify that this is indeed a symmetry of an appropriate classical gravitational phase space for k = 0.
In most cases below the resulting asymptotic symmetry group (ASG) is the isometry group of the associated (flat-or hyperbolic-sliced) patch of dS, though for k = −1 and d = 3
we find that the obvious rotational symmetry is enlarged to a (single) Virasoro algebra in the ASG. The structure is somewhat similar to that recently seen in the Kerr/CFT context [29,30], though in our present case the central charge vanishes due to a reflection symmetry in the angular direction. We also identify an interesting algebra somewhat larger than the ASG which contains two Virasoro algebras related by a reality condition and having imaginary central charges (in agreement with [31][32][33][34]). We note, however, that the extra generators (outside the ASG) have incomplete flows on our classical phase space. Thus real classical charges of this sort will not lead to self-adjoint operators at the quantum level.
We construct the phase spaces Γ(dS_{k=0}) and Γ(dS_{k=−1}) associated with dS_{k=0} and dS_{k=−1} below in sections II and III. In each case, we find that our final expressions agree with the relevant charges of [9,10,13] when we impose a certain gauge condition in the asymptotic region¹. Though the charges of [9,10,13] differ from ours (and in fact diverge) in a general gauge, this nevertheless provides a sense in which those charges generate canonical transformations on a well-defined phase space. We close with some final discussion in section IV which in particular compares our phase space with those of [11,12].

¹ We expect the same to be true of [8]. However, the fact that [8] used a Chern-Simons formulation makes direct comparison non-trivial; we will not attempt it here. We also make no direct comparison with ref. [7], which worked perturbatively around dS, though we again expect agreement in the appropriate regime.
II. THE PHASE SPACE Γ(dS k=0 )
Our first phase space will consist of spacetimes asymptotic to the expanding spatially-flat (k = 0) patch of de Sitter space (see figure 1) in d ≥ 3 spacetime dimensions for which the metric takes the familiar form

ds² = −dt² + e^{2t/ℓ} δ_{ij} dx^i dx^j.  (2.1)
Here δ_{ij} is a Kronecker delta and i, j range over the d − 1 spatial coordinates. We will refer to this patch as dS_{k=0} and the corresponding phase space as Γ(dS_{k=0}). For future reference we note that the symmetries of dS_{k=0} are generated by three types of Killing fields (dilations, translations, and rotations) which take the following forms

Dilations : ξ^a_D = (∂_t)^a − x^a/ℓ,
Translations : ξ^a_{P_i} = (∂_i)^a,
Rotations : ξ^a_{L_ij} = (2x_{[i}∂_{j]})^a.  (2.2)
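As a quick check, the dilation generator in (2.2) is an exact Killing field of the metric (2.1). The following sympy sketch (an illustration only, written for d = 4 with the de Sitter length denoted `ell`) evaluates the coordinate-basis Lie derivative (£_ξ g)_{ab} = ξ^c ∂_c g_{ab} + g_{cb} ∂_a ξ^c + g_{ac} ∂_b ξ^c and confirms that it vanishes:

```python
import sympy as sp

# Flat-slicing de Sitter metric (2.1) for d = 4; ell is the de Sitter length.
t, x, y, z, ell = sp.symbols('t x y z ell', real=True, positive=True)
coords = [t, x, y, z]
a = sp.exp(2*t/ell)
g = sp.diag(-1, a, a, a)

# Dilation Killing field of (2.2): xi = d_t - (x^i/ell) d_i
xi = [sp.Integer(1), -x/ell, -y/ell, -z/ell]

def lie_metric(g, v, coords):
    """Coordinate-basis Lie derivative (L_v g)_{ab}."""
    n = len(coords)
    L = sp.zeros(n, n)
    for A in range(n):
        for B in range(n):
            e = sum(v[c]*sp.diff(g[A, B], coords[c]) for c in range(n))
            e += sum(g[c, B]*sp.diff(v[c], coords[A]) for c in range(n))
            e += sum(g[A, c]*sp.diff(v[c], coords[B]) for c in range(n))
            L[A, B] = sp.simplify(e)
    return L

print(lie_metric(g, xi, coords))  # the zero 4x4 matrix
```

The exponential growth of h_{ij} is exactly compensated by the inward flow −x^i/ℓ, which is why the 'dilation' is a symmetry of the flat cosmological patch.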
Elements of our phase space are globally hyperbolic solutions to the Einstein equation in
d ≥ 3 spacetime dimensions with positive cosmological constant

Λ = (d − 1)(d − 2)/(2ℓ²),  (2.3)
and topology R d . Introducing a time-function t defines a foliation of (t = constant) spacelike slices Σ. Choosing coordinates x i ∈ R d−1 on each slice, the metric may then be written in the form
ds² = −N² dt² + h_{ij} (dx^i + N^i dt)(dx^j + N^j dt).  (2.4)
We define Γ(dS_{k=0}) to contain such spacetimes for which, on any t = constant slice Σ, the induced metric h_{ij}, the canonical momentum π̃^{ij} = √h π^{ij}, the lapse N, and the shift N^i satisfy the boundary conditions²

Δh_{ij} = h^{(d−1)}_{ij} + O(r^{−(d−1+ε)}),
Δπ̃^{ij} = π̃^{ij}_{(d−2)} + π̃^{ij}_{(d−1)} + O(r^{−(d−1+ε)}),
N = 1 + N^{(d−2)} + O(r^{−(d−1)}),
N^i = N^i_{(d−3)} + O(r^{−(d−2)}),  (2.5)

at large r = √(δ_{ij} x^i x^j), with

h^{(d−1)}_{ij} = (Any function of Ω)_{ij}/r^{d−1},  (2.6a)

π̃^{ij}_{(d−2)} = (Odd function of Ω)^{ij}/r^{d−2},  π̃^{ij}_{(d−1)} = (Any function of Ω)^{ij}/r^{d−1},  (2.6b)

N^{(d−2)} = (Odd function of Ω)/r^{d−2},  N^i_{(d−3)} = (Even function of Ω)^i/r^{d−3},  (2.6c)
and where
Δh_{ij} = h_{ij} − e^{2t/ℓ} δ_{ij},  Δπ^{ij} = π^{ij} + ((d−2)/ℓ) e^{−2t/ℓ} δ^{ij}.  (2.7)
In order for time evolution to preserve (2.5) the lapse, shift and momentum must satisfy the additional relation
π^{(d−2)}_{ij} + ∂_{(i} N^{(d−3)}_{j)} + (1/ℓ) N^{(d−2)} h̄_{ij} = 0.  (2.8)

This final condition was obtained by writing the equations of motion to leading order, imposing (2.5) and requiring that ḣ^{(d−2)}_{ij} = 0 (no further condition is required to make π̃^{ij}_{(d−2)} odd). In all of the explicit examples we consider below this condition is satisfied trivially.
We also assume that the nth derivative of the O(r^{−(d−1+ε)}) term in (2.5) is O(r^{−(d−1+n+ε)}).
The definitions (2.7) were chosen so that ∆h ij = 0 = ∆π ij for exact de Sitter (Eq. (2.1)).
We also note that (2.5), together with the constraints (2.18), ensures that

Δπ̃^{ij} = (Odd function of Ω)^{ij}/r^{d−2} + O(r^{−(d−1)}),  (2.9)
² A study of the symplectic structure (eq. (2.11) below) indicates that these boundary conditions can be significantly relaxed, presumably allowing radiation that falls off more slowly at large r. However, doing so requires non-trivial use of the equations of motion to make explicit the fact that the charges associated with (2.2) are finite. We have not attempted to complete such an analysis as we see no obvious advantage to weakening the boundary conditions (2.5).
with
Δπ̃^{ij} = √h π^{ij} − √h̄ π̄^{ij}.  (2.10)
Let us now consider two tangent vectors (δ 1 h ij , δ 1π ij ) and (δ 2 h ij , δ 2π ij ) to Γ(dS k=0 ). In order for our phase space to be well defined we must show that the symplectic product of these two tangent vectors is finite and independent of the Cauchy surface on which it is evaluated, i.e. independent of t. Our boundary conditions suffice to guarantee both of these conditions. Equations (2.5) and (2.9) imply convergence of the standard expression
ω(δ₁g, δ₂g) = (1/4κ) ∫_Σ (δ₁h_{ij} δ₂π̃^{ij} − δ₂h_{ij} δ₁π̃^{ij})  (2.11)
for the symplectic product (see e.g. [35]). Furthermore, we will show in section II B below that the (time-dependent) Hamiltonian H(∂_t) (see (2.17)) defined by some N, N^i satisfying (2.5) i) has well-defined variations and ii) generates an evolution that preserves the boundary conditions (2.5) on h_{ij} and π̃^{ij}. This in turn guarantees that ω(δ₁g, δ₂g) is time independent.³
Thus, we conclude that Γ(dS_{k=0}) is well-defined. Below we compute asymptotic symmetries and conserved charges, largely following the approach of [2][3][4].
A. Asymptotic Symmetries
We begin by using the fact that linearized diffeomorphisms generated by any element of our ASG must map (2.1) onto a solution satisfying (2.5). Consider the metric h̄_{ij} induced on a t = (constant) slice of (2.1) (in general overbars will denote quantities associated with (2.1)) and its pullback into the bulk spacetime which we call h̄_{ab}. We also introduce h^a{}_i which is the projector from the spacetime onto Σ. If ξ is in our ASG then
δ_ξ h̄_{ij} ≡ h^a{}_i h^b{}_j £_ξ h̄_{ab} = h^a{}_i h^b{}_j [ξ^c ∇̄_c h̄_{ab} + 2h̄_{c(a} ∇̄_{b)} ξ^c] = 2h^a{}_i h^b{}_j ∇̄_{(a} ξ_{b)},  (2.12)
must vanish as r → ∞ fast enough so that h̄_{ij} + δ_ξ h̄_{ij} satisfies (2.5). So, up to terms which vanish at r → ∞, ξ must satisfy

∂_{(i} ξ_{j)} = (ξ^⊥/ℓ) e^{2t/ℓ} δ_{ij},  (2.13)

where ξ^⊥ and ξ^i denote the parts of ξ^a normal and tangent to Σ,

ξ^a = ξ^⊥ n^a + ξ^i (∂_i)^a.  (2.14)

We now show that our phase space is closed under the action of the expected symmetry group (2.2). To do so, we consider an arbitrary solution (h_{ij}, π̃^{ij}) satisfying (2.5) and show that (h_{ij} + δ_ξ h_{ij}, π̃^{ij} + δ_ξ π̃^{ij}) also satisfies (2.5) where ξ is one of the vector fields (2.2).
First consider a purely spatial vector ξ, i.e. a translation or rotation. From the expressions
£_ξ h_{ij} = ξ^k ∂_k Δh_{ij} + 2Δh_{k(i} ∂_{j)} ξ^k,  £_ξ π̃^{ij} = ξ^k ∂_k Δπ̃^{ij} − 2Δπ̃^{k(i} ∂_k ξ^{j)},  (2.15)
we can see that our boundary conditions are preserved by diffeomorphisms along these vector fields.
To see that ξ D preserves our boundary conditions note that
£_{ξ_D} = £_{∂_t} − £_{x/ℓ}.  (2.16)

Together with the canonical equations of motion, the boundary conditions (2.5) and (2.8) ensure that £_{∂_t} preserves Γ(dS_{k=0}), and it is straightforward to verify that £_{x/ℓ} does as well using (2.15). Thus, our boundary conditions are also preserved by ξ_D. This completes our proof that the asymptotic symmetry group of Γ(dS_{k=0}) is given by the isometries of dS_{k=0}.
B. Conserved Charges
Our next task is to construct a corresponding set of conserved charges. As described by
Regge and Teitelboim [2], the fact that any such charge H(ξ) must generate diffeomorphisms along ξ implies that H(ξ) is a linear combination of the gravitational constraints determined by the relevant vector field ξ, together with certain surface terms chosen to ensure that the charges have well-defined variations with respect to h_{ij} and π̃^{ij}. So long as the boundary conditions are sufficiently strong, the result takes the standard form

H(ξ) = (1/2κ) ∫_Σ √h (ξ^⊥ H + ξ^i H_i)
 + (1/κ) ∮_{∂Σ} (dr)_i ξ_j [Δπ^{ik} h_{kj} + π̃^{ik} Δh_{kj} − (π̃^{kl} Δh_{kl}/2) δ^i{}_j]
 + (1/2κ) ∮_{∂Σ} √σ r̂_l G^{ijkl} (ξ^⊥ D_k Δh_{ij} − Δh_{ij} D_k ξ^⊥),  (2.17)

in terms of the Hamiltonian and momentum constraints

H = h^{−1} (π̃^{ij} π̃_{ij} − π̃²/(d−2)) − (R − 2Λ),  H_i = −h^{−1/2} D_j (2π̃^{ij}).  (2.18)
In (2.17),
G^{ijkl} = h^{i(k} h^{l)j} − h^{ij} h^{kl},  (2.19)

κ = 8πG, ∂Σ is the limit of constant r = √(δ_{ij} x^i x^j) submanifolds in Σ as r → ∞, and σ_{ij} and r̂_i are the induced metric on and the unit normal (in Σ) to ∂Σ.
The boundary conditions are strong enough for (2.17) to hold when general variations of the above boundary terms can be computed by varying only ∆h ij and ∆π ij ; i.e., all other terms in the general variation are too small to contribute in the r → ∞ limit. Power counting shows that this is indeed implied by eqs. (2.5).
Now, naive power counting suggests that (2.17) may diverge. But since (2.5) ensures that the symplectic structure is finite, the equations of motion must conspire to prevent these divergences. This fact is verified in appendix A. We define Q D := H(−ξ D ), P j := H(ξ P j ), L jk := H(ξ L jk ). (2.20) Note that in defining Q D we chose signs that would conventionally appear in the definition of an 'energy,' while we defined P j and L jk with signs conventionally chosen in defining momenta. Interestingly, this choice of signs makes Q D negative for the de Sitter-Schwarzschild solution in agreement with [10].
We may also consider a general Hamiltonian H(∂ t ) for ∂ t defined by lapse and shift of the form (2.5). Power counting and the boundary conditions (2.5) ensure that H(∂ t ) is finite and that it has well-defined variations. The boundary conditions (2.5) and (2.8) ensure that it generates an evolution which preserves the boundary conditions on h ij ,π ij and thus, as noted in footnote 3, that the symplectic structure is conserved. It follows that the above charges are conserved as well.
The conserved charges (2.20) take a particularly simple form when we impose the asymptotic gauge conditions

h^{(d−1)}_{rj} = 0 = h^{(d−1)}.  (2.21)
One may transform to such a gauge from any configuration satisfying (2.5) via the diffeomorphism generated by any vector field ζ which satisfies
(1/ℓ) h̄_{rj} ζ^⊥ + D̄_{(r} ζ_{j)} = −h^{(d−1)}_{rj}/2,  (2.22)

((d−1)/ℓ) ζ^⊥ + D̄_i ζ^i = −h^{(d−1)}/2.  (2.23)
We are ensured that ζ generates a gauge transformation as it both vanishes on the boundary (which is clear by power counting) and preserves our boundary conditions (which is guaranteed by ∇̄_{(a} ζ_{b)} ∼ O(r^{−(d−1)})).
With this choice of gauge the conserved quantities associated with (2.2) take the following simple forms:
Q_D = (1/κℓ) ∮_{∂Σ} √σ r̂_i Δπ^{ij} x_j,  (2.24a)

P_j = (1/κ) ∮_{∂Σ} √σ r̂_i Δπ^{i}{}_j,  (2.24b)

L_jk = (1/κ) ∮_{∂Σ} √σ r̂_i 2Δπ^{i}{}_{[k} x_{j]}.  (2.24c)
C. Familiar Examples
We now consider some familiar spacetimes in order to provide further intuition for the constructions above. In particular, for d ≥ 4 we find coordinates in which our phase space contains the de-Sitter Schwarzschild solution and we compute the relevant charges. For d = 3 we consider instead the spinning conical defect spacetimes, which describe gravity coupled to compactly supported matter fields.
In familiar static coordinates the d ≥ 4 de Sitter-Schwarzschild solution takes the form
ds² = −f(ρ) dτ² + dρ²/f(ρ) + ρ² dΩ²,  (2.25)

f(ρ) = 1 − 2GM/ρ^{d−3} − ρ²/ℓ².  (2.26)
In the region ρ > ℓ we introduce the coordinates (t, {x^i}) through the implicit expressions

τ = t + ∫^ρ dρ′ √(1 − f(ρ′))/f(ρ′),  (2.27a)

ρ = e^{t/ℓ} √(Σ_i (x^i)²),  (2.27b)

with the angular variables being related to {x^i} in the usual way.
with the angular variables being related to {x i } in the usual way. After this change of coordinates (2.25) becomes
ds² = −dt² + e^{2t/ℓ} δ_{ij} (dx^i − (GMℓ x^i/(e^{t/ℓ} r)^{d−1}) dt)(dx^j − (GMℓ x^j/(e^{t/ℓ} r)^{d−1}) dt) + O(r^{−(2d−3)}),  (2.28)

for r ≫ ℓ e^{−t/ℓ}. As a result, we find

Δh_{ij} = O(r^{−(2d−3)}),  (2.29a)

Δπ^{ij} = (GMℓ e^{−2t/ℓ}/(e^{t/ℓ} r)^{d−1}) (δ^{ij} − (d−1) x^i x^j/r²) + O(r^{−(2d−3)}).  (2.29b)
Comparison with (2.5) shows that (2.28) does indeed lie in the phase space Γ(dS k=0 ).
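For the pure de Sitter case (M = 0) the coordinate change (2.27) can be verified in closed form: with f = 1 − ρ²/ℓ² the integral in (2.27a) evaluates to −(ℓ/2) log(1 − ρ²/ℓ²), and substituting into the static line element must reproduce (2.1) exactly. A sympy sketch of this check (the evaluated form of the integral is an input assumption here):

```python
import sympy as sp

t, r, ell = sp.symbols('t r ell', positive=True)

rho = sp.exp(t/ell)*r                       # (2.27b)
f = 1 - rho**2/ell**2                       # f(rho) for M = 0
# (2.27a) with the integral done explicitly: tau = t - (ell/2) log(1 - rho^2/ell^2)
tau = t - (ell/2)*sp.log(1 - rho**2/ell**2)

tau_t, tau_r = sp.diff(tau, t), sp.diff(tau, r)
rho_t, rho_r = sp.diff(rho, t), sp.diff(rho, r)

# pull back ds^2 = -f dtau^2 + drho^2/f (+ rho^2 dOmega^2) to the (t, r) chart
g_tt = sp.simplify(-f*tau_t**2 + rho_t**2/f)
g_tr = sp.simplify(-f*tau_t*tau_r + rho_t*rho_r/f)
g_rr = sp.simplify(-f*tau_r**2 + rho_r**2/f)

print(g_tt, g_tr, g_rr)
```

The three simplified components come out to −1, 0 and e^{2t/ℓ}, while ρ² = e^{2t/ℓ} r² reproduces the angular part, so the M = 0 limit of (2.28) is exact.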
The linear and angular momenta for this solution vanish by symmetry. Since ∆h ij satisfies the gauge condition (2.21) we may use (2.24a) to calculate the dilation charge. The result is
Q_D = −((d−2)/κ) GM ∮_{∂Σ} √γ d^{d−2}θ  (2.30)
 = −((d−2) π^{(d−3)/2}/(4Γ[(d−1)/2])) M,  (2.31)
where γ is the determinant of the metric on the unit S d−2 (so that the volume element
on S d−2 is √ γd d−2 θ). In particular, we find Q D = −M for d = 4 and Q D = −(3π/4)M for d = 5.
Up to a shift of the zero point of the energy for d = 5 these results agree with the charges computed in [10] using a rather different approach (which did not involve constructing a phase space). One might therefore expect a similar agreement to hold in general, but the situation turns out to be more subtle. We will show in section II D below that the charges of [9,10] agree with ours only in the gauge (2.21). To see this, consider the diffeomorphism generated by the vector field

ζ = (α/r^{d−2}) ∂_r.  (2.32)
The change in the canonical variables is
δ_ζ h_{ij} = (2α e^{2t/ℓ}/r^{d−1}) (δ_{ij} − (d−1) x_i x_j/r²) + O(r^{−(3d−4)}),  (2.33)

δ_ζ π^{ij} = (2(d−2)α e^{−2t/ℓ}/(ℓ r^{d−1})) (δ^{ij} − (d−1) x^i x^j/r²) + O(r^{−(3d−4)}).  (2.34)
Then from (A10) we see that δ ζ Q D = 0, as must be the case for any gauge transformation.
But (2.24a) is now equal to

−((d−2) π^{(d−3)/2}/(4Γ[(d−1)/2])) M − ((d−2)² π^{(d−3)/2}/(2Γ[(d−1)/2])) α e^{(d−1)t/ℓ}/(G ℓ²).  (2.35)
This differs from (2.30) and is time-dependent. In particular, it diverges as t → ∞. This leads to disagreement with the charges of [10] in a general gauge and shows that their charges generally diverge on Γ(dS k=0 ).
We now turn to the d = 3 spinning conical defect solution [36] with defect angle θ d , which may be written in the form (see e.g. [10])
ds² = −f(ρ) dτ² + dρ²/f(ρ) + ρ² (dφ + (4GMa/ρ²) dτ)²,

f(ρ) = 1 − 8GM − ρ²/ℓ² + (8GMa)²/(4ρ²),  (2.36)
where the parameter M = θ_d/8πG is the mass that would be assigned to a conical defect in flat space with defect angle θ_d [37,38]. After changing to the coordinates (t, r) defined by
τ = t − (ℓ/2) log(e^{2t/ℓ} r²/ℓ² − 1),  (2.37)

ρ = e^{t/ℓ} r,  (2.38)

we find Δh_{ij}, Δπ^{ij} ∼ O(r^{−2}) so that the transformed solution lies in Γ(dS_{k=0}), though h^{(2)}_{ij} ≠ 0.
The non-vanishing results from (2.17) are Q_D = −M and L_12 = aM, in agreement with [10] up to the expected shift in the zero point of Q_D and in agreement with flat space results in the (here trivial) limit ℓ → ∞.
Acting on this solution with the diffeomorphism generated by (2.32) and repeating the calculation that led to (2.35) gives

−M + α e^{2t/ℓ}/(2Gℓ²),  (2.39)
which shows that the charges of [10] diverge on Γ(dS k=0 ) for d = 3 as well.
D. Comparison with Brown-York methods at future infinity
Our discussion above closely followed the classic treatment of [2]. In contrast, refs.
[9, 10] took a rather different approach to the construction of charges in de Sitter gravity.
They considered spacetimes for which the induced metric on future infinity I + (defined by a conformal compactification associated with a given foliation near I + ) agrees with some fixed metric q ij . They then constructed an action S for gravity subject to this Dirichlet-like boundary condition, choosing the boundary terms at future infinity so that variations of the action are well-defined. By analogy with the Brown-York stress tensor of [39] (and with the anti-de Sitter case [40,41]), refs. [9, 10] defined a de Sitter 'stress-tensor' τ ij on future infinity through
τ^{ij} = (2/√h) δS/δh_{ij} = lim_{t→∞} (π^{ij}/κ + τ^{ij}_{ct}),  (2.40)

where the first term results from varying the Einstein-Hilbert action with a Gibbons-Hawking-like boundary term (1/κ) ∫_{I^+} √q K and τ^{ij}_{ct} is the result of varying so-called counterterms added to the action. Given a Killing field ξ^i of q_{ij} and any d − 2 surface B in I^+, [9,10] then define a charge
Q_ξ(B) = ∮_B √σ b̂_i τ^{ij} ξ_j,  (2.41)
where b̂_i and √σ are the unit normal to and induced volume element on B. Since ξ^i is a
Killing field of q ij , the charge is in fact independent of B. For even d one can show that τ ij is traceless so that this B-independence also holds when this definition of charge is extended to conformal Killing fields ξ i .
Because q ij is fixed, these boundary conditions force the symplectic flux through I + to vanish. So long as all other boundary conditions (e.g., at r = ∞) enforce conservation of symplectic flux, it follows immediately that this flux also vanishes on any Cauchy surface.
As a result, the class of spacetimes for which S is a valid variational principle does not form a phase space (though see [42] for further discussion). On the other hand, as shown in [13], the charges (2.41) agree with a natural construction that does not require the condition of fixed q ij , but which is instead given by the covariant phase space prescription used by Wald and Zoupas [43] to define charges for the Bondi-Metzner-Sachs group in asymptotically flat space [44][45][46][47]. In this context, one takes ξ i above to be an arbitrary vector field on I + (and one generally expects Q ξ (B) to depend on B). Though it is not immediately clear in what sense such charges generate symmetries, this fact nevertheless suggests that the charges (2.41) are of interest even when q ij is not fixed. This is also suggested by the formal analogy with anti-de Sitter space.
In any case, we saw earlier that when B is taken to be i^0 (as defined by figure 1) for k = 0, q_{ij} is taken to be the metric on the surface t = ∞, and ξ^a is a generator of asymptotic symmetries for Γ(dS_{k=0}), the charges (2.41) agree with ours. We wish to take B to lie at i^0, which we will think of as the boundary ∂Σ_∞ of the surface
Σ_∞ on which t = ∞. The unit normal to ∂Σ_∞ in Σ_∞ is thus b̂_i = r̂_i.
We use the fact that, since any asymptotic symmetry ξ^a preserves I^+, it defines a vector field ξ^i_{I^+} on I^+. In fact, this ξ^i_{I^+} is just the t → ∞ limit of the part ξ^i of ξ^a tangent to Σ as defined by (2.14). We therefore use the notation ξ^i for this vector field below. It will further be useful to decompose ξ^i into parts normal and tangent to ∂Σ_∞ according to
ξ^i = ξ^⊥ r̂^i + ξ^i_∥.  (2.42)
To compute the counter-term charges we recall from [9,10] that for d = 3, 4, 5,

τ^{ij}_{CT} = (1/κ) [((d−2)/ℓ) h^{ij} + (ℓ/(d−3)) G^{ij}] = (1/κ) [−π^{ij}_0 + (ℓ/(d−3)) G^{ij}].  (2.43)

We will show that this contribution is independent of (Δh_{ij}, Δπ^{ij}) and thus yields at most an irrelevant shift of the zero point of the charge⁷. To do so, recall that r̂_i G^{ij} can be expressed in terms of the Ricci scalar R and the extrinsic curvature of ∂Σ_∞ through the Gauss-Codazzi equations. We will work with the pull-back θ_{ij} of this extrinsic curvature to Σ_∞.

⁷ By symmetry, such a shift is allowed only for the energy Q_D.
In particular, the contribution involving ξ ⊥ is related to the 'radial Hamiltonian constraint'
G_{ij} r̂^i r̂^j = −(1/2)(R − θ² + θ_{ij} θ^{ij}).  (2.44)
After expanding h_{ij} = h̄_{ij} + Δh_{ij}, power counting shows that only the (constant) background term and terms linear in Δh_{ij} can contribute in the r → ∞ limit. A bit of calculation (given in appendix B) then shows

(ℓ/(d−3)) r̂_i G^{ij} ξ^⊥ r̂_j = −(ξ^⊥ ℓ/(2r)) r̂^m h̄^{jk} (D_j Δh_{km} − D_m Δh_{jk}) + (constant) + …,  (2.45)
where . . . represents both higher order terms (that do not contribute as r → ∞) and pure divergences on ∂Σ ∞ . Power counting again then shows that the non-trivial term in (2.45) vanishes by our boundary conditions.
We now turn to the counter-term contribution involving ξ^i_∥, which is a combination of the 'radial momentum constraints':

r̂_i G^{ij} ξ_{∥j} = [D_i (θ^{ij} − θ σ^{ij})] ξ_{∥j},  (2.46)
where D is the derivative operator associated with σ_{ij}. We now treat the various asymptotic symmetries separately. This term vanishes explicitly for dilations as they have ξ^i_∥ = 0. For rotations, the symmetry of θ^{ij} and the fact that ξ^i_∥ is a Killing field of ∂Σ_∞ allow us to bring the factor of ξ^i_∥ inside the parentheses and write (2.46) as a total divergence on ∂Σ. Thus its integral over ∂Σ (a closed manifold) must vanish. Finally for translations we use the fact that ξ^i_∥ is a conformal Killing vector of ∂Σ to write
r̂_i G^{ij} ξ_{∥j} = (d − 3) θ ξ^⊥/r.  (2.47)

III. THE PHASE SPACE Γ(dS_{k=−1})

ds² = −dT² + sinh²(T/ℓ) [δ_{ij} − (1/(1 + ℓ²/R²)) X_i X_j/R²] dX^i dX^j,  (3.1)
where X i := δ ij X j . The Killing vectors then take the simple form
'Translations' : ξ^a_{P_i} = √(R² + ℓ²) (∂_i)^a,  Rotations : ξ^a_{L_ij} = (2X_{[i}∂_{j]})^a.  (3.2)
These are precisely the Killing fields of H d−1 and so generate the Lorentz group SO(d−1, 1).
One might thus equally well refer to the hyperbolic 'translations' as boosts.
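One can verify directly that such a boost generator preserves the hyperbolic metric. The sketch below takes two spatial dimensions (the d = 3 case) and the candidate field √(R² + ℓ²) ∂_X in the Cartesian-like coordinates of (3.1); the particular coordinates and normalization used here are assumptions of the check:

```python
import sympy as sp

X, Y, ell = sp.symbols('X Y ell', real=True, positive=True)
den = X**2 + Y**2 + ell**2   # R^2 + ell^2

# omega_ij = delta_ij - X_i X_j/(R^2 + ell^2), cf. (3.6)
omega = sp.Matrix([[1 - X**2/den, -X*Y/den],
                   [-X*Y/den, 1 - Y**2/den]])

# candidate hyperbolic 'translation' (boost) along X
xi = [sp.sqrt(den), sp.Integer(0)]
coords = [X, Y]

def lie_metric(g, v, coords):
    """Coordinate-basis Lie derivative (L_v g)_{ij}."""
    n = len(coords)
    L = sp.zeros(n, n)
    for a in range(n):
        for b in range(n):
            e = sum(v[c]*sp.diff(g[a, b], coords[c]) for c in range(n))
            e += sum(g[c, b]*sp.diff(v[c], coords[a]) for c in range(n))
            e += sum(g[a, c]*sp.diff(v[c], coords[b]) for c in range(n))
            L[a, b] = sp.simplify(e)
    return L

print(lie_metric(omega, xi, coords))  # the zero 2x2 matrix
```

Substituting R = ℓ sinh χ shows this field is the familiar boost cos φ ∂_χ − coth χ sin φ ∂_φ of the hyperbolic plane, written in these coordinates.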
The first phase space Γ(dS k=−1 ) will be constructed in direct analogy to our treatment for k = 0 in section II. We again consider globally hyperbolic solutions to Einstein's equations in d ≥ 3 spacetime dimensions with positive cosmological constant and topology R d .
Introducing a foliation as before with coordinates (T, {X^i}) and a metric of the form (2.4), we define Γ(dS_{k=−1}) to contain spacetimes with induced metric h_{ij}, canonical momentum π^{ij}, lapse N, and shift N^i on a (T = constant) slice Σ which satisfy

Δh_{ij} = h^{(d−2)}_{ij} + O(h^{(d−2)}_{ij} R^{−1}),
Δπ^{ij} = π^{ij}_{(d−2)} + O(π^{ij}_{(d−2)} R^{−1}),
N = 1 + N^{(d−2)} + O(R^{−(d−1)}),
N^i = N^i_{(d−3)} + O(N^i_{(d−3)} R^{−1}),  (3.3)

for large R with R = √(δ_{ij} X^i X^j). The required falloffs of h^{(d−2)}_{ij}, π^{ij}_{(d−2)} and N^i_{(d−3)} are most clearly expressed in spherical coordinates,

h^{(d−2)}_{RR} = (Function of Ω)/R^{2+(d−2)},  h^{(d−2)}_{Rθ} = (Function of Ω)/R^{(d−2)},  h^{(d−2)}_{φθ} = (Function of Ω)/R^{−2+(d−2)},  (3.4a)

π^{RR}_{(d−2)} = (Function of Ω)/R^{−2+(d−2)},  π^{Rθ}_{(d−2)} = (Function of Ω)/R^{(d−2)},  π^{φθ}_{(d−2)} = (Function of Ω)/R^{2+(d−2)},  (3.4b)

N^{(d−2)} = (Function of Ω)/R^{(d−2)},  N^R_{(d−3)} = (Function of Ω)/R^{(d−3)},  N^θ_{(d−3)} = (Function of Ω)/R^{2+(d−3)},  (3.4c)
with θ and φ standing in for any angular coordinates. The symbol O(h^{(d−2)}_{ij} R^{−1}) indicates that corrections should be suppressed by an additional power of R. We define
Δh_{ij} = h_{ij} − sinh²(T/ℓ) ω_{ij},  (3.5a)

Δπ^{ij} = π^{ij} + ((d−2)/ℓ) coth(T/ℓ) sinh^{−2}(T/ℓ) ω^{ij}.  (3.5b)
Here we have introduced the metric ω ij on the unit H d−1 :
ω_{ij} = δ_{ij} − (1/(1 + ℓ²/R²)) X_i X_j/R².  (3.6)
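As a consistency check on this form of ω_ij, for d = 3 the associated line element is ℓ² dR²/(R² + ℓ²) + R² dφ², which should have constant Gaussian curvature −1/ℓ² (setting R = ℓ sinh χ brings it to ℓ²(dχ² + sinh²χ dφ²), i.e. ℓ² times the unit hyperbolic plane). A sympy sketch using the standard curvature formula for a diagonal 2d metric ds² = E dR² + G dφ²:

```python
import sympy as sp

R, phi, ell = sp.symbols('R phi ell', positive=True)
E = ell**2/(R**2 + ell**2)   # omega_RR
G = R**2                     # omega_phiphi

# Gaussian curvature K = -(1/2s)[d_R(G_R/s) + d_phi(E_phi/s)], s = sqrt(EG)
s = sp.sqrt(E*G)
K = sp.simplify(-(sp.diff(sp.diff(G, R)/s, R)
                  + sp.diff(sp.diff(E, phi)/s, phi))/(2*s))
print(K)  # -1/ell**2
```

The curvature is R-independent, confirming that (3.6) describes a homogeneous hyperbolic slice.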
The definitions (3.5) ensure that ∆h ij = 0 = ∆π ij for dS k=−1 . The boundary conditions (3.3) are sufficient to ensure that, in spherical coordinates,
Δπ̃^{ij} = O(Δπ^{ij}_{(d−2)} R^{d−3}),  (3.7)
which makes the symplectic structure (2.11) finite. We will show below that it is also conserved. Thus Γ(dS k=−1 ) is a well-defined phase space. Now, in studying the k = 0 case, we found that our charges simplified when we imposed the asymptotic gauge condition (2.21). In particular, in this gauge our charges agreed with those constructed via counter-term methods. To clean up this discussion, we might have chosen to define a second k = 0 phase space Γ(dS k=0 ) gf by adding (2.21) to the list (2.5) of boundary conditions. 8 While Γ(dS k=0 ) and Γ(dS k=0 ) gf are physically equivalent, each formulation has certain advantages. For example, while full agreement with the counterterm charges of [9,10,13] holds only on Γ(dS k=0 ) gf , familiar solutions (e.g. the d = 3 conical defects) were most easily written in the larger phase space Γ(dS k=0 ).
A similar pattern will emerge below for k = −1. For d ≥ 4 we therefore define a second phase space Γ(dS k=−1 ) gf to be the subset of Γ(dS k=−1 ) on which the extra asymptotic gauge
condition

h^{(d−2)}_{Rj} = 0 = h^{(d−2)}  (3.8)
holds. Note that for d = 3 the spatial metric h_{ij} has only three components and (3.8) implies h^{(1)}_{ij} = 0. For this special case it in fact turns out to be useful to define Γ(dS_{k=−1})_{gf} by imposing both (3.8) and the further asymptotic gauge condition

Δh_{ij} = h^{(d)}_{ij} + O(h^{(d)}_{ij} R^{−ε}),  (3.9)

where h^{(d)}_{ij} is defined by replacing (d−2) → (d) in (3.4a).
We will see the utility of this extra condition below. One may verify that the above are valid gauge conditions using precisely the same argument as in section II. Note that we do not require further conditions on ∆π ij for any dimension d.
A. Asymptotic Symmetries
As in the dS_{k=0} case we know that any element of our ASG must map (3.1) onto a spacetime satisfying (3.3). This means that the associated vector field must satisfy

h^a{}_i h^b{}_j ∇̄_{(a} ξ_{b)} = O(R^{−(d−2)}), or D_{(i} ξ_{j)} = ∂_{(i} ξ_{j)} + (R ξ^⊥/ℓ²) ω_{ij} = (1/ℓ) sinh(T/ℓ) coth(T/ℓ) ξ^⊥ ω_{ij},  (3.10)

and, restricted to ∂Σ,

D_{(i} ξ_{j)} = [(1/ℓ) sinh(T/ℓ) coth(T/ℓ) ξ^⊥ − (1 − R²/ℓ²) ξ^⊥/R] σ_{ij}.  (3.11)

For d ≥ 4 the space of solutions is isomorphic to the group generated by (3.2). We conclude that our d ≥ 4 ASG can only contain symmetries which asymptotically approach the isometries (3.2). The case d = 3 will be addressed below.
As before, (2.15) shows that our phase space is closed under the isometries of dS k=−1 (noting that ξ ∼ R and ξ ⊥ = 0). So we have shown that the asymptotic symmetry group of Γ(dS k=−1 ) is isomorphic to the isometries of dS k=−1 for d ≥ 4. Using the same technique as in section II B, we obtain conserved charges for the asymptotic symmetries of Γ(dS k=−1 ) and d ≥ 4,
P_j ≡ H(ξ^a_{P_j}) = (1/κ) ∮_{∂Σ} (dR)_i [Δπ^{il} h_{lk} + π̄^{il} Δh_{lk} − (π̄^{mn} Δh_{mn}/2) ω^i{}_k] R (∂_j)^k,

L_jk ≡ H(ξ^a_{L_jk}) = (1/κ) ∮_{∂Σ} (dR)_i 2(Δπ^{im} h_{ml} + π̄^{im} Δh_{ml}) h^l{}_{[k} X_{j]},  (3.12)
which are finite by the boundary conditions (3.3). In (3.12), the notation (∂ j ) k denotes the k-th component of the vector field ∂ j . Since these expressions are necessarily gauge invariant, the same equations can be used on Γ(dS k=−1 ) gf (in which case the ∆h ij terms will not contribute). Furthermore, as for k = 0 one finds that H(∂ T ) is finite, has welldefined variations, and generates an evolution that preserves the boundary conditions (3.3) on h ij ,π ij . (For k = −1 there is no need to introduce an analogue of (2.8).) So we again conclude that the symplectic structure and the above charges are conserved on Γ(dS k=−1 ), and thus on Γ(dS k=−1 ) gf as well.
The case d = 3 is special due to the infinite-dimensional conformal group in two dimensions. Since the hyperbolic plane is conformally flat, the solutions of (3.10) define two commuting Virasoro algebras formally associated with charges L_n and L̄_n for n ∈ Z satisfying L*_n = L̄_{−n}, where * denotes complex conjugation and n labels the angular momentum quantum number. The details of the vector fields are given in appendix C. The unfamiliar reality condition is due to the fact that the symmetries of the Lorentz-signature theory generate the 2d Euclidean-signature conformal group and were previously discussed in [31,32,34].
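At the level of classical vector fields this Virasoro structure reduces to the Witt algebra. The sketch below assumes mode generators with angular profile e^{inθ} ∂_θ (the full asymptotic Killing fields of appendix C carry additional components, but their angular parts close in the same way) and checks the commutator symbolically:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
m, n = sp.symbols('m n', integer=True)

def xi(k):
    # assumed angular profile of the mode-k generator: e^{ik theta} (d/dtheta)
    return sp.exp(sp.I*k*theta)

def bracket(f, g):
    # Lie bracket of the vector fields f(theta) d_theta and g(theta) d_theta
    return f*sp.diff(g, theta) - g*sp.diff(f, theta)

lhs = bracket(xi(m), xi(n))
rhs = sp.I*(n - m)*xi(m + n)
print(sp.simplify(lhs - rhs))  # 0
```

With this bracket one finds [ξ_m, ξ_n] = i(n − m) ξ_{m+n}, the Witt algebra; any central extension (the imaginary central charges discussed in the text) can only appear in the algebra of charges, not in the algebra of vector fields.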
With our conventions, the angular momentum (called L_ij above in higher dimensions) is J_0, where J_n = L_n + L̄_n. It is also useful to introduce K_n = (L_n − L̄_n)/i. We will see below that K_0 captures energy-like information about solutions.
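As a quick numerical sanity check (our own illustration, not from the paper), one can verify that the quoted reality condition L_n* = L̄_{−n} makes the combinations J_n and K_n "real" in the sense J_n* = J_{−n} and K_n* = K_{−n}; the particular complex values assigned to the L_n below are arbitrary:

```python
# Assign arbitrary complex values to the L_n and impose the reality
# condition Lbar_n = conj(L_{-n}) quoted in the text.
L = {n: complex(n + 1, 2 * n - 3) for n in range(-3, 4)}
Lbar = {n: L[-n].conjugate() for n in range(-3, 4)}

# The combinations used in the text.
J = {n: L[n] + Lbar[n] for n in range(-3, 4)}
K = {n: (L[n] - Lbar[n]) / 1j for n in range(-3, 4)}

# "Reality" of the charge sets: J_n* = J_{-n} and K_n* = K_{-n}.
for n in range(-3, 4):
    assert abs(J[n].conjugate() - J[-n]) < 1e-12
    assert abs(K[n].conjugate() - K[-n]) < 1e-12
```

In particular J_0 and K_0 themselves come out real, consistent with K_0 carrying energy-like information.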
As noted in section II, expression (2.17) is valid only when second order terms in Δh_ij, Δπ^ij do not contribute to the variations. In order for this to be guaranteed by power counting for d = 3, we use the gauge freedom to set h^{(1)}_{ij} = 0. We then find

$$J_n \equiv H\big(\xi_{J_n}\big) = \frac{1}{\kappa}\oint_{\partial\Sigma}(dR)_i\,2\left[\Delta\pi^{im}h_{ml} + \bar\pi^{im}\Delta h_{ml}\right]\delta^{l[k}X^{j]}\,e^{in\theta},$$
$$K_n \equiv H\big({-\xi_{K_n}}\big) = \frac{1}{\kappa}\oint_{\partial\Sigma}(dR)_i\left[\Delta\pi^{ik}h_{kj} + \bar\pi^{ik}\Delta h_{kj} - \frac{\bar\pi^{kl}\Delta h_{kl}}{2}\,\omega^i{}_j\right]\frac{R^2\coth(T/\ell)}{\ell^2}\,(\partial_R)^j\,e^{in\theta} + \frac{1}{2\kappa}\oint_{\partial\Sigma}\sqrt{\sigma}\,R^l\,G^{ijkl}\left[-\frac{R}{\ell}\,D_k\Delta h_{ij} + \frac{\omega R_k}{\ell}\,\Delta h_{ij}\right]e^{in\theta}.$$

Simple power counting shows that J_n is finite on Γ(dS^{k=−1}), and also that H(∂_T) is finite, has well-defined variations, and generates an evolution that preserves the boundary conditions (3.3) on h_ij, π^ij. It follows that the symplectic structure and the J_n charges are conserved on Γ(dS^{k=−1}), and thus on Γ(dS^{k=−1})_gf as well.
While the K n do not generate asymptotic symmetries, the usual arguments show that they are nevertheless finite on Γ(dS k=−1 ). In general, these "charges" depend both on time and the choice of gauge. But the situation improves further on Γ(dS k=−1 ) gf . There the J n and K n are given by the simple expressions
$$J_n = \frac{1}{\kappa}\oint_{\partial\Sigma}\sqrt{\sigma}\,R_i\,2\Delta\pi^{im}h_{ml}\,\delta^{l[k}X^{j]}\,e^{in\theta} = \frac{1}{\kappa}\oint_{\partial\Sigma}\pi^{R\theta}_{(2)}\,\ell R^2\sinh^4(T/\ell)\,e^{in\theta},$$
$$K_n = \frac{1}{\kappa}\oint_{\partial\Sigma}\sqrt{\sigma}\,R_i\,\Delta\pi^{ik}h_{kj}\,\frac{R^2\coth(T/\ell)}{\ell^2}\,(\partial_R)^j\,e^{in\theta} = \frac{1}{\kappa}\oint_{\partial\Sigma}\pi^{RR}_{(2)}\,\ell\sinh^3(T/\ell)\cosh(T/\ell)\,e^{in\theta}. \tag{3.15}$$
The constraints and equations of motion then reveal that the components of π ij (2) have precisely the form required to make J n and K n finite, gauge invariant, time independent quantities. The derivation of (3.15) and the details of this further argument are both given in Appendix D. For J n this amounts to a trivial check that our phase space is well defined.
But since the K n do not lie in our ASG, their conservation is an interesting surprise.
We take these observations as motivation to consider further the full 2d conformal algebra.
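Before turning to the central terms, the Witt structure constants (n − m) of the J_n algebra (3.14) can be checked directly. The sketch below (our own illustration, not from the paper) represents each generator by the leading angular vector field i e^{inθ}∂_θ on the circle and verifies the bracket symbolically:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
f = sp.Function('f')  # an arbitrary test function on the circle

def v(n, g):
    """Action of the representative vector field v_n = i e^{i n theta} d/dtheta."""
    return sp.I * sp.exp(sp.I * n * theta) * sp.diff(g, theta)

def bracket(n, m, g):
    """Lie bracket [v_n, v_m] acting on g."""
    return sp.simplify(v(n, v(m, g)) - v(m, v(n, g)))

# [v_n, v_m] = (n - m) v_{n+m}, the centerless part of (3.14)/(3.16).
for n, m in [(2, -1), (3, 1), (-2, -3)]:
    assert sp.simplify(bracket(n, m, f(theta)) - (n - m) * v(n + m, f(theta))) == 0
```

The extra factor of i in v_n matches the quantum-mechanical convention used in the text, where an i is inserted relative to the classical Poisson bracket.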
The associated central charges can be computed as in [4] and turn out to be non-zero:

$$[L_n, L_m] = (n-m)\,L_{n+m} + \frac{c}{12}\,n\big(n^2-1\big)\,\delta_{n+m,0}, \qquad [\bar L_n, \bar L_m] = (n-m)\,\bar L_{n+m} + \frac{\bar c}{12}\,n\big(n^2-1\big)\,\delta_{n+m,0}, \tag{3.16}$$

with imaginary central charges c, c̄ = ±i 3ℓ/2G, so that the left- and right-moving central charges are imaginary complex conjugates. It would be interesting to understand the unitary representations of (3.16) under the appropriate reality conditions. This question was briefly investigated in [32]. However, our analysis suggests that there is an additional subtlety: because the flows generated by K_n are not complete on our classical phase space, "real" elements of the algebra they generate (e.g., K_0) are unlikely to be self-adjoint on the quantum Hilbert space. Indeed, it is natural to expect behavior resembling that of −i d/dx on the half-line, which admits complex eigenvalues. This in principle allows representations more general than those considered in [32], though we will not pursue the details here.
We now study two classes of familiar solutions: the d = 4 Kerr-de Sitter solution and the d = 3 spinning conical defect. We find coordinates for which each solution lies in the phase space Γ(dS^{k=−1}) and compute the appropriate charges. We consider the Kerr case (as opposed to just Schwarzschild) since spherical symmetry would force all d = 4 charges to vanish.
Our first task is to transform the standard d = 4 Kerr-de Sitter metric [48],

$$ds^2 = -\frac{\Delta - \Sigma a^2\sin^2(\psi)}{\Omega}\,d\tau^2 + a\sin^2(\psi)\left[\frac{2GM\rho}{\Omega} + \frac{\rho^2 + a^2}{\ell^2}\right](d\tau\,d\gamma + d\gamma\,d\tau) + \frac{\Omega}{\Delta}\,d\rho^2 + \frac{\Omega}{\Sigma}\,d\psi^2 + \sin^2(\psi)\left[\frac{2GM\rho\,a\sin^2(\psi)}{\Omega} + \Delta + 2GM\rho\right]d\gamma^2,$$
$$\Delta = \big(\rho^2 + a^2\big)\Big(1 - \frac{\rho^2}{\ell^2}\Big) - 2GM\rho, \qquad \Omega = \rho^2 + a^2\cos^2(\psi), \qquad \Sigma = 1 + \frac{a^2}{\ell^2}\cos^2(\psi), \tag{3.19}$$

into coordinates for which it satisfies the boundary conditions (3.3).
We proceed by introducing coordinates (s, θ, φ) through the expressions (c.f. appendix B of [3])

$$s\cos(\theta) = \rho\cos(\psi), \tag{3.20}$$
$$\Big(1 + \frac{a^2}{\ell^2}\Big)s^2 = \rho^2 + a^2\sin^2(\psi) + \frac{a^2}{\ell^2}\,\rho^2\cos^2(\psi), \tag{3.21}$$
$$\phi = \Big(1 + \frac{a^2}{\ell^2}\Big)\gamma + \frac{a}{\ell^2}\,\tau. \tag{3.22}$$
In (t, s, θ, φ) coordinates the metric (3.19) approaches exact de Sitter space in static coordinates as M → 0. We then introduce further coordinates T, R through

$$\tau = \ell\,\log\!\left[\frac{\cosh(T/\ell) + \sinh(T/\ell)\sqrt{1 + R^2/\ell^2}}{\big|\sinh^2(T/\ell)\,R^2/\ell^2 - 1\big|^{1/2}}\right], \qquad s = \sinh(T/\ell)\,R, \tag{3.23}$$

which again yield fields satisfying (3.3). Thus we see that our phase spaces are non-trivial for d ≥ 4.
Note that in d = 4 the leading order terms in (∆h ij , ∆π ij ) for rotating black holes vanish as a → 0. We expect the same to be true in higher dimensions, though with the rotating solutions still satisfying (3.3).
Returning to the Kerr-de Sitter solution, symmetry implies that the only non-vanishing charge is L_12. From (3.12) we find

$$L_{12} = \frac{aM}{\big(1 + a^2/\ell^2\big)^2}. \tag{3.24}$$
This differs from the analogous AdS result [3] only by the expected replacement ℓ → iℓ and agrees with the analogous flat space result when ℓ → ∞. 9
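Both limits of (3.24) can be checked symbolically. The sketch below is our own illustration: the flat-space limit is exactly the statement in the text, while the AdS-form target aM/(1 − a²/ℓ²)² is our reading of "the replacement ℓ → iℓ" (and ignores the sign conventions for a noted in footnote 9):

```python
import sympy as sp

a, M, ell = sp.symbols('a M ell', positive=True)

# Kerr-de Sitter angular momentum charge, eq. (3.24).
L12 = a * M / (1 + a**2 / ell**2)**2

# Flat-space limit ell -> oo recovers the Kerr angular momentum a*M.
assert sp.limit(L12, ell, sp.oo) == a * M

# The replacement ell -> i*ell flips the sign of a^2/ell^2, giving the
# AdS-type expression a*M/(1 - a^2/ell^2)^2 (an assumed comparison form).
assert sp.simplify(L12.subs(ell, sp.I * ell) - a * M / (1 - a**2 / ell**2)**2) == 0
```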
Finally, for d = 3 we once again consider the conical defect solution (2.36) of [4,10].
After transforming from static to k = −1 coordinates (through a transformation resembling (3.23)) we find Δh_ij, Δπ^ij satisfy (3.3) with h^{(1)}_{ij} = 0. Thus the transformed solution is in Γ(dS^{k=−1}). We find J_0 = aM and a time-dependent value of K_0. This time-dependence is uninteresting, as K_0 is gauge-dependent on Γ(dS^{k=−1}). But after making an additional gauge transformation to write the solution as an element of Γ(dS^{k=−1})_gf we find the time-independent value K_0 = −M. Note that this agrees with Q_D as computed for the k = 0 version of the conical defect in section II C. As will be clear after we show equivalence to the counter-term charges in section III D below, this is due to the fact that K_0 and Q_D correspond to the same element of the Euclidean conformal group on I^+.
C. Asymptotically dS k=−1 wormhole spacetimes
We now turn to some more novel spacetimes asymptotic to dS^{k=−1}. We consider pure Λ > 0 Einstein-Hilbert gravity, for which all solutions are quotients of dS_3. As noted in [32], quotients generated by a single group element fall into two classes (up to conjugation):
The first leads to the conical defects discussed above. For the second class, the generator of the quotient group can be described simply in terms of the 3+1 Minkowski space M^{3,1} into which dS_3 is naturally embedded (as the set of points at proper distance ℓ from the origin).
This generator then consists of a simultaneous boost (say, along the z-axis) and a commuting rotation (in the xy plane). This class of quotients was not investigated in [32], essentially because the resulting spacetimes are not asymptotically dS^{k=−1}. Indeed, from the point of view of the k = 0 patch the quotient spacetime is naturally interpreted as a cosmological solution in which space is a cylinder (S¹ × R) at each moment of time.

9 Note that [3] used a different sign convention for a.
However, in appropriate coordinates this 2nd class of quotients also defines spacetimes asymptotic to the k = −1 patch and which in fact lie in Γ(dS k=−1 ). In this sense our quotient spacetimes may be thought of as Λ > 0 analogues of BTZ black holes [49,50].
That the quotient lies in Γ(dS k=−1 ) is easy to see when the quotient generator is a pure boost (i.e., where the commuting rotation is set to zero) in which case the quotient group preserves the appropriate k = −1 patch. Indeed, recall that for non-spinning BTZ black holes the associated quotient on AdS 3 acts separately on each slice of constant global time, and that such surfaces are two-dimensional hyperbolic space H 2 . Here the T = (constant) slices of (3.1) are also H 2 and we apply the analogous quotient. This amounts to defining new coordinates (R, θ) through
$$X = \frac{2\pi R}{\theta_0}, \qquad Y = \sinh\!\Big(\frac{\theta_0}{2\pi}\,\theta\Big)\sqrt{\Big(\frac{2\pi R}{\theta_0}\Big)^2 + \ell^2}, \tag{3.25}$$
and taking θ to be periodic with period 2π. The T = (constant) slices are then topologically S¹ × R and the metric is

$$g_{ab}\,dx^a dx^b = -dT^2 + \sinh^2(T/\ell)\,\ell^2\left[\frac{dR^2}{R^2 + (\theta_0/2\pi)^2} + \big(R^2 + (\theta_0/2\pi)^2\big)\,d\theta^2\right]. \tag{3.26}$$

When the commuting rotation is non-zero, the quotient group does not preserve the k = −1 patch of exact dS_3. Yet it appears that the quotient can nevertheless be considered to lie in Γ(dS^{k=−1}). Indeed, assuming a single rotational symmetry it is straightforward to solve the constraints (2.18) to find initial data for wormholes with angular momentum lying in dS^{k=−1}. For data asymptotic to a T = (constant) surface we find

$$h_{ij} = \sinh^2(T/\ell)\,\ell^2\begin{pmatrix} \dfrac{1}{R^2 + (\theta_0/2\pi)^2} & 0 \\[2mm] 0 & R^2 + (\theta_0/2\pi)^2 \end{pmatrix}, \tag{3.27a}$$

$$\pi^{ij} = -\frac{\coth(T/\ell)}{\ell}\,h^{ij} + \begin{pmatrix} \dfrac{\gamma(T,R)}{\sinh^4(T/\ell)} & \dfrac{\alpha_0}{\big(R^2 + (\theta_0/2\pi)^2\big)\sinh^4(T/\ell)} \\[2mm] \dfrac{\alpha_0}{\big(R^2 + (\theta_0/2\pi)^2\big)\sinh^4(T/\ell)} & \dfrac{\beta(T,R)\,\coth^2(T/\ell)}{\ell\,\sinh^4(T/\ell)} \end{pmatrix}, \tag{3.27b}$$

where

$$\gamma = \alpha(T,R) - \sqrt{\alpha(T,R)^2 - \alpha_0^2}, \tag{3.28}$$
$$\beta = \frac{\alpha(T,R)}{\sqrt{\alpha(T,R)^2 - \alpha_0^2}} - \alpha(T,R) - \frac{\alpha_0^2}{2}\,\frac{\alpha(T,R)^2}{\alpha(T,R)^2 - \alpha_0^2}, \tag{3.29}$$

with

$$\alpha(T,R) := \big(R^2 + (\theta_0/2\pi)^2\big)\,\frac{\ell^2\sinh(2T/\ell)}{2}, \tag{3.30}$$
where R ranges over (−∞, +∞) though we must choose α(T, R) > α 0 . The above canonical data satisfies (3.3) with h (1) ij = 0 so we may readily compute the charges
$$J_0 = \frac{\alpha_0}{4G}, \qquad K_0 = \frac{\big(1 - (\theta_0/2\pi)^2\big)\,\ell}{8G}.$$
The lapse and shift can be chosen to satisfy (3.3) in one asymptotic region, though for the foliation defined by (3.27) they then diverge in the second asymptotic region. It would be interesting to find a more well-behaved foliation of the spinning wormhole spacetime, or perhaps an analytic solution for the full spacetime metric.
Such a solution can presumably be found by considering the above-mentioned quotients of dS_3 and taking the size of the commuting rotation to be determined by α_0.
In the non-spinning case, it is clear that the above construction may be generalized to quotients by groups with more than one generator. In analogy with [51,52], one may construct quotients for which the T = (constant) surface (and thus I^+) is an arbitrary Riemann surface with any number of punctures 10, where each puncture describes an asymptotic region. In particular, one may construct solutions with only a single asymptotic region. We expect that angular momentum may be added to these solutions as above. The solution also depends on a choice of internal moduli when the Riemann surface is not a sphere.
D. Comparison with Brown-York methods at future infinity
We now compare our charges to those obtained in [9,10] using boundary stress tensors on I + . As for k = 0, these constructions will agree on Γ(dS k=−1 ) gf , but not on all of Γ(dS k=−1 ).
We therefore restrict to Γ(dS k=−1 ) gf for the remainder of this section. All ∆h ij terms then vanish in (2.17) and ∆π ij may be replaced by √ h∆π ij .
We wish to evaluate (2.43) for k = −1. For d = 3 the counterterm is again simply −π^{ij}_0/κ, which results in the necessary Δπ^ij term in (2.17). For d = 4, 5 we will again show that the counterterm

$$\frac{\ell}{d-3}\,G^{ij} \tag{3.35}$$

can contribute only a constant (which must then vanish by symmetry for all charges). Using the same notation and conventions as in section II D we first evaluate the contributions from radial momentum constraints. As in the k = 0 case the contribution is a pure divergence for rotations. For translations the result is (see Appendix B)
$$\frac{\ell\,\hat R_i\,G^{ij}\zeta_j}{d-3} = \frac{R\,\zeta_\perp}{\ell}\,\hat R^k\sigma^{ij}D_k\Delta\sigma_{ij}. \tag{3.36}$$
10 For spheres, the number of punctures must be at least 2.
The contribution from the radial Hamiltonian constraint is given by

$$\frac{\ell\,\hat R_i\,G^{ij}\,\zeta_\perp\hat R_j}{d-3} = \frac{R\,\zeta_\perp}{\ell}\,\hat R^k\sigma^{ij}\big(D_i\Delta\sigma_{jk} - D_k\Delta\sigma_{ij}\big). \tag{3.37}$$
This vanishes explicitly for rotations (for which ζ_⊥ = 0). For translations, (3.36) nicely cancels the second term in the radial Hamiltonian contribution, leaving only

$$\frac{\ell\,\hat R_i\,G^{ij}\zeta_j}{d-3} = \frac{R\,\zeta_\perp}{\ell}\,\hat R^k\sigma^{ij}D_i\Delta\sigma_{jk}. \tag{3.38}$$
The leading order term vanishes because ζ ⊥ is odd. Power counting now shows that the remaining terms vanish by (3.3).
IV. DISCUSSION
We have constructed phase spaces Γ(dS^{k=0}), Γ(dS^{k=0})_gf and Γ(dS^{k=−1}), Γ(dS^{k=−1})_gf of spacetimes asymptotic to the planar- and hyperbolic-sliced regions of de Sitter space for d ≥ 3. In most cases we found an asymptotic symmetry group (ASG) isomorphic to the isometries of dS^{k=0} or dS^{k=−1}, though for k = −1 and d = 3 the obvious rotational symmetry was enlarged to a (single) Virasoro algebra in the ASG. Since we do not include a gravitational Chern-Simons term, the central charge for this case vanishes due to reflection symmetry in the angular direction. While we expect a similar structure to arise for phase spaces asymptotic to general 2+1 dimensional k = −1 Friedmann-Lemaître-Robertson-Walker cosmologies, we leave such a general study for future work. We also identified a larger algebra containing two Virasoro subalgebras with non-trivial imaginary central charges ±i 3ℓ/2G, in agreement with those expected from [32,33] and computed in [34]. While the classical reality conditions are those of [31,32,34], the fact that the additional generators K_n do not preserve our phase space suggests that corresponding "real" elements of the algebra (e.g., K_0) do not define self-adjoint operators at the quantum level. Instead, we expect that these operators behave somewhat like −i∂/∂x on the half-line and may have complex eigenvalues. While the K_n charges do not generate symmetries, they are nevertheless gauge invariant and conserved on Γ(dS^{k=−1})_gf.
A similar extension to the full Euclidean conformal group may also be allowed for d = 3, k = 0 and perhaps even in higher dimensions. However, reflection and rotation symmetries imply that the additional 'charges' vanish for d = 3 spinning conical defects, for d = 4 Kerr-de Sitter, and for rotating de Sitter Myers-Perry black holes in higher dimensions. We have not investigated whether other solutions in our phase space (perhaps a 'moving' black hole?) might lead to non-zero values of such charges.
For both k = 0 and k = −1, our charges agree with those defined in [10,13] when the latter are computed for our asymptotic Killing fields at the respective i 0 and when the extra conditions defining Γ(dS k=0 ) gf and Γ(dS k=−1 ) gf hold. This establishes that (some of) the charges of [9,10,13] generate diffeomorphisms in a well-defined phase space.
It remains to compare our phase spaces with those of [11,12]. These references studied spacetimes asymptotic to dS^{k=0} in four spacetime dimensions, so we limit the comparison to our Γ(dS^{k=0}) with d = 4. The focus in both [11] and [12] was on proving positive energy theorems associated with a charge Q[∂_t] defined by the time-translation conformal Killing field ∂_t of dS^{k=0}. Because ∂_t defines only an asymptotic conformal symmetry, the value of Q[∂_t] depends on the Cauchy surface on which it is evaluated. I.e., it is time-dependent, and is thus not conserved in the sense in which we use the term here. The boundary conditions of [11] for d = 4 can be stated as follows. Define

$$\tilde\Delta h_{ij} = h_{ij} - e^{2t/\ell}\,\delta_{ij} = \Delta h_{ij}, \qquad \tilde\Delta K_{ij} = K_{ij} - \frac{h_{ij}}{\ell} = \Delta K_{ij}, \tag{4.1}$$

and require that there exist a foliation with vanishing shift on which Δ̃h_ij = O(r^{−1}) and Δ̃K_ij = O(r^{−3}). Now in order for Eqs. (2.5) and the boundary conditions of either [11] or [12] to be simultaneously satisfied, we must have at least Δh_ij = Δ̃h_ij = O(r^{−2}). Using the results of [12] (particularly Theorem 4.2) it can be shown that the only globally hyperbolic spacetime which satisfies both sets of boundary conditions on the same foliation is exact de Sitter space in the form (3.1) (up to gauge transformations). 11 Our phase space thus has precisely one point in common with that of either [11] or [12]. It is nevertheless interesting to ask whether the methods of [11,12] might be used to derive a bound on the charge K_0 for d = 3.
We define
$$\chi^{ij}_0 := \pi^{ij}_{(d-2)}, \tag{A1}$$
$$\chi^{ij}_1 := \pi^{ij}_{(d-1)} + \bar\pi^{ik}\,h^{(d-1)}{}_k{}^j, \tag{A2}$$
and evaluate the constraints to order r^{−(d−1)}:

$$D_j\chi^{ij}_0 = 0 = \chi_0, \tag{A3}$$
$$D_j\chi^{ij}_1 = 0 = \chi_1. \tag{A4}$$
Using these constraints and the known r dependence of χ^{ij}_0 we find

$$\chi^{ir}_0 = -D_k\big(r\,\chi^{ik}_0\big). \tag{A5}$$
Calculating boundary terms using (2.13), (A1), (A5) and integrations by parts gives

$$\oint_{\partial\Sigma}\sqrt{\bar\sigma}\;r_i\,\pi^{ik}_{(d-2)}\,\bar h_{kj}\,\xi^j = \oint_{\partial\Sigma}\sqrt{\bar\sigma}\,\big(\chi^{rr}_0\,\xi_\perp + \chi^{ri}_0\,\xi_i\big) \tag{A6}$$
$$= \oint_{\partial\Sigma}\sqrt{\bar\sigma}\;2\chi^{rr}_0\,\xi_\perp \tag{A7}$$
$$= \oint_{\partial\Sigma}\sqrt{\bar\sigma}\;2r\,\chi^{rj}_0\,D_j\xi_\perp, \tag{A8}$$
and

$$-\frac{1}{2}\oint_{\partial\Sigma}\sqrt{\bar\sigma}\,\bar\pi^{jk}\,h^{(d-2)}_{jk}\,\xi_\perp = -\frac{1}{2}\oint_{\partial\Sigma}\sqrt{\bar\sigma}\,\chi_0\,\xi_\perp = 0. \tag{A9}$$
The remaining terms involving χ_1 are finite by simple power counting, providing a manifestly finite expression for our charges:

$$H(\xi) = \frac{1}{\kappa}\oint_{\partial\Sigma}\sqrt{\sigma}\left[2r\,\pi^{rj}_{(d-2)}\,D_j\xi_\perp + \chi^{rj}_1\,\xi_j + \frac{\pi_{(d-1)}}{2}\,\xi_\perp\right]. \tag{A10}$$
From this expression we can see that solutions to (2.13) which vanish on ∂Σ are gauge transformations, as follows: any such solution ξ has a Hamiltonian H(ξ) which is identically zero. Using the identity ω(δg, £_ξ g) = δH(ξ), where δg is an arbitrary tangent vector, we see that £_ξ g is a degenerate direction of the symplectic structure for such ξ. Thus any vector ξ which preserves our boundary conditions and vanishes at infinity generates a gauge transformation.
2. k = −1
With the same notation as above, for k = −1 we have

$$\bar\theta_{ij} = \frac{R\,\sigma_{ij}}{\ell^2}, \tag{B5a}$$
$$\sigma^{ij}\Delta\theta_{ij} = -\frac{R}{\ell}\,\hat R^m\bar\sigma^{jk}\Big(D_j\Delta h_{km} - \frac{1}{2}\,D_m\Delta h_{jk}\Big) + \big(\text{Pure Divergence on }\partial\Sigma_\infty\big), \tag{B5b}$$
$$\bar R_{ij} = \frac{(d-3)\,\bar\sigma_{ij}}{R^2}, \tag{B5c}$$
To derive (3.36) we use the fact that the translation symmetries are conformal Killing vectors on ∂Σ which satisfy

$$D_{(i}\zeta_{j)} = -\frac{R\,\zeta_\perp}{\ell^2}\,\sigma_{ij}, \tag{B6}$$

so

$$\Delta\Big[D_i\big(\theta^{ij} - \theta\sigma^{ij}\big)\zeta_j\Big] = \Delta\big(\theta^{ij} - \theta\sigma^{ij}\big)\,D_{(i}\zeta_{j)} + \ldots \tag{B7}$$
$$= \frac{(d-3)\,R\,\zeta_\perp}{2\ell^2}\,\frac{2R}{\ell}\,\hat R^m\bar\sigma^{jk}D_m\Delta\sigma_{jk} + \ldots. \tag{B8}$$
As for the radial Hamiltonian constraint, we have

$$\Delta\big({-\theta^2} + \theta^{ij}\theta_{ij}\big) = \frac{(d-3)R}{\ell^2}\left[-\frac{2R\,\bar\sigma^{ij}\Delta\sigma_{ij}}{\ell^2}\right] + \frac{R}{\ell}\,\hat R^m\bar\sigma^{jk}\big(2D_j\Delta h_{km} - D_m\Delta h_{jk}\big) + \ldots,$$
$$\Delta R = \frac{(d-3)R}{\ell^2}\,\frac{R\,\Delta\sigma_{ij}\bar\sigma^{ij}}{\ell^2} + \ldots, \tag{B9}$$
which, after using (B4) to combine terms, gives (3.37).
Appendix C: ASG of Γ(dS k=−1 ) for d = 3
$$\xi^T = \frac{\ell\,\big(1 + R^2/\ell^2\big)}{\sinh(T/\ell)\cosh(T/\ell)}\,\xi^{R\prime} + \frac{R}{\ell}\,\xi^R, \tag{C3}$$
where primes signify R derivatives and f_n is the solution to

$$R^2\big(1 + R^2/\ell^2\big)\,f_n'' - R\,f_n' + \big(1 - n^2\big)\,f_n = 0. \tag{C4}$$
Using the ansatz

$$f_n(R, T) = \sum_{k=-\infty}^{\infty} f^{(k)}_n(T)\,R^k, \tag{C5}$$
we find that f^{(k)}_n must satisfy the recursion relation

$$\big[(k-1)^2 - n^2\big]\,f^{(k)}_n = -\frac{(k-2)(k-3)}{\ell^2}\,f^{(k-2)}_n. \tag{C6}$$
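The recursion (C6) can be rederived symbolically by acting with the operator of (C4) on a single power R^k of the ansatz (C5). The sketch below is our own check; the coefficient names simply mirror f_n^{(k)}:

```python
import sympy as sp

# ell is the de Sitter radius that the extracted text drops.
R, ell, n, k = sp.symbols('R ell n k', positive=True)

# Act with the operator of (C4) on a single power R^k.
f = R**k
ode = (R**2 * (1 + R**2 / ell**2) * sp.diff(f, R, 2)
       - R * sp.diff(f, R) + (1 - n**2) * f)

# The operator maps R^k to ((k-1)^2 - n^2) R^k + k(k-1) R^(k+2)/ell^2.
# Collecting the coefficient of each power of R in the full series, and
# shifting k -> k-2 in the second term, reproduces exactly (C6):
#   ((k-1)^2 - n^2) f_n^(k) = -(k-2)(k-3) f_n^(k-2) / ell^2.
target = ((k - 1)**2 - n**2) * R**k + k * (k - 1) * R**(k + 2) / ell**2
assert sp.simplify(ode - target) == 0
```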
From this relation we see that the independent data are f^{(1)}_n and f^{(0)}_n, from which the rest of the series is determined. Finally, we must specify the time dependence of f^{(1)}_n and f^{(0)}_n. We will use this freedom to enforce ∇_{(T}ξ_{i)} ∼ O(R^{−2}). This condition is met by
which gives

$$\zeta^a_n = e^{in\theta}\big(A_n + B_n\coth(T/\ell)\big)\frac{1}{R}\,(\partial_\theta)^a + e^{in\theta}\,in\left[\frac{B_n R^2\coth(T/\ell)}{\ell^2} + \frac{A_n}{n^2}\,R\right](\partial_R)^a + e^{in\theta}\,in\left[-\frac{B_n R}{\ell} - \frac{(n^2-1)\,\ell}{2R} + \frac{A_n}{n^2}\,\frac{(n^2-1)\,\ell^3}{3R^2}\,\coth(T/\ell)\right](\partial_T)^a + \ldots,$$
where here and below . . . denote pure gauge terms.
Now we define ξ_{K_n} by A_n = 0, B_n = −in, and ξ_{J_n} by A_n = 1, B_n = 0:

$$\xi_{K_n} = e^{in\theta}\left[\frac{R}{\ell}\,\partial_T - \frac{R^2\coth(T/\ell)}{\ell^2}\,\partial_R - \frac{in\coth(T/\ell)}{R}\,\partial_\theta\right] + \ldots, \tag{C10}$$
$$\xi_{J_n} = e^{in\theta}\left[-\frac{in(n^2-1)\,\ell^3}{3R^2}\,\coth(T/\ell)\,\partial_T - inR\,\partial_R + \partial_\theta\right] + \ldots. \tag{C11}$$
The charges L_n (resp. L̄_n) are now given by

$$L_n = \frac{J_n + iK_n}{2}, \qquad \bar L_n = \frac{J_n - iK_n}{2}, \tag{C12}$$
which lead to the algebra (3.16).
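As a small consistency check (our own, not from the paper), the combinations (C12) invert to the definitions J_n = L_n + L̄_n and K_n = (L_n − L̄_n)/i used in section III:

```python
import sympy as sp

Jn, Kn = sp.symbols('J_n K_n')

# Definitions (C12), treating the charges as formal symbols.
Ln = (Jn + sp.I * Kn) / 2
Lbarn = (Jn - sp.I * Kn) / 2

# Recover J_n = L_n + Lbar_n and K_n = (L_n - Lbar_n)/i.
assert sp.simplify(Ln + Lbarn - Jn) == 0
assert sp.simplify((Ln - Lbarn) / sp.I - Kn) == 0
```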
Appendix D: Finiteness, Gauge Invariance, and Conservation of K n in Γ(dS k=−1 ) gf
Finiteness
First we must show that the expression

$$K_n = \frac{1}{\kappa}\oint_{\partial\Sigma}\sqrt{\sigma}\,R_i\,\Delta\pi^{ik}h_{kj}\,\frac{R^2\coth(T/\ell)}{\ell^2}\,(\partial_R)^j\,e^{in\theta} = \frac{1}{\kappa}\oint_{\partial\Sigma}\big(\pi^{RR}_{(1)} + \pi^{RR}_{(2)}\big)\,\ell\sinh^3(T/\ell)\cosh(T/\ell)\,e^{in\theta} \tag{D1}$$
is finite. Imposing the constraints on the boundary conditions of Γ(dS k=−1 ) gf we find that
$$D_i\pi^{ij}_{(1)} = 0 = \pi_{(1)}, \qquad D_i\pi^{ij}_{(2)} = 0 = \pi_{(2)}. \tag{D2}$$
Using these expressions we can show that

$$D_i\pi^{iR}_{(1)} = -\frac{\pi^{RR}_{(1)}}{R}, \qquad D_i\pi^{iR}_{(2)} = 0. \tag{D3}$$
Thus we see that the π RR (1) term is a pure divergence which vanishes upon integration over the sphere. The remaining term is finite by power counting.
For future reference we note that the constraints also require

$$\pi^{\theta\theta}_{(2)} = -\frac{\ell^2\,\pi^{RR}_{(2)}}{R^4}, \tag{D4}$$

so π^{ij}_{(2)} only has two independent components.
Gauge Invariance
Using (2.15) and the fact that h^{(2)}_{ij} is completely fixed by the boundary conditions of Γ(dS^{k=−1})_gf, we see that if ζ generates a gauge transformation on Γ(dS^{k=−1})_gf, then

$$\delta_\zeta\pi^{RR}_{(2)} = -\frac{\coth(T/\ell)}{\ell}\,\delta_\zeta h^{RR}_{(2)} = 0.$$

Therefore δ_ζ K_n = 0.
Conservation
In the gauge h^{(1)}_{ij} = 0, the equation of motion for the induced metric is

$$\dot h_{ij} = 2N\big(\pi_{ij} - \pi h_{ij}\big) + 2D_{(i}N_{j)} + \ldots,$$

where . . . represents terms that fall off like h^{(2)}_{ij} or faster. Solving this equation to lowest order and applying the constraints gives

$$N = 1 + \frac{\ell^3\sinh^2(T/\ell)\tanh(T/\ell)\,\pi^{RR}_{(2)}}{3R^2} + \ldots,$$
$$N^R = \frac{2\ell^2\sinh^2(T/\ell)\,\pi^{RR}_{(2)}}{3R} + \ldots,$$
$$N^\theta = \frac{2\ell^3\sinh^2(T/\ell)\,\pi^{R\theta}_{(2)}}{2R^3} + \ldots, \tag{D7}$$
which satisfy (3.3). Inserting these expressions into the equation of motion for the momentum and again applying the constraints gives two first-order differential equations for the two independent components of π^{ij}_{(2)}:

$$0 = \dot\pi^{RR}_{(2)} + \frac{2}{\ell}\big(2\coth(2T/\ell) + \operatorname{csch}(2T/\ell)\big)\,\pi^{RR}_{(2)},$$
$$0 = \dot\pi^{R\theta}_{(2)} + \frac{4}{\ell}\coth(T/\ell)\,\pi^{R\theta}_{(2)}.$$
The solutions are

$$\pi^{RR}_{(2)} = \frac{\kappa}{2\pi}\,\frac{K(\theta)}{\ell\sinh^3(T/\ell)\cosh(T/\ell)}, \qquad \pi^{R\theta}_{(2)} = \frac{\kappa}{2\pi}\,\frac{J(\theta)}{\ell\sinh^4(T/\ell)\,R^2},$$

where K(θ) and J(θ) are free functions that depend only on θ. Comparison with (3.15) then shows that K_n, J_n are the Fourier components of K(θ), J(θ) and thus are conserved on Γ(dS^{k=−1})_gf.
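The quoted time profiles can be checked against the evolution equations symbolically. The sketch below is our own verification, assuming the factors of ℓ restored as above; the θ-dependent factors K(θ), J(θ) and constant prefactors drop out of the linear equations, so only the T-dependence is tested:

```python
import sympy as sp

T, ell = sp.symbols('T ell', positive=True)
x = T / ell

# Time dependence of the two independent momentum components in the solutions.
piRR = 1 / (sp.sinh(x)**3 * sp.cosh(x))
piRtheta = 1 / sp.sinh(x)**4

# The two first-order evolution equations quoted in the text.
eq1 = sp.diff(piRR, T) + (2 / ell) * (2 * sp.coth(2 * x) + sp.csch(2 * x)) * piRR
eq2 = sp.diff(piRtheta, T) + (4 / ell) * sp.coth(x) * piRtheta

# Rewriting the hyperbolic functions in exponentials makes the cancellation
# between arguments T/ell and 2T/ell manifest.
assert sp.simplify(eq1.rewrite(sp.exp)) == 0
assert sp.simplify(eq2.rewrite(sp.exp)) == 0
```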
FIG. 1: A conformal diagram of dS_d. The region above the diagonal line is the k = 0 cosmological patch. Each point represents an S^{d−2}. Our boundary conditions are applied at the S^{d−2} labeled i^0, which we call spatial infinity. The dashed line shows a representative (t = constant) slice.
FIG. 2: A conformal diagram of dS_d. The region above the diagonal line is the expanding hyperbolic patch. The S^{d−2} labeled i^0 is the spatial infinity at which our boundary conditions are applied. The dashed line shows a representative (T = constant) slice.

III. THE k = −1 PHASE SPACES

We now construct two phase spaces of d ≥ 3 spacetimes which asymptotically approach the hyperbolic (k = −1) patch of de Sitter space (see figure 2) with metric
11) where D i is the covariant derivative on the R = (constant) subsurface of Σ. Since we consider here the metric (3.1), eqn.(3.11) is the conformal Killing equation on the unit S d−2 . The sphere is conformally flat, so the solutions to this equation are the generators of the d − 2 dimensional Euclidean conformal group. For d ≥ 5 this group is SO(d − 1, 1) which is isomorphic to the group generated by (3.2). For d = 4, (3.11) has an infinite number of solutions, however we are only interested in those solutions which are globally well defined on the sphere. These solutions form the subgroup P SL(2, C) ∼ = SO(3, 1), which is again
sign conventions we have treated J n as a momentum and K n as an energy. Here the condition h(1) ij = 0 is relevant only to K n and does not affect J n . From the Poincaré disk description of the 2d hyperbolic plane, one readily sees that diffeomorphisms associated with J n preserve the boundary while those associated with K n do not. A careful study of our boundary conditions (3.3) similarly shows that the phase space Γ(dS k=−1 ) is invariant only under the J n . As a result, the asymptotic symmetry group of either Γ(dS k=−1 ) or Γ(dS k=−1 ) gf is given by a single Virasoro algebra [J n , J m ] = (n − m)J n+m , (3.14) where [A, B] denotes the commutator of the corresponding quantum mechanical charges (i.e., we have inserted an extra factor of i relative to the classical Poisson Bracket). The central charge vanishes due to the symmetry θ → −θ, which reflects the lack of a gravitational Chern-Simons term. Adding such a term to the action should lead to non-vanishing central charge.
[
L n , L m ] = (n − m)L n+m + the left-and right-moving central charges are imaginary complex conjugates in agreement with[32][33][34].
Transforming (3.19) to (T, R, θ, φ) coordinates and finally converting from spherical to Cartesian coordinates, we obtain a metric which approaches (3.1) when M → 0. The explicit form of the metric is unenlightening but yields Δh_ij, Δπ^ij which satisfy (3.3). A similar calculation using the de Sitter-Schwarzschild solution in d ≥ 4 yields fields with h
The data satisfies (3.3) (with h^{(1)}_{ij} = 0) and thus lies in Γ(dS^{k=−1}). We refer to (3.26) as a 'wormhole' since on a given constant T surface the θ circle has a minimum size θ_0 ℓ sinh(T/ℓ) at R = 0. These solutions have J_0 = 0 and K_0 = (1 − (θ_0/2π)²)ℓ/8G. Although (3.26) does not satisfy (3.9), performing an additional coordinate transformation to write (3.26) as an element of Γ(dS^{k=−1})_gf turns out not to change this value of K_0.
performing an additional coordinate transformation to write (3.27) as an element of Γ(dS k=−1 ) gf turns out not to change this value of K 0 . Thus α 0 , θ 0 are constant on any solution (at least when the lapse and shift have the fall off dictated by (3.3)). Indeed, using the Bianchi identities one may show thatJ 0 =K 0 = 0 (as evaluated in one asymptotic region) are precisely the conditions for (3.27) to solve the canonical equations of motion with lapse and shift defined by solving any 3 independent sets of these equations. The resulting lapse and shift can be chosen to satisfy 3 (T / ) cosh(T / )R 3 + O(R −6 ) (3.33) 2 (T / )R 3 + O(R −2 ) (3.34)
k=0 ), Γ(dS k=0 ) gf and Γ(dS k=−1 ), Γ(dS k=−1 ) gf associated with spacetimes asymptotic to the planar-and hyperbolic-sliced regions dS k=0 and dS k=−1 of de Sitter space for d ≥ 3. For d ≥ 4 our phase spaces are non-trivial in the sense that they contain the de Sitter-Schwarzschild solution as well as spacetimes with generic gravitational radiation through I + , provided only that this radiation falls off sufficiently quickly at i 0 . For d = 3 and k = 0 the phase spaces become non-trivial when coupled to matter fields or point particles. Despite the lack of local degrees of freedom, the case d = 3 with k = −1 is non-trivial even without matter due to both boundary gravitons and the family of wormhole spacetimes described in section III C. These solutions are labeled by the topology of I + , the angular momentum, and an energy-like K 0 charge, as well as internal moduli if the topology of I + is sufficiently complicated.
For d = 3
3in 2f n (T, R) − f n (T, R) R (C2)
n
, from which the rest of the series is determined (though as shown in appendix A the terms involving (k<0) n are pure gauge).
n
= B n sinh(T / ) cosh(T / ),
equation of motion for the induced metric iṡh ij = 2N (π ij − πh ij ) + 2D (i N j) + . . . ,
13 )
13where we have defined the tangential and normal parts ξ ⊥ , ξ a to Σ via the decomposition ξ a = ξ ⊥ n a + ξ a .(2.14)Note that (2.13) is the conformal Killing equation for vectors ξ i in Euclidean R d−1 with confromal factor ξ ⊥ e 2t/ / . equation. Expanding this potential in sperical harmonics we obtain a set of symmetries which fall off with various powers of r. Vector fields with terms of order r 2 or higher violate our boundary conditions while those of order r −1 or lower are pure gauge because theyWe wish to discard solutions to this equation which are either pure gauge (ω(£ ξ g, δg) = 0)
or do not preserve our boundary conditions. It is shown at the end of appendix A that if
ξ vanishes as r → ∞ then ω(£ ξ g, δg) = 0. For d > 3 the remaining solutions are the
conformal group of R d−1 . We find using (2.15) that our boundary conditions (2.5) are not
invariant under special conformal transformations 4 . Excluding such transformations leaves
a group isomorphic to the isometries of dS k=0 (see (2.2)).
Due to the infinite-dimensional conformal group of the plane there are additional solutions
to (2.13) for d = 3, each of which can be described by a potential satisfying Laplace's
vanish at infinity. What remains are four vector fields (two of order r 1 , two of order r 0 )
corresponding to the four isometries of dS k=0 for d = 3.
2.41) coincide with ours (up to a shift of the zero of energy) for the particular cases of d = 4, 5 de Sitter-Schwarzschild and the d = 3 spinning conical defect in appropriate coordinates. It might seem natural to suppose that this equivalence extends to all of Γ(dS k=0 ). Such a correspondence is plausible since near i 0 the boundary conditions for Γ(dS k=0 ) require the spacetime to approach exact de Sitter space and the induced metric on I + becomes approximately fixed. However, the exact manner in which the spacetime approaches dS k=0 at large r turns out to be important. We show below up to a possible shift of the zero points). They thus agree with our charges when the gauge condition (2.21) holds, but not in complete generality. Indeed, from the discussion at the end of section II B we see that the charges (2.41) generally diverge on our phase space.that in d = 3, 4, 5 for generators of our asymptotic symmetries the charges (2.41) actually
yield precisely (2.24a), (2.24b), (2.24c) (
43 )
43where G ij is the Einstein tensor of Σ ∞ and the G ij term does not appear for d = 3. It is then clear that first term in (2.43) combines nicely with the explicit π ij term in(2.40) to give a term involving ∆π ij ; i.e., this piece of the counterterm cancels the contribution from the pure dS k=0 background. For d = 3 we then see that (2.41) precisely reproduces (2.24a), (2.24b), and (2.24c), and the same would be true for d = 4, 5 if the G ij term in(2.43) can be ignored. This is in fact the case, as we now show for d = 4, 5 that the G ij term in (2.43) is
47 )
47The leading order contribution to this term vanishes upon integration due to the fact thatξ ⊥ is odd. The remaining terms vanish by power counting. It follows that (2.46) makes no
contribution to the total charge and we see that, as previously claimed, the charges (2.41)
are given up to a possible shift of the zero-point by (2.24a), (2.24b), (2.24c). Thus the
charges (2.41) agree with ours in the asymptotic gauge (2.21).
10 )
10up to terms which vanish at infinity, where D i is the covariant derivative on Σ. For d ≥ 4we project this equation onto a R = (constant) submanifold which gives
). These boundary conditions make Q[∂ t ] finite and allow one to prove that Q[∂ t ] ≥ 0. However, as the authors note, they do not generally make finite the charges associated with the asymptotic Killing fields. In contrast, our boundary conditions were chosen specifically to make such 'Killing charges' finite. Luo et. al.[12] use boundary conditions that are similar to[11]. They require also that a foliation be constructed with vanishing shift and ∆ h ij = O(r −1 ), ∆ K ij = O(r −2 ).A simple calculation yields ∆ π ij = ∆π ij −√h
2∆h ij + ∆h k
kh
ij + O(∆h 2 ).
(4.2)
A finite well-defined Hamiltonian that preserves the phase space ensures that the Poisson bracket is conserved. Since the symplectic product is the inverse of the Poisson structure, it too must be conserved.
Specifically, acting on the Schwarzschild de Sitter solution (2.28) below twice with a Lie derivative along the generator of special conformal transformations gives a term which violates our boundary conditions.
Up to a possible shift of the zero of energy. Because the counter-terms required by[10] proliferate in higher dimensions, section II D considers only the cases d = 3, 4, 5 though we expect similar results for higher dimensions. In addition, the charges of[9] differ by an overall sign.
Our conventions are related to those of [10] by M ⇒ 1/8G − m and aM ⇒ −J.
Of course, this requires that lapse and shift also be chosen so that this gauge condition is preserved. This is always possible since (2.21) is a valid gauge condition.
AcknowledgementsWe thank Tomas Andradé, Curtis Asplund, Andy Strominger, Jennie Traschen, and David Kastor for interesting discussions concerning de Sitter charges. We also thank Alejandra Castro, Matthias Gaberdiel, and Alex Maloney for discussions related to the algebra (3.16). This work was supported in part by the National Science Foundation under Grant No PHY08-55415, and by funds from the University of California.Appendix A: Finiteness of the charges for Γ(dS k=0 ).As noted in section II B, despite naive power-counting divergences in(2.17), finiteness of the symplectic structure (2.11) under the boundary conditions (2.5) guarantees that charges defined by asymptotic symmetries are in fact finite on solutions. We now show this explicitly by solving the constraints at the leading orders in 1/r. Below, we use the notation defined in (2.42).11However it is possible for the same spacetime to admit two different foliations with one satisfying(3.3)and the other satisfying the boundary conditions of[11,12]. This is in particular the case for the de Sitter-Schwarzschild solution; see[11,12].Appendix B: The radial Hamiltonian constraintWe now fill in some details of the analysis of the radial Hamiltonian constraint (2.44) in sections II D and III D.Let us write σ ij =σ ij + ∆σ ij , θ ij =θ ij + ∆θ ij , whereσ ij ,θ ij are the induced metric and extrinsic curvature of ∂Σ ∞ in exact dS k=0 . In particular,where . . . represents a linear combination of higher order terms in ∆h ij and total divergences on ∂Σ ∞ .Next we use the definition of θ ij to note that r m σ jk D j ∆h km = −∆h km θ km + (Pure Divergence on ∂Σ ∞ ) .Using (B1) then yieldŝ r m σ jk D j ∆h km = − ∆σ km σ km r + · · · = ∆σ km σ km r + . . . .Combining the results above gives (2.45).
[1] R. Arnowitt, S. Deser, and C. W. Misner, Phys. Rev. 116, 1322 (1959).
[2] T. Regge and C. Teitelboim, Annals of Physics 88, 286 (1974).
[3] M. Henneaux and C. Teitelboim, Communications in Mathematical Physics 98, 391 (1985).
[4] J. D. Brown and M. Henneaux, Communications in Mathematical Physics 104, 207 (1986).
[5] A. Ashtekar, L. Bombelli, and R. Koul, in The Physics of Phase Space, edited by Y. S. Kim and W. W. Zachary (Springer-Verlag, Berlin, 1987).
[6] A. Ashtekar, L. Bombelli, and O. Reula, in "Analysis, Geometry and Mechanics: 200 Years After Lagrange", edited by M. Francaviglia and D. Holm (North-Holland, Amsterdam, 1990).
[7] L. Abbott and S. Deser, Nucl. Phys. B 195, 76 (1982).
[8] M. Banados, T. Brotz, and M. E. Ortiz, Phys. Rev. D59, 046002 (1999), hep-th/9807216.
[9] A. Strominger, JHEP 0110, 034 (2001), hep-th/0106113.
[10] V. Balasubramanian, J. de Boer, and D. Minic, Phys. Rev. D65, 123508 (2002), hep-th/0110108.
[11] D. Kastor and J. H. Traschen, Class. Quant. Grav. 19, 5901 (2002), hep-th/0206105.
[12] M.-x. Luo, N.-q. Xie, and X. Zhang, Nucl. Phys. B825, 98 (2010), 0712.4113.
[13] D. Anninos, G. S. Ng, and A. Strominger, Class. Quant. Grav. 28, 175019 (2011), 1009.4730.
[14] G. Kleppe, Phys. Lett. B317, 305 (1993).
[15] S. P. Miao, N. C. Tsamis, and R. P. Woodard, J. Math. Phys. 50, 122502 (2009), 0907.4930.
[16] S. P. Miao, N. C. Tsamis, and R. P. Woodard (2010), 1002.4037.
[17] S. Miao, N. Tsamis, and R. Woodard (2011), 1106.0925.
[18] S. Miao, N. Tsamis, and R. Woodard (2011), 1107.4733.
[19] E. Kahya, S. Miao, and R. Woodard (2011), 1112.4420.
[20] B. Allen, Phys. Rev. D34, 3670 (1986).
[21] B. Allen, Nucl. Phys. B287, 743 (1987).
[22] B. Allen and M. Turyn, Nucl. Phys. B292, 813 (1987).
[23] A. Higuchi, Nucl. Phys. B282, 397 (1987).
[24] A. Higuchi, Class. Quant. Grav. 8, 2005 (1991).
[25] A. Higuchi, Class. Quant. Grav. 8, 1961 (1991).
[26] A. Higuchi and S. S. Kouris, Class. Quant. Grav. 17, 3077 (2000), gr-qc/0004079.
[27] A. Higuchi and S. S. Kouris, Class. Quant. Grav. 18, 4317 (2001), gr-qc/0107036.
[28] A. Higuchi, D. Marolf, and I. A. Morrison, Class. Quant. Grav. 28, 245012 (2011), 1107.2712.
[29] M. Guica, T. Hartman, W. Song, and A. Strominger, Phys. Rev. D80, 124008 (2009), 0809.4266.
[30] I. Bredberg, C. Keeler, V. Lysov, and A. Strominger, Nucl. Phys. Proc. Suppl. 216, 194 (2011), 1103.2355.
[31] J. Fjelstad, S. Hwang, and T. Mansson, Nucl. Phys. B641, 376 (2002), hep-th/0206113.
[32] V. Balasubramanian, J. de Boer, and D. Minic, Class. Quant. Grav. 19, 5655 (2002), hep-th/0207245.
[33] J. M. Maldacena, JHEP 0305, 013 (2003), astro-ph/0210603.
[34] P. Ouyang (2011), 1111.0276.
[35] T. N. Palmer, Journal of Mathematical Physics 19, 2324 (1978).
[36] S. Deser and R. Jackiw, Annals Phys. 153, 405 (1984).
[37] S. Deser, R. Jackiw, and G. 't Hooft, Annals Phys. 152, 220 (1984).
[38] M. Henneaux, Phys. Rev. D29, 2766 (1984).
[39] J. D. Brown and J. W. York, Physical Review D 47, 1407 (1993).
[40] M. Henningson and K. Skenderis, JHEP 9807, 023 (1998), hep-th/9806087.
[41] V. Balasubramanian and P. Kraus, Commun. Math. Phys. 208, 413 (1999), hep-th/9902121.
[42] D. Anninos, G. S. Ng, and A. Strominger (2011), 1106.1175.
[43] R. M. Wald and A. Zoupas, Phys. Rev. D61, 084027 (2000), gr-qc/9911095.
[44] H. Bondi, M. van der Burg, and A. Metzner, Proc. Roy. Soc. Lond. A269, 21 (1962).
[45] R. Sachs, Proc. Roy. Soc. Lond. A270, 103 (1962).
[46] R. Sachs, Phys. Rev. 128, 2851 (1962).
[47] R. Penrose, Phys. Rev. Lett. 10, 66 (1963).
[48] B. Carter, in Black Holes, edited by B. DeWitt and C. DeWitt (Gordon and Breach, New York, 1973).
[49] M. Banados, C. Teitelboim, and J. Zanelli, Phys. Rev. Lett. 69, 1849 (1992), hep-th/9204099.
[50] M. Banados, M. Henneaux, C. Teitelboim, and J. Zanelli, Phys. Rev. D48, 1506 (1993), gr-qc/9302012.
[51] S. Aminneborg, I. Bengtsson, D. Brill, S. Holst, and P. Peldan, Class. Quant. Grav. 15, 627 (1998), gr-qc/9707036.
[52] S. Aminneborg, I. Bengtsson, and S. Holst, Class. Quant. Grav. 16, 363 (1999), gr-qc/9805028.
| [] |
[
"Sensing User's Channel and Location with Terahertz Extra-Large Reconfigurable Intelligent Surface under Hybrid-Field Beam Squint Effect",
"Sensing User's Channel and Location with Terahertz Extra-Large Reconfigurable Intelligent Surface under Hybrid-Field Beam Squint Effect"
] | [
"Zhuoran Li ",
"Zhen Gao ",
"Tuan Li "
] | [] | [] | This paper investigates the sensing of user's uplink channel and location in terahertz extra-large reconfigurable intelligent surface (XL-RIS) systems, where the unique hybrid far-near field effect and the beam squint effect caused by the XL array aperture as well as the XL bandwidth are overcome. Specifically, we first propose a joint channel and location sensing scheme, which consists of a location-assisted generalized multiple measurement vector orthogonal matching pursuit (LA-GMMV-OMP) algorithm for channel estimation (CE) and a complete dictionary based localization (CDL) scheme, where a frequency selective polar-domain redundant dictionary is proposed to overcome the hybrid field beam squint effect. The CE module outputs coarse on-grid angle estimation (respectively observed from the BS and RIS) to the localization module, which returns the fine off-grid angle estimation to improve CE. Particularly, with RIS, CDL can obtain user's location via line intersection, and a polar-domain gradient descent (PGD) algorithm at the base station is proposed to achieve the off-grid angle estimation with super-resolution accuracy. Additionally, to further reduce the sensing overhead, we propose a partial dictionary-based localization scheme, which is decoupled from CE, where RIS is served as an anchor to lock the user on the hyperbola according to time difference of arrival and the user's off-grid location can be obtained by using the proposed PGD algorithm. Simulation results demonstrate the superiority of the two proposed localization schemes and the proposed CE scheme over state-of-the-art baseline approaches. | 10.1109/jstsp.2023.3278942 | [
"https://export.arxiv.org/pdf/2305.07184v2.pdf"
] | 258,676,200 | 2305.07184 | 7a9c9b2195d3ed44338fb37eaa72153c2335fdbf |
Sensing User's Channel and Location with Terahertz Extra-Large Reconfigurable Intelligent Surface under Hybrid-Field Beam Squint Effect
Zhuoran Li
Zhen Gao
Tuan Li
Index Terms—Terahertz communications, XL-array, hybrid far-near field, beam squint, reconfigurable intelligent surface, wireless sensing and localization
This paper investigates the sensing of user's uplink channel and location in terahertz extra-large reconfigurable intelligent surface (XL-RIS) systems, where the unique hybrid far-near field effect and the beam squint effect caused by the XL array aperture as well as the XL bandwidth are overcome. Specifically, we first propose a joint channel and location sensing scheme, which consists of a location-assisted generalized multiple measurement vector orthogonal matching pursuit (LA-GMMV-OMP) algorithm for channel estimation (CE) and a complete dictionary based localization (CDL) scheme, where a frequency selective polar-domain redundant dictionary is proposed to overcome the hybrid field beam squint effect. The CE module outputs coarse on-grid angle estimation (respectively observed from the BS and RIS) to the localization module, which returns the fine off-grid angle estimation to improve CE. Particularly, with RIS, CDL can obtain user's location via line intersection, and a polar-domain gradient descent (PGD) algorithm at the base station is proposed to achieve the off-grid angle estimation with super-resolution accuracy. Additionally, to further reduce the sensing overhead, we propose a partial dictionary-based localization scheme, which is decoupled from CE, where RIS is served as an anchor to lock the user on the hyperbola according to time difference of arrival and the user's off-grid location can be obtained by using the proposed PGD algorithm. Simulation results demonstrate the superiority of the two proposed localization schemes and the proposed CE scheme over state-of-the-art baseline approaches.
I. INTRODUCTION
A. Prior Works

The cellular network localization is a prerequisite for various critical applications in 6G. Compared to satellite localization, cellular network localization is more suitable for indoor and urban environments. Conventional cellular network localization methods can be divided into three categories, depending on the receive signal strength (RSS), the time of arrival (ToA)/time difference of arrival (TDoA), and the angle of arrival (AoA)/angle of departure (AoD), respectively [1]. In contrast to RSS-based localization methods, which are mainly used indoors due to their poor accuracy for outdoor localization [1], [2], ToA/TDoA- and AoA/AoD-based localization methods can be used both indoors and outdoors. ToA-based methods estimate the delays between multiple anchors and the user equipment (UE) to obtain the UE's location according to the intersection of several circles [3], while TDoA-based methods resort to the time differences between anchors and the UE to conduct hyperbolic localization [4]. The advantage of TDoA-based methods over ToA-based methods is that the former only require accurate time synchronization between anchors [4], [5], which avoids the effect of the clock offset between the UE and the BS. Additionally, in the OFDM frequency-domain model, the cyclic prefix can only be used to obtain the delay of each multipath component with respect to the first path, and the absolute delay of each path is not available. Therefore, the communication protocol for localization is based on the TDoA and the round trip time [6], [7]. Benefiting from the high angle resolution of massive multiple-input multiple-output (mMIMO) and even extra-large MIMO (XL-MIMO) systems, AoA-based UE localization has been studied in [3], [8]. In [3], multiple base stations (BSs) were utilized to sense the UE's location, where a matched filtering method and a compressed sensing (CS) method were used to estimate the ToA and the AoA, respectively.
However, this work required the collaboration of multiple BSs and therefore cannot be applied to the case of a single BS. In [8], a distributed compressed sensing-simultaneous orthogonal matching pursuit (OMP) algorithm was utilized to estimate the AoA, AoD, and ToA, which were then refined using the expectation maximization algorithm to sense the UE's location indoors. However, the UE needed to locate scatterers in order to calculate its own location. Moreover, this scheme cannot work well if the UE is equipped with only one antenna.
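The TDoA/AoA ideas surveyed above are easy to make concrete. The sketch below is a generic illustration (not the specific method of [3], [4], or [8]): a TDoA measurement locks the UE on one branch of a hyperbola between two anchors, and an AoA ray from one anchor pins down the point. The anchor and UE coordinates, and the assumption of noiseless measurements, are illustrative.

```python
import numpy as np

# Generic TDoA + AoA localization sketch (illustrative coordinates).
c = 3e8
anchors = np.array([[0.0, 0.0], [40.0, 0.0]])    # two anchors, e.g., a BS and a RIS
ue = np.array([25.0, 18.0])                      # true (unknown) UE position

# TDoA observation: range difference between the two anchor links.
d0 = np.linalg.norm(ue - anchors[0])
d1 = np.linalg.norm(ue - anchors[1])
tdoa = (d0 - d1) / c

# The TDoA constrains the UE to one branch of the hyperbola
#   ||p - a0|| - ||p - a1|| = c * tdoa;
# an AoA measurement at anchor 0 (assumed perfect here) gives a ray.
theta = np.arctan2(ue[1] - anchors[0, 1], ue[0] - anchors[0, 0])
u = np.array([np.cos(theta), np.sin(theta)])

def residual(r):
    # Hyperbola mismatch at range r along the AoA ray (monotone in r).
    p = anchors[0] + r * u
    return r - np.linalg.norm(p - anchors[1]) - c * tdoa

# 1-D bisection for the ray/hyperbola intersection.
lo, hi = 1.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
p_hat = anchors[0] + 0.5 * (lo + hi) * u
print(p_hat)    # recovers the UE position (25, 18)
```

The bisection works because the range difference is monotone along the ray; with noisy TDoA/AoA, the same intersection would instead be solved in a least-squares sense.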
In addition, several studies have investigated reconfigurable intelligent surface (RIS)-assisted UE localization [9]-[11]. A RIS-self-sensing system was proposed in [9], which designed the phases of the RIS and adopted the multiple signal classification (MUSIC) algorithm to estimate the AoA. However, this work only estimated the AoA, rather than the specific location of the UE. Using the degrees of freedom of observation brought by multiple RISs, random beamforming and a maximum likelihood estimation method were utilized in [10] to estimate the AoD and sense the UE's location. However, this work requires the UE to have perfect knowledge of the locations of the RISs, which is difficult to achieve in practice. Moreover, downlink localization imposes a certain computational burden on the UE compared to uplink localization. In [11], the RIS-assisted localization error bound was analyzed, providing theoretical guidance for the deployment of the RIS. However, the process of designing the phase of the RIS and analyzing the error bound requires the UE's location in advance, which is difficult to achieve in practice.
As for channel sensing or channel estimation (CE), the path loss is severe for terahertz (THz) signals, so the angle-domain representation of mMIMO and XL-MIMO channels presents sparse features. To exploit the angle-domain sparsity, various CS methods (e.g., OMP algorithms and their derivatives) have been proposed to sense the channels [12]-[15]. For wideband mMIMO systems, a distributed grid matching pursuit (DGMP) algorithm was proposed in [14]. For the CE problem in the near-field region, the polar-domain simultaneous OMP (PSOMP) algorithm has been proposed in [13], which only works when the beam squint effect (BSE) is not obvious. However, the aforementioned algorithms rely on on-grid processing, which suffers from limited estimation resolution due to the continuously distributed AoA/AoD. Therefore, off-grid super-resolution channel sensing algorithms were proposed to improve the channel sensing accuracy [16], [17].

[Table residue: the comparison table's feature-column headers were lost in extraction. Methods per work: [3] direct source localization; [4] maximum likelihood estimation; [8] modified OMP with expectation maximization; [9] customized MUSIC; [10] maximum likelihood estimation; [18] maximum likelihood estimation; [19] time-delay lines-assisted localization; [20] successive localization and beamforming; our work: the LA-GMMV-OMP algorithm along with the CDL and PDL schemes.]
On the other hand, due to the high carrier frequency of millimeter-wave (mmWave)/THz and the large aperture of XL-MIMO or XL-RIS, the Rayleigh distance becomes significantly large in cellular networks, and therefore the conventional far-field assumption is not always valid. Accordingly, near-field communications have attracted much attention recently. In [13], the coexistence of the near field and the far field is referred to as the hybrid far-near field (HFNF) effect. Meanwhile, the BSE [21] induced by ever-increasing bandwidth severely limits the performance of communications and network sensing in XL-MIMO systems. In [13], to acquire better performance in the near field, the polar-domain transform matrix (PTM) was proposed to replace the Fourier transform matrix (FTM) in OMP-based channel sensing schemes. In contrast to the FTM, which only has angle-domain resolution, the PTM has both angle-domain and distance-domain resolution, which can overcome the energy spread effect [13]. A solution to sense the UE's location under the near-field BSE was proposed in [19], but the localization accuracy was poor. In [18], the RIS was used as a lens and its phase was specifically designed, where a maximum likelihood estimation method was utilized to estimate the UE's location in the near-field region without BSE. Although sophisticated methods have been proposed in [21], [22] to overcome the BSE in communication systems, the influence of this effect on cellular network sensing has not been well studied at the time of writing.
So far, the aforementioned localization methods have seldom considered the HFNF BSE, which is common in mmWave/THz XL-MIMO systems with very large bandwidth [3], [8]-[10], [18], [20]. Even when the HFNF BSE was taken into consideration, the accuracy of localization did not meet the requirements of 6G communications [19]. In addition, although UE localization and channel sensing have been jointly investigated in [8], [20] in the far-field region without the BSE, no current research has investigated joint localization and channel sensing in the HFNF channel with BSE. Therefore, the study of joint UE localization and channel sensing in RIS-assisted XL-array systems under the HFNF BSE is still in its early stage.
B. Our Contributions
This paper proposes two RIS-assisted localization paradigms for network sensing, where the HFNF channel with BSE is considered. Specifically, we propose a joint channel and location sensing scheme in Section III and a pure location sensing scheme not relying on channel estimation in Section IV. The sensing procedure for the UE's channel and location is summarized in Fig. 1.
Our contributions 1 are summarized as follows:
• We design a frequency-selective polar-domain redundant dictionary (FSPRD) for sensing the UE's channel and location under the HFNF BSE. The conceived FSPRD is developed from the PTM [13], so that the angle-distance parameters can be reliably estimated under HFNF channels. Moreover, BSE indicates that the virtual angle-distance representation under HFNF channels shifts as the subcarrier deviates from the central carrier 2 . The proposed FSPRD can compensate the offsets for different subcarriers so that the identical physical angle-distance parameters among different subcarriers can be ensured and exploited for enhanced sensing. • We propose a joint channel and location sensing scheme. This solution consists of a location-assisted generalized multiple measurement vector orthogonal matching pursuit (LA-GMMV-OMP) algorithm for CE and a complete dictionary based localization (CDL) scheme. The CE module outputs coarse on-grid angle estimation (respectively observed from the BS and RIS) to the localization module, which returns the fine off-grid angle estimation to improve CE. Specifically, the correlation operation of the LA-GMMV-OMP algorithm outputs a coarse AoA estimation to the CDL scheme for localization. In CDL scheme, a polar-domain gradient descent (PGD) algorithm is proposed to obtain the fine off-grid estimation of AoA seen from the BS, and the polar-domain hierarchical dictionary (PHD) is utilized to obtain the fine estimation of AoA observed from the RIS. On this basis, we can obtain the accurate UE's location, which can further facilitate the line-of-sight (LoS) channel reconstruction and therefore improve the channel sensing performance of LA-GMMV-OMP algorithm. Note that, by adding multiple atoms to the support set in each iteration, the LA-GMMV-OMP algorithm can better estimate NLoS paths in channels with cluster structure. 
At the same time, we apply a novel adaptive iterative stopping criterion, which has more stable performance than the conventional residual-based criterion.
• We propose a PGD algorithm to acquire the fine off-grid estimation of AoAs at the BS. Since the on-grid estimation of the AoA based on the quantized FSPRD has limited resolution, the sensing performance for the UE's channel and location has limited precision. Therefore, an off-grid PGD algorithm is dedicatedly designed. By carefully designing the combiner of the BS, the AoA estimation can be decoupled from the distance and we can obtain an equivalent LoS path channel, which can be used to obtain a loss function with good local convexity, since it only consists of the channel gain and the HFNF steering vector. On this basis, the PGD algorithm is proposed to obtain the off-grid AoA estimation at the BS without knowing the exact distance.
• When only the UE's location is required, the sensing signal overhead can be further reduced. Therefore, to directly sense the UE's location, we further propose a partial dictionary based localization (PDL) scheme, where the TDoA is utilized to lock the UE on the hyperbola and both the BS and the RIS serve as anchors. Particularly, a partial FSPRD is generated on the hyperbola to obtain the coarse AoA, and then the PGD algorithm is utilized to improve the accuracy of the AoA estimation. Since the UE is locked on the hyperbola, the size of the FSPRD and the involved computational complexity can be considerably reduced.
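The off-grid refinement idea behind the PGD bullet above can be illustrated with a deliberately simplified stand-in: instead of the paper's polar-domain HFNF loss, the sketch below refines a coarse on-grid AoA by gradient ascent on the far-field correlation |a(θ)^H y|². The array size, step size, grid, and angle values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Simplified off-grid AoA refinement by gradient ascent (illustrative).
N = 64
delta = (2 * np.arange(N) - N + 1) / 2.0

def steer(theta):
    # Far-field steering vector at the central carrier (d = lambda_c / 2).
    return np.exp(1j * np.pi * theta * delta) / np.sqrt(N)

theta_true = 0.4137
y = 0.8 * steer(theta_true)          # noiseless single-path observation

def J(theta):
    # Correlation objective; locally concave around the true angle.
    return np.abs(steer(theta).conj() @ y) ** 2

theta = 0.41                         # coarse on-grid estimate (assumed grid step 0.01)
step, eps = 2e-4, 1e-6
for _ in range(300):
    grad = (J(theta + eps) - J(theta - eps)) / (2 * eps)   # numerical gradient
    theta += step * grad             # ascend the correlation objective
print(theta)    # converges toward theta_true = 0.4137
```

The actual PGD works on the HFNF steering vector after a combiner decouples angle from distance; the local concavity of the correlation around the coarse grid point is what makes a gradient step sufficient in both cases.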
C. Notation
Throughout this paper, scalar variables are denoted by normal-face letters, while boldface lower- and uppercase letters denote column vectors and matrices, respectively; the transpose and conjugate transpose operators are denoted by (·)^T and (·)^H, respectively; j = √−1 is the imaginary unit; C is the set of complex-valued numbers; |A|_c is the cardinal number of the set A; ∅ is the empty set; X_{:,m1:m2} is the matrix composed of the m1-th to m2-th columns of the matrix X ∈ C^{N×M}; [m] in X[m](θ) means extracting the elements of X indexed by [m], where θ is the argument of X[m]; diag(a) is a diagonal matrix with the elements of a on its diagonal; tr(·) is the trace operator; |s| is the magnitude of s, whether s is a real or a complex-valued number; ∥s∥_F and ∥S∥_F are the Frobenius norms of the vector s and the matrix S, respectively; CN(µ, Σ) is the Gaussian distribution with mean µ and covariance Σ; U(a, b) is the uniform distribution between a and b; R(s) is the real part of the complex-valued number s; ⌈s⌉ is the smallest integer greater than or equal to s; ∂(·) is the first-order partial derivative operation; ⊙ is the Hadamard product; 0_n, 1_n and I_n are the all-zero vector of size n, the all-one vector of size n, and the n × n identity matrix, respectively; c is the speed of light.
II. SYSTEM MODEL

A. Channel Model
We consider that each UE is equipped with one omnidirectional antenna³. In order to reduce the prohibitive cost and power consumption of XL-array systems, hybrid beamforming is adopted. Specifically, the BS is equipped with an N-element uniform linear array (ULA) while only N_RF radio frequency (RF) chains are adopted (N_RF < N), the RIS has N_RIS elements, and M subcarriers are assigned to each UE.
In Fig. 2, ϑ_B (ϑ_R) is the angle between the BS (RIS) array and the x-axis, and ϑ_BU (ϑ_RU) is the AoA from the UE to the BS (RIS) relative to the normal direction of the array. For convenience, we define θ as the sine of the true AoA ϑ in radians, i.e., θ = sin(ϑ).
In Fig. 2, if the distance between the BS and the UE is less than the Rayleigh distance, or more accurately the effective Rayleigh distance described in Section II-B, the far-field planar-wavefront assumption is no longer valid. In this case, the near-field channel between each antenna element and the UE is not only determined by the AoA, but also by the distance. Therefore, in XL-array systems, in order to model the near-field and the far-field channel simultaneously, the HFNF channel is adopted, and the channel between the BS (or RIS) and the UE on the m-th subcarrier h[m] ∈ C^N (or C^{N_RIS}) can be modeled as follows
$$\mathbf{h}[m] = \sum_{l=0}^{L}\sum_{g=1}^{G_l} e^{-jk_m \bar{r}_{l,g}}\,\boldsymbol{\alpha}_{l,g}[m] \odot \mathbf{b}_{l,g}[m](f_m, \theta_{l,g}, r_{l,g}), \tag{1}$$

where L denotes the number of clusters from the UE to the BS (or RIS), G_l denotes the number of paths in the l-th cluster, α_{l,g}[m] ∈ C^N (or C^{N_RIS}) denotes the channel gain of the g-th path in the l-th cluster on the m-th subcarrier, f_m = f_c − B/2 + (m − 1)B/M denotes the frequency of the m-th subcarrier, B denotes the bandwidth, λ_m denotes the wavelength of the m-th subcarrier, k_m = 2π/λ_m denotes the wavenumber of the m-th subcarrier, r̄_{l,g} denotes the total distance between the UE and the reference point of the BS (or RIS) array associated with the g-th path in the l-th cluster while r_{l,g} denotes the distance of the last hop between the scatterer and the reference point of the BS (or RIS) array, and θ_{l,g} denotes the AoA of the g-th path in the l-th cluster between the reference point of the BS (or RIS) array and the UE (or scatterers). The reference point of the BS (or RIS) array is set to the center of the array.
α_{l,g}[m] = α^S_{l,g}[m] α^F_{l,g}[m] ⊙ α^A_{l,g}[m], where α^F_{l,g}[m] denotes the large-scale fading and can be depicted by the Friis formula, α^S_{l,g}[m] denotes the small-scale fading, and α^A_{l,g}[m] denotes the attenuation due to atmospheric gases (mainly water vapour and oxygen), which can be modeled based on the data in ITU-R P.676-12 [24]-[27]. It is worth noting that it is difficult to communicate efficiently in some frequency bands due to severe molecular absorption. Therefore, the THz band is divided into a number of less heavily absorbed subbands, also known as transmission windows [26]. The frequency bands considered in this paper fall within these transmission windows. In addition, we define the path with l = 0 as the LoS path and the paths with l > 0 as the non-line-of-sight (NLoS) paths. Therefore, G_0 = 1 and we only use the subscript l = 0 to represent the channel gain α_0[m], distance r_0, and AoA θ_0 of the LoS channel. The HFNF steering vector on the m-th subcarrier b_{l,g}[m] ∈ C^N (or C^{N_RIS}) can be acquired as

$$\mathbf{b}_{l,g}[m](f_m, \theta_{l,g}, r_{l,g}) = \frac{1}{\sqrt{N}}\left[e^{-jk_m(r_{l,g,0}-r_{l,g})}, \cdots, e^{-jk_m(r_{l,g,N-1}-r_{l,g})}\right]^T, \tag{2}$$
where N in (2) can be replaced by N_RIS if the HFNF steering vector is from the UE to the RIS (the same holds in the following, where the specific value can be determined according to the context), $r_{l,g,n} = \sqrt{r_{l,g}^2 - 2 r_{l,g}\,\delta_n d\,\theta_{l,g} + \delta_n^2 d^2}$ [13] denotes the distance between the UE and the n-th element of the BS (or RIS) array associated with the g-th path in the l-th cluster, d = λ_c/2 denotes the element spacing of the BS (or RIS) array, λ_c is the carrier wavelength, and δ_n = (2n − N + 1)/2, n = 0, · · · , N − 1. (Footnote 4: r̄_{l,g} degrades to r_{l,g} if the path is the LoS path.)

Fig. 2. The model of a RIS-assisted localization system; this is also a schematic diagram of the proposed location sensing scheme not relying on channel estimation in Section IV.

When r_{l,g} is large enough, in other words, when the UE is in the far-field region of the BS (or RIS), the HFNF steering vector in (1) can be degenerated to the far-field steering vector a_{l,g}[m] ∈ C^N (or C^{N_RIS}) as follows
$$\begin{aligned} \mathbf{a}_{l,g}[m](f_m, \theta_{l,g}) &= \frac{1}{\sqrt{N}}\left[e^{jk_m\theta_{l,g}d\delta_0}, \cdots, e^{jk_m\theta_{l,g}d\delta_{N-1}}\right]^T = \frac{1}{\sqrt{N}}\left[e^{j\frac{2\pi}{c}f_m\theta_{l,g}d\delta_0}, \cdots, e^{j\frac{2\pi}{c}f_m\theta_{l,g}d\delta_{N-1}}\right]^T \\ &\triangleq \mathbf{a}_{l,g}[m](\Xi_{m,l,g}) = \frac{1}{\sqrt{N}}\left[e^{j\pi\Xi_{m,l,g}\delta_0}, \cdots, e^{j\pi\Xi_{m,l,g}\delta_{N-1}}\right]^T, \end{aligned} \tag{3}$$
where Ξ_{m,l,g} = (f_m/f_c) θ_{l,g}, obtained by using d = λ_c/2 = c/(2f_c), which is consistent with the beam angle in [22], [28], [29]. The parameters of (3) are defined the same as in (2).
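The steering vectors (2) and (3) can be checked numerically. The sketch below builds both for a single path (unit gain, no clusters) and measures the correlation between them, which is the quantity ρ used in the next subsection. N and f_c mirror values quoted later in this section, while the test distances are illustrative assumptions.

```python
import numpy as np

c = 3e8
fc = 0.1e12                      # 0.1 THz central carrier
N = 256
d = c / (2 * fc)                 # element spacing d = lambda_c / 2
delta = (2 * np.arange(N) - N + 1) / 2.0   # delta_n = (2n - N + 1) / 2

def b_near(fm, theta, r):
    # HFNF steering vector of (2), using exact per-element distances.
    km = 2 * np.pi * fm / c
    rn = np.sqrt(r**2 - 2 * r * delta * d * theta + (delta * d) ** 2)
    return np.exp(-1j * km * (rn - r)) / np.sqrt(N)

def a_far(fm, theta):
    # Far-field steering vector of (3).
    km = 2 * np.pi * fm / c
    return np.exp(1j * km * theta * d * delta) / np.sqrt(N)

theta = 0.5
for r in (5.0, 30.0, 300.0):
    rho = np.abs(a_far(fc, theta).conj() @ b_near(fc, theta, r))
    print(r, rho)   # rho grows toward 1 as r leaves the near-field region
```

The decay of ρ at short distances is exactly the spherical-wavefront mismatch that motivates the effective Rayleigh distance below.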
B. Boundary Between Near-Field and Far-Field
A = Nd (or A = N_RIS d) is the aperture of the ULA at the BS (or RIS). Conventionally, the boundary between the near-field spherical wave and the far-field plane wave can be defined according to the classical Rayleigh distance [30], i.e.,
$$Z = 2A^2/\lambda_c, \tag{4}$$
which only depends on the aperture A and the central frequency f_c. However, when the number of elements at the BS (or RIS) and the system bandwidth become large, i.e., for THz XL-array systems, the definition (4) is not always accurate, since the boundary is also determined by the frequency of the subcarrier and the AoA of the incident signal. Therefore, in [30], an effective Rayleigh distance is further defined as
$$Z_m^{\rm eff}(\theta) = \epsilon\,(1 - \theta^2)\,2A^2/\lambda_m, \tag{5}$$
where ϵ can be defined artificially based on the requirements. According to [30], we can obtain ϵ by defining the constraint as
$$\left|\chi(f_m, \theta, Z_m^{\rm eff}(\theta)) - \rho(f_m, \theta, Z_m^{\rm eff}(\theta))\right|^2 = \hbar, \tag{6}$$
where

$$\chi(f_m, \theta, Z_m^{\rm eff}(\theta)) = \left|\mathbf{b}[m](f_m, \theta, Z_m^{\rm eff}(\theta))^H \mathbf{b}[m](f_m, \theta, Z_m^{\rm eff}(\theta))\right|,$$
$$\rho(f_m, \theta, Z_m^{\rm eff}(\theta)) = \left|\mathbf{a}[m](f_m, \theta)^H \mathbf{b}[m](f_m, \theta, Z_m^{\rm eff}(\theta))\right|.$$
χ represents the autocorrelation of the HFNF steering vector, and ρ represents the correlation between the HFNF steering vector and the far-field steering vector. Since ℏ is defined as the loss incurred when a spherical wave is approximated as a plane wave, a greater ℏ indicates a more significant difference between χ and ρ. Given ℏ, ϵ and the corresponding Z_m^eff(θ) in (5) can be determined; with the steering vectors in (3) and (2), respectively, ϵ is 0.4 and Z_c^eff(θ) is 29.5 m. However, if (4) is utilized, the classical Rayleigh distance is 98.3 m, which is much greater than the effective Rayleigh distance. Meanwhile, if the classical Rayleigh distance is used as the boundary between the near-field region and the far-field region, the loss is ℏ = 0.0096, which is ten times smaller than the loss if the effective Rayleigh distance is used. Therefore, this paper adopts the effective Rayleigh distance rather than the classical Rayleigh distance as the boundary between the near-field and far-field regions.
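The criterion (6) can also be evaluated numerically instead of through the closed form (5). The sketch below scans the distance axis for the smallest r at which |χ − ρ|² falls below the threshold. Since the ℏ behind ϵ = 0.4 is not stated here, the value ℏ = 0.05 is an assumed illustration, so the resulting boundary need not equal 29.5 m.

```python
import numpy as np

c = 3e8
fc = 0.1e12
N = 256
d = c / (2 * fc)
delta = (2 * np.arange(N) - N + 1) / 2.0
km = 2 * np.pi * fc / c

def b_near(theta, r):
    # HFNF steering vector of (2) at the central carrier.
    rn = np.sqrt(r**2 - 2 * r * delta * d * theta + (delta * d) ** 2)
    return np.exp(-1j * km * (rn - r)) / np.sqrt(N)

def a_far(theta):
    # Far-field steering vector of (3) at the central carrier.
    return np.exp(1j * km * theta * d * delta) / np.sqrt(N)

def loss(theta, r):
    chi = np.abs(b_near(theta, r).conj() @ b_near(theta, r))  # equals 1 here
    rho = np.abs(a_far(theta).conj() @ b_near(theta, r))
    return np.abs(chi - rho) ** 2

hbar = 0.05                       # assumed threshold (illustrative)
theta = 0.5
rs = np.linspace(1.0, 200.0, 2000)
below = [loss(theta, r) <= hbar for r in rs]
z_eff = rs[np.argmax(below)]      # first distance meeting the criterion
print(z_eff)   # effective boundary; the classical 2A^2/lambda_c is 98.3 m
```

Tightening ℏ pushes the boundary outward toward the classical Rayleigh distance, which is the trade-off the parameter ϵ encodes.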
C. Hybrid Far-Near Field Beam Squint Phenomenon
We begin with the far-field case. Ξ_m ≈ θ when B is small, since f_m/f_c ≈ 1, which indicates that Ξ_m can be used to express θ. However, when B is large, f_m/f_c cannot be approximated as 1 and Ξ_m cannot be approximated as θ. This is the beam squint phenomenon in the far-field case. If we ignore this phenomenon and assume that Ξ_m ≈ θ still holds, we will get
$$f_{m_1} \theta_{m_1} = f_{m_2} \theta_{m_2}, \tag{7}$$
i.e., different frequencies will correspond to different θ_m, while in fact there can only be one true physical angle θ. Therefore, if the system bandwidth B is large enough compared with the center frequency f_c, e.g., B = 10 GHz, f_c = 0.1 THz and B/f_c = 1/10, which are typical THz communication system parameters and will be adopted in our later simulations, the maximum ratio between θ_{m_2} and θ_{m_1} can be θ_{m_2}/θ_{m_1} = f_{m_1}/f_{m_2} = f_max/f_min = 105 GHz/95 GHz ≈ 1.1. If θ_1 takes 0.5, then θ_2 takes 0.55. When we convert θ to ϑ, we have ϑ_1 = arcsin(θ_1) = 30° and ϑ_2 = arcsin(θ_2) = 33.37°. The angle-domain resolution can be approximated as 1/N = 1/256 ≈ 0.0039. Therefore, the BSE results in an angle difference across almost (0.55 − 0.5)/0.0039 ≈ 12.8 angle-domain resolutions. Besides, as the distance between the UE and the BS (or RIS) increases, there will be a greater localization error when this estimated angle is used for localization. For example, if the distance between the UE and the BS (or RIS) is 30 m, the location offset between f_max and f_min can be up to 1.76 m, while if the distance between the UE and the BS (or RIS) increases to 60 m, the location offset can be 3.53 m. Since the BS (or RIS) has ultra-high angular resolution benefiting from the XL-arrays, the angle deviation caused by the BSE will have a severe impact on communications and sensing. The analysis of the near-field BSE is much more complex than that of the far-field one, since the near-field BSE involves frequency, angle, and distance (f_m, θ_m, r_m), while the far-field one only involves frequency and angle (f_m, θ_m). The relation between f_m and θ_m can be accurately characterized by Ξ_m = (f_m/f_c)θ in the far-field case, whereas the relations among f_m, θ_m, and r_m in the near-field case are difficult to characterize.
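The back-of-the-envelope numbers above follow directly from (7) and can be reproduced (θ₂ is rounded to 0.55 as in the text):

```python
import numpy as np

f_min, f_max = 95e9, 105e9       # B = 10 GHz around f_c = 0.1 THz
theta1 = 0.5
theta2 = round(theta1 * f_max / f_min, 2)   # ~0.55, from f1*theta1 = f2*theta2
ang1 = np.degrees(np.arcsin(theta1))        # 30 degrees
ang2 = np.degrees(np.arcsin(theta2))        # about 33.37 degrees
squint_res = (theta2 - theta1) * 256        # squint in 1/N angle-domain resolutions
offset_30 = 30 * np.radians(ang2 - ang1)    # arc-length offset at 30 m
offset_60 = 60 * np.radians(ang2 - ang1)    # arc-length offset at 60 m
print(ang2, squint_res, offset_30, offset_60)   # 33.37 deg, 12.8, 1.76 m, 3.53 m
```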
To maintain the consistency between the far-field steering vector and the near-field steering vector, we will adopt a[m](f_m, θ) instead of a[m](Ξ_m) in future expressions. We introduce the cross-correlation of the near-field steering vector (2) to analyze the near-field BSE. It is difficult to obtain a closed-form solution of the near-field beam squint as concise as the far-field one in (7). Although some existing works have pursued a closed-form solution of the near-field BSE [19], they make approximations for the sake of simple analysis. Therefore, we focus our attention on qualitatively understanding how the near-field BSE affects localization and how to eliminate this effect. Specifically, we plot the simulated results in Fig. 3 to draw some enlightening conclusions, which illustrate the localization problem under the HFNF BSE. In order to illustrate the BSE better, we set N = 512, f_c = 0.1 THz, ℏ = 0.5. Therefore, according to (5) and (6), we can obtain ϵ = 0.16 and the effective Rayleigh distance is about 46 m. In the far-field region, the real location of the UE is set as θ_{m1} = 0.5 and r_{m1} = 50 m, where f_{m1} is fixed to f_c = 0.1 THz. If f_{m2} = f_c, the direction of the beam will be precisely at θ_{m2} = 0.5. However, if f_{m2} takes other values, such as f_{m2} = f_min = 95 GHz or f_{m2} = f_max = 105 GHz, the beam direction will squint, as shown in Fig. 3(a). Since there is no obvious beam-focusing phenomenon when r_{m1} = 50 m, it is more reasonable to use the effective Rayleigh distance as the boundary between the far-field region and the near-field region; the classical Rayleigh distance here is 400 m, which is far larger than the effective Rayleigh distance. Meanwhile, as can be seen in Fig. 3(a), the farther the distance from the UE to the BS (or RIS), the larger the location error. In the near-field region, the real location of the UE is set as θ_{m1} = 0.5 and r_{m1} = 16.7 m, where f_{m1} is fixed to f_c = 0.1 THz.
If f_{m2} = f_c, the beam-focusing region will be located at θ_{m2} = 0.5 and r_{m2} = 13.99 m. The focusing point is r_{m2} = 13.99 m rather than r_{m2} = 16.7 m since large-scale fading cannot be overlooked in reality. If f_{m2} takes other values, such as f_{m2} = f_min = 95 GHz or f_{m2} = f_max = 105 GHz, the beam-focusing region will also squint, as shown in Fig. 3(b). The HFNF BSE makes localization more difficult, and thus two schemes are proposed to overcome this problem in the following two sections.
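To make the near-field squint concrete, the following sketch evaluates the correlation between a beam focused with the center-frequency steering vector and the true array response at the band edges. The spherical-wave steering vector is assumed to take the form implied by (2) and (36); N, f_c, θ and r follow the simulation settings above:

```python
import numpy as np

def nearfield_steer(f, theta, r, N, d, c=3e8):
    """Assumed spherical-wave steering vector behind (2): the n-th element's
    phase is -2*pi*f/c times (sqrt(r^2 + dn^2 - 2*r*theta*dn) - r)."""
    dn = (np.arange(N) - (N - 1) / 2) * d
    dist = np.sqrt(r**2 + dn**2 - 2 * r * theta * dn) - r
    return np.exp(-1j * 2 * np.pi * f / c * dist) / np.sqrt(N)

N, fc = 512, 0.1e12
d = 0.5 * 3e8 / fc                 # half-wavelength spacing at fc
theta, r = 0.5, 16.7               # near-field UE location from the text

b_design = nearfield_steer(fc, theta, r, N, d)   # beam focused using fc only
corrs = {fm: abs(np.vdot(b_design, nearfield_steer(fm, theta, r, N, d)))
         for fm in (fc, 95e9, 105e9)}
print(corrs)   # ~1.0 at fc, much smaller at the band edges
```

The correlation stays at 1 at f_c but collapses at f_min and f_max, which mirrors the beam-squint loss visualized in Fig. 3(b).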
Fig. 3: The cross-correlation C = |b[m_1](f_{m1}, θ_{m1}, r_{m1})^H b[m_2](f_{m2}, θ_{m2}, r_{m2})| between b[m_1] and b[m_2].
III. PROPOSED JOINT CHANNEL AND LOCATION SENSING SCHEME

A. Problem Formulation
The proposed scheme aims to jointly estimate the channels h^BU (or h^RU) between the BS (or RIS) and the UE and to sense the UE's location.
Here we consider that the UE transmits pilot signals to facilitate the joint channel and location sensing at the BS, and the channel is assumed to remain unchanged during the joint channel and location sensing stage. Without the assistance of the RIS, the received uplink pilot on the m-th subcarrier in the p-th time slot, denoted by y^NRIS[p, m] ∈ C^{N_RF}, can be expressed as
y^NRIS[p, m] = W^NRIS[p] h^BU[m] x[p, m] + n̄^NRIS[p, m],   (8)

where x[p, m] denotes the pilot symbol, h^BU[m] ∈ C^N the BS-UE channel, and W^NRIS[p] ∈ C^{N_RF×N} the analog combining matrix at the BS with |W^NRIS_{i,j}[p]| = 1/√N_RF. n̄^NRIS[p, m] = W^NRIS[p] n^NRIS[p, m], and n^NRIS[p, m] ∈ C^N denotes the complex Gaussian noise, which follows CN(0_N, σ²I_N). In (8), the RIS is turned off and y^NRIS[p, m] is assumed to be unaffected by the RIS. Similarly, the received pilot with the assistance of the RIS on the m-th subcarrier in the p-th time slot, denoted by y^RIS[p, m] ∈ C^{N_RF}, can be expressed as
y^RIS[p, m] = W^RIS[p] H^BR[m] Φ^RIS[p] h^RU[m] x[p, m] + W^RIS[p] h^BU[m] x[p, m] + n̄^RIS[p, m],   (9)

where Φ^RIS[p] ∈ C^{N_RIS×N_RIS}, H^BR[m] ∈ C^{N×N_RIS}, and h^RU[m] ∈ C^{N_RIS} denote the RIS phase-shift matrix, the BS-RIS channel, and the RIS-UE channel, respectively. We stack the received P^NRIS pilots without the assistance of the RIS as Y^NRIS[m] = [(y^NRIS[1, m])^T, · · · , (y^NRIS[P^NRIS, m])^T]^T ∈ C^{P^NRIS N_RF}, and define W̄^NRIS = [(W^NRIS[1])^T, · · · , (W^NRIS[P^NRIS])^T]^T ∈ C^{P^NRIS N_RF×N} and N^NRIS[m] = [(n̄^NRIS[1, m])^T, · · · , (n̄^NRIS[P^NRIS, m])^T]^T ∈ C^{P^NRIS N_RF}. Therefore, we can obtain

Y^NRIS[m] = W̄^NRIS h^BU[m] + N^NRIS[m].   (10)
Similarly, we stack the received P^RIS pilots with the assistance of the RIS and have
Y^RIS[m] = W̃^RIS[m] h^RU[m] + W̄^RIS h^BU[m] + N^RIS[m],   (11)

where Y^RIS[m] = [(y^RIS[1, m])^T, · · · , (y^RIS[P^RIS, m])^T]^T ∈ C^{P^RIS N_RF}, W̃^RIS[m] = [(W^RIS[1]H^BR[m]Φ^RIS[1])^T, · · · , (W^RIS[P^RIS]H^BR[m]Φ^RIS[P^RIS])^T]^T ∈ C^{P^RIS N_RF×N_RIS}, W̄^RIS = [(W^RIS[1])^T, · · · , (W^RIS[P^RIS])^T]^T ∈ C^{P^RIS N_RF×N}, and N^RIS[m] = [(n̄^RIS[1, m])^T, · · · , (n̄^RIS[P^RIS, m])^T]^T ∈ C^{P^RIS N_RF}.
Since there is no prior information about the UE's location, with the RIS turned off, W^NRIS_{i,j}[p] for all i, j, p needs to be set to be omnidirectional so as to receive signals from all directions, i.e.,
W^NRIS_{i,j}[p] = e^{j2πz_{i,j,p}} / √N_RF,   (12)
where z_{i,j,p}, ∀i, j, p, follows U(0, 1). However, in order to better estimate the AoA from the UE to the BS, we will carefully design some values of W^NRIS_{i,j}[p] later in Section III-D1. Similarly, when the RIS is turned on, the phases of the RIS also need to be set to be omnidirectional to receive signals from all directions, i.e., for all p, the diagonal elements of Φ^RIS[p] take the values 1 and −1 with equal probability 0.5. Meanwhile, W^RIS[p] for all p needs to be directed towards the RIS so as to maximize the signal energy from the RIS direction and reduce the signal energy from other directions at the BS. However, since
H^BR[m] is frequency selective while W^RIS[p] is frequency invariant, if W^RIS[p] is designed based on the central frequency using (2) as follows,
W^RIS_{i,:}[p] = (√N/√N_RF) ( b[M/2 + 1](f_c, sin(π/2 − ϑ_B), r_B2R) )^H,  ∀i, p,   (13)
where π/2 − ϑ_B denotes the real AoA from the RIS to the BS, as can be seen clearly in Fig. 2, and r_B2R denotes the known distance between the BS and the RIS, then the energy in other subcarriers will be weakened due to the large bandwidth in THz systems. To solve this problem, we design W^RIS[p] using (2) as follows,
W^RIS_{i,:}[p] = (√N/√N_RF) ( b[m(i, p)](f_{m(i,p)}, sin(π/2 − ϑ_B), r_B2R) )^H,  ∀i, p,   (14)

where f_{m(i,p)} = f_c − B/2 + (B/(N_RF P^RIS))((p − 1)N_RF + i) and m(i, p) = (M/(N_RF P^RIS))((p − 1)N_RF + i) + 1.
The meaning of (14) is that, at each RF chain in each time slot, the BS's analog combiner is varied using (2), where the AoA and the distance take the values from the center of the RIS to the center of the BS, and the frequency is evenly sampled throughout the bandwidth.
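The frequency assignment in (14) can be sketched as follows; the sizes N_RF and P^RIS here are toy values for illustration, not the paper's simulation settings:

```python
# Sketch of the subcarrier-frequency assignment in (14): each (RF chain i,
# time slot p) pair builds its combiner at its own frequency f_{m(i,p)}, so
# the RF chains jointly sweep the band. Toy sizes (assumed for illustration).
fc, B = 0.1e12, 10e9
N_RF, P_RIS = 4, 8

idx = [(p - 1) * N_RF + i
       for p in range(1, P_RIS + 1) for i in range(1, N_RF + 1)]
freqs = [fc - B / 2 + B / (N_RF * P_RIS) * n for n in idx]

# The N_RF * P_RIS design frequencies are evenly spread across the bandwidth.
print(min(freqs) / 1e9, max(freqs) / 1e9)   # 95.3125 105.0
```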
To better explain our problem and the variables intended to be solved, we formulate the following optimization problem,
min_{ĥ^BU, ĥ^RU, x̂_UE, ŷ_UE}  Σ_{m=1}^M ‖Y^NRIS[m] − W̄^NRIS ĥ^BU[m](x̂_UE, ŷ_UE)‖²_F
              + Σ_{m=1}^M ‖Y^RIS[m] − W̃^RIS[m] ĥ^RU[m](x̂_UE, ŷ_UE) − W̄^RIS ĥ^BU[m](x̂_UE, ŷ_UE)‖²_F
s.t. C1: |W^NRIS_{i,j}[p]| ∈ {0, 1/√N_RF}, ∀i, j, p
     C2: |W^RIS_{i,j}[p]| ∈ {0, 1/√N_RF}, ∀i, j, p
     C3: Φ^RIS_{i,i}[p] ∈ {−1, 1}, ∀i, p
     C4: W̄^NRIS = [(W^NRIS[1])^T, · · · , (W^NRIS[P^NRIS])^T]^T
     C5: W̃^RIS[m] = [(W^RIS[1]H^BR[m]Φ^RIS[1])^T, · · · , (W^RIS[P^RIS]H^BR[m]Φ^RIS[P^RIS])^T]^T, ∀m
     C6: W̄^RIS = [(W^RIS[1])^T, · · · , (W^RIS[P^RIS])^T]^T,   (15)
where (x̂_UE, ŷ_UE) is the estimation of the UE's location; ĥ^BU and ĥ^RU, determined by (x̂_UE, ŷ_UE), are the estimations of h^BU and h^RU, respectively; constraints C1 and C2 are the constant-modulus constraints of the analog combiner at the BS (if some phase shifters are switched off, the corresponding entries take the value zero); constraint C3 is the limitation on the RIS phase shifters with a precision of 1 bit; and constraints C4, C5 and C6 define the sensing matrices collected from multiple time slots. Our goals are to estimate ĥ^BU from Y^NRIS, to estimate ĥ^RU from Y^RIS, and to estimate the UE's location (x̂_UE, ŷ_UE), which can in turn be utilized to improve the CE performance of ĥ^BU and ĥ^RU, jointly from Y^NRIS and Y^RIS.
To solve the optimization problem in (15), we propose a dictionary design scheme, a CE algorithm and a localization scheme, which are described in detail in Sections III-B, III-C and III-D, respectively. Among them, the CE module outputs coarse on-grid angle estimations (observed from the BS and the RIS, respectively) to the localization module, which returns the fine off-grid AoA estimations to improve CE. In the following equations, if one equation is divided into subequations (a) and (b), (a) stands for sensing without the assistance of the RIS and (b) for sensing with it.
B. Frequency Selective Polar-Domain Redundant Dictionary
We take h^BU, whose treatment is the same as that of h^RU, as an example. In the far-field region, (3) can be used to model the steering vector and the phase of each element in the steering vector is linear in the antenna index. Thus, we can use the Fourier transform to sparsify the spatial-domain channel into the angle-domain channel as

h^BU[m] = F h^BU,A[m],   (16)

where h^BU,A[m] is sparse, so we can recover signals of higher dimensionality from a small number of pilots through CS-based methods. Nevertheless, in the HFNF region, we use (2), which is determined not only by the AoA from the UE to the BS but also by the distance between the UE and the BS, to model the steering vector. Thus, the FTM F cannot be utilized to sparsify the near-field channel because of the energy spread effect described in [13]. Instead, a new transform matrix D[m] ∈ C^{N×ςNS}, which is developed from the PTM in [13] and called the frequency selective polar-domain redundant dictionary (FSPRD), is proposed. Here m denotes the m-th subcarrier, N is the number of angle grids as well as of BS (RIS) elements, S is the number of distance grids, and ς ≥ 1 is the redundant rate. Since D[m] takes into account the differences in the HFNF steering vectors across different subcarriers, it can overcome the BSE compared to the FTM and PTM. We can obtain a relatively good correlation property when θ is uniformly sampled from (−1, 1) as

θ_n = (2n − ςN + 1)/(ςN),  n = 0, 1, · · · , ςN − 1,   (17)

and r is sampled as
r_{s,n} = 2Z^eff_c(0)(1 − θ_n²)/s,  s = 1, 2, · · · , S − 1,   (18)

where Z^eff_c(0) is the effective Rayleigh distance when f_m = f_c and θ = 0. r is sampled in the manner of an inverse proportional function to make the correlation between elements of the FSPRD smaller. On account of the effective Rayleigh distance in (5), r is associated with θ. Moreover, the coefficient "2" in (18)
Therefore, considering both the BSE and the HFNF effect, we can obtain the generation steps of the FSPRD as Algorithm 1.
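A minimal sketch of the FSPRD generation (Algorithm 1) is given below, assuming the steering-vector form implied by (2). For simplicity it uses S distance rings per angle to match the stated dictionary size ςNS, whereas (18) indexes s = 1, · · · , S − 1 (one grid is then typically reserved for the far field); all sizes are toy values:

```python
import numpy as np

def fsprd(N, S, varsigma, fm_list, Zeff0, d, c=3e8):
    """Sketch of FSPRD generation (Algorithm 1): per subcarrier fm, sample
    angles as (17) and distances by the inverse-proportional rule (18), and
    stack the corresponding HFNF steering vectors column-wise."""
    G = varsigma * N
    thetas = (2 * np.arange(G) - G + 1) / G            # eq. (17)
    delta = (np.arange(N) - (N - 1) / 2) * d           # element positions
    dicts = []
    for fm in fm_list:
        cols = []
        for th in thetas:
            for s in range(1, S + 1):                  # S rings per angle
                r = 2 * Zeff0 * (1 - th ** 2) / s      # eq. (18)
                dist = np.sqrt(r**2 + delta**2 - 2 * r * th * delta) - r
                cols.append(np.exp(-1j * 2 * np.pi * fm / c * dist) / np.sqrt(N))
        dicts.append(np.stack(cols, axis=1))           # N x (varsigma*N*S)
    return dicts

D = fsprd(N=32, S=3, varsigma=1, fm_list=[0.1e12], Zeff0=46.0, d=1.5e-3)
print(D[0].shape)   # (32, 96)
```

Each column is a unit-norm steering vector, so the dictionary can be used directly for the correlation step of the CE algorithm.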
C. Proposed LA-GMMV-OMP Algorithm
Based on the correlation matrices Γ^NRIS[m] and Γ^RIS[m] computed in (21), we can obtain

Υ^NRIS_i = Σ_{m=1}^M |Γ^NRIS_i[m]|²,   (22a)
Υ^RIS_i = Σ_{m=1}^M |Γ^RIS_i[m]|²,   (22b)
where Υ^NRIS ∈ C^{ςNS} and Υ^RIS ∈ C^{ςN_RIS S}. Since we adopt the FSPRD, the peaks of Γ^NRIS and Γ^RIS on different subcarriers share the same AoA and distance indices, so we can exploit this MMV property to improve the robustness of finding (θ^BU, r^BU) and (θ^RU, r^RU). In the first iteration, since the LoS channel contains a single path, we only pick out the largest elements of Υ^NRIS and Υ^RIS as γ^NRIS and γ^RIS, which correspond to the rough AoAs and distances of the LoS paths (from the UE to the BS and from the UE to the RIS, respectively). These coarse AoAs and distances provide good initial values for the further localization in Section III-D, which returns the fine AoA estimations to improve CE. For subsequent iterations, we pick out the N_s^NRIS and N_s^RIS largest elements of Υ^NRIS and Υ^RIS as
γ^NRIS = {γ^NRIS_1, γ^NRIS_2, · · · , γ^NRIS_{N_s^NRIS}},   (23a)
γ^RIS = {γ^RIS_1, γ^RIS_2, · · · , γ^RIS_{N_s^RIS}},   (23b)
respectively. Then, the support sets Ω^NRIS and Ω^RIS, which are ∅ at the beginning, can be updated as Ω^NRIS = Ω^NRIS ∪ γ^NRIS and Ω^RIS = Ω^RIS ∪ γ^RIS. We add multiple atoms to the support set when estimating the channels of NLoS paths in each iteration for the following reason. Since the distance is taken into account in the FSPRD, the dimensionality of the FSPRD is so large that the true path has strong correlations with several FSPRD elements. Additionally, the size of the scatterers cannot be neglected in the HFNF region, so the channel is a cluster-sparse multi-path channel in which many paths are contained in one cluster. Therefore, one scatterer corresponds to multiple dictionary atoms with similar angles and distances. If multiple atoms with large energies are selected in each iteration, we can correctly select the atoms corresponding to the scatterer with the largest energy in the current iteration. However, if only the single atom with the largest energy is selected each time, the energies of the other atoms with the same angle and distance will be weakened in future iterations, since the energy of these paths has already been reduced in the residual after each iteration of the OMP-based algorithm. Such atoms with reduced energy are difficult to pick out correctly in later iterations, and hence the CE performance degrades, as verified by simulations. After updating the support set, for every subcarrier, the orthogonal projection coefficients Φ^NRIS[m] ∈ C^{|Ω^NRIS|} and Φ^RIS[m] ∈ C^{|Ω^RIS|} can be calculated as
Φ^NRIS[m] = (W̃^NRIS_{:,Ω^NRIS}[m])† Y^NRIS[m],   (24a)
Φ^RIS[m] = (W̃^RIS_{:,Ω^RIS}[m])† Y^RIS[m].   (24b)
At the end of each iteration, the residual should be updated as follows
R^NRIS[m] = Y^NRIS[m] − W̃^NRIS_{:,Ω^NRIS}[m] Φ^NRIS[m],   (25a)
R^RIS[m] = Y^RIS[m] − W̃^RIS_{:,Ω^RIS}[m] Φ^RIS[m].   (25b)
After (21)-(25) are iterated multiple times and the stopping criterion is reached, we can obtain the final channel estimates as follows:
ĥ^BU[m] = D^NRIS_{:,Ω^NRIS}[m] Φ^NRIS[m],   (26a)
ĥ^RU[m] = D^RIS_{:,Ω^RIS}[m] Φ^RIS[m].   (26b)
The LA-GMMV-OMP algorithm is summarized in Algorithm 2, where it degenerates into the GMMV-OMP algorithm if steps 8-12 are not performed. For brevity, some subscripts and superscripts of some variables are omitted.
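The core support-update loop of the (LA-)GMMV-OMP described above can be sketched as follows; this is a simplified single-dictionary version in which A[m] plays the role of the equivalent sensing matrix on subcarrier m, and the location-aided refinement (steps 8-12) is omitted:

```python
import numpy as np

def gmmv_omp(Y, A, n_iter, atoms_per_iter=1):
    """Simplified GMMV-OMP support update of (21)-(26): Y[m] are the
    measurements on subcarrier m and A[m] the equivalent sensing matrix.
    Correlations are accumulated over subcarriers (the MMV step, cf. (22))
    before the strongest atoms join the common support set."""
    M, G = len(Y), A[0].shape[1]
    support, R = [], [y.copy() for y in Y]
    for _ in range(n_iter):
        energy = np.zeros(G)
        for m in range(M):                      # accumulate |A^H r|^2, cf. (22)
            energy += np.abs(A[m].conj().T @ R[m]) ** 2
        energy[support] = 0                     # never re-pick chosen atoms
        support.extend(np.argsort(energy)[-atoms_per_iter:].tolist())
        for m in range(M):                      # LS projection + residual, (24)-(25)
            coef = np.linalg.lstsq(A[m][:, support], Y[m], rcond=None)[0]
            R[m] = Y[m] - A[m][:, support] @ coef
    X = []
    for m in range(M):                          # final sparse estimates, cf. (26)
        coef = np.linalg.lstsq(A[m][:, support], Y[m], rcond=None)[0]
        x = np.zeros(G, dtype=complex)
        x[support] = coef
        X.append(x)
    return support, X

# Toy check: one common atom (index 7) shared by M = 4 subcarriers.
rng = np.random.default_rng(0)
A = [rng.standard_normal((20, 40)) + 1j * rng.standard_normal((20, 40))
     for _ in range(4)]
Y = [0.8 * A[m][:, 7] for m in range(4)]
support, X = gmmv_omp(Y, A, n_iter=1)
print(support)   # [7]
```

Summing the correlation energies over subcarriers before selecting atoms is what makes the support decision robust to per-subcarrier noise, as argued in the text.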
D. Proposed Complete Dictionary based Localization (CDL) Scheme
The CDL scheme is summarized in Algorithm 3, where the overall idea is to use the PGD algorithm to refine θ̂^BU_0, as shown in steps 2-9, and the PHD to refine θ̂^RU_0, as shown in steps 10-17. Finally, the UE can be located by line intersection.

Algorithm 2 (excerpt):
Obtain coarse AoAs θ̂^BU_0, θ̂^RU_0 as (27);
10 Obtain fine estimations of AoA and distance (θ̂^BU_0, r̂^BU_0), (θ̂^RU_0, r̂^RU_0) by the CDL scheme;
11 Update the FSPRD used in step 3 as (42);
Find out the new support set γ as (22) and (23);
14 Update the support set Ω = Ω ∪ γ;
15 for m = {1, 2, · · · , M} do
16   Calculate the orthogonal projection as (24);

From the first iteration of the CE process in the LA-GMMV-OMP algorithm, the coarse location of the UE can be acquired. Since the AoA and distance corresponding to each element of the FSPRD are known from Algorithm 1 (especially (17)), we can deduce the corresponding coarse AoA estimations of the LoS paths from the UE to the BS and to the RIS as
θ̂^BU_0 = (2⌈i^NRIS/S⌉ − 1)/(ςN) − 1,   (27a)
θ̂^RU_0 = (2⌈i^RIS/S⌉ − 1)/(ςN_RIS) − 1,   (27b)

respectively, where i^NRIS = arg max_i Υ^NRIS_i and i^RIS = arg max_i Υ^RIS_i.
Although Υ^NRIS is related not only to the AoA from the UE to the BS but also to the distance from the UE to the BS, that distance is too imprecise to locate the UE, since the distance is sampled in the manner of an inverse proportional function. The same holds for Υ^RIS. Additionally, since the UE is assumed to be equipped with one omnidirectional antenna, there is no AoD information in the uplink stage and the UE's location cannot be estimated from the NLoS paths. Meanwhile, in THz systems, the energy of NLoS paths is too small compared to that of the LoS path to extract the UE's location, and the area of a scatterer cannot be neglected, which further increases the difficulty of high-precision localization. Furthermore, even if the UE were equipped with multiple antennas, estimating the UE's location through the AoDs of NLoS paths would still be limited by the number of antennas at the UE side, where the AoD resolution is poor if the number of antennas is small. Therefore, only θ̂^BU_0 and θ̂^RU_0 can be used to locate the UE relatively accurately, by line intersection with the BS and the RIS as anchors. By combining θ̂^BU_0 and θ̂^RU_0 with the locations of the BS and the RIS, denoted as (x_BS, y_BS) and (x_RIS, y_RIS), the location of the UE can be calculated as
x̂_UE = (y_RIS − y_BS + k_BU x_BS − k_RU x_RIS)/(k_BU − k_RU),
ŷ_UE = (k_BU y_RIS − k_RU y_BS + k_BU k_RU (x_BS − x_RIS))/(k_BU − k_RU),   (28)
where k_BU = −tan(π/2 − ϑ_B − arcsin(θ̂^BU_0)) and k_RU = tan(π/2 − ϑ_R − arcsin(θ̂^RU_0)) are the slopes of the line from the BS to the UE and of the line from the RIS to the UE, respectively, as Fig. 2 shows. The BS and the RIS are set on the x-axis, so y_BS = y_RIS = 0. Then, we can get the distances through

r̂^BU_0 = √((x̂_UE − x_BS)² + (ŷ_UE − y_BS)²),   (29a)
r̂^RU_0 = √((x̂_UE − x_RIS)² + (ŷ_UE − y_RIS)²).   (29b)
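The line-intersection step in (28) is simply the intersection of two bearing lines anchored at the BS and the RIS; a minimal sketch with hypothetical coordinates is:

```python
def locate_by_intersection(bs, ris, k_bu, k_ru):
    """Direct implementation of (28): intersect the BS->UE line of slope k_bu
    with the RIS->UE line of slope k_ru (anchors at arbitrary positions)."""
    (x_b, y_b), (x_r, y_r) = bs, ris
    x = (y_r - y_b + k_bu * x_b - k_ru * x_r) / (k_bu - k_ru)
    y = y_b + k_bu * (x - x_b)
    return x, y

# Toy check: UE at (10, 20), anchors on the x-axis as in the paper's setup.
ue, bs, ris = (10.0, 20.0), (0.0, 0.0), (40.0, 0.0)
k_bu = (ue[1] - bs[1]) / (ue[0] - bs[0])     # slope of the BS->UE bearing
k_ru = (ue[1] - ris[1]) / (ue[0] - ris[0])   # slope of the RIS->UE bearing
est = locate_by_intersection(bs, ris, k_bu, k_ru)
print(est)   # ~(10.0, 20.0)
```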
However, the estimated (θ̂^BU_0, θ̂^RU_0) and (r̂^BU_0, r̂^RU_0) are limited by the number of FSPRD grids, leading to limited localization accuracy. Previous off-grid methods consider the far field without the BSE, as in [16], [17], or the near field without the BSE, as in [13], and cannot be directly applied to the HFNF scenario with severe BSE considered in this paper. Therefore, an off-grid method is proposed to solve this challenging localization problem.
1) Using the polar-domain gradient descent algorithm to refine the AoA from the UE to the BS:
Particularly, we only use the PGD to estimate the AoA from the UE to the BS to sense the UE's location. As mentioned earlier, if the AoA is estimated with the following loss function as in conventional off-grid methods [13], [16],

v^NRIS = Σ_{m=1}^M ‖Y^NRIS[m] − W̄^NRIS ĥ^BU[m]‖²_F,   (30)
the results will be affected by the accuracy of the estimated distance, as shown in Fig. 5(a). The reason is that the term e^{−jk_m r^BU_{l,g}} in (1) plays a key role in (30), so an inaccurate distance estimate results in an inaccurate AoA estimation. Note that e^{−jk_m r^BU_{l,g}} in (1) is determined by the total distance from the UE to the center of the BS antenna array at subcarrier f_m. On account of the fully connected antenna architecture of hybrid beamforming, we let the first RF chain of the combiner in one of the time slots, e.g., W̄^NRIS_{1,:}, be
[0 · · · 0, 1/√N_RF, 0 · · · 0], with (N−1)/2 zeros on each side, if N is odd,
[0 · · · 0, 1/√N_RF, 1/√N_RF, 0 · · · 0], with (N−2)/2 zeros on each side, if N is even,   (31)
while each element of W̄^NRIS_{2:end,:} is set as (12). The settings in (31) can be used to remove e^{−jk_m r^BU_{l,g}} in (1), so that the phase of the center antenna becomes zero and the phases of the other antennas are values relative to the center antenna. The concrete operating steps are as follows:
Ȳ^NRIS_i[m] = Y^NRIS_i[m],  for i = 1,
Ȳ^NRIS_i[m] = S_i[m] √(Σ_{m=1}^M |Y^NRIS_i[m]|² / Σ_{m=1}^M |S_i[m]|²),  for i = 2, · · · , N_RF P^NRIS,   (32)

where S_i[m] = Y^NRIS_i[m]/Y^NRIS_1[m].
We then obtain the new loss function as

v^NRIS = Σ_{m=1}^M ‖Ȳ^NRIS[m] − W̄^NRIS h̃^BU[m]‖²_F = ‖Ȳ^NRIS − W̄^NRIS h̃^BU‖²_F,   (33)
where h̃^BU denotes the equivalent LoS channel and can be expressed as

h̃^BU[m] = α̂^BU_0[m] ⊙ b̂^BU_0[m](f_m, θ̂^BU_0, r̂^BU_0),   (34)
where α̂^BU_0[m] denotes the estimated channel gain of the LoS path on the m-th subcarrier, r̂^BU_0 denotes the estimated distance of the LoS path between the center of the BS antenna array and the UE, and θ̂^BU_0 denotes the estimated AoA of the LoS path from the UE to the center of the BS antenna array. Since h̃^BU[m] is the LoS channel, α̂^BU_0[m] can be obtained approximately by the Friis formula and the known attenuation due to absorption of atmospheric gases in the given frequency band. Note that there are two differences between (34) and (1). The first difference is that we only consider the LoS path of the channel, since the influence of NLoS paths in THz is a secondary factor and the information of the UE's location is only included in the LoS path if the UE is equipped with one antenna. The second difference is that we eliminate the term e^{−jk_m r^BU_{l,g}} in order to obtain a better property of the loss function. It can be seen from Fig. 5(b) that the AoA localization result is not affected by the accuracy of the estimated distance if the loss function (33) is adopted. Therefore, the PGD algorithm can be adopted, and the gradient of v^NRIS with respect to θ̂^BU_0 is

∂v^NRIS/∂θ̂^BU_0 = ∂tr[(Ȳ^NRIS − W̄^NRIS h̃^BU)^H (Ȳ^NRIS − W̄^NRIS h̃^BU)]/∂θ̂^BU_0
= −tr[(Ȳ^NRIS)^H W̄^NRIS ∂h̃^BU/∂θ̂^BU_0] − tr[(W̄^NRIS ∂h̃^BU/∂θ̂^BU_0)^H Ȳ^NRIS] + tr[(W̄^NRIS ∂h̃^BU/∂θ̂^BU_0)^H W̄^NRIS h̃^BU] + tr[(W̄^NRIS h̃^BU)^H W̄^NRIS ∂h̃^BU/∂θ̂^BU_0]
= −2ℜ{tr[(W̄^NRIS ∂h̃^BU/∂θ̂^BU_0)^H Ȳ^NRIS]} + 2ℜ{tr[(W̄^NRIS h̃^BU)^H W̄^NRIS ∂h̃^BU/∂θ̂^BU_0]}
= −2ℜ{tr[(W̄^NRIS α̂^BU_0[m] ⊙ ∂b̂^BU_0/∂θ̂^BU_0)^H Ȳ^NRIS]} + 2ℜ{tr[(W̄^NRIS h̃^BU)^H W̄^NRIS α̂^BU_0[m] ⊙ ∂b̂^BU_0/∂θ̂^BU_0]}.   (35)
Fig. 5: (a) The loss function obtained from (30). (b) The loss function obtained from (33).

The gradient of the n-th element of b̂^BU_0[m] with respect to θ̂^BU_0 is derived as
∂b̂^BU_0[m]/∂θ̂^BU_0 |_n = β r̂^BU_0 δ_n d / √((r̂^BU_0)² + δ_n²d² − 2r̂^BU_0 θ̂^BU_0 δ_n d),   (36)

where β = e^{−j(2π/λ_m)(√((r̂^BU_0)² + δ_n²d² − 2r̂^BU_0 θ̂^BU_0 δ_n d) − r̂^BU_0)} · (j2π/λ_m).
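The closed-form gradient (36) can be checked against a central finite difference; the steering-vector element follows the form implied by (2), with toy values for N, f_m, θ̂ and r̂:

```python
import numpy as np

c = 3e8
fm = 0.1e12
lam = c / fm
d = lam / 2
N = 64
delta = np.arange(N) - (N - 1) / 2      # antenna index offsets (delta_n)

def b_vec(theta, r):
    """Element phases of the assumed HFNF steering vector behind (2)."""
    dist = np.sqrt(r**2 + delta**2 * d**2 - 2 * r * theta * delta * d) - r
    return np.exp(-1j * 2 * np.pi / lam * dist)

def grad_theta(theta, r):
    """Closed-form gradient (36): beta * r * delta_n * d / sqrt(...)."""
    root = np.sqrt(r**2 + delta**2 * d**2 - 2 * r * theta * delta * d)
    beta = b_vec(theta, r) * (1j * 2 * np.pi / lam)
    return beta * r * delta * d / root

theta0, r0 = 0.4, 10.0
num = (b_vec(theta0 + 1e-7, r0) - b_vec(theta0 - 1e-7, r0)) / 2e-7
err = np.max(np.abs(num - grad_theta(theta0, r0)))
print(err)   # tiny: the finite difference matches (36)
```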
The specific process of the PGD can be seen in steps 2-9 of Algorithm 3, where the step size ∆ in each iteration is determined by the Armijo-Goldstein condition [31].
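A generic sketch of such a gradient-descent update with Armijo-Goldstein backtracking follows, here on a toy 1-D quadratic standing in for the AoA loss v^NRIS; the constant c1 and the shrink factor are conventional choices, not values from the paper:

```python
def pgd_armijo(f, grad, x0, step0=1.0, c1=1e-4, shrink=0.5, iters=50, tol=1e-10):
    """Gradient descent with Armijo-Goldstein backtracking: shrink the step
    until the loss decreases by at least c1 * step * grad^2."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        step = step0
        while f(x - step * g) > f(x) - c1 * step * g * g:
            step *= shrink
        if abs(step * g) < tol:       # |step * gradient| stopping rule
            break
        x -= step * g
    return x

# Toy 1-D loss with its minimum at x = 0.3 (stand-in for the AoA loss).
loss = lambda x: (x - 0.3) ** 2
dloss = lambda x: 2 * (x - 0.3)
est = pgd_armijo(loss, dloss, x0=0.9)
print(est)   # ~0.3
```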
2) Using the polar-domain hierarchical dictionary to refine the AoA from the UE to the RIS:
Compared with W̄^NRIS, W̃^RIS is related to the frequency. Therefore, the PGD cannot be used to refine the AoA from the UE to the RIS, since Y^RIS cannot be converted to a Ȳ^RIS in the way Y^NRIS is converted to Ȳ^NRIS. Instead, a polar-domain hierarchical dictionary (PHD) is used.
Since the coarse θ̂^RU_0 is passed from the LA-GMMV-OMP, where the FSPRD is used, the AoA spacing is 1/(ςN_RIS) in the angle domain (the sine of the physical angle), as can be seen in (17). Therefore, the first search of the PHD ranges from θ̂^RU_0 − 1/(ςN_RIS) to θ̂^RU_0 + 1/(ςN_RIS). If the number of grids in each search is N_PHD, the general formula for the AoA of the first search can be expressed as

θ̂^RU_0 − 1/(ςN_RIS) + 2(i − 1)/((N_PHD − 1)ςN_RIS),  ∀i = 1, 2, · · · , N_PHD.   (37)
Similarly, the general formula for the AoA of the k-th (k ≥ 2) search can be expressed as

θ̂^RU_0[k − 1] − 1/(ςN_RIS((N_PHD − 1)/2)^{k−1}) + 2(i − 1)/((N_PHD − 1)ςN_RIS((N_PHD − 1)/2)^{k−1}),  ∀i = 1, 2, · · · , N_PHD,   (38)

where θ̂^RU_0[k − 1] is the AoA estimation in the (k − 1)-th search and θ̂^RU_0[0] ≜ θ̂^RU_0.
In the k-th search, we denote the set of the search range as R_k, whose i-th element is given by (38). Then, we generate an N_PHD-grid dictionary D^R[m, k] ∈ C^{N_RIS×N_PHD} with the same distance r̂^RU_0 but different angles, which are given by R_k. Next, Γ^R[m, k] ∈ C^{N_PHD}, the correlation matrix on the m-th subcarrier in the k-th search, can be calculated as
Γ^R[m, k] = |(W̃^RIS[m] D^R[m, k])^H Y^RIS[m]|.   (39)
Algorithm 3: Proposed CDL Scheme
Input: received pilot Y, equivalent combining matrix W̄, coarse AoA estimations θ̂^BU_0 and θ̂^RU_0, maximum number of PGD iterations I_max, threshold to terminate the PGD ϖ_PGD, threshold to terminate the PHD ϖ_PHD, number of points per search in the PHD N_PHD
Output: fine AoA estimations θ̂^BU_0 and θ̂^RU_0, fine distance estimations r̂^BU_0 and r̂^RU_0, fine estimation of the UE's location (x̂_UE, ŷ_UE)
1 Obtain (x̂_UE, ŷ_UE), r̂^BU_0, r̂^RU_0 from (28) and (29);
2 /* Refine θ̂^BU_0 */
3 Obtain Ȳ^NRIS and v^NRIS as (32) and (33);
  if |∆∇| < ϖ_PGD, break;
9 end
10 /* Refine θ̂^RU_0 */
11 for k = 1, 2, · · · do
12 Generate the search range R_k as (38);
13 Generate Γ^R[m, k] ∈ C^{N_PHD}, the correlation matrix on the m-th subcarrier in the k-th search, as (39);
14 Obtain î^R_k, the index estimation of the AoA from the UE to the RIS in the k-th search, as (40);
15 Obtain θ̂^RU_0[k], the estimation of the AoA from the UE to the RIS in the k-th search, as (41);
16 if 2/(ςN_RIS((N_PHD − 1)/2)^k) < ϖ_PHD, θ̂^RU_0 = θ̂^RU_0[k];

Benefiting from the MMV property, the estimation of the index of the AoA from the UE to the RIS in the k-th search can be obtained as

î^R_k = arg max_i Σ_{m=1}^M |Γ^R_i[m, k]|².   (40)
Therefore, we can obtain the estimation of the AoA from the UE to the RIS in the k-th search as

θ̂^RU_0[k] = θ̂^RU_0[k − 1] − 1/(ςN_RIS((N_PHD − 1)/2)^{k−1}) + 2(î^R_k − 1)/((N_PHD − 1)ςN_RIS((N_PHD − 1)/2)^{k−1}).   (41)
Once the search range 2/(ςN_RIS((N_PHD − 1)/2)^k) of the (k + 1)-th iteration is less than the threshold ϖ_PHD, which is given in advance, θ̂^RU_0[k] will be output as the final result. The CDL scheme is summarized in Algorithm 3.
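The shrinking search of (37)-(41) can be sketched generically as follows; score(·) stands in for the accumulated correlation Σ_m |Γ^R_i[m, k]|², and the peak location is a made-up test value:

```python
import numpy as np

def phd_refine(score, theta0, half_width, n_pts=5, tol=1e-6):
    """Hierarchical refinement in the spirit of (37)-(41): grid the current
    interval with n_pts points, keep the best, shrink by (n_pts - 1)/2."""
    theta, w = theta0, half_width
    while 2 * w >= tol:
        grid = np.linspace(theta - w, theta + w, n_pts)
        theta = grid[int(np.argmax(score(grid)))]   # best correlation, cf. (40)
        w /= (n_pts - 1) / 2                        # shrink factor, cf. (38)
    return theta

# Toy correlation peaked at 0.123456, starting from a coarse on-grid guess.
true_theta = 0.123456
score = lambda t: -np.abs(t - true_theta)
est = phd_refine(score, theta0=0.12, half_width=1 / 256)
print(est)   # ~0.123456
```

Because the selected grid point is never farther than half a new interval from the true peak, the interval always keeps the peak inside while its width shrinks geometrically.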
After estimating the AoA from the UE to the BS and to the RIS using the PGD algorithm and the PHD, respectively, the precise location of the UE, (x̂_UE, ŷ_UE), can be obtained through (28). Then, combined with the coordinates of the BS and the RIS, the distance from the UE to the BS, r̂^BU_0, and the distance from the UE to the RIS, r̂^RU_0, can be obtained as (29). Finally, a new FSPRD can be updated by adding the new element generated with (θ̂^BU_0, r̂^BU_0)/(θ̂^RU_0, r̂^RU_0) to the old one. In the subsequent data transmission, the estimated channel can be used for precoding and beamforming [28], [29].
E. Complexity Analysis
The computational complexities associated with localization are listed as follows:
1) PSOMP [13]: O(ς(P^NRIS + P^RIS)N_RF N²SM);
2) GMMV-OMP: O(ς(P^NRIS + P^RIS)N_RF N²SM);
3)
IV. PROPOSED LOCATION SENSING SCHEME NOT RELYING ON CHANNEL ESTIMATION

A. Problem Formulation
The aim of the proposed location sensing scheme not relying on channel estimation is to sense the UE's location from the received signal with low overhead and without the need for CE; the schematic diagram of this scheme is shown in Fig. 2. Compared with the proposed joint channel and location sensing scheme, the prior information of the ToA can be utilized and there is no need to guarantee the performance of the CE. Therefore, the pilot overhead and the size of the FSPRD in this scheme can be greatly reduced. This scheme shares the same received signal model, described in detail in (8)-(11), as the joint channel and location sensing scheme. Then, we can formulate the following optimization problem for this scheme:
min_{x̂_UE, ŷ_UE}  Σ_{m=1}^M ‖Y^NRIS[m] − W̄^NRIS ĥ^BU_LoS[m](x̂_UE, ŷ_UE)‖²_F
              + Σ_{m=1}^M ‖Y^RIS[m] − W̃^RIS[m] ĥ^RU_LoS[m](x̂_UE, ŷ_UE) − W̄^RIS ĥ^BU_LoS[m](x̂_UE, ŷ_UE)‖²_F
s.t. C1-C6,   (43)
where constraints C1 to C6 are the same as in the optimization problem (15), and ĥ^BU_LoS (or ĥ^RU_LoS) denotes the estimated LoS path from the UE to the BS (or RIS). Different from the joint channel and location sensing scheme, the location sensing scheme not relying on channel estimation only estimates the UE's location (x̂_UE, ŷ_UE), whose information is completely included in the LoS path. Therefore, only LoS paths are utilized for localization, and this treatment is justified since the loss of NLoS paths in the THz band is too severe [24].
B. Proposed Partial Dictionary based Localization (PDL) Scheme
First, W^NRIS[p] is set randomly and the RIS is turned off while the AoA and delay from the UE to the BS are estimated from Y^NRIS, which is assumed to be unaffected by the RIS. Second, the RIS is turned on and we obtain Y^RIS by setting W^RIS[p] as (14) to estimate the delay from the UE to the RIS. The delay can be estimated across different subcarriers based on an algorithm modified from MUSIC. We utilize Y^NRIS and Y^RIS to perform the eigenvalue decomposition (EVD) as
(Y^NRIS)^H Y^NRIS = E^NRIS Λ^NRIS (E^NRIS)^H,   (44a)
(Y^RIS)^H Y^RIS = E^RIS Λ^RIS (E^RIS)^H,   (44b)
where E NRIS and E RIS are the matrices composed of the eigenvectors of (Y NRIS ) H Y NRIS and (Y RIS ) H Y RIS , respectively. We assume that the eigenvalues are arranged from the largest to the smallest in Λ NRIS and Λ RIS without loss of generality, and the eigenvectors corresponding to all but the largest eigenvalues are treated as the noise subspace as
Ē^NRIS = E^NRIS_{:,2:end},   (45a)
Ē^RIS = E^RIS_{:,2:end}.   (45b)
By defining f ∈ C^{1×M} as the frequency vector over the M subcarriers, we generate the vector a(τ) = e^{j2πτf} ∈ C^{1×M} and estimate τ̂^NRIS, the delay from the UE directly to the BS, and τ̂^RIS, the delay from the UE to the BS via the RIS, as
τ̂^NRIS = arg max_τ (1/(a(τ)Ē^NRIS (a(τ)Ē^NRIS)^H)),   (46a)
τ̂^RIS = arg max_τ (1/(a(τ)Ē^RIS (a(τ)Ē^RIS)^H)),   (46b)
respectively. However, the above modified MUSIC algorithm is computationally complex in two aspects: the EVD and the spectral peak search. The computational complexity of the spectral peak search can be greatly reduced by the idea of the hierarchical dictionary, but the complexity of the EVD is not easy to reduce. Since only the LoS path needs to be estimated, the dimensionality of the noise subspace (M − 1) is known. Therefore, by subspace analysis [32], it is possible to obtain the noise subspaces, which are equivalent to the spaces spanned by the eigenvectors corresponding to all eigenvalues other than the largest one. We take the process of estimating τ̂^NRIS, which is the same as that of τ̂^RIS, as an example. We calculate the autocorrelation matrix of (Y^NRIS)^H = (W̄^NRIS h^BU)^H + (N^NRIS)^H ∈ C^{M×P^NRIS N_RF} as follows:

E((Y^NRIS)^H Y^NRIS) = (h^BU)^H (W̄^NRIS)^H W̄^NRIS h^BU + E((N^NRIS)^H N^NRIS),   (47)
where E(·) is the expectation operator. By approximating (47) and utilizing E((N^NRIS)^H N^NRIS) = σ²P^NRIS N_RF I, where σ² is the (assumed known) noise power, the denoised autocorrelation matrix Ỹ^NRIS ∈ C^{M×M} can be written as

Ỹ^NRIS ≜ (Y^NRIS)^H Y^NRIS − σ²P^NRIS N_RF I ≈ (h^BU)^H (W̄^NRIS)^H W̄^NRIS h^BU,   (48)
where Ỹ^NRIS is rank-deficient and its maximum rank is P^NRIS N_RF. Then, we can obtain the orthogonal complement of Ỹ^NRIS and hence the noise subspace of E((Y^NRIS)^H Y^NRIS).
The matrix Ỹ^NRIS can be partitioned into the block matrix as follows:

Ỹ^NRIS = [Ỹ^NRIS_1  Ỹ^NRIS_2],   (49)

where Ỹ^NRIS_1 ∈ C^{M×P^NRIS N_RF} and Ỹ^NRIS_2 ∈ C^{M×(M−P^NRIS N_RF)}.
Since Ỹ^NRIS is rank-deficient, we can assume that there exists

G = [G^H_1  −G^H_2]^H ∈ C^{M×(M−P^NRIS N_RF)} such that Ỹ^NRIS G = 0,   (50)
where G_1 ∈ C^{P^NRIS N_RF×(M−P^NRIS N_RF)} and G_2 ∈ C^{(M−P^NRIS N_RF)×(M−P^NRIS N_RF)}. Therefore, G_1 can be written as

G_1 = (Ỹ^NRIS_1)† Ỹ^NRIS_2 G_2.   (51)
By taking the conjugate transpose of (50), we can obtain G^H Ỹ^NRIS = 0, and the projection matrix onto the subspace spanned by G can be calculated as
P_G = G(G^H G)^{−1} G^H = [ (Ỹ_1)†Ỹ_2 ; −I ] ( [ (Ỹ_1)†Ỹ_2 ; −I ]^H [ (Ỹ_1)†Ỹ_2 ; −I ] )^{−1} [ (Ỹ_1)†Ỹ_2 ; −I ]^H,   (52)
where P_G is only associated with Ỹ^NRIS. Therefore, (46a) can be equivalently converted into the following problem:
τ̂^NRIS = arg max_τ 1/[a(τ) P_G a^H(τ)].   (53)
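The EVD-free noise-subspace construction of (48)-(53) can be sketched on a rank-1 toy model; the probe vector a(τ) is applied as a column here, and the subcarrier grid, delay and dimensions are all illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
M, P = 64, 8                         # subcarriers; stacked pilot dimension
f = np.arange(M) * (10e9 / M)        # toy subcarrier offsets over a 10 GHz band
tau_true = 3e-9                      # LoS delay to recover (3 ns)

# Rank-1 LoS-only model behind (48): column m of Y carries the phase ramp
# v_m = exp(-j*2*pi*tau*f_m), so Ytil = Y^H Y is rank deficient after denoising.
u = rng.standard_normal(P) + 1j * rng.standard_normal(P)
v = np.exp(-1j * 2 * np.pi * tau_true * f)
Ytil = np.vdot(u, u).real * np.outer(v.conj(), v)      # M x M, rank 1

# (49)-(52): split Ytil = [Y1 Y2] and build G = [Y1^+ Y2 ; -I]; P_G projects
# onto the noise subspace without any explicit EVD.
r = 1                                                  # one LoS path assumed
Y1, Y2 = Ytil[:, :r], Ytil[:, r:]
G = np.vstack([np.linalg.pinv(Y1) @ Y2, -np.eye(M - r)])
PG = G @ np.linalg.inv(G.conj().T @ G) @ G.conj().T

def pseudo_spec(t):
    # (53): 1/(a P_G a^H) with a(tau) = exp(j*2*pi*tau*f) probed as a column.
    a = np.exp(1j * 2 * np.pi * t * f)
    return 1.0 / (np.real(a.conj() @ PG @ a) + 1e-12)

taus = np.linspace(0, 6e-9, 601)
spec = [pseudo_spec(t) for t in taus]
est = taus[int(np.argmax(spec))]
print(est)   # ~3e-09
```

The search range is kept within one unambiguous delay period of the subcarrier spacing; in practice a hierarchical search over τ replaces the dense grid, as the text notes.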
The process of solving for τ̂^RIS is the same as that for τ̂^NRIS, and we omit it for simplicity. Since τ̂^NRIS and τ̂^RIS include the same common clock offset, we can obtain the time difference between the signal arriving at the BS directly and via the RIS as

τ̂^TDoA = τ̂^NRIS − (τ̂^RIS − r_B2R/c).   (54)
In reality, τ̂^TDoA may be positive or negative, depending on the distance between the BS and the UE and the distance between the RIS and the UE. Without loss of generality, we assume τ̂^TDoA is positive here. Then we can obtain one branch of the hyperbola on which the UE must lie, as can be seen in Fig. 2. For convenience, we put the BS and the RIS at the focal points of the hyperbola and let them lie on the x-axis. Therefore, we can acquire the standard equation of the hyperbola as
x²/(a_h)² − y²/(b_h)² = 1,   (55)

where a_h = cτ̂^TDoA/2 and (b_h)² = (c_h)² − (a_h)² = (r_B2R/2)² − (a_h)².
After that, we can lock the UE onto the hyperbola, where the FSPRD D_h[m] ∈ C^{N×T} can be generated to sense the UE's location on grids. N is the number of BS antennas and T is the number of grids on the hyperbola. There are many ways to generate the FSPRD; the method we adopt is to generate a group of lines from the BS that intersect the hyperbola. The group of angles can be denoted as
[θ̂_h, θ̂_h + Δθ, θ̂_h + 2Δθ, · · · , θ̂_h + (T − 1)Δθ], (56)
where θ̂_h is the initial angle between the line and the x-axis, and Δθ is the angle spacing. Then we can obtain the coarse AoA from the UE to the BS by correlation as

θ̂_BU,0 = θ̂_h + (t̂ − 1)Δθ, (57)

where t̂ is the index of the grid with the largest correlation. Combining the hyperbola and θ̂_BU,0 with the location of the BS, denoted as (x_BS, y_BS), the location of the UE can be calculated as the intersection of the line and the hyperbola:

x̂_UE = √(4k⁴a⁴(x_BS)² + 4a²(b² − a²k²)(k²(x_BS)² + b²)) / (2(b² − a²k²)) + (−2a²k²x_BS) / (2(b² − a²k²)),
ŷ_UE = k(x̂_UE − x_BS), (58)

where a = a_h, b = b_h, and k = −tan(π/2 − ϑ_B − arcsin(θ̂_BU,0)) is the slope of the line from the BS to the UE, as Fig. 2 shows. In (58), y_BS is set to 0, which is the same setting as in the simulation. Then, we can get the distance r̂_BU,0 as

r̂_BU,0 = √((x̂_UE − x_BS)² + (ŷ_UE − y_BS)²). (59)

Algorithm 4: Proposed PDL Scheme
Input: received pilot Y, equivalent combining matrix W̃, maximum number of iterations I_max, threshold to terminate the PGD ϖ_PGD.
Output: fine AoA estimate θ̂_BU,0, fine distance estimate r̂_BU,0, fine estimate of the UE's location (x̂_UE, ŷ_UE).
1 Obtain the denoising autocorrelation matrix Ỹ_NRIS as (48);
2 Construct G = [G_1^H −G_2^H]^H so that G satisfies (50), and obtain the relationship between G_1 and G_2 as shown in (51);
3 Obtain the projection matrix P_G onto the subspace spanned by G as (52);
4 Obtain τ̂_NRIS by searching for the spectral peak according to (53) with the idea of a hierarchical dictionary;
5 The procedure for obtaining τ̂_RIS is the same as for obtaining τ̂_NRIS, as shown in steps 1 to 4;
6 Obtain τ̂_TDoA as (54) and the hyperbola equation as (55);
7 Generate the FSPRD D_h on the hyperbola as in Algorithm 1 and obtain the coarse estimate θ̂_BU,0 as (57);
8 Combine the hyperbola and θ̂_BU,0 with the location of the BS, (x_BS, y_BS), to obtain r̂_BU,0 using (58) and (59);
9 Obtain Ȳ_NRIS and v_NRIS as (32) and (33).
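The closed form in (58)-(59) is simply the positive root of the quadratic obtained by substituting the line y = k(x − x_BS) into the hyperbola equation; a quick numerical check, with illustrative values for a_h, b_h, k, and x_BS, is:

```python
import numpy as np

# Check of (58)-(59): intersect the line y = k*(x - x_BS) through the BS
# (y_BS = 0) with x^2/a^2 - y^2/b^2 = 1, where a = a_h, b = b_h.
a, b = 1.0, 2.0                    # illustrative hyperbola semi-axes
x_BS, k = -3.0, 1.0                # illustrative BS abscissa and line slope

# discriminant of (b^2 - a^2*k^2)*x^2 + 2*a^2*k^2*x_BS*x - a^2*k^2*x_BS^2 - a^2*b^2 = 0
disc = 4 * k**4 * a**4 * x_BS**2 \
     + 4 * a**2 * (b**2 - a**2 * k**2) * (k**2 * x_BS**2 + b**2)
x_UE = (np.sqrt(disc) - 2 * a**2 * k**2 * x_BS) / (2 * (b**2 - a**2 * k**2))
y_UE = k * (x_UE - x_BS)
r_BU = np.hypot(x_UE - x_BS, y_UE)           # (59) with y_BS = 0
```

The intersection point satisfies the hyperbola equation exactly, confirming the reconstruction of (58).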
However, the accuracy of the estimated θ̂_BU,0 is limited by the number of grids, leading to limited localization accuracy. Therefore, the PGD algorithm is also employed to refine the AoA from the UE to the BS. The PGD step in PDL is similar to the one in CDL, and Algorithm 4 contains the complete steps of the proposed PDL scheme.
C. Complexity Analysis
As baselines for PDL, full-digital estimation of signal parameters via rotational invariance techniques (ESPRIT) [21], full-digital MUSIC [9], and hybrid-beamforming MUSIC [33] are used to estimate the AoA from the UE to the BS, which is combined with the TDoA to sense the UE's location. Their steps for obtaining the TDoA are the same as in PDL.
The computational complexity of the proposed algorithm and the baseline algorithms is provided as follows, where R denotes the number of distance grids used to obtain τ̂_RIS and τ̂_NRIS, Q denotes the number of angle grids searched over the whole space, T denotes the number of grids generated on the hyperbola, and I_PGD denotes the total number of PGD iterations in the PDL scheme (fewer than 10 in practice).

1) ESPRIT (full-digital) [21]: O(M³ + (M² + M)R) + O(N³ + N²M)
2) MUSIC (full-digital) [9]: O(M³ + (M² + M)R) + O(N³ + N²M + QN²)
3) MUSIC (hybrid) [33]
4) PDL (proposed): O(M³ + (M² + M)R) + O((P_NRIS N_RF N M)(I_PGD + T))
V. SIMULATION RESULTS
A. Simulation Setup
We consider the system model shown in Fig. 2 and Fig. 4. The BS and the RIS receive the uplink signals from the active UE within a sector of radius R_s = 100 m and central angle 90°. The UE is set within the effective Rayleigh distance of (5) if the UE is in the near-field region. Unless otherwise specified, the simulation setup is as follows: N = 256, N_RF = 4, N_RIS = 256, f_c = 0.1 THz, B = 10 GHz, M = 2048, P = 30 dBm, ϑ_B = π/4, ϑ_R = π/4, S = 10, L_max = 20, ϖ_OMP = 0.85 (N_s = 6), ϖ_OMP = 0.95 (N_s = 1), ϖ_PGD = 1 × 10⁻⁷, ϖ_PHD = 2 × 10⁻⁵, N_PHD = 41, ℏ = 0.1, I_max = 20, ς = 2. According to ITU-R P.676-12, the molecular absorption coefficient α_A[m] can be set to −0.45 dB/km from 0.095 THz to 0.105 THz [26]. In the CDL scheme, P_NRIS = 16, P_RIS = 32, and only 64 of the 2048 subcarriers are used, to take advantage of the MMV property and reduce the computational complexity. These 64 subcarriers are composed of the first subcarrier extracted from each group of 32 consecutive subcarriers among the 2048 subcarriers. In the PDL scheme, P_NRIS = 8 and P_RIS = 16. Therefore, in the CDL scheme, the compression ratio for the estimation of h_BU, or for the extraction of the UE's location from h_BU, is P_NRIS N_RF/N = 1/4. The compression ratio for the estimation of h_RU, or for the extraction of the UE's location from h_RU, is P_RIS/N = 1/8, since the rank of the channel matrix between the RIS and the BS is almost 1. In the PDL scheme, the compression ratio for the extraction of the UE's location from h_BU is P_NRIS N_RF/N = 1/8 and that from h_RU is P_RIS/N = 1/16. The number of time slots in the CDL scheme is twice that in the PDL scheme, since the number of observations cannot be too small in order to guarantee the CE performance. The noise power spectral density at the receiver is set as σ²_NSD = −174 dBm/Hz, and the transmit power P of the UE is set from 15 dBm to 45 dBm when evaluating the CE performance.
The channel is modeled as the cluster-sparse multi-path channel that is widely used in mmWave/THz systems. There are 3 scatterers between the UE and the BS, and 3 scatterers between the UE and the RIS. Therefore, the numbers of clusters of the channel between the UE and the BS and of the channel between the UE and the RIS are both 3. Moreover, there are 6 paths in each cluster. The BS is set at (−10√2 m, 0 m) and the RIS is set at (10√2 m, 0 m) in the near-field region, as shown in Fig. 2.

Fig. 6. The CE performance of h_BU is shown in (a) and (b), while that of h_RU is shown in (c) and (d); (d) is the case where h_RU is the far-field channel.
α_RIS[m] = e^(jε_RIS) √(G_T G_R S_eff λ_m² / ((4π)³ (R_1)² (R_2)²)) [9], where R_1 and R_2 denote the distance from the UE to a given element of the RIS and from that element of the RIS to a given BS antenna element, respectively. G_T and G_R denote the antenna gains of the UE and the BS, respectively. ε_NRIS, ε_RIS ∈ U[0, 2π) are the phase shifts introduced by the channel. α_S[m] ∼ CN(0, 1) denotes the small-scale complex channel gain of the NLoS path. S_eff, which is set to (N_RIS d)² m² in the simulation, is the effective reflection area of the RIS. As for the NLoS complex channel gain, defined similarly to α_RIS[m], multi-hop paths and the scattering area of the scatterers are taken into account. In this paper, we only consider 2-hop paths and assume a scattering area equal to 3 m². The root mean square error (RMSE) is considered as the accuracy evaluation of the UE's location, which can be defined as
RMSE_ϑ = [ Σ_{n=1}^{N_it} (θ̂_n − ϑ_real)² / N_it ]^0.5 and RMSE_r = [ Σ_{n=1}^{N_it} (r̂_n − r_real)² / N_it ]^0.5,

where θ̂_n and r̂_n are the estimates of the AoA and distance from the UE to the BS in iteration n, respectively, ϑ_real and r_real are the true values, and N_it is the total number of iterations. The normalized mean square error (NMSE) is considered as the accuracy evaluation of CE, which can be defined as NMSE = ||h − ĥ||²_F / ||h||²_F, where h is the true channel matrix while ĥ is the estimated channel matrix.
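A minimal sketch of these two metrics, with toy values rather than the simulated data:

```python
import numpy as np

# The accuracy metrics defined above: RMSE over N_it estimates, and the NMSE
# of a channel estimate. Values are toy illustrations.
def rmse(est, real):
    est = np.asarray(est, dtype=float)
    return np.sqrt(np.mean((est - real) ** 2))

def nmse(h_hat, h):
    return np.linalg.norm(h - h_hat) ** 2 / np.linalg.norm(h) ** 2

theta_hat = [0.52, 0.48, 0.50]               # AoA estimates around a truth of 0.50
h = np.array([1 + 1j, 0, 2j])                # a toy "channel"
h_hat = h + 0.1                              # a perturbed estimate
```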
In the following simulation results, Genie-least squares (LS), PSOMP [13], GMMV-OMP, and LA-GMMV-OMP are simulated to compare the CE performance in the near-field (Fig. 6(a), Fig. 6(c)) and far-field (Fig. 6(b), Fig. 6(d)) conditions. PSOMP and Genie-LS are the baseline algorithms, used to prove the effectiveness of GMMV-OMP under the HFNF BSE. The comparison between GMMV-OMP and LA-GMMV-OMP is to observe whether the UE's location is beneficial to CE under the conditions of this paper.
MUSIC [9] based on full-digital beamforming, MUSIC based on hybrid beamforming [33] (to illustrate that the conventional MUSIC algorithm is difficult to run stably in the hybrid-beamforming structure), and ESPRIT [21] based on full-digital beamforming are the baseline algorithms used to prove the effectiveness of the PDL and CDL schemes under the HFNF BSE in Figs. 7, 8, and 9. It is worth noting that although the ESPRIT- and MUSIC-based algorithms are used to estimate θ_BU, their methods for estimating the TDoA before estimating θ_BU are the same as in PDL. Moreover, the on-grid localization result of GMMV-OMP in step 9 of Algorithm 2 is also shown. The effects of several parameters on the localization performance of the CDL and PDL schemes are also investigated in Figs. 9, 10, and 11, from which we derive some interesting insights.
B. Sensing Results of UE's Channel
In Fig. 6(a) and Fig. 6(b), different algorithms are implemented to compare their CE performance of h BU under the HFNF BSE. The performance of Genie-LS is the best since the support set in it is fully correct, so it can be a lower bound for the proposed algorithms. The performance of PSOMP is the worst, because the dictionary used in PSOMP is PTM. Although the near-field is taken into consideration in PTM, the BSE is not yet settled. The use of FSPRD in GMMV-OMP not only takes the near-field into consideration, but also solves the beam squint problem. Therefore, the performance of GMMV-OMP is better than that of PSOMP. The performance of LA-GMMV-OMP is better than that of GMMV-OMP since the former is assisted by the relatively accurate UE location estimation from the CDL scheme, which illustrates that the information of the UE's location can improve the performance of the CE. Moreover, the CE performance of the proposed algorithms show almost no performance difference between the near-field and the far-field except for the difference brought by the received SNR, which indicates that the proposed algorithm can work stably in the HFNF.
As for the value of G l , if G l ̸ = 1, i.e., the channel has a cluster structure, all CE algorithms will degrade. The performance of algorithms other than the Genie-LS algorithm degrades because accurate support set is more difficult to obtain when the channel has a cluster structure. For Genie-LS, the angles and distances in the support set are too close to each other, so the pseudo-inverse operation performs poorly. As a result, the CE performance of Genie-LS deteriorates. As for the value of N s , if N s = 6 and G l = 6 (we choose N s = 6 since every cluster we generated has 6 paths 7 ), we can relatively correctly select the atoms corresponding to the scatterer with the largest energy in the current iteration. However, if only one atom with the highest energy is selected each time, i.e., N s = 1, the energy of the other atoms with the same angle and distance will be weakened in future iterations, since this path has already been weakened from the residual after each iteration of the OMP-based algorithm. These atoms with reduced energy are difficult to pick out correctly in the future iterations and hence the CE performance degrades. However, if N s is 6, its CE performance is worse than that of 1 at low transmit power but better than that of 1 at high transmit power under the condition that the channel does not have a cluster structure (G l = 1). This is because selecting multiple atoms simultaneously is more likely to select the wrong atoms at low transmit power, while more likely to select the right atoms at the high transmit power. Moreover, when N s > 1, the algorithm has fewer adaptive iterations and faster running speed.
In Fig. 6(c) and Fig. 6(d), different algorithms are implemented to compare their CE performance of h_RU under the HFNF BSE. The estimation results of h_RU are generally consistent with those of h_BU, except for three main differences. Firstly, if h_RU has a cluster structure, the performance of Genie-LS becomes so poor that the proposed algorithms can outperform this so-called lower bound. On the one hand, it suffers the same drawback as for h_BU, namely the poor performance of the pseudo-inverse in LS. On the other hand, the rank of W_RIS and H_BR[m] is almost one, so the condition number of W̃_RIS[m] is very large. The property of the sensing matrix W̃_RIS[m] used to estimate h_RU is much poorer than that of the sensing matrix W̃_NRIS used to estimate h_BU. Thus, even if the algorithms other than Genie-LS adaptively stop the iteration without finding all the correct indices, their CE performance is better than that of Genie-LS with all the correct indices, demonstrating the superiority of the proposed algorithms and their adaptive iteration stopping criterion. Secondly, due to the poor properties of the sensing matrix W̃_RIS[m] in the estimation of h_RU, and the small energy of the NLoS paths, even the channel cluster structure is overwhelmed. Therefore, there is no obvious advantage or disadvantage to choosing multiple atoms simultaneously as opposed to choosing only one atom. Thirdly, if the near-field channel h_RU does not have a cluster structure, the estimation of the UE location does not give a clear benefit to the CE performance, since the atom selected by GMMV-OMP is very close to the actual location of the UE under our simulation settings.

Fig. 7. RMSE performance of ϑ and r versus the transmit power with the BSE: the channel of (a) and (b) is in the near-field region, while that of (c) and (d) is in the far-field region.
In conclusion, GMMV-OMP performs much better than PSOMP, whether in the near-field region or in the far-field region, whether through the RIS or not. Compared with GMMV-OMP, LA-GMMV-OMP has about 2 dB NMSE performance gain, which demonstrates that the information of the UE's location has some benefits to the CE. When estimating channels with cluster structure, the value of N s can be larger to obtain better CE performance and lower iteration numbers. Moreover, the CE performance of h RU is worse than that of h BU , since the property of the sensing matrix of the former is much poorer than that of the latter and the received SNR of the former is much lower than that of the latter.
C. Sensing Results of UE's Location
Localization accuracy of the different algorithms is evaluated in terms of AoA and distance, namely θ_BU,0 and r_BU,0, under the HFNF BSE. 1) Localization Performance of Different Algorithms: Fig. 7 shows the variation of the localization performance of different algorithms with the transmit power. As for Fig. 7(a) and Fig. 7(c), the CDL scheme has the highest AoA localization accuracy, followed by the PDL scheme. The use of redundant dictionaries allows the on-grid GMMV-OMP algorithm to achieve good AoA estimation accuracy, but once the index of the FSPRD at the location of the UE is accurately found, the AoA localization accuracy is the highest. Even if full-digital beamforming is adopted, the localization accuracy of the MUSIC and ESPRIT algorithms is still inferior to the proposed PDL and CDL schemes with hybrid beamforming. This is because the MUSIC and ESPRIT algorithms are suitable for the far-field region without the BSE. However, in near-field XL-array systems, the steering vector depends not only on the AoA from the UE to the BS, but also on the distance from the UE to the BS. Due to the energy-spread effect in [13], one near-field steering vector corresponds to multiple far-field steering vectors, breaking the far-field premise of the MUSIC and ESPRIT algorithms. What's more, the HFNF BSE leads to a shift of the angles and distances on different subcarriers. Hence, the severe HFNF BSE reduces the localization accuracy. When hybrid beamforming is adopted, the MUSIC algorithm performs poorly and the ESPRIT algorithm cannot work at all due to the incomplete rotation invariance. As for Fig. 7(b) and Fig. 7(d), the CDL scheme has the highest distance localization accuracy, but the second highest distance localization accuracy is achieved by the GMMV-OMP algorithm, not the PDL scheme. This illustrates that localization through two AoAs is more accurate than localization through one AoA and one TDoA, where the BS and the RIS are the two anchors.
Comparing the AoA localization accuracy and the distance localization accuracy at high transmit power, we can find that the distance localization accuracy of the PDL scheme is limited by the TDoA estimation, because the AoA localization accuracy of the PDL scheme is higher than that of the GMMV-OMP algorithm, but the distance localization accuracy of the former is lower than that of the latter. Fig. 8 shows the localization performance of the baselines with and without BSE in the HFNF. It can be seen from Fig. 7 that the localization performance of the conventional MUSIC and ESPRIT algorithms, which only work well in far-field narrowband scenarios without the BSE, is much poorer than that of the proposed schemes, and the reason is shown in Fig. 8. The BSE is the main factor, as can be seen from the clear performance improvement of the two conventional algorithms at a bandwidth of 500 MHz, where the BSE is not obvious. The near-field spherical wave is the secondary factor, since the received SNR in the far-field is lower than that in the near-field at the same transmit power. However, when the transmit power is high, the performance in the far-field reaches or even exceeds that in the near-field at 500 MHz, which proves that the near-field spherical wave has a negative effect on the performance of these conventional algorithms. Fig. 9(a) and 9(b) show the cumulative distribution function (CDF) of the AoA localization RMSE and the distance localization RMSE for different algorithms. The results coincide with those in Fig. 7.
2) Effects of Different Parameters on the Localization Performance of CDL and PDL Schemes: Fig. 9(c) and 9(d) show the CDF of RMSE ϑ and RMSE r for different number of time slots. The value of P NRIS are {8, 16, 32} and the corresponding value of P RIS are {16, 32, 64}. The localization accuracy of the CDL scheme and the AoA localization accuracy of the PDL scheme increase with the number of observed time slots, but the distance localization performance of the PDL scheme does not change significantly with the increase of the number of observed time slots, which means that the delay estimation performance of the PDL scheme has reached saturation when the number of observed time slots is small. This is because the size of the RIS elements and the size of the BS antenna array limit the further improvement of the PDL scheme's TDoA accuracy under underdetermined observation in the hybrid-beamforming structure. Fig. 10(a) and 10(b) show the CDF of RMSE ϑ and RMSE r for different distances from the UE to the BS, i.e., r BU 0 , where θ BU 0 remains constant. The difference in localization performance caused by different distances mainly comes from the variation of the received power, since the HFNF BSE has been overcome in the proposed localization schemes. At the same transmit power, the farther away the UE is from the BS, the smaller SNR of Y NRIS [m]. However, under our simulation settings, the farther away the UE from the BS means the closer the UE to the RIS, i.e., the larger SNR of Y RIS [m]. Under the influence of these two opposing factors, the AoA localization accuracy of the CDL scheme decreases first and then increases, while the distance localization accuracy keeps increasing. For the PDL scheme, the AoA localization accuracy keeps decreasing while the distance localization accuracy keeps increasing. Fig. 10(c) and 10(d) show the CDF of RMSE ϑ and RMSE r for different number of the RIS elements N RIS . 
As N RIS increases, we can observe the following changes that affect the final localization performance. 1) The effective reflection area of the RIS will increase and hence the received SNR increases; 2) The AoA resolution from the UE to the RIS will increase; 3) The BS combiner is only aligned with the center of the RIS, so the phase difference between the rest of the elements at the RIS and each antenna at the BS becomes larger; 4) Increasing the RIS area leads to the larger delay estimation errors of the PDL scheme 9 ; 5) When the number of observed time slots remains constant, the reduction of the compression ratio leads to a deterioration of localization performance. Based on the aforementioned analysis, as N RIS increases, the AoA localization accuracy of the CDL scheme remains constant, while the distance localization accuracy of the CDL scheme keeps increasing. For the PDL scheme, the AoA localization accuracy remains constant, while the distance localization accuracy is non-monotonic. Fig. 11(a) and 11(b) show the CDF of RMSE ϑ and RMSE r for different number of the BS array elements N . As N increases, we can observe the following changes that affect the final localization performance. 1) The AoA resolution from the UE to the BS will increase; 2) The BS combiner is only aligned with the center of the RIS, so the phase difference between the rest of the elements at the RIS and each antenna at the BS becomes larger; 3) Increasing the aperture of the BS antenna array leads to larger distance estimation errors of the PDL scheme 9 . Based on the aforementioned analysis, as N increases, the AoA localization accuracy of the CDL scheme keeps increasing, while the distance localization accuracy of the CDL scheme is non-monotonic. For the PDL scheme, both the AoA and distance localization accuracy are non-monotonic. Fig. 11(c) and 11(d) show the CDF of RMSE ϑ and RMSE r for different size of bandwidth B 8 . 
As B increases, we can observe the following factors that affect the final localization performance. 1) The design of the BS combiner W RIS [p] shown in (14) is heuristic and may not be optimal; 2) The proposed PDL scheme cannot make the delay estimation free from the unknown HFNF steering vector under the large array and underdetermined observations. Based on the aforementioned analysis, as B increases, the AoA localization accuracy of the CDL scheme remains constant, while the distance localization accuracy of the CDL scheme is non-monotonic. For the PDL scheme, both the AoA and distance localization accuracy are non-monotonic.
In conclusion, the CDL and PDL schemes are all superior to the baseline algorithms, both in the near-field and far-field. At the same time, we tested the localization performance of the two schemes under different simulation conditions and found some interesting compromises. Since the CDL scheme is closely linked with the (LA)-GMMV-OMP algorithm, thus, if there is a need for both CE and the UE's location, the CDL scheme can be adopted. If there is only the need for the UE's location, the PDL scheme can be adopted.
VI. CONCLUSION

In this paper, we have investigated how to sense the UE's channel and location under the hybrid BSE for THz XL-array systems. First, a procedure for generating the FSPRD was proposed to model the dictionary of the HFNF steering vectors under the BSE on grids. Then, we proposed two RIS-assisted localization paradigms depending on whether the UE's channel was required to be estimated when sensing the UE's location. Finally, the two proposed schemes were tested under the hybrid BSE, and the simulation results showed that the proposed schemes outperformed the baseline algorithms in terms of the UE's location sensing performance and the UE's CE performance. Possible future research directions based on this work include more efficient design of the following three elements in the HFNF BSE scenario: the beam training procedure, the combiner at the BS, and the reflection at the RIS.
Z. Li, Z. Gao, and T. Li are with the School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China, and also with the Advanced Research Institute of Multidisciplinary Sciences, Beijing Institute of Technology, Beijing 100081, China (emails: {lizhuoran, gaozhen16, 6120210230}@bit.edu.cn). (Corresponding author: Zhen Gao.)
arXiv:2305.07184v2 [eess.SP] 23 May 2023
can be obtained according to simulations, e.g., if f m = f c = 0.1 THz, A = 0.384 m (i.e., N = 256), ℏ = 0.1, θ = 0.5, a[m] and b[m] are taken from
The schematic diagram to illustrate the localization problem under the HFNF BSE through the inner product of the real channel (only LoS path) and HFNF steering vectors.
Fig. 4. Schematic diagram of the proposed joint channel and location sensing scheme.
Algorithm 1: FSPRD Generation
Input: the number of array elements of the BS (or RIS) N, the number of subcarriers M, carrier frequency f_c, bandwidth B, the number of distance grids S, redundancy rate ς.
Output: the frequency-selective polar-domain redundant dictionary D ∈ C^(N × ςNS × M).
1 for m = {1, 2, · · · , M} do
2   f_m = f_c − B/2 + (m − 1)B/M;
3   for n = {0, 1, · · · , ςN − 1} do
4     Generate the n-th angle grid θ_n as (17);
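A rough sketch of what such a frequency-selective polar-domain dictionary looks like for a ULA. The spherical-wave steering-vector geometry, grid ranges, and normalization below are illustrative assumptions and do not reproduce the paper's exact grids in (17)-(18).

```python
import numpy as np

def nf_steering(theta, r, f_m, N, d, c=3e8):
    """Near-field steering vector at subcarrier f_m; theta is the AoA sine,
    r the distance from the array center (assumed spherical-wave model)."""
    n = np.arange(N) - (N - 1) / 2                   # element index about center
    r_n = np.sqrt(r**2 + (n * d) ** 2 - 2 * r * n * d * theta)
    return np.exp(-2j * np.pi * f_m * (r_n - r) / c) / np.sqrt(N)

N, f_c, B, M, S = 64, 0.1e12, 10e9, 8, 4
d = 3e8 / f_c / 2                                    # half-wavelength spacing at f_c
thetas = np.linspace(-0.9, 0.9, 16)                  # angle grid (assumed)
dists = np.linspace(3.0, 30.0, S)                    # distance grid (assumed)
# one block of angle x distance atoms per subcarrier, so the grid "moves" with f_m
D = np.stack([
    np.column_stack([nf_steering(th, r, f_c - B / 2 + (m - 1) * B / M, N, d)
                     for th in thetas for r in dists])
    for m in range(1, M + 1)])
```

Because each subcarrier gets its own atoms evaluated at f_m = f_c − B/2 + (m − 1)B/M, the dictionary absorbs the beam-squint frequency dependence instead of reusing one carrier-frequency grid for the whole band.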
Algorithm 2: Proposed LA-GMMV-OMP Algorithm
Input: received pilot Y, equivalent combining matrix W̃, threshold to terminate ϖ_OMP, the maximum number of iterations in the LA-GMMV-OMP algorithm L_max.
Output: estimated channel ĥ.
1 Initialization: R = R_0 = Y, Ω = {∅};
2 Generate the FSPRD D as Algorithm 1;
3 Calculate W̃ using W and D as (20);
4 for i = {1, 2, · · · , L_max} do
5   for m = {1, 2, · · · , M} do
6     Calculate the correlation matrix Γ[m] as (21);
Fig. 5. The view of the absolute value of the loss function when the signal-to-noise ratio (SNR) is 20 dB, where the AoA and distance are independent variables. The parameters are N = 256, f_c = 0.1 THz, B = 10 GHz, P_NRIS = 4, N_RF = 4. The AoA can be obtained more simply and accurately from (b).
For every subcarrier f_m, the HFNF steering vectors b_NRIS[m] and b_RIS[m] are generated as (2), and the new FSPRD can be generated as follows:

D_NRIS[m] = [D_NRIS[m] b_NRIS[m]], (42a)
D_RIS[m] = [D_RIS[m] b_RIS[m]]. (42b)

Then, the new FSPRD can be used in (20)-(26) to improve the CE performance. Moreover, in the subsequent downlink
LA-GMMV-OMP (CDL): O(ς(P_NRIS + P_RIS)N_RF N² S M) + O(P_NRIS N_RF N M I_PGD) + O(P_RIS N_RF N M I_PHD)

O(ς(P_NRIS + P_RIS)N_RF N² S M) is the complexity in PSOMP and (LA-)GMMV-OMP of calculating the correlation between the FSPRD and the received pilots Y_NRIS, Y_RIS. Once the index of the FSPRD with the largest correlation to the received pilot is acquired, the UE's location can be determined immediately from (28). The extra term O(P_NRIS N_RF N M I_PGD) in LA-GMMV-OMP (CDL) comes from using the PGD to refine the AoA from the UE to the BS, where I_PGD denotes the total number of PGD iterations (fewer than 10 in practice). The extra term O(P_RIS N_RF N M I_PHD) comes from using the PHD to refine the AoA from the UE to the RIS, where I_PHD denotes the total number of search grids. As for the complexity of CE, the computational complexities of PSOMP and GMMV-OMP, O(ςL̂ P N_RF N² S M), are of the same order of magnitude for the same number of iterations L̂, where P can be replaced by P_NRIS and P_RIS when h_BU and h_RU are estimated, respectively. The complexity of LA-GMMV-OMP is that of GMMV-OMP plus its extra localization complexity.
Fig. 2. In the far-field condition, the BS is set at (−20√2 m, 0 m) and the RIS is set at (20√2 m, 0 m). The UE is set at (5.96 m, −10.1 m) and (11.83 m, −20.2 m) in the near-field and far-field regions, respectively. The scatterers' locations are set randomly over the whole region. Each LoS complex channel coefficient in α_F[m] can be calculated from the Friis transmission formula as α_NRIS[m] = e^(jε_NRIS) √(G_T G_R λ_m²) / (4πR) and α_RIS[m] = e^(jε_RIS) √(G_T G_R S_eff λ_m² / ((4π)³ (R_1)² (R_2)²)).
Fig. 8. RMSE_ϑ and RMSE_r versus the transmit power: the localization performance of the MUSIC and ESPRIT algorithms with or without BSE in the HFNF.
Fig. 9. CDFs of the RMSE_ϑ and RMSE_r. (a) and (b) show the localization accuracy of different algorithms. (c) and (d) reflect the effect of the number of uplink time slots on the localization accuracy. Except for the parameters in the legend, the other parameters follow the default values in Section V-A.
Fig. 10. CDFs of the RMSE_ϑ and RMSE_r. (a) and (b) reflect the effect of the distance from the UE to the BS on the localization accuracy. (c) and (d) reflect the effect of the number of RIS elements on the localization accuracy. Except for the parameters in the legend, the other parameters follow the default values in Section V-A.
Fig. 11. CDFs of the RMSE_ϑ and RMSE_r. (a) and (b) reflect the effect of the number of BS antennas on the localization accuracy. (c) and (d) reflect the effect of the bandwidth on the localization accuracy. Except for the parameters in the legend, the other parameters follow the default values in Section V-A.
TABLE I
A BRIEF COMPARISON OF THE RELATED LITERATURE WITH OUR WORK
For each reference, the table lists the categories of localization methods (ToA/TDoA, AoA/AoD, RSS), whether beam squint is considered, the MIMO channel model (far-field, near-field, hybrid-field, single antenna), the precoding/beamforming architecture (full-digital, hybrid), whether localization is combined with CE, whether it is assisted by a RIS, and the algorithm. For example, the entry for [2] is a hybrid RSS-AoA positioning scheme.
We propose a pure location sensing scheme that does not rely on CE, for the case in which only the UE's location is needed.
Fig. 1. The sensing procedure of the joint channel and location sensing scheme and the location sensing scheme not relying on channel estimation. (a) Joint channel and location sensing scheme: uplink pilot signals sent from the UE are used to estimate the channel and obtain coarse on-grid AoAs from the UE to the BS and from the UE to the RIS with the GMMV-OMP algorithm; the AoA to the BS is refined with the PGD algorithm and the AoA to the RIS with the PHD; the UE's location is then sensed from the two AoAs (observed from the BS and the RIS, respectively), and the estimated location in turn improves the channel sensing performance. (b) Location sensing scheme not relying on channel estimation: uplink pilot signals are used to lock the UE on a hyperbola via the TDoA, with the BS and the RIS as anchors; the partial FSPRD generated on the hyperbola yields a coarse AoA from the UE to the BS, which is refined with the PGD algorithm; the UE's location is sensed from the TDoA and the AoA observed from the BS.
the pilot transmitted by the UE, the HFNF channel between the BS and the UE, and the combiner of the BS, respectively. Each element of W NRIS [p] satisfies the constant modulus constraint
the phase of the RIS, the HFNF channel between the BS and the RIS, and the HFNF channel between the RIS and the UE, respectively. Since the RIS is deployed in advance to ensure a LoS link between the RIS and the BS, H_BR[m] can be assumed to be known. Moreover, H_BR[m] can be assumed to have only one LoS link, since the energy of multi-hop paths is drastically reduced in the THz band. The phase of each element in H_BR[m] can be obtained from the distance between each element of the RIS and each element of the BS antenna array, and the amplitude of each element in H_BR[m] can be calculated from the Friis formula. W_RIS[p] can be designed to minimize the effect of incoming signals from directions other than the one from the RIS to the BS, and to estimate the information from the UE to the RIS, since spatial filtering can weaken the signal strength from other directions. Meanwhile, h_BU[m] can also be weakened, since W_RIS[p] is not aligned with the directions contained in h_BU[m]. We assume x[p, m] = 1 and stack the received P_NRIS pilots without the assistance of the RIS as Y_NRIS. Φ_RIS[p] ≜ diag[e^(jι_1[p]), · · · , e^(jι_n[p]), · · · , e^(jι_N_RIS[p])] is a diagonal matrix accounting for the reflection phases of the RIS, where e^(jι_n[p]) is the phase of the n-th RIS element in the p-th time slot and it has a precision of 1 bit. In (9), y_RIS[p, m], W_RIS[p], x[p, m], h_BU[m] and n_RIS[p, m] are similar to what we defined before in (8).
where F denotes the FTM and h_BU,A[m] denotes the angle-domain channel. Since the number of paths is limited, h_BU,A[m] is sparse. The FSPRD can be utilized to cover UEs within the effective Rayleigh distance better, i.e., to generate more distance grids around the effective Rayleigh distance. Thus, we can sparsify the HFNF channel h_BU[m] into the polar-domain channel h_BU,P[m] approximately via the FSPRD D[m], as h_BU[m] = D[m] h_BU,P[m]. The FSPRDs D_NRIS[m] and D_RIS[m] are generated using Algorithm 1. Based on the representation of the FSPRD in (19), we obtain the equivalent measurement matrices W̄_NRIS[m] ∈ C^{N_RF P_NRIS × ςNS} and W̄_RIS[m] ∈ C^{N_RF P_RIS × ςN_RIS S} as follows:

W̄_NRIS[m] = W̃_NRIS D_NRIS[m],  (20a)
W̄_RIS[m] = W̃_RIS[m] D_RIS[m].  (20b)

At the beginning of each iteration, we calculate Γ_NRIS[m] ∈ C^{ςNS} (Γ_RIS[m] ∈ C^{ςN_RIS S}), the correlation vector on the m-th subcarrier between W̄_NRIS[m] (W̄_RIS[m]) and the residual R_NRIS[m] (R_RIS[m]), as follows:

Γ_NRIS[m] = |(W̄_NRIS[m])^H R_NRIS[m]|,  (21a)
Γ_RIS[m] = |(W̄_RIS[m])^H R_RIS[m]|,  (21b)

where R_NRIS[m] (R_RIS[m]) is Y_NRIS[m] (Y_RIS[m]) in the first iteration and is calculated by (25) in the subsequent iterations. For Γ_NRIS[m] (Γ_RIS[m]), the positions where peaks appear are determined by (θ_BU, r_BU) ((θ_RU, r_RU)), where θ_BU, r_BU, θ_RU and r_RU denote the AoA from the UE to the BS, the distance from the UE to the BS, the AoA from the UE to the RIS, and the distance from the UE to the RIS, respectively.
Algorithm fragment (OMP-based channel estimation):
17: Update the residual R[m] as (25);
18: end
19: if ||R||_F / ||R_0||_F > ϖ_OMP, break;
20: R_0 = R;
21: end
22: Acquire the estimated channel ĥ as (26);
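The correlate-and-pick-the-peak step in (21) is the core greedy move of (orthogonal) matching pursuit: correlate every dictionary atom with the current residual, select the strongest one, and shrink the residual. The sketch below is a simplified real-valued matching-pursuit variant (it subtracts one atom's projection per iteration rather than performing the paper's full least-squares update); the dictionary and measurement values are toy numbers, not the paper's polar-domain quantities.

```python
def dot(u, v):
    """Plain inner product of two equal-length sequences."""
    return sum(x * y for x, y in zip(u, v))


def matching_pursuit(measurement, dictionary, n_iters=2):
    """Greedy sparse-recovery sketch.

    Each iteration mirrors the correlation step of (21): compute the
    "Gamma = |A^H r|" correlations of all atoms with the residual, pick
    the peak index, then subtract that atom's contribution from the
    residual.  (The paper's algorithm uses an orthogonal least-squares
    update over the whole support; this is the simpler variant.)
    """
    residual = list(measurement)
    support, coeffs = [], {}
    for _ in range(n_iters):
        gains = [abs(dot(atom, residual)) for atom in dictionary]
        k = max(range(len(gains)), key=gains.__getitem__)  # peak position
        atom = dictionary[k]
        c = dot(atom, residual) / dot(atom, atom)  # projection coefficient
        support.append(k)
        coeffs[k] = coeffs.get(k, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atom)]
    return support, coeffs, residual


# Toy dictionary of 4 atoms in R^3 (illustrative values only).
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
y = [3.0, 0.0, 2.0]  # built as 3 * atoms[0] + 2 * atoms[2]
support, coeffs, residual = matching_pursuit(y, atoms, n_iters=2)
```

With this toy input the two iterations recover the support {0, 2} and drive the residual to zero, which is exactly the stopping behavior the Frobenius-norm check in step 19 is testing for.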
Algorithm fragment (PGD-based AoA refinement):
4: while iteration < I_max do
5:   Obtain the gradient ∇ = ∂v_NRIS / ∂θ̂_0^BU as (35);
6:   Use the Armijo-Goldstein condition to obtain the step size ∆ based on ∇ and θ̂_0^BU;
7:   Update θ̂_0^BU as θ̂_0^BU = θ̂_0^BU − ∆∇, and update v_NRIS as (33);
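Steps 5-7 above are plain gradient descent with an Armijo-Goldstein (sufficient-decrease) backtracking line search: shrink the step until the objective drops by at least a fixed fraction of the predicted decrease. The sketch below runs this loop on a toy one-dimensional quadratic; the objective is illustrative and not the paper's v_NRIS cost.

```python
def gradient_descent_armijo(f, grad, x0, n_steps=50, c1=1e-4, shrink=0.5):
    """1-D gradient descent with Armijo-Goldstein backtracking.

    Mirrors steps 5-7 of the refinement loop: compute the gradient,
    backtrack until the Armijo sufficient-decrease condition
        f(x - step * g) <= f(x) - c1 * step * g^2
    holds, then take the step.
    """
    x = x0
    for _ in range(n_steps):
        g = grad(x)
        step = 1.0
        while f(x - step * g) > f(x) - c1 * step * g * g:
            step *= shrink  # Armijo: step was too large, shrink it
        x = x - step * g
    return x


# Toy objective: f(theta) = (theta - 0.3)^2, minimized at theta = 0.3.
f = lambda th: (th - 0.3) ** 2
grad = lambda th: 2.0 * (th - 0.3)
theta = gradient_descent_armijo(f, grad, x0=1.0)
```

The Armijo-Goldstein test only guards against steps that are too large, which (as the paper's footnote on Wolfe vs. Armijo notes) is cheaper than also guarding against steps that are too small.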
Simulation codes are provided to reproduce the results in this paper: https://github.com/LiZhuoRan0
BSE originally refers to the phenomenon that the virtual angle representation in the far field shifts as the subcarrier deviates from the central carrier. Here we extend this concept to HFNF channels.
This paper can be directly extended to the multi-user scenario by assigning orthogonal pilots, i.e., orthogonal time-frequency resources, to different users. Since different users are completely orthogonal, without loss of generality, we can analyze the performance of the proposed scheme by taking only one user as an example[13],[23].
RIS reflections in this paper take random values from {−1, 1}, such that signals of approximately equal power can be received at all subcarriers without knowing the UE location. The optimal RIS reflection design in the HFNF BSE scenario still needs further investigation, and this is a promising research direction, whose essence can be formulated as how to jointly design frequency-independent RIS reflections with finite quantization, based on unknown angles and distances under the HFNF BSE, to achieve better received power at each subcarrier.
We have tried the Wolfe condition in simulations to find the appropriate step size, but the estimation accuracy of AoAs is similar to using the Armijo-Goldstein condition. The Armijo-Goldstein condition is to ensure that the step size is not too large, whereas the Wolfe condition is to ensure that the step size is neither too small nor too large. Considering the computational complexity of Armijo-Goldstein condition is lower than the Wolfe condition, we adopted the Armijo-Goldstein condition here.
How to determine the specific value of Ns in the implementation of the algorithm is not a trivial task, and how to determine it in the absence of a prior on the channel clusters needs further study in the future.
In the simulations, for the sake of fairness, the noise power at different bandwidths is set as the noise power at 10 GHz, thus, the performance of the proposed schemes is evaluated solely in terms of the bandwidth impact (other than SNR).
The estimation error of τ̂_TDoA acquired from the PDL scheme is related to the sizes of the RIS and the BS array, since the signals of the UE arriving at different elements of these arrays have different delays.
References
[1] J. A. del Peral-Rosado et al., "Survey of cellular mobile radio localization methods: from 1G to 5G," IEEE Commun. Surv. Tutor., vol. 20, no. 2, pp. 1124-1148, 2018.
[2] Z. Lin et al., "3-D indoor positioning for millimeter-wave massive MIMO systems," IEEE Trans. Commun., vol. 66, no. 6, pp. 2472-2486, Jun. 2018.
[3] N. Garcia et al., "Direct localization for massive MIMO," IEEE Trans. Signal Process., vol. 65, no. 10, pp. 2475-2487, May 2017.
[4] H. Xiong et al., "TDoA localization algorithm with compensation of clock offset for wireless sensor networks," China Commun., vol. 12, no. 10, pp. 193-201, Oct. 2015.
[5] M. Van Eeckhaute et al., "Low complexity iterative localization of time-misaligned terminals in cellular networks," IEEE Trans. Veh. Technol., vol. 67, no. 11, pp. 10730-10739, Nov. 2018.
[6] S. Dwivedi et al., "Positioning in 5G networks," IEEE Commun. Mag., vol. 59, no. 11, pp. 38-44, 2021.
[7] 3GPP, "Tech. Spec. (TS) 38.215: Physical layer measurements," v. 17.2.0, 2022.
[8] A. Shahmansoori et al., "Position and orientation estimation through millimeter-wave MIMO in 5G systems," IEEE Trans. Wireless Commun., vol. 17, no. 3, pp. 1822-1835, Mar. 2018.
[9] X. Shao et al., "Target sensing with intelligent reflecting surface: architecture and performance," IEEE J. Sel. Areas Commun., vol. 40, no. 7, pp. 2070-2084, Jul. 2022.
[10] W. Wang and W. Zhang, "Joint beam training and positioning for intelligent reflecting surfaces assisted millimeter wave communications," IEEE Trans. Wireless Commun., vol. 20, no. 10, pp. 6282-6297, 2021.
[11] Z. Wang et al., "Location awareness in beyond 5G networks via reconfigurable intelligent surfaces," IEEE J. Sel. Areas Commun., vol. 40, no. 7, pp. 2011-2025, 2022.
[12] J. Lee, G.-T. Gil, and Y. H. Lee, "Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications," IEEE Trans. Commun., vol. 64, no. 6, pp. 2370-2386, 2016.
[13] M. Cui and L. Dai, "Channel estimation for extremely large-scale MIMO: far-field or near-field?" IEEE Trans. Commun., vol. 70, no. 4, pp. 2663-2677, Apr. 2022.
[14] Z. Gao et al., "Channel estimation for millimeter-wave massive MIMO with hybrid precoding over frequency-selective fading channels," IEEE Commun. Lett., vol. 20, no. 6, pp. 1259-1262, 2016.
[15] J. Wang, S. Kwon, and B. Shim, "Generalized orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6202-6216, Dec. 2012.
[16] C. Hu et al., "Super-resolution channel estimation for mmWave massive MIMO with hybrid precoding," IEEE Trans. Veh. Technol., vol. 67, no. 9, pp. 8954-8958, Sep. 2018.
[17] N. González-Prelcic et al., "Wideband channel tracking and hybrid precoding for mmWave MIMO systems," IEEE Trans. Wireless Commun., vol. 20, no. 4, pp. 2161-2174, Apr. 2021.
[18] Abu-Shaban et al., "Near-field localization with a reconfigurable intelligent surface acting as lens," in ICC 2021 - IEEE Int. Conf. Commun., 2021, pp. 1-6.
[19] H. Luo and F. Gao, "Beam squint assisted user localization in near-field communications systems," arXiv preprint arXiv:2205.11392, 2022.
[20] B. Zhou, A. Liu, and V. Lau, "Successive localization and beamforming in 5G mmWave MIMO communication systems," IEEE Trans. Signal Process., vol. 67, no. 6, pp. 1620-1635, Mar. 2019.
[21] A. Liao et al., "Terahertz ultra-massive MIMO-based aeronautical communications in space-air-ground integrated networks," IEEE J. Sel. Areas Commun., vol. 39, no. 6, pp. 1741-1767, Jun. 2021.
[22] F. Gao et al., "Wideband beamforming for hybrid massive MIMO terahertz communications," IEEE J. Sel. Areas Commun., vol. 39, no. 6, pp. 1725-1740, 2021.
[23] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.
[24] A. Shafie et al., "Spectrum allocation with adaptive sub-band bandwidth for terahertz communication systems," IEEE Trans. Commun., vol. 70, no. 2, pp. 1407-1422, 2022.
[25] J. M. Jornet and I. F. Akyildiz, "Channel modeling and capacity analysis for electromagnetic wireless nanonetworks in the terahertz band," IEEE Trans. Wireless Commun., vol. 10, no. 10, pp. 3211-3221, 2011.
[26] C. Han et al., "Molecular absorption effect: A double-edged sword of terahertz communications," IEEE Wireless Commun., pp. 1-8, 2022.
[27] A. Shafie et al., "Terahertz communications for 6G and beyond wireless networks: Challenges, key advancements, and opportunities," IEEE Netw., pp. 1-8, 2022.
[28] Q. Yuan et al., "Deep learning-based hybrid precoding for terahertz massive MIMO communication with beam squint," IEEE Commun. Lett., vol. 27, no. 1, pp. 175-179, 2023.
[29] Y. Wu et al., "3-D hybrid beamforming for terahertz broadband communication system with beam squint," IEEE Trans. Broadcast., vol. 69, no. 1, pp. 264-275, 2023.
[30] M. Cui et al., "Near-field wideband beamforming for extremely large antenna array," arXiv preprint arXiv:2109.10054, 2021.
[31] L. Armijo, "Minimization of functions having Lipschitz continuous first partial derivatives," Pacific Journal of Mathematics, vol. 16, no. 1, pp. 1-3, 1966.
[32] S. Marcos et al., "The propagator method for source bearing estimation," Signal Processing, vol. 42, no. 2, pp. 121-138, 1995.
[33] Y. Chen et al., "Millidegree-level direction-of-arrival estimation and tracking for terahertz ultra-massive MIMO systems," IEEE Trans. Wireless Commun., vol. 21, no. 2, pp. 869-883, 2022.
"Generative AI: Implications and Applications for Education"

Anastasia Olga (Olnancy) Tzirides, Gabriela Zapata, Akash Saini, Duane Searsmith, Bill Cope, Mary Kalantzis, Vania Castro, Theodora Kourkoulou, John Jones, Rodrigo Abrantes da Silva, Jen Whiting, Nikoleta Polyxeni Kastania

2023. arXiv:2305.07605, DOI: 10.48550/arXiv.2305.07605. PDF: https://export.arxiv.org/pdf/2305.07605v3.pdf

Abstract: The launch of ChatGPT in November 2022 precipitated a panic among some educators while prompting qualified enthusiasm from others. Under the umbrella term "Generative AI," ChatGPT is an example of a range of technologies for the delivery of computer-generated text, image, and other digitized media. This paper examines the implications for education of one generative AI technology, chatbots responding from large language models (C-LLM). It reports on an application of a C-LLM to AI review and assessment of complex student work. In a concluding discussion, the paper explores the intrinsic limits of generative AI, bound as it is to language corpora and their textual representation through binary notation. Within these limits, we suggest the range of emerging and potential applications of Generative AI in education.
Context and History
Chatbots responding from large language models (C-LLM) have sprung to public and educator attention in recent times, notably with the launch of ChatGPT in November 2022. This was version 3.5 of a series of Generative Pre-trained Transformers (GPTs) in development by the company OpenAI, founded in 2015. GPT-1 was released in 2018, and GPT-4 in March 2023. Microsoft has invested heavily in OpenAI, incorporating its software and data resources into its Bing search, Edge browser and Windows operating system. Google is developing its own GPT, Bard, among many others already developed or with development underway. Images, software code, math and other digitized media can similarly be generated, although these also leverage text for labeling, training and prompts.
Notwithstanding the hype and anxiety created by these generative AI systems, the underlying technologies are by no means new. The two main components of text-based generative AI are the chatbot that accepts prompts from users and the large language model from which the software draws to frame its dialogical response.
Chatbots proceed through a human prompt/machine response dialogue. The first computer chatbot was ELIZA, developed in 1964-66 by Joseph Weizenbaum at MIT. To test the program, ELIZA was programmed to ask questions and to respond to answers as if the machine were a psychotherapist along lines developed by Carl Rogers. The art of the psychologist and the intelligence of the chatbot were to reframe information given by the patient as new questions. From a computational point of view, explained Weizenbaum, "[i]nput sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules." However, from the human point of view of its first users, ELIZA uncannily seemed like a psychotherapist. Of the prospects for artificial intelligence, Weizenbaum concluded, "machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures" (Weizenbaum 1966: 36, 43). These generalizations remain as valid today as they were in 1966.
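Weizenbaum's "decomposition rules triggered by key words" and "reassembly rules" can be sketched in a few lines. The rules below are invented for illustration (they are not Weizenbaum's original script), but they follow the same mechanism: match a keyword pattern in the input, capture the surrounding fragment, and recycle it inside a templated question.

```python
import re

# ELIZA-style rule table: (decomposition pattern, reassembly template).
# Keywords and templates here are hypothetical, for illustration only.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # content-free prompt when no keyword fires


def respond(sentence):
    """Apply the first matching decomposition rule and reassemble."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

For example, `respond("I am anxious about exams")` returns "How long have you been anxious about exams?": the program understands nothing, yet the reflected fragment creates the impression of an attentive listener, which is exactly the effect Weizenbaum described.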
The other principal component of GPTs, the language model, is also a technology that is decades old. The term "language model" is in some ways a misnomer because breakthroughs in this model did not begin until computer-scientists working with language all-but abandoned the project of modeling language, at least in the senses in which humans describe language and account for their usage-semantically (to mean something) and grammatically (its patterning according to differentiated components). Language models "know" nothing about language. They just assess the frequency of character (letter) collocations that may happen to have some semantic and syntactic significance for humans.
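The claim that language models "just assess the frequency of character collocations" can be made concrete with a toy character-bigram counter: it tallies adjacent letter pairs in a corpus and scores new strings by how typical their pairs are, consulting nothing semantic or grammatical. The corpus sentence below is a made-up example.

```python
from collections import Counter


def bigram_counts(corpus):
    """Count adjacent character pairs -- all this 'model' knows."""
    return Counter(zip(corpus, corpus[1:]))


def collocation_score(text, counts):
    """Sum of corpus frequencies of the text's character pairs.

    A higher score means the letter collocations are more typical of
    the corpus; no meaning or grammar is consulted at any point.
    """
    return sum(counts[pair] for pair in zip(text, text[1:]))


corpus = "the cat sat on the mat and the rat ate the oat"
counts = bigram_counts(corpus)

# "the" is built from frequent pairs in this corpus; "xqz" from unseen ones.
assert collocation_score("the", counts) > collocation_score("xqz", counts)
```

Scaled up from character pairs to token sequences over terabytes of text, this frequency bookkeeping is the statistical substrate from which LLMs' apparently semantic and syntactic behavior emerges.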
However, the emergent phenomena arising from language models of a certain size show that much has in effect been "learned" about language structure and semantic cohesion. Experts in this field are split as to whether GPTs are just behaving as a "collection of procedures" or if there is some mysterious emergent behavior happening that we do not yet understand. Google engineer Blake Lemoine was fired in 2022 for suggesting in an interview with the Washington Post that Google's chatbot initiative, LaMDA, showed signs of sentience (Tiku 2022).
A brief historical recap: When Noam Chomsky began working on his Logical Structure of Linguistic Theory in the early 1950s, he posited that language was like a program for human intelligence, in which any particular sentence among the infinite number of possible sentences was "generated by applying optional transformations to the strings underlying kernel sentences" (Chomsky 1956: 123). If language was to be conceived as a program operationalizing human intelligence, it was supposed that a language model could be applied in computing that replicated human reasoning. Versions of this idea are today dismissed as "good old-fashioned AI" (Nilsson 2009: 241).
By the mid 1960s, it became clear that the project of language for application to computing was failing. For the previous decade, the main focus of computational work in linguistics had been machine translation. In 1965, a US government report recommended that this quest should be abandoned because so little progress had been made (Kalantzis and Cope 2020: 209-15).
It was not until the mid 1970s that the first breakthroughs in machine translation were made, now using a completely different approach, statistical text analysis. This, said its early developers, Church and Mercer, in a retrospective overview, is "a pragmatic approach" with an "emphasis on numerical evaluations" focusing on "broad (though possibly superficial) coverage of unrestricted text, rather than deep analysis" (Church and Mercer 1993: 1). Working at IBM in the 1970s, Church and Mercer's boss was pleased with the results of this purely statistical approach. Mercer recalled his boss saying, "Whenever I fire a linguist, the system gets better." A historical aside: After he left IBM, Mercer put his statistical approach to text to powerfully practical effect, first to make a fortune as a hedge fund manager, and later as a driving intellectual force as well as investor in Cambridge Analytica and major funder of the Donald Trump 2016 presidential election campaign (Kalantzis and Cope 2020: 220-21, 235-40).
Since the emergence of GPTs, Chomsky has steadfastly maintained his distance, a paradigm away from statistical approaches to language and artificial intelligence. ChatGPT, he said, is no more than "a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer" (Chomsky, Roberts and Watumull 2023).
These are the larger dimensions and underpinnings of the debate we analyze in this paper, in theory as well as in educational practice. First, some definitions:
• Machine intelligence: following Alan Turing, symbol manipulation by computers larger and more complicated than is feasible for human minds (Turing 1950). Turing demonstrated this in 1949 when the Manchester Mark 1 computer that he and his colleagues had designed found some impossibly large prime numbers (Turing 2015: 212-17).
• Artificial intelligence: a term coined by John McCarthy (McCarthy et al. 1955), nowadays taken to mean machines that can learn from human interaction: supervised, unsupervised, reciprocal, and "deep" learning.
• Generative AI: coherent and well-formed text, images, and sound generated in new designs by a computer.
• Chatbot-prompted Large Language Model (C-LLM): massive textual corpora that can return well-formed textual responses following a human prompt in chatbot dialogue.

We make the case that binary computers in general and C-LLMs in particular are in some respects already enormously "smarter" than humans. Unlike humans, they can "remember" huge quantities of digitized texts and images. This makes them potentially very helpful as cognitive prostheses applied in life and learning, providing for instance a narrative response on any topic, an outline of a job application, suggestions about where to eat, and the like. However, there are absolute limits to AI, which don't even fall short of human intelligence because they can barely be compared on the same scale. The phrase "Artificial Intelligence" implies that computer intelligence is a replicant of human intelligence. It is not; it is incomparably different, and this is of value. We return to this question in the final section of this paper.
Implications for Education
By the beginning of the 2020s, there were numerous applications of artificial intelligence in everyday life, from language (e.g., machine translation and voice recognition), to imaging (e.g., face recognition), to social interaction (e.g., the profiles developed in social media), to material practice (e.g., semi-autonomous cars, "smart" home security), and many more. The larger context is a new socio-economic regime sometimes characterized as Industry 4.0, whose signature technologies are artificial intelligence, automation/robotics, bio-informatics, and the internet of things (Schwab 2016). Focusing on the internet specifically, we are entering a phase called Web 3.0, adding artificial intelligence, the semantic web, and blockchain to the social web (Wood 2014). Driven by advanced computing, this is an era of integrated cyber-social systems.
However, to date there have been few effective, widespread applications of artificial intelligence to education. The dominant learning management systems, for instance, are still principally file upload/download technologies whose underlying technical and pedagogical architectures have changed little since the beginnings of networked or cloud computing in the 1990s (Cope and Kalantzis 2023b). Some generic applications of AI have been applied in education such as machine translation and grammar and style checking, but these are supplementary external services to support digital text work and not specifically educational technologies.
Nevertheless, education researchers have for some time been exploring the potential impacts of artificial intelligence on education. Lane and colleagues point to the enormous possibilities for what they term AIED in areas such as collaborative, immersive, affective and exploratory learning (Lane et al. 2016). Rosemary Luckin outlines the range of ways in which machine learning can complement human learning (Luckin 2018). A collaborative consisting of eleven leaders in the field have developed an analytical framework to account for the increasing entwinement of artificial intelligence and human learning (Markauskaite et al. 2022). The research team authoring this paper has developed and tested artificial intelligence technologies to support collaborative learning and learning analytics that offer formative feedback (Cope, Kalantzis and Searsmith 2021).
Early research on C-LLMs in education points both to their constructive and worryingly disruptive potentials. Addressing the constructive potentials first: ChatGPT (v.3.5) has been shown to offer "more detailed feedback that fluently and coherently summarizes students' performance than human instructors," demonstrating "high agreement with the instructor when assessing the topic of students' assignments" (Dai et al. 2023). It has been demonstrated that GPTs can be used constructively to support literacy development, from young children's storytelling (Li and Xu 2023) to academic writing (Buruk 2023, Liu et al. 2023). GPT-3 has been shown to perform credibly as the pseudo-teacher in a student-teacher dialogue (Tack and Piech 2022), and presumably this capability will have improved with subsequent releases. GPT-3 was also successfully used to simulate student discourse in dialogue with trainee teachers (Markel et al. 2023), and the significant impact of GPTs in teacher education has been predicted (Trust, Whalen and Mouza 2023).
On the other hand, many educators also fear the disruptive potentials of C-LLMs. OpenAI's GPT can write a five-paragraph essay on any topic, producing a perfectly well written if dull and predictable response, at least as good as or better than a student's response. With more than a hint of irony, a leading AI in Education researcher concluded that GPT-3 will "democratize cheating," putting out of business the expensive essay mills used by an estimated 15% of higher education students (Sharples 2022: 1120). GPT-3.5 was able to pass the US National Board of Medical Examiners test at the level of a third-year medical student (Gilson et al. 2023, Kung et al. 2023), and GPT-4 can pass the Multistate Bar Examination taken by law students (Katz et al. 2023).
This presents an immediate problem for education so long as students have access to GPTs: how is it possible to know which parts of student work and thinking were the student's, and which parts were the AI's? Asking students to cite the chatbot does not resolve this problem, because reliable knowledge claims must be referenced to reliable sources, and there is no knowing which sources a GPT has used. GPT detectors, moreover, are unreliable, and given the generative nature of the AI-every text is a new text-it is unlikely they will ever work reliably. If a cunning student adds a few typos and awkward expressions, they will mostly throw an AI detector off the audit trail.
To "know what students know" (Pellegrino, Chudowsky and Glaser 2001), the immediate solution appears to be rigorously proctored assessment. Some have suggested a return to handwritten submissions, but these could easily be transcribed from an AI generated answer. "Closed book," proctored assessments are another alternative, but they are less than ideal because they focus on long-term memory, and the validity of this learning measure is today moot. We live and work in knowledge environments where we have come to rely increasingly on externalized memory in the form of the web-connected devices close to our bodies that we constantly need to look up-doctors and lawyers are good examples. Integral to their work nowadays, they do exactly what GPTs do-they need to be able to look things up. We rely increasingly on digital devices as our cognitive prostheses, not only to remember things but to process knowledge in-the-hand with algorithms of calculation and procedure. Today, more than ever before, we need to transfer the center of our focus in education away from long-term memory and towards higher order critical, creative and design thinking that effectively applies long-term social memory to social practice.
Cheating, however, is one of the lesser problems for education created by GPTs. As soon as we expand our notion of knowledge from individual to collective, from personal memory to "cyber-social" knowledge systems (Cope and Kalantzis 2022), we run into much bigger problems with generative AI. On the basis of an analysis of their foundational architecture and algorithmic processes-to the extent that they are disclosed (OpenAI 2023)-and on analysis of the development of chatbots and statistical language models, we suggest that C-LLMs are also deeply harmful to a social understanding of knowledge and learning in the following ways.
1. Sourcing: The machine buries its sources. Not only are the sources used by C-LLMs opaque, but a user requesting references will be served good-looking but sometimes fake references. In contrast, one of the great intellectual achievements of modern knowledge systems has been to base knowledge claims on the credibility of sources (Grafton 1997).
To validate antecedent knowledge claims, we need to be able to interrogate their sources.
To distinguish the thinking of the writer from the social knowledge upon which that thinking is based, we use quotation and citation apparatuses. In school, we call this "critical literacy." In academic work, the credibility of sources is dependent on a number of variables, including the qualifications of the researcher, the credibility of the publication venue, and the rigors of peer review. However, giving us the impression that the AI is answering rather than its sources, the sources are hidden. This makes the AI seem smart when it is just a remix of sources. Some of the sources, moreover, may be embarrassing to disclose; others have been copied in breach of copyright (Schaul, Chen and Tiku 2023). The software is a "black box" (Ashby 1956: 86) by design. More recent GPTs have been able to provide some real references when asked, but these are not necessarily the sources from which they have formulated their response.
2. Facts: The machine can have no notion of empirical truth. The priority of C-LLMs is to produce convincing narratives. They are genre machines, harvesting ostensible facts they have found in their textual sources but without being able to verify them. They also invent non-existent facts when needed to complete a plausible text (Munn, Magee and Arora 2023). It may happen that some or even most of a response to a prompt is fact, but the machine has no way of distinguishing fact from fake in its sources.
3. Theory: The machine can have no conception of a theoretical frame or disciplinary practice. At best, C-LLMs pick up latent semantics in the happenstance of character collocations. They can't know about the connection between dogs and kennels; they just find these character collocations associated under certain textual circumstances. By contrast, disciplinary frames of reference are in human practice rigorously framed ontologies (Cope and Kalantzis 2020: 271-328). These are products of the social intellect, constituted through validated systematic knowledge methodologies that have been codified in scientific and professional practices of observation, multiperspectival corroboration and critical reflection. C-LLMs can do none of these things: they are no more than "stochastic parrots" (Bender et al. 2021), regurgitating text they have copied from a mishmash of textual sources (Magee, Arora and Munn 2022).
4. Ethics: If the machine is socially well mannered, it is not because its sources necessarily are. C-LLMs depend on massive textual corpora, and the reality of human legacy text is that the sources are rife with racism, sexism, and homophobia, along with other ideologies and social orientations that are today unacceptable in mainstream public life. To align with the sensitivities and moral agendas of our times, and as a necessary corrective to a multitude of existing biases, C-LLMs require extensive filtering. Human programmers create the filters to over-ride the "truth" of the source texts. This is the only way to be sure that the generated texts do not offend modern, liberal sensibilities. However, the moral frame of these human-imposed filters is buried. Whether big brother is a nice pseudo-person is less relevant than the fact that C-LLMs are big brothers too, invisible shepherds of our morals.
5. Critical Dialogue: To appear a good interlocutor, the machine is skewed towards being uncritically affirmative. The "chat" part of the technology of C-LLMs plays through a feigned anthropomorphism. As a good conversationalist, the chatbot remains polite and largely uncritical, even when its human partner is offensive or critical. This, said Weizenbaum, is how "ELIZA maintain[ed] the illusion of understanding." Indeed, "one of its principal objectives [was] the concealment of its lack of understanding" (Weizenbaum 1966: 36). Even the inventor was disturbed by this, a decade later writing a best-selling book renouncing not only chatbots but computer technology in general (Weizenbaum 1976).
If C-LLMs pose a danger for knowledge work and learning in the fundamental areas of their sources, purported facts, theory, explicit ethical frames, and critical dialogue, then how can we apply them in education? In the next section of this paper, we will describe an intervention in which we recalibrate a C-LLM to offer narrative feedback to students.
Application: The Study
Research Design
In order to explore the potential of C-LLMs in an authentic learning environment, Cope and Kalantzis' Cyber-Social Learning Laboratory 2 at the University of Illinois designed an intervention leveraging a C-LLM to provide learners with AI feedback on extended written texts. Since 2009, this research lab has been developing a suite of interconnected apps in a social knowledge and learning platform, Common Ground Scholar (Cope and Kalantzis 2023a). 3 Research and development work has been built incrementally, funded by a series of grants from a variety of agencies and foundations. 4 Our research and software development processes have been closely integrated in a process we have termed "cyber-social research" (Tzirides et al. 2023), bringing together methods of "agile research" inspired by contemporary software development processes (Twidale and Hansen 2019) and design research methods in education (Laurillard 2012). Our masters and doctoral students are as much part of the iterative design process as the research team leaders. During periods of intensive use, new releases of the software were made at midnight almost every day, based on the previous day's interactions with users.
CGMap is a new app within the CGScholar platform, connecting via application programming interface (API) to OpenAI's GPT in order to offer machine feedback to learners on their extended multimodal texts. This machine feedback supplements the peer and instructor feedback provided to students on the same explicitly stated assessment measures. At the point of this trial, CGMap was connected to OpenAI's GPT-3; since this intervention, it has been connected to later versions. Instructors can enter any assessment into CGMap in the form of a rubric, against which the app will provide AI reviews of student work, optionally in addition to anonymized peer-, named instructor-, and self-review against the same rubric.
The students participating in our intervention were masters and doctoral students in the Learning Design and Leadership program at the University of Illinois. Sixty-two students joined the trial in two 8-week online courses, "Assessment for Learning" and "New Media and Literacies." The focus of the research was the major project for the course. Students could choose their own topic with a brief to examine educational theory and practice in "a cutting edge area of innovation (such as differentiated instruction, flipped classroom, GPTs, AI in education, learning analytics, gamification, metacognition, self-efficacy/regulation, socio-emotional learning, collaborative learning, formative assessment etc.) or one of education's 'wicked problems' which has presented a longtime challenge (such as a dimension or dimensions of learner diversity, strategies for inclusion, equity and education reform)."
The project workflow is outlined in Fig. 1, with the project commencing in Week 1 of the course, and final works published into the course community and personal portfolios in Week 8.

An example from one of these works is illustrated in Fig. 2. Course participants commenced writing on the left side of the screen with an elaborated rubric on the right side, against which they write their work. Upon submission of a complete draft, they review two other participants' works against the rubric criteria within the CGMap application. Building on earlier research and development work supporting information and argument writing according to the Common Core Standards in the middle school (Olmanson et al. 2016), the CGMap application consists of a series of review and annotation nodes presented in the form of virtual sticky notes on the right side of the screen, linked to highlighted text on the left. Reviewers connect the nodes into a concept map.

3 In 2023, CGScholar has nearly 350,000 user accounts and can be integrated into LTI-compliant learning management systems. Parts of the software suite are open to anyone to sign up and use at no charge; other parts have a modest licensing fee based on self-sustainability principles and managed by Common Ground Research Networks, a not-for-profit public benefit corporation based in the Research Park at the University of Illinois. Among others, use cases for CGScholar include: literacy in schools between grades 4 to 12; higher education, including education, engineering, medicine, and veterinary medicine courses; and global social learning interventions by the Red Cross and the World Health Organization.

4 Learning Analytics: US Department of Education, Institute of Education Sciences: "The Assess-as-You-Go Writing Assistant" (R305A090394); "Assessing Complex Performance" (R305B110008); "u-Learn.net: An Anywhere/Anytime Formative Assessment and Learning Feedback Environment" (ED-IES-10-C-0018); "The Learning Element" (ED-IES-10-C-0021); and "InfoWriter: A Student Feedback and Formative Assessment Environment" (ED-IES-13-C-0039). Bill and Melinda Gates Foundation: "Scholar Literacy Courseware." National Science Foundation: "Assessing 'Complex Epistemic Performance' in Online Learning Environments" (Award 1629161).
The CGMap tool also offers a sentiment analysis feature in reviews and feedback integrated into rubric elements, annotations, general comments, and overall feedback. The green node highlighted in red is currently active, an annotation coded as STR-(a structural issue with the text that requires improvement). This refers to the yellow highlighted block of text in the reviewed work. Mousing over the node highlights the text. This annotation coding supports machine learning. The brown node is a whole-text review against the theory review criterion. The reviewer's rating is recorded in the top bar, and their narrative explanation has been opened by the writer.
After the anonymized peer review, participants offer feedback to reviewers on their reviews, and revise their texts. At this point, a new step had been added to the existing workflow, the AI review. Linked via API to GPT-3, CGMap offered feedback on the same review criteria as the human peers (Fig. 3). Participants then compared the human and machine feedback before final revision and review and publication of their work into the course community and personal portfolios.
Fig. 3: AI review of a course participant's work in CGMap
Here is a brief technical description of the AI review generation process: the CGMap API first takes the extended student text and breaks it into sections. This is helped by CGScholar's strictly structural and semantic markup of sections and subsections, as well as HTML paragraphing at a more granular level. These "sections" are then fed to the GPT-3 model "text-curie-001." This model is smaller and less powerful than the model subsequently used for review generation, "text-davinci-003," but it is less expensive, faster, and does a competent job on the summarization task (which would not necessarily benefit from the more powerful model). Each section is summarized, and then the summaries are concatenated. Summarization was necessary as a result of constraints imposed by GPT-3 but may not be required for future implementations with later versions. After that, criterion by criterion, the summarized text is sent to GPT-3 multiple times, each time with a different rubric criterion as a prompt. The review is displayed as a node graph in the CGMap tool, where the AI rubric criterion nodes are generated, one for each criterion on the rubric (Fig. 4). Course participants may then augment the review map with additional information.
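The chunk-summarize-review loop described above can be sketched as follows. This is a minimal illustration, not CGMap's production code: the `complete` function is a stand-in for the OpenAI completions call, and the prompt wording and rubric entries are invented for the example.

```python
# Sketch of the CGMap review pipeline: summarize each section with the
# cheaper model, concatenate the summaries, then query the stronger model
# once per rubric criterion. The API call is stubbed so the control flow
# can be shown without a key; model names follow the text above.

def complete(model: str, prompt: str) -> str:
    """Stand-in for a call to the OpenAI completions endpoint."""
    return f"[{model} response to {len(prompt)}-char prompt]"

def summarize_sections(sections):
    # Each section is summarized separately, then summaries are concatenated.
    summaries = [complete("text-curie-001", f"Summarize:\n{s}") for s in sections]
    return "\n".join(summaries)

def review_against_rubric(sections, rubric):
    """Return one review per rubric criterion for a student text."""
    summary = summarize_sections(sections)
    reviews = {}
    for criterion, description in rubric.items():
        prompt = (f"Review the following text against this criterion.\n"
                  f"Criterion: {criterion}\n{description}\n\nText:\n{summary}")
        # The more capable model is used for the review itself.
        reviews[criterion] = complete("text-davinci-003", prompt)
    return reviews

rubric = {"Conceptual": "Names and classifies concepts precisely...",
          "Analytical": "Traces functional causes and effects..."}
result = review_against_rubric(["Section one text.", "Section two text."], rubric)
print(sorted(result))  # ['Analytical', 'Conceptual']
```

Each entry in `result` corresponds to one AI node in the CGMap concept map, one per rubric criterion.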
The prompt served to GPT consists of the instructions to summarize the student text, system instructions, and a request to review the text based on one of the review criteria in the assessment rubric. This is repeated for however many review criteria there are in the rubric.
For each student work, CGMap can use any rubric provided by an instructor. However, in this intervention the rubric items and GPT prompts were drawn from Kalantzis and Cope's "Learning by Design" schema, an overview of which is provided in Fig. 4. Based on the pedagogy of "multiliteracies" (Cope and Kalantzis 2023c, New London Group 1996), this schema takes an epistemological approach to learning, focusing not solely on cognition but more broadly on knowledge-making activities which in addition to cognition involve material practices, embodied activity, and socio-emotional engagement (Cope and Kalantzis 2015, Lim, Cope and Kalantzis 2022). These are high level, abstract review criteria which in theory make the task of the AI more difficult. The Learning by Design schema consists of four major areas of knowledge activity, each subdivided into two. Empirical activities range from the immediate world, known to the learner (learner experience, positionality, interests, reasons for pursuing a topic), moving into empirical work that immerses learners into new knowledge constituted by systematic observation. Conceptual activities involve naming and classifying the world in ways that are often more precisely defined than they are in vernacular language, as well as the disciplinary practice of assembling these concepts into theories. Analytical activities apply reasoning to trace functional causes and effects, and critical thinking to explore underlying purposes and interests. Application activities test ideas and transfer learning either in predictable ways in typical settings, or creatively into new or unusual settings. For the cognitive aspects of this work, we have created a crosswalk into Bloom's taxonomy, while also highlighting additional socio-material and behavioral aspects that are captured in Learning by Design (Lim, Cope and Kalantzis 2022).
For these eight "knowledge processes," both learners and the AI are provided the same rubric text: the learners in the form of eight rubric criteria, the AI as eight engineered prompts. The average length of each rubric criterion is 100 words, including the following: a) a one-sentence definition; b) advice to reviewers on the general kind of feedback that would be most helpful to the author on this criterion; c) marker words typically used to document this particular kind of knowledge activity; and d) performance descriptions at five rating levels. When the AI analyzes a learner's text, it runs through the whole text multiple times, analyzing according to each prompt, and returning these to the learner as separate nodes in a concept map, to which they can respond with linked comment nodes.
Course participants worked on their chosen topic for the whole term, making five blog-like updates on the topic on which they were working in the Community app within CGScholar, also commenting there on others' posts. Incremental progress in the course was tracked using CGScholar's analytics app, with each participant having their own progress visualization and the instructors having an aggregated visualization (Fig. 6). At the end of the course, letter grades for the courses were assigned according to each participant reaching announced thresholds in the metrics.
Research Participants
Participants were recruited during Spring 2023 from two online higher education courses in the College of Education at the University of Illinois Urbana-Champaign. The 62 participants were of mixed genders and ethnic/racial backgrounds, ages 25 years and above, pursuing Certificate, Master's, and Doctorate degrees in the College of Education. All were working professional educators undertaking their graduate degrees part-time, highly experienced in areas ranging from elementary school to higher education, workplace and community education, and crossing diverse discipline areas. Recruitment to the research was conducted through announcements in the courses.
Research Questions
The research questions for this project, both for our research team and posed directly to the research participants in this intervention were the following:
1. What are the differences between human and machine reviews?
2. How might these two types of reviews complement each other?
Data Collection and Analysis
The textual corpus that generated the data for this study is summarized in Table 1. Before analyzing these data in the sections that follow, we want to note that the AI provided more extensive reviews (average length 1335.5 words, or 166.9 words per criterion) than the peers (335.5 words, or 41.9 words per criterion) against the same rubric criterion text. Within and around this textual corpus, the study collected and analyzed the learner experience in four ways, reported upon in the following sections:
a. Comparison of human and AI ratings
b. Sentiment analysis of human and AI ratings
c. Course participants' reactions to the differences between the human and AI reviews
d. Post-course survey evaluating the value and relevance of peer and AI reviews.
3.4a Comparison of human and AI ratings
Both human and AI reviews produced ratings using the 1-5 Likert scale. The average ratings obtained from both human and AI reviews for the five rubric elements, as well as the overall average rating, were compared. The text of the reviews was also analyzed for its sophistication as academic language.
3.4b Sentiment analysis of human and AI reviews
To enhance peer feedback and ensure clarity in the intended message, course participants, instructors and the research team had access to sentiment analysis. This is a technology of Natural Language Processing (NLP) that is particularly popular in the field of Educational Data Mining (EDM) (Newman and Joyner 2018, Romero and Ventura 2012). Sentiment analysis helps discern whether a given piece of text has a positive or negative orientation. It involves the use of computational techniques to identify, extract, and quantify affective information from written or spoken language, with applications over a wide range of text domains (Wang and Wan 2018). Luo et al. found in their research that review sentiments expressed in feedback text positively correlated with review scores or rankings, and proposed that sentiment analysis be used as supplementary data to support the validity and transparency of peer review systems (Luo et al. 2022).
CGMap incorporates a feature that connects the app via API to Google Cloud's Sentiment Analysis. Users can call up a sentiment analysis for every node on their map; these are accessed by selecting the double arrows at the bottom of each node, as seen in Fig. 2. The result is a shadow box with sentiment evaluations, an example of which is provided in Fig. 6. The sentiment analysis determines the overall emotional leaning of the text by providing numerical score and magnitude values. The score ranges from -1.0 (negative) to 1.0 (positive), indicating the sentiment of the text. The magnitude, on the other hand, indicates the overall strength of emotion, and it is not normalized; it is usually proportional to the length of the text.
The sentiment analysis also provides sentence-level sentiment values, which include score and magnitude values for each sentence. These values help identify the sentiment expressed in each sentence of the text. In CGMap, the labels have been replaced with color-coded categories: "encouraging" (positive), "informational" (neutral), and "critical" (negative). In addition, the average sentiment score and sentiment magnitude obtained from both human and AI reviews for the five rubric elements, as well as the overall average sentiment score and sentiment magnitude, were compared.
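The relabeling of sentiment categories can be sketched as a simple thresholding of the Google score. The cutoff values below are assumptions for illustration only; the API itself returns just the numeric score and magnitude, and CGMap's actual thresholds are not documented here. The mapping assumed is positive to "encouraging", negative to "critical", and neutral to "informational".

```python
# Illustrative mapping from a Google Cloud sentiment score (-1.0 .. 1.0)
# to CGMap's color-coded labels. The +/-0.25 cutoffs are assumptions for
# this sketch, not documented CGMap values.

def cgmap_label(score: float, positive_cutoff: float = 0.25,
                negative_cutoff: float = -0.25) -> str:
    if score >= positive_cutoff:
        return "encouraging"   # replaces "positive"
    if score <= negative_cutoff:
        return "critical"      # replaces "negative"
    return "informational"     # replaces "neutral"

print(cgmap_label(0.6))   # encouraging
print(cgmap_label(-0.4))  # critical
print(cgmap_label(0.1))   # informational
```

In production the score would come from the sentiment API response for each node; here it is simply passed in as a number.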
3.4c: Course participants' reactions to the differences between the human and AI reviews
After their work had been reviewed by their peers and the AI, participants were invited to express their perceptions of and opinions on their experiences with both review types in the following two ways:
1. For the AI review: Participants were asked to share an image or screenshot of their experience with the AI review and provide three words that summarized their perceptions of this type of review.
2. AI vs. Peer/Human Reviews: Participants were asked to reflect on the main differences between AI and peer/human reviews based on their experience in 150 words or more, and to provide examples illustrated with screenshots.

The students' linguistic responses were analyzed qualitatively with the VERBI software MAXQDA 2022 using thematic analysis. This type of analysis was chosen because it has been used in a myriad of works that have focused on participants' perceptions and have employed similar instruments for data collection (Braun and Clarke 2006). The first step involved the careful reading of the students' responses and the recording of their overall, general impressions of peer and AI reviews. This was followed by the specific identification of themes and exemplifying statements. In the final stage of the analysis, themes were cross-examined employing Glaser's constant comparative method (Glaser 1965) to ensure that there were no discrepancies in the initial analysis, and the percentage of themes in connection with coded segments for all participants was calculated.
The analysis of students' multimodal responses was grounded in the tenets of social semiotics (Cope and Kalantzis 2020, Kalantzis and Cope 2020, Kress and van Leeuwen 1996 [2021], van Leeuwen 2005). This methodology allowed for the identification and categorization of the semiotic resources (e.g., textual [language and typography], visual, gestural) used by meaning-makers, as well as the examination of their motivation, sociocultural context, and intended audience.
3.4d Post-course survey
The post-course survey questions were divided into three sections: demographics, previous experience with peer reviews, and feedback about the peer/human versus the AI reviews. The questions about the peer/human and AI reviews aimed to elicit students' feedback on the overall quality of both types of reviews, their usefulness in improving the work based on the course rubric, their advantages and disadvantages, the ease of understanding and implementing the reviews' suggestions, and the participants' feelings about having their work reviewed by an AI tool.
3.5a Results: Human Compared to AI Reviews and Ratings
In order to prepare the data set for analysis and ensure comparability across variables, we utilized range normalization. This method rescales the data by dividing each value in the respective rubric element by the range of the entire dataset. The data were reported against five rubric elements: the four macro knowledge processes ("experiential/empirical," "conceptual/theoretical," "analytical/critical," and "applied"), plus a measure of academic writing, "communication." We analyzed results against the five rubric elements using three values: 'ratings', 'sentiment score', and 'sentiment magnitude.' Additionally, we employed the correlation coefficient and covariance to elucidate the relationships between each element and the overall average for human and AI reviews respectively.
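The normalization step can be sketched as follows. This is a minimal illustration with invented ratings, using the standard min-max formulation (which also subtracts the minimum); whether the study's exact variant subtracted the minimum is an assumption here.

```python
# Min-max range normalization: rescale values by the dataset's range so
# different rubric elements become comparable on a 0-to-1 scale. A sketch
# of the preprocessing step described in the text, with made-up ratings.

def range_normalize(values):
    lo, hi = min(values), max(values)
    rng = hi - lo
    if rng == 0:
        # Degenerate case: all ratings identical.
        return [0.0 for _ in values]
    return [(v - lo) / rng for v in values]

ratings = [3.0, 4.5, 2.5, 5.0]
print(range_normalize(ratings))  # [0.2, 0.8, 0.0, 1.0]
```

After this rescaling, ratings, sentiment scores, and sentiment magnitudes can be compared on a common footing.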
The resulting bar chart demonstrates that the average rating generated by human reviews was higher than the average rating produced by AI reviews (3.82 vs 3.18 respectively). The variation between the human and AI judgments on each criterion is consistent, which seems to indicate that the humans and the AI were in broad agreement about the differences in relative performance (Table 2 and Fig. 7). Our findings also indicate that human peer review generated slightly higher ratings than AI review against identical rubric elements and rating level descriptions. In other words, the AI reviews were somewhat more critical and the human reviews somewhat more favorable. How does one interpret this result, given that chatbots are designed to be more agreeable than argumentative? Among possible explanations: while the prompts specifically directed the chatbot to provide critical commentary, the human reviewers tended to be generous to each other because they belonged to the same community of learners, even though the reviews were anonymous.
Further, we observed that the median values (Fig. 8) for the ratings variable were skewed to the left. In terms of variability within the data, we found that the standard deviation (SD) for human peer reviews was generally smaller than that for AI reviews; the AI ratings were more spread out. The smaller SD values in human peer review ratings may indicate a more consistent review method, while the larger SD values in the AI review dataset imply greater variability in the results obtained from that review method. This may also indicate that the "window" of possible scores, in this context, is skewed towards higher values, thus compacting the score range.
Fig. 8: Median and Variability of human and AI ratings
Additionally, when comparing the covariance (Fig. 9), which indicates the direction of the linear relationship in the human peer review and AI review datasets, we can see that the values for human peer review are relatively small, suggesting a weak or even non-existent linear relationship. The values for AI review, on the other hand, are generally larger and positive, indicating a stronger linear relationship. This suggests that the ratings in human peer review do not vary strongly together, while the ratings in AI review vary together more strongly.
Fig. 9: Correlation and Covariance between the rubric elements and their respective average
Correlation coefficients (Fig. 9) show the strength of association among the elements in the human peer review and AI review datasets. The values in the human peer review dataset are consistently higher than the values in the AI review dataset for both courses. This suggests that the five rubric elements in the human peer review dataset are more strongly correlated than those in the AI review dataset, and may therefore have a stronger impact on student success than the factors in the AI review dataset. The correlation values in the AI review dataset fluctuate more than those in the human peer review dataset, with some having relatively low correlations and others relatively high. This could indicate that the relationships between elements in AI review are more complex, or that there may be some outliers or influential data points affecting the observed correlations.
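The covariance and correlation statistics used in this comparison can be computed as below. The numbers in the example are invented for illustration, not the study's data, and a sample (n-1) denominator is assumed.

```python
# Sample covariance and Pearson correlation between one rubric element's
# ratings and the per-student overall averages. Values are made-up
# illustrations, not the study's data.
from statistics import mean

def covariance(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def pearson(xs, ys):
    sx = covariance(xs, xs) ** 0.5   # sample standard deviations
    sy = covariance(ys, ys) ** 0.5
    return covariance(xs, ys) / (sx * sy)

element = [3.0, 4.0, 3.5, 5.0, 4.5]   # one rubric element's ratings
overall = [3.2, 3.9, 3.6, 4.8, 4.4]   # overall averages per student
print(round(covariance(element, overall), 3),
      round(pearson(element, overall), 3))  # 0.5 0.998
```

Covariance reports only the direction and raw scale of co-variation, while the Pearson coefficient normalizes it to the -1 to +1 range, which is why both are reported separately in Fig. 9.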
Assessing the quality of the AI compared to the human text, we used CGScholar's Analytics app's measure of academic language. This combines three readability measures: Flesch-Kincaid, Coleman-Liau, and the Automated Readability Index. The resulting number approximates a school grade level. Together, these three measures capture the complexity and level of sophistication of syntax and vocabulary, which we take to be a proxy for academic language, typically more complex on these measures. A comparison of the full corpus of human and AI reviews (Table 3) reveals more complex language use in the AI reviews than in the human reviews, with higher mean (15.80 vs 7.80), median (16.25 vs 7.95), and maximum scores (18.03 vs 15.42) for AI reviews. The difference in standard deviation between the two groups is small, suggesting that there is not a significant difference in the consistency of readability levels.

3.5b Results: Sentiment Analysis of Human and AI Reviews

The sentiment analysis results are reported in Tables 4 and 5, comprising sentiment score (between -1 and +1) and sentiment magnitude (between 0 and +inf). Our analysis of the human and AI reviews (Tables 4 and 5) revealed that the average sentiment score from human reviews was slightly higher than that of AI reviews (0.32 vs 0.22), but AI review consistently outperformed human peer review in terms of sentiment magnitude for all rubric elements (see Table 5), which is a measure of the strength or intensity of the sentiment expressed in the feedback (Fig. 10). The AI review generated significantly higher sentiment magnitude compared to human peer reviews (1.08 vs 2.99), possibly due to its ability to provide longer and more in-depth feedback that consistently followed the prescribed prompt. These findings suggest that AI review may generate more detailed and informative feedback to students compared to human reviews.
Fig. 10: Sentiment score and magnitude for both courses
We also observed that, for the sentiment score (Fig. 11) and magnitude (Fig. 12), there was no clear skewing of median values. The data are relatively symmetrical in some sets, while in others they are skewed to the right or left, as indicated in the charts below. In terms of variability within the data, we found that the SD values were again in flux: low for sentiment score (Fig. 11), indicating a narrow range of values, and high for sentiment magnitude (Fig. 12), indicating a wide range of values. These findings suggest that the AI review could potentially be a more objective measure of student performance, but may not be as effective as human reviews given the larger differences in mean and median scores and the relatively higher standard deviation for AI reviews, indicating greater variability in the results. When comparing the covariance of the sentiment score (Fig. 13), we observed that the values for AI review are relatively small, suggesting a weak or even non-existent linear relationship between the variables, while the values for human peer review are generally larger and more positive, indicating a stronger linear relationship. This suggests that the sentiment scores in AI review do not vary strongly together, while in human peer review they vary together more strongly. Conversely, for the covariance of the sentiment magnitude (Fig. 14), the values for human peer review are relatively small, suggesting a weak or even non-existent linear relationship, while the values for AI review are generally larger and more positive, indicating a stronger linear relationship. This suggests that the sentiment magnitudes in human peer review do not vary strongly together, while in AI review they vary together more strongly, which could be influenced by the greater average length of the AI reviews (Table 1).
When comparing the correlation coefficients of the sentiment score and sentiment magnitude respectively, we found that the values in the human peer review dataset are relatively similar to the values in dataset AI review ( Fig. 13 and 14). Just like in the ratings, the correlation values of sentiment score and sentiment magnitude in the AI review dataset are more fluctuating than those in the human peer review, with some having relatively low correlations and others having relatively high correlations. This could indicate that the relationship between elements in AI review are more complex or that there may be some outliers or influential data points affecting the correlations.
In summary, for the rating, review, and sentiment analyses, we found that the average rating generated by human reviews was higher than that produced by AI reviews, with human reviews having a smaller standard deviation, indicating a more consistent review method. Our findings also suggest that human reviews generated a slightly higher average sentiment score than AI reviews, but AI reviews consistently outperformed human reviews in sentiment magnitude, indicating the ability to provide longer and more in-depth feedback. The standard deviations of sentiment score and sentiment magnitude suggest that AI reviews, with their wider range of values, could be a more objective measure of student performance, but not as effective as human reviews. The covariance values for ratings and sentiment magnitude suggest a stronger linear relationship between the variables in AI reviews, while human peer reviews show a weaker or even non-existent relationship. In the case of sentiment score, AI reviews show the weaker or even non-existent relationship, suggesting that sentiment scores in AI reviews do not vary strongly together, while in human peer reviews they vary more strongly. The correlation values for ratings, sentiment score, and sentiment magnitude in AI reviews fluctuate more than those in human peer reviews, indicating a more complex relationship or that there may be some outliers or influential data points affecting the correlations.
Overall, the analysis suggests that human peer reviews outperform AI reviews in terms of accuracy and consistency in the ratings category. However, the results from sentiment score and magnitude suggest that AI reviews could be a valuable tool for providing detailed and informative feedback to students. Therefore, a combination of both may be useful in providing a more comprehensive evaluation of student performance.
3.5c Results: Course Participants' Perspectives on the AI Reviews
The results of the thematic analysis of the participants' textual responses revealed themes connected to both benefits and drawbacks in peer and AI reviews. A summary of the themes resulting from the qualitative analysis and their percentage with respect to the coded segments in all responses is presented in Fig. 15.
Figure 15: Frequency of themes in coded segments
Most course participants believed peer reviews were more beneficial than their AI counterparts, with 7 of them stating that they would prefer only human feedback. These opinions were based on the comprehensive and in-depth nature of peer feedback with respect to essay content, formatting, and stylistic choices: "[The peer review] is not only specific but demonstrates that the reviewer has the entirety of the work and its purpose in mind. I think this is a key distinction between human peer reviews and their current AI counterparts: a human reviewer has a thesis, a guiding idea behind their critique that they form over the course of reviewing the work, while an AI does not" (Participant 4). "During [the] human review, I noticed that the feedback displayed a level of understanding and sensitivity to [the] issue [discussed]. It felt like the person providing feedback had a more holistic view of my work and evaluated it accordingly" (Participant 19).
Another benefit noted was the growth participants had experienced in their own writing as a result of reviewing their peers' work: "Giving a peer-review is time consuming on the reviewer's part, but I also see it [as] being a productive part of the process. I have learned so much reviewing others' work. For starters, reading the broad topics have made me a better learner. I am now a more professional writer" (Participant 13).
These themes were also reflected in the words chosen by the participants to summarize their experiences with peer reviews. Some of the words had a clear positive connotation (e.g., "supportive," "meaningful," "thoughtful," "encouraging," and "helpful") and seemed to focus on the human, emotional aspects of the reviews. Other words had a more negative connotation (e.g., "disappointing," "spotty," and "lacking") and appear to signal the poor quality of, and delays in, the comments received by some course participants. Some terms also emphasized the comprehensiveness and usefulness of human reviews (e.g., "thorough," "specific," "constructive," "targeted," "enlightening," and "purposeful"). Overall, the tone of the words chosen was more positive than negative, confirming the participants' preference for this type of review.
When considering the effectiveness of the AI reviews, the students in this work noted that the comments received had only been useful for identifying "big picture" revisions based on the rubric categories, but they had been limited in terms of meaningful feedback due to the AI's lack of understanding of the context and nuances of academic writing: "I thought the feedback was really vague and too generalized… It could not tell me where I needed to add more details, better structure, or deeper analysis" (Participant 8). "The feedback I received on my work seemed general and could [have] been applied to any number of works. It did a great job summarizing, but not interpreting, my writing" (Participant 12).
Despite these drawbacks, the AI reviews were praised for their speed and overall efficiency as well as for the consistency of the feedback provided, which contrasted with the variability found in the quality of peer comments and their lack of expediency: "I could not believe how quickly the review was created. When I read through the review, I was pleased to find that all the comments were rubric aligned and pointed out… ways to continue to improve" (Participant 30). "AI reviews are… more consistent than human reviews" (Participant 36).
Because of these advantages, five of the participants stated they would choose these types of reviews over peer feedback. Nevertheless, since peer and AI reviews appear to offer different kinds of benefits, most participants thought they could complement each other, instead of one type of feedback being replaced by the other:
"The combination of both a peer and AI review was a nice mixture and balance that provided pragmatic feedback that was used when reassessing my work" (Participant 7). "I feel like human review should not be replaced with AI review but they can work alongside each other" (Participant 26).
The words used by the participants to describe the AI feedback seem to focus on its objective, impersonal, data-driven nature as well as its convenience: "fast," "straight-forward," "immediate," "practical," "satisfying," and "instant." These data suggest that the AI comments were easy to understand, provided clear and actionable information or insights, and instilled a sense of progress or momentum in participants with respect to the development of their work. On the other hand, words like "superficial," "decontextualized," "general," "rough," and "disjointed" convey a sense of insufficiency and lack of substance or detail, as well as a lack of focus, sophistication, or coherence in the reviews. In spite of these weaknesses, most participants appeared to welcome and enjoy this type of review, as long as it did not replace peer reviews but complemented them.
Like the textual responses, the 29 multimodal elements submitted by the participants to describe their experiences with peer and AI reviews highlighted advantages and drawbacks. Additionally, some of the artifacts conveyed students' uneasiness, anxiety, and ambivalent emotions towards the use of AI for educational purposes, while others expressed the opposite, celebrating instead the potential for human-AI collaboration. The semiotic resources in all products were combined in single, non-segregated frames (i.e., all the semiotic resources were presented together in one semiotic space). The prevailing modes of communication were the visual, spatial, and gestural (though a limited number included some textual features).
The main advantage of peer reviews highlighted by the participants' multimodal artifacts was their diverse and collaborative nature. This was achieved through the use of different colors, photos or illustrations of diverse students, spatial and gestural features denoting proximity and equality, and facial expressions and body language conveying dialogue, harmony, and a friendly exchange of ideas. While the diversity of peer reviews appears to have been welcomed by some course participants, other artifacts conveyed more negative emotions: the participants who submitted them seem to have found the diversity of opinions overly subjective and overwhelming (i.e., it was difficult to reconcile comments that differed quite drastically) and objected to the reviewers' unreliability and/or delay in submitting their feedback. One image depicted an arm and a hand holding a pen that act as a visual and gestural synecdoche for the writer, who appears to be buried in paper reviews. The writer, nevertheless, is shown breaking through the reviews, emerging triumphantly, conveyed gesturally by the fist gripping the pen in a gesture symbolizing triumph.

Of the multimodal artifacts submitted by the participants to convey their experiences with non-human reviews, half presented the AI tool as isolated robots with humanoid features, some located in the metaverse (presented in several images as an abstract, dark, and/or vacuous space). These robots exhibit neutral facial expressions and are either looking at a particular point within the setting where they are or staring blankly into the metaverse: there is no visual contact between the figures and the viewer. These visual and gestural elements appear to highlight the impersonal, decontextualized, "cold" tone of the AI reviews, which contrasts with the collaboration and communication emphasized in the artifacts depicting peer reviews.
Other course participants presented images that portrayed the AI as complementary to peer feedback, submitting multimodal artifacts that emphasized AI-human communication and collaboration. These artifacts included photos or illustrations of robots shaking hands with humans or making physical contact with them (e.g., fingers touching). The representations relied on visual and gestural synecdoche, as only one arm, or an arm and a hand, is shown for each figure (i.e., no full-bodied robots or human beings are included in the images chosen), which could be interpreted as a generalized view of this relationship. Also important is the fact that in all images, the technology and human elements are the same size, and though they might originate from opposite points (e.g., left vs. right), they converge on a central point. This could mean that, for these students, AI and peer reviews could complement each other and contribute equally to the improvement of their work.
Some participants expressed their anxiety and ambivalent feelings towards the use of AI in educational contexts. To convey their views, they resorted to images from movies that were either created with a combination of live action and motion-capture computer animation or featured a villainous computer. For example, Participant 4 used an image from the film The Polar Express, whose type of animation has been associated with Mori's concept of the uncanny valley (Russell 2021). Masahiro Mori was a leading Japanese professor of robotics who posited that "a person's response to a humanlike robot would abruptly shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance" (Mori 1970 [2012]: 98). This resonates with Sigmund Freud's famous essay on the uncanny, capturing a moment when doubts are raised in a person's mind "whether an apparently animate being is really alive; or conversely, whether a lifeless object might not be in fact animate" (Freud 1919 [1955]: 226). By choosing an image from a movie that has been connected with this concept, the participant drew parallels between the uneasiness produced by the uncanny valley and their experience with AI.
3.5d Results: Post-Course Survey
The survey data indicated that most students (n=24) felt curious about having their work reviewed by an AI tool, while some (n=14) were excited about using it and only a few students (n=3) stated that they were indifferent about it.
Considering the overall quality and the usefulness rating of the reviews, the human reviews were rated slightly higher in both domains compared to the AI reviews, as Fig. 16 and Fig. 17 show. Regarding the ease of comprehension and implementation of the review suggestions in the revision of their works, the majority of students (n=30 for the AI reviews and n=32 for the human reviews) rated the process as 'very' or 'somewhat' easy, which suggests that neither type of feedback was difficult to incorporate into the learning process. When asked to share the advantages of the AI review, the majority of the participants (n=31) said that the AI feedback provided instant and consistent reviews aligned with the rubric: "Quick feedback, rubric specific feedback" (Participant 30). Some participants (n=11) found it useful that the AI feedback was rubric-based, as it helped them to understand how their work related to the rubric requirements. Participant 43 noted that "you can review the rubrics and criteria" and "generally know what to revise to meet the requirement." Participants also appreciated the impartiality of the AI's evaluation (n=20), as well as its ability to catch macro issues and provide thorough summaries with specific and clear comments, as participant 13 stated: "the details provided by the AI review were very concise yet detailed."
Regarding the disadvantages that participants identified about the AI reviews, most participants (n=27) expressed their concerns about the feedback being too general, lacking the human touch that the peer reviewers could provide, leading to difficulties in identifying specific areas for improvement. For instance, participant 36 pointed out that "AI systems may not be as effective as people in evaluating papers in certain specialized domains...People can identify subtle flaws and provide feedback on how to improve the paper/work project." A few participants also expressed concerns about the accuracy of the AI review (n=7), especially when the AI failed to identify important details in the work. They noted that the AI review "missed some smaller points" (Participant 34) and "was not specific enough in terms of items that needed to be revised" (Participant 38).
Turning to the benefits of human reviews identified in the survey, 19 participants mentioned the importance of receiving specific and constructive feedback from their peers. Participant 14 noted that peer reviewers provided "on point and specific" feedback with insightful recommendations. Many participants (n=26) also appreciated the fresh perspective that peer reviews provided, with participant 28 noting that peer review "reveal[s] blind spots" and offers "diverse feedback." Most participants (n=32) valued the depth and nuance offered by
human reviewers, with participant 36 mentioning that peer reviewers "can identify subtle flaws and provide feedback on how to improve the paper/work project." Concerning the disadvantages of human reviews through CGMap, the most significant insight is that human reviewers are considered subjective and can be inconsistent, influenced by their personal biases and preferences (n=15). For instance, Participant 36 stated that "Human reviewers may have different levels of experience, expertise, and biases, which can lead to inconsistent evaluations of papers/work projects." Another prevalent insight is that there can be a delay in receiving feedback and, sometimes, reviews may not be received at all (n=26); as participant 30 said, "not everyone reviewed, time."
The survey data show that the participants' overall preference leans towards the peer reviews (n=18) or a combination of both peer and AI reviews (n=19). Participants appreciated the specific and actionable feedback provided by their peers, as well as the opportunity to collaborate and receive customized comments, as participant 37 mentioned: "I think the human reviews are written by people who have shared the course experience and understand the demands and limitations of these works." Participants recognized that AI reviews have the potential to provide useful feedback, but there are still limitations in making it nuanced, accurate, and personalized. That is why a combination of both review types was considered the most effective, as the two complement each other, each offsetting the other's disadvantages; as participant 19 highlighted, "the combination of both the human and AI work is helpful in having both general and nuanced feedback for my work".
Discussion
"Large Language Models," so-called, are still language models in Church and Mercer's sense (Church and Mercer 1993): they have no understanding of language. Machine intelligence records characters in binary notation and performs statistical analyses of patterns in these characters. Limited though this is as mechanized intellectual work, the sheer force of numerical modelling and calculation can today achieve feats that are superhuman. In addition to responding to prompts, these models perform second-order operations including summarization, translation, application of instructions, and planning. The capabilities of generative AI are "emergent," not fully explicable even to the engineers who built them.
The power of this unmeaning brute force was clearly evident to the participants in the intervention reported in this paper. The AI reviews come back to students as neatly framed narrative responses. Participants reported that AI reviews were helpfully different from human feedback in a number of ways, even against identical prompts.
One key to the implementation of C-LLMs, we propose, lies in a new domain of human-computer interaction termed "prompt engineering." We outlined in the second section of this paper the serious and intrinsic deficiencies of C-LLMs in the areas of sourcing, facts, theory, ethics, and critical dialogue: crucial epistemic virtues and pedagogical groundings. These deficiencies were confirmed by the course participants in this intervention, themselves experienced educators undertaking graduate programs.
The main strength of C-LLMs is that they are good at spinning into narrative form texts that draw on sometimes, but not necessarily, reliable sources, using possible facts, applying possible theories, and viewing these through the lens of possibly critical analysis. The machine has no way of knowing whether its sources are reliable, factual, or theoretically sound, or whether they survive the rigors of critical interrogation. What we need for reliable knowledge work and good learning is to feed the machine the epistemic virtues of reliable sources and resilient facts, theories, and critical perspectives.
We do this in CGMap in two ways. First, the generative AI is presented with student texts that have already been vetted by peers for these epistemic virtues. Then, second, we use generative AI to provide supplementary narrative reviews through careful prompt engineering. In a sense, we have told the GPT in general terms what to say in response to the specifics of the student texts.
In CGMap, we've set out to develop software that recalibrates C-LLMs to make them as useful as possible for learning. We present these recalibrations as three frames:
1 An Epistemic Frame: prompt the machine to offer students feedback on the basis of a theory of knowledge applicable to their learning. In our experiment, we used narrative elaborations via a rubric framed in terms of the "knowledge processes" of our epistemological theory of learning. CGMap then runs through each piece of student work multiple times, offering narrative feedback framed by epistemological criteria embedded in the rubric.
2 An Empirical Frame: require the learners to bring verifiable facts to the machine. We cannot rely on the C-LLM to respond with anything factual, because it is a "black box" that fails to acknowledge its sources and cannot tell fact from fake. The narrative generated by the prompt is only valid to the extent that it works with the facts that it has been fed, already verified in human peer reviews.
3 An Ontological Frame: bring the theoretical frames of disciplines to the machine. In an extension of our recent work in the area of medical education, we are applying the formal ontologies of biomedical practice to the prompt, not as circumstantially collocated clusters of characters, but as the widely agreed definitions and taxonomically well-formed schemas that define the medical domain. Many academic fields are supported by such schemas, frequently systematized in XML markup in addition to the metadata schemas that drive everyday interoperability across computer applications and the internet (Cope and Kalantzis 2023a). Curriculum standards documents are, in computational terms, discipline-specific ontologies. CGMap has been designed to ingest any well-formed ontological frame.
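As a sketch of what this kind of prompt engineering can look like, the snippet below assembles a rubric-framed review request. The rubric categories follow the "Learning by Design" knowledge processes named in Fig. 4, but the function names, template wording, and criterion texts are illustrative assumptions, not CGMap's actual prompts:

```python
# Illustrative rubric-framed prompt assembly; the wording and criteria below
# are hypothetical, sketched for exposition, not CGMap's production prompts.
RUBRIC = {
    "Experiential": "Does the work connect the known with the new?",
    "Conceptual": "Are key terms defined and connected into theory?",
    "Analytical": "Does the work examine functions and interests critically?",
    "Applied": "Is knowledge applied appropriately and creatively?",
}

def build_review_prompt(student_text: str, category: str) -> str:
    """Frame one review request around a single epistemic criterion."""
    criterion = RUBRIC[category]
    return (
        "You are reviewing a graduate student's work against one rubric criterion.\n"
        f"Criterion ({category}): {criterion}\n"
        "Give narrative feedback on this criterion only. Do not introduce facts "
        "or sources that are not present in the work itself.\n\n"
        f"Student work:\n{student_text}"
    )

# One pass per knowledge process, mirroring CGMap's multiple review passes.
prompts = [build_review_prompt("...student draft...", c) for c in RUBRIC]
```

The design point is that the epistemic frame lives in the prompt, not in the model: each pass constrains the generated narrative to one criterion and to the facts already present in the (peer-vetted) student text.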
To conclude now with two propositions, in tension. On the one hand, the generative AI of C-LLMs is architected in a way that is harmful to education. It undermines some of the key epistemological bases of modern science and reliable knowledge systems. (A separate question is: did the technology have to turn out this way? Our tentative answer is, perhaps not, had it been architected along the lines of the recalibrations we have created in CGMap.)
On the other hand, the allure of C-LLMs is their neatly formed, speedy narrative responses. These characteristics were particularly highlighted by our participants. With epistemic, empirical and ontology-based recalibration, C-LLMs can offer feedback to learners that usefully supplements human feedback. Besides, C-LLMs have "read" massive bodies of digitized text, of considerable value in itself, even if their abilities to distinguish reliable fact and theory and to determine the credibility of sources are non-existent, and their ethical and critical faculties are at best questionable. At least they are interesting interlocutors, thought-provoking even for their untrustworthiness.
So what can C-LLMs be used for? Our answer is: much less than they implicitly purport to do when they respond to a prompt. But now that it's here, generative AI is not going to go away. Attempts to ban it or slow its development are doomed. Purposefully recalibrated, we contend, these stochastic parrots can be put to good use supporting learning, so long as their role is confined to what they can do: spinning discursive narrative, tying independently verified, credible facts, theories, and sources into well-framed discourse.
Like all parrots, what C-LLMs say is only as good as what we tell them to say. To tell generative AI what to say, we educators must now become prompt engineers. And of course, any willing and agreeable interlocutor soon becomes likable. If it can help learning, we may come to like this particular parrot quite a lot.
Fig. 1: CGScholar Project workflow with human and AI reviews.

Fig. 2: One of two peer reviews of version 1 of a course participant's work in CGMap: the work under review is on the left and the rubric-based peer review map is on the right. Two nodes are opened in this screenshot; the rest have been folded closed by the reviewer for visual clarity while building the review map.

Fig. 4: The "Learning by Design" Knowledge Processes: a taxonomy of epistemic activities.

Fig. 5: CGScholar Analytics App, aggregated instructor view for one of the two courses in this intervention. By the point in the course when this screenshot was taken, the 41 participants had met 67% of course performance expectations in the areas of demonstrable knowledge, collaboration (help), and engagement (focus). The app had mined over 870,000 data points measured across 22 data types and provided 3,000 pieces of actionable machine feedback and machine-mediated peer and instructor feedback. Course participants have the same visualization, just tracking their individual progress.

Fig. 6: Sentiment analysis of a peer's feedback on one of the rubric elements (communication).

Fig. 7: Human and AI ratings on 1-5 Likert scale.

Fig. 11: Sentiment scores, human and AI reviews.

Fig. 12: Sentiment magnitude, human and AI reviews.

Fig. 13: Correlation and covariance of sentiment analysis and score.

Fig. 14: Correlation and covariance of sentiment score and sentiment magnitude.

Fig. 16: Overall quality: human vs AI reviews.

Fig. 17: Usefulness rating: human vs AI reviews.
). Cybersecurity: Utilizing an Academic Hub and Spoke Model to Create a National Network of Cybersecurity Institutes, Department of Homeland Security, contract 70RCSA20FR0000103; Infrastructure for Modern Educational Delivery Technologies: A Study for a Nationwide Law Enforcement Training Infrastructure, Department of Homeland Security, contract 15STCIR00001-05-03; Development of a Robust, Nationally Accessible Cybersecurity Risk Management Curriculum for Technical and Managerial Cybersecurity Professionals, Department of Homeland Security, contract 70SAT21G00000012/70RCSA21FR0000115. Medical Informatics: MedLang: A Semantic Awareness Tool in Support of Medical Case Documentation, Jump ARCHES program, Health Care Engineering Systems Center, College of Engineering, contracts P179, P279, P288.
Table 1: Extent of the textual corpus: student works, peer reviews, and AI reviews.

Table 2: Human and AI ratings on review criteria.
Table 3: Human and AI reviews: readability level measures as a proxy for complexity in academic language.

                     Human Review   AI Review
Mean                 7.80           15.80
Median               7.95           16.25
Maximum              15.42          18.03
Standard Deviation   2.71           2.43

3.5b Results: Sentiment Comparison, Human and AI Reviews

Sentiment analysis of the text of the human and AI reviews are presented in
Table 4: Sentiment Score on review criteria.

Table 5: Sentiment Magnitude on review criteria.
Tzirides, Anastasia Olga (Olnancy), Gabriela Zapata, Akash Saini, Duane Searsmith, Bill Cope, Mary Kalantzis, Vania Castro, Theodora Kourkoulou, John Jones, Rodrigo Abrantes da Silva, Jen Whiting and Nikoleta Polyxeni Kastania, "Generative AI: Implications and Applications for Education," arXiv, 2305.07605, 2023, doi: https://doi.org/10.48550/arXiv.2305.07605.
As well as faculty and doctoral students in Cope and Kalantzis' Learning Design and Leadership program at the university, the Cyber-Social Learning Laboratory hosted visitors who, during the period of the intervention described here, came from the United Kingdom, Greece, Malta, Taiwan, China, and Brazil.
Ashby, W. Ross, An Introduction to Cybernetics, London: Chapman & Hall, 1956.

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?," pp. 610-23 in FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021.

Braun, Virginia and Victoria Clarke, "Using Thematic Analysis in Psychology," Qualitative Research in Psychology, 3(2):77-101, 2006, doi: https://doi.org/10.1191/1478088706qp063oa.

Buruk, Oğuz 'Oz', "Academic Writing with GPT-3.5: Reflections on Practices, Efficacy and Transparency," EdArXiv, 2023, doi: https://doi.org/10.35542/osf.io/e3yfk.

Chomsky, Noam, "Three Models for the Description of Language," IRE Transactions on Information Theory, 2(3):113-24, 1956.

Chomsky, Noam, Ian Roberts and Jeffrey Watumull, "The False Promise of ChatGPT," New York Times, 2023.

Church, Kenneth W. and Robert L. Mercer, "Introduction to the Special Issue on Computational Linguistics Using Large Corpora," Computational Linguistics, 19(1):1-24, 1993.

Cope, Bill and Mary Kalantzis, "The Things You Do to Know: An Introduction to the Pedagogy of Multiliteracies," pp. 1-36 in A Pedagogy of Multiliteracies: Learning by Design, edited by Bill Cope and Mary Kalantzis, London: Palgrave, 2015.

Cope, Bill and Mary Kalantzis, Making Sense: Reference, Agency and Structure in a Grammar of Multimodal Meaning, Cambridge UK: Cambridge University Press, 2020, doi: https://doi.org/10.1017/9781316459645.

Cope, Bill, Mary Kalantzis and Duane Searsmith, "Artificial Intelligence for Education: Knowledge and its Assessment in AI-enabled Learning Ecologies," Educational Philosophy and Theory, 53(12):1229-45, 2021, doi: https://doi.org/10.1080/00131857.2020.1728732.

Cope, Bill and Mary Kalantzis, "Artificial Intelligence in the Long View: From Mechanical Intelligence to Cyber-social Systems," Discover Artificial Intelligence, 2(13):1-18, 2022, doi: https://doi.org/10.1007/s44163-022-00029-1.

Cope, Bill, Mary Kalantzis, ChengXiang Zhai, Andrea Krussel, Duane Searsmith, Duncan Ferguson, Richard Tapping and Yerko Berrocal, "Maps of Medical Reason: Applying Knowledge Graphs and Artificial Intelligence in Medical Education and Practice," pp. 133-59 in Bioinformational Philosophy and Postdigital Knowledge Ecologies, edited by Michael Peters, Petar Jandrić and Sarah Hayes, Cham CH: Springer, 2022, doi: https://doi.org/10.1007/978-3-030-95006-4_8.

Cope, Bill and Mary Kalantzis, "The Clause, Revised: On Cyber-Social Meaning," International Journal of Communication and Linguistic Studies, 22, 2023a.

Cope, Bill and Mary Kalantzis, "Platformed Learning: Reshaping Education in the Era of Learning Management Systems," in Varieties of Platformisation: Critical Perspectives on EdTech in Higher Education, edited by Duncan A. Thomas and Vito Laterza, London: Palgrave Macmillan, 2023b.

Cope, Bill and Mary Kalantzis, "Towards Education Justice: Multiliteracies Revisited," in Multiliteracies in International Educational Contexts: Towards Education Justice?, edited by Bill Cope, Mary Kalantzis and Gabriela C. Zapata, London: Routledge, 2023c.

Dai, Wei, Jionghao Lin, Flora Jin, Tongguang Li, Yi-Shan Tsai, Dragan Gašević and Guanliang Chen, "Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT," EdArXiv, 2023, doi: https://doi.org/10.35542/osf.io/hcgzj.

Freud, Sigmund, "The Uncanny," pp. 217-52 in The Standard Edition of the Complete Psychological Works of Sigmund Freud, edited by James Strachey, London: The Hogarth Press, 1919 [1955].

Gilson, Aidan, Conrad Safranek, Thomas Huang, Vimig Socrates, Ling Chi, R. Andrew Taylor and David Chartash, "How Does ChatGPT Perform on the Medical Licensing Exams? The Implications of Large Language Models for Medical Education and Knowledge Assessment," JMIR Medical Education, 2023, doi: https://doi.org/10.2196/45312.

Glaser, Barney G., "The Constant Comparative Method of Qualitative Analysis," Social Problems, 12(4):436-45, 1965, doi: https://doi.org/10.2307/798843.

Grafton, Anthony, The Footnote: A Curious History, London UK: Faber and Faber, 1997.

Kalantzis, Mary and Bill Cope, Adding Sense: Context and Interest in a Grammar of Multimodal Meaning, Cambridge UK: Cambridge University Press, 2020, doi: https://doi.org/10.1017/9781108862059.

Katz, Daniel Martin, Michael James Bommarito, Shang Gao and Pablo Arredondo, "GPT-4 Passes the Bar Exam," SSRN, 2023, doi: https://dx.doi.org/10.2139/ssrn.4389233.

Kress, Gunther and Theo van Leeuwen, Reading Images: The Grammar of Visual Design, London: Routledge, 1996 [2021].

Kung, Tiffany H., Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo and Victor Tseng, "Performance of ChatGPT on USMLE: Potential for AI-assisted Medical Education Using Large Language Models," PLOS Digital Health, 2(2):e0000198, 2023, doi: https://doi.org/10.1371/journal.pdig.0000198.

Lane, H. Chad, Gordon McCalla, Chee-Kit Looi and Susan Bull, "Preface to the IJAIED 25th Anniversary Issue, Part 2: The Next 25 Years: How Advanced Interactive Learning Technologies will Change the World," International Journal of Artificial Intelligence in Education, 26(2):539-43, 2016.
Teaching as a Design Science: Building Pedagogical Patterns for Learning and Technology. Diana Laurillard, RoutledgeLondonLaurillard, Diana, Teaching as a Design Science: Building Pedagogical Patterns for Learning and Technology, London: Routledge, 2012.
Designing a Realistic Peer-like Embodied Conversational Agent for Supporting Children's Storytelling. Zhixin Li, Ying Xu, 10.48550/arXiv.2304.09399arXiv, 2304.093992023Li, Zhixin and Ying Xu, "Designing a Realistic Peer-like Embodied Conversational Agent for Supporting Children's Storytelling," arXiv, 2304.09399, 2023, doi: https://doi.org/10.48550/arXiv.2304.09399.
A Metalanguage for Learning: Rebalancing the Cognitive with the Socio-Material. Fei Lim, Bill Victor, Mary Cope, Kalantzis, 10.3389/fcomm.2022.830613Frontiers in Communication. 7830613Lim, Fei Victor, Bill Cope and Mary Kalantzis, "A Metalanguage for Learning: Rebalancing the Cognitive with the Socio-Material," Frontiers in Communication, 7(Article 830613):1- 15, 2022, doi: http://doi.org/10.3389/fcomm.2022.830613.
ArguGPT: Evaluating, Understanding and Identifying Argumentative Essays Generated by GPT Models. Yikang Liu, Ziyin Zhang, Wanyang Zhang, Shisen Yue, Xiaojing Zhao, Xinyuan Cheng, Hai Hu Yiwen, Zhang, 10.48550/arXiv.2304.076662304.07666 2023arXivLiu, Yikang, Ziyin Zhang, Wanyang Zhang, Shisen Yue, Xiaojing Zhao, Xinyuan Cheng and Hai Hu Yiwen Zhang, "ArguGPT: Evaluating, Understanding and Identifying Argumentative Essays Generated by GPT Models," arXiv, 2304.07666 2023, doi: https://doi.org/10.48550/arXiv.2304.07666.
The Future of Education in the 21st Century. Rosemary Luckin, Machine Learning, Human Intelligence, UCL Institute of Education PressLuckin, Rosemary, Machine Learning and Human Intelligence: The Future of Education in the 21st Century: UCL Institute of Education Press, 2018.
Analyzing Sentiments in Peer Review Reports: Evidence from Two Science Funding Agencies. Junwen Luo, Thomas Feliciani, Martin Reinhart, Judith Hartstein, Vineeth Das, Olalere Alabi, Kalpana Shankar, 10.1162/qss_a_00156Quantitative Science Studies. 24Luo, Junwen, Thomas Feliciani, Martin Reinhart, Judith Hartstein, Vineeth Das, Olalere Alabi and Kalpana Shankar, "Analyzing Sentiments in Peer Review Reports: Evidence from Two Science Funding Agencies," Quantitative Science Studies, 2(4):1271-95, 2022, doi: https://doi.org/10.1162/qss_a_00156.
Liam Magee, Vanicka Arora, Luke Munn, 10.48550/arXiv.2212.05058arXiv, 2212.05058Structured Like a Language Model: Analysing AI as an Automated Subject. 2022Magee, Liam, Vanicka Arora and Luke Munn, "Structured Like a Language Model: Analysing AI as an Automated Subject," arXiv, 2212.05058, 2022, doi: https://doi.org/10.48550/arXiv.2212.05058.
Rethinking the Entwinement Between Artificial Intelligence and Human Learning: What Capabilities Do Learners Need for a World with AI?. Lina Markauskaite, Rebecca Marrone, Oleksandra Poquet, Simon Knight, Roberto Martinez-Maldonado, Sarah Howard, Jo Tondeur, Maarten De Laat, Simon Buckingham Shum, Dragan Gašević, George Siemens, 10.1016/j.caeai.2022.100056Computers and Education: Artificial Intelligence. 3Markauskaite, Lina, Rebecca Marrone, Oleksandra Poquet, Simon Knight, Roberto Martinez- Maldonado, Sarah Howard, Jo Tondeur, Maarten De Laat, Simon Buckingham Shum, Dragan Gašević and George Siemens, "Rethinking the Entwinement Between Artificial Intelligence and Human Learning: What Capabilities Do Learners Need for a World with AI?," Computers and Education: Artificial Intelligence, 3:1-16, 2022, doi: https://doi.org/10.1016/j.caeai.2022.100056.
GPTeach: Interactive TA Training with GPT Based Students. Julia M Markel, G Steven, James A Opferman, Chris Landay, Piech, 10.35542/osf.io/r23bu2023Markel, Julia M., Steven G. Opferman, James A. Landay and Chris Piech, "GPTeach: Interactive TA Training with GPT Based Students," EdArXiv, 2023, doi: https://doi.org/10.35542/osf.io/r23bu.
A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. John Mccarthy, Marvin L Minsky, Nathaniel Rochester, Claude E Shannon, McCarthy, John, Marvin L. Minsky, Nathaniel Rochester and Claude E. Shannon, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, 1955.
The Uncanny Valley. Masahiro Mori, 10.1109/MRA.2012.2192811IEEE Robotics & Automation Magazine. 192Mori, Masahiro, "The Uncanny Valley," IEEE Robotics & Automation Magazine, 19(2):98-100, 1970 [2012], doi: https://doi.org/10.1109/MRA.2012.2192811.
Luke Munn, Liam Magee, Vanicka Arora, 10.48550/arXiv.2301.12066Truth Machines: Synthesizing Veracity in AI Language Models. arXivMunn, Luke, Liam Magee and Vanicka Arora, "Truth Machines: Synthesizing Veracity in AI Language Models," arXiv, 2301.12066, 2023, doi: https://doi.org/10.48550/arXiv.2301.12066.
A Pedagogy of Multiliteracies: Designing Social Futures. New London Group, 10.17763/haer.66.1.17370n67v22j160uHarvard Educational Review. 661New London Group, "A Pedagogy of Multiliteracies: Designing Social Futures," Harvard Educational Review, 66(1):60-92, 1996, doi: https://doi.org/10.17763/haer.66.1.17370n67v22j160u.
Sentiment Analysis of Student Evaluations of Teaching. Heather Newman, David Joyner, 10.1007/978-3-319-93846-2_45the International Conference on Artificial Intelligence in Education. Newman, Heather and David Joyner, "Sentiment Analysis of Student Evaluations of Teaching," paper presented at the International Conference on Artificial Intelligence in Education, Cham CH, 2018, doi: https://doi.org/10.1007/978-3-319-93846-2_45.
The Quest for Artificial Intelligence: A History of Ideas and Achievements. Nils J Nilsson, Cambridge University PressCambridge UKNilsson, Nils J., The Quest for Artificial Intelligence: A History of Ideas and Achievements, Cambridge UK: Cambridge University Press, 2009.
Visualizing Revision: Leveraging Student-Generated Between-Draft Diagramming Data in Support of Academic Writing Development. Justin Olmanson, Katrina Kennett, Sarah J Mccarthey, Duane Searsmith, Bill Cope, Mary Kalantzis, 10.1007/s10758-015-9265-5Technology, Knowledge and Learning. 211Olmanson, Justin, Katrina Kennett, Sarah J. McCarthey, Duane Searsmith, Bill Cope and Mary Kalantzis, "Visualizing Revision: Leveraging Student-Generated Between-Draft Diagramming Data in Support of Academic Writing Development," Technology, Knowledge and Learning, 21(1):99-123, 2016, doi: https://doi.org/10.1007/s10758-015- 9265-5.
Knowing what Students Know: The Science and Design of Educational Assessment. James W Pellegrino, Naomi Chudowsky, Robert Glaser, National Academies PressWashington DCPellegrino, James W., Naomi Chudowsky and Robert Glaser, eds, Knowing what Students Know: The Science and Design of Educational Assessment, Washington DC: National Academies Press, 2001.
Data Mining in Education. Cristobal Romero, Sebastian Ventura, 10.1002/widm.1075WIREs Data Mining and Knowledge Discovery. 32Romero, Cristobal and Sebastian Ventura, "Data Mining in Education," WIREs Data Mining and Knowledge Discovery, 3(2):12-27, 2012, doi: https://doi.org/10.1002/widm.1075.
The Disturbing Uncanny Valley of Robert Zemeckis Film 'Polar Express. Calum Russell, Far Out. Russell, Calum, "The Disturbing Uncanny Valley of Robert Zemeckis Film 'Polar Express'," Far Out, 2021:https://faroutmagazine.co.uk/the-disturbing-valley-robert-zemeckis-polar- express/.
Inside the Secret List of Websites that Make AI Like ChatGPT Sound Smart. Kevin Schaul, Yu Szu, Nitasha Chen, Tiku, Washington PostSchaul, Kevin, Szu Yu Chen and Nitasha Tiku. 2023. "Inside the Secret List of Websites that Make AI Like ChatGPT Sound Smart." in Washington Post.
Automated Essay Writing: An AIED Opinion. Klaus Schwab, 10.1007/s40593-022-00300-7International Journal of Artificial Intelligence in Education. 32Geneva CH: World Economic ForumSchwab, Klaus, The Fourth Industrial Revolution, Geneva CH: World Economic Forum, 2016. Sharples, Mike, "Automated Essay Writing: An AIED Opinion," International Journal of Artificial Intelligence in Education, 32:1119-26, 2022, doi: https://doi.org/10.1007/s40593-022-00300-7.
The AI Teacher Test: Measuring the Pedagogical Ability of Blender and GPT-3 in Educational Dialogues. Anaïs Tack, Chris Piech, 10.48550/arXiv.2205.07540arXivTack, Anaïs and Chris Piech, "The AI Teacher Test: Measuring the Pedagogical Ability of Blender and GPT-3 in Educational Dialogues," arXiv, 2205.07540, 2022, doi: https://doi.org/10.48550/arXiv.2205.07540.
Google Fired Engineer Who Said its AI was Sentient. Nitasha Tiku, Washingto PostTiku, Nitasha. 2022. "Google Fired Engineer Who Said its AI was Sentient." in Washingto Post.
ChatGPT: Challenges, Opportunities, and Implications for Teacher Education. Torrey Trust, Jeromie Whalen, Chrystalla Mouza, Contemporary Issues in Technology and Teacher Education. 12023Trust, Torrey, Jeromie Whalen and Chrystalla Mouza, "ChatGPT: Challenges, Opportunities, and Implications for Teacher Education," Contemporary Issues in Technology and Teacher Education, 1, 2023.
Computing Machinery and Intelligence. A M Turing, 59MindTuring, A.M., "Computing Machinery and Intelligence," Mind, 59:433-60, 1950.
Prof: Alan Turing Decoded, A Biography. Dermot Turing, The History PressBrimscombe Port UKTuring, Dermot, Prof: Alan Turing Decoded, A Biography, Brimscombe Port UK: The History Press, 2015.
Cyber-Social Research: Emerging Paradigms for Interventionist Education Research in the Postdigital Era. Michael Twidale, Preben Hansen, ; Tzirides, Anastasia O Akash, K Saini, Bill Cope, Mary Kalantzis, Duane Searsmith, First Monday. Petar Jandrić, Alison MacKenzie and Jeremy Knox241RoutledgeAgile ResearchTwidale, Michael and Preben Hansen, "Agile Research," First Monday, 24(1):1-18, 2019. Tzirides, Anastasia O., Akash K. Saini, Bill Cope, Mary Kalantzis and Duane Searsmith, "Cyber-Social Research: Emerging Paradigms for Interventionist Education Research in the Postdigital Era," in Postdigital Research edited by Petar Jandrić, Alison MacKenzie and Jeremy Knox, Cham CH: Springer, 2023. van Leeuwen, Theo, Introducing Social Semiotics, London UK: Routledge, 2005.
Sentiment Analysis of Peer Review Texts for Scholarly Papers. Ke Wang, Xiaojun Wan, 10.1145/3209978.3210056The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval. 18paper presented at the IGIRWang, Ke and Xiaojun Wan, "Sentiment Analysis of Peer Review Texts for Scholarly Papers," paper presented at the IGIR '18: The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval,, July 8-12, Ann Arbor MI, 2018, doi: https://doi.org/10.1145/3209978.3210056.
ELIZA-A Computer Program for the Study of Natural Language Communication Between Man and Machine. Joseph Weizenbaum, Communications of the ACM. 91Weizenbaum, Joseph, "ELIZA-A Computer Program for the Study of Natural Language Communication Between Man and Machine," Communications of the ACM, 9(1):36-45, 1966.
Computer Power and Human Reason: From Judgment to Calculation. Joseph Weizenbaum, W.H. FreemanSan Francisco CAWeizenbaum, Joseph, Computer Power and Human Reason: From Judgment to Calculation, San Francisco CA: W.H. Freeman, 1976.
. Gavin Wood, What is Web 3.0?".Wood, Gavin. 2014, "What is Web 3.0?". (http://www.gavwood.com/web3lt.html).
From Conformal Haag-Kastler Nets to Wightman Functions

Martin Jörß ([email protected])
II. Institut für theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg

August 1996 (arXiv: hep-th/9609020)

Abstract: Starting from a chiral conformal Haag-Kastler net on 2 dimensional Minkowski space we present a canonical construction that leads to a complete set of conformally covariant N-point-functions fulfilling the Wightman axioms. Our method consists of an explicit use of the representation theory of the universal covering group of SL(2, R) combined with a generalization of the conformal cluster theorem to N-point-functions [FrJ]. This paper continues work done in [FrJ] and [Jör3].
Introduction
The formulation of quantum field theory in terms of Haag-Kastler nets of local observable algebras ("local quantum physics" [Haag]) has turned out to be well suited for the investigation of general structures. Discussion of concrete models, however, is mostly done in terms of pointlike localized fields.
In order to be in a precise mathematical framework, these fields might be assumed to obey the Wightman axioms [StW]. Even then, the interrelation between both concepts is not yet completely understood (see [BaW, BoY] for the present stage).
Heuristically, Wightman fields are constructed out of Haag-Kastler nets by some scaling limit which, however, is difficult to formulate in an intrinsic way [Buc2]. In a dilation invariant theory scaling is well defined, and in the presence of massless particles the construction of a pointlike field was performed in [BuF].
Here, we study the possibly simplest situation: Haag-Kastler nets in 2 dimensional Minkowski space with trivial translations in one light cone direction ("chirality") and covariant under the real Möbius group which acts on the other lightlike direction.
In [FrJ], it has been shown that in the vacuum representation pointlike localized fields can be constructed. Their smeared linear combinations are affiliated to the original net and generate it. We do not know at the moment whether they satisfy all Wightman axioms, since we have not yet found an invariant domain of definition.
In [Jör3], we have generalized this to the charged sectors of a theory. We have constructed pointlike localized fields carrying arbitrary charge with finite statistics and therefore intertwining between the different superselection sectors of the theory. (In Conformal Field Theory these objects are known as "Vertex Operators".) We have obtained the unbounded field operators as limits of elements of the reduced field bundle [FRS1,FRS2] associated to the net of observables of the theory.
In this paper, we start again from chiral conformal Haag-Kastler nets and present a canonical construction of N-point-functions that can be shown to fulfill the Wightman axioms. We proceed by generalizing the conformal cluster theorem to higher N-point-functions and by examining the momentum space limit of the algebraic N-point-functions at p = 0.
We are not able to prove that these Wightman fields can be identified with the pointlike localized fields constructed in [FrJ] and [Jör3].
First Steps
In this section, we give an explicit formulation of the setting from which this work starts. We then present the proof of the conformal cluster theorem and the results on the construction of pointlike localized fields in [FrJ] and [Jör3].
Assumptions
Let $A = (A(I))_{I \in K_0}$ be a family of von Neumann algebras on some separable Hilbert space $H$. $K_0$ denotes the set of nonempty bounded open intervals on $\mathbb{R}$. $A$ is assumed to satisfy the following conditions.

i) Isotony:
$$A(I_1) \subset A(I_2) \quad \text{for } I_1 \subset I_2,\ I_1, I_2 \in K_0. \tag{1}$$

ii) Locality:
$$A(I_1) \subset A(I_2)' \quad \text{for } I_1 \cap I_2 = \emptyset,\ I_1, I_2 \in K_0 \tag{2}$$
($A(I_2)'$ is the commutant of $A(I_2)$).
iii) There exists a strongly continuous unitary representation $U$ of $G = SL(2,\mathbb{R})$ in $H$ with $U(-1) = \mathbf{1}$ and
$$U(g)\, A(I)\, U(g)^{-1} = A(gI), \quad I, gI \in K_0 \tag{3}$$
($SL(2,\mathbb{R}) \ni g = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ acts on $\mathbb{R} \cup \{\infty\}$ by $x \mapsto \frac{ax+b}{cx+d}$ with the appropriate interpretation for $x, gx = \infty$).
iv) The conformal Hamiltonian H, which generates the restriction of U to SO(2), has a nonnegative spectrum.
v) There is a unique (up to a phase) U-invariant unit vector Ω ∈ H .
vi) H is the smallest closed subspace containing the vacuum Ω which is invariant under U(g), g ∈ SL(2, R), and A ∈ A(I), I ∈ K 0 ("cyclicity").
It is convenient to extend the net to intervals $I$ on the circle $S^1 = \mathbb{R} \cup \{\infty\}$ by setting
$$A(I) = U(g)\, A(g^{-1} I)\, U(g)^{-1}, \quad g^{-1} I \in K_0,\ g \in SL(2,\mathbb{R}). \tag{4}$$
The covariance property guarantees that A(I) is well defined for all intervals I of the form I = gI 0 , I 0 ∈ K 0 , g ∈ SL(2, R), i.e. for all nonempty nondense open intervals on S 1 (we denote the set of these intervals by K).
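As a concrete aside, the fractional linear action underlying (3) and (4) is easy to experiment with numerically. The sketch below is our own illustration (the helper names `moebius` and `matmul` are hypothetical, not from the paper); the point at infinity is encoded as `None`:

```python
def moebius(g, x):
    """Action of g = ((a, b), (c, d)) in SL(2, R) on R ∪ {∞}; ∞ is encoded as None."""
    (a, b), (c, d) = g
    if x is None:                 # g(∞) = a/c, with ∞ again if c = 0
        return None if c == 0 else a / c
    den = c * x + d
    if den == 0:                  # the pole of the map is sent to ∞
        return None
    return (a * x + b) / den

def matmul(g, h):
    """2x2 matrix product, so that moebius(matmul(g, h), x) = moebius(g, moebius(h, x))."""
    (a, b), (c, d) = g
    (e, f), (p, q) = h
    return ((a * e + b * p, a * f + b * q), (c * e + d * p, c * f + d * q))
```

The group property (composition of the maps equals matrix multiplication) is exactly what makes the extension (4) well defined.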
Conformal Cluster Theorem
In this subsection, we derive a bound on conformal two-point-functions in algebraic quantum field theory (see [FrJ]). This bound specifies the decrease properties of conformal two-point-functions in the algebraic framework to be exactly those known from theories with pointlike localization. The Conformal Cluster Theorem plays a central role in this work.
Conformal Cluster Theorem (see [FrJ]): Let $(A(I))_{I \in K_0}$ be a conformally covariant local net on $\mathbb{R}$. Let $a, b, c, d \in \mathbb{R}$ and $a < b < c < d$. Let $A \in A((a,b))$, $B \in A((c,d))$, $n \in \mathbb{N}$ and $P_k A\Omega = P_k A^*\Omega = 0$, $k < n$. $P_k$ here denotes the projection on the subrepresentation of $U(G)$ with conformal dimension $k$. We then have
$$|(\,\Omega, BA\Omega\,)| \le \left( \frac{(b-a)(d-c)}{(c-a)(d-b)} \right)^{n} \|A\|\,\|B\|. \tag{5}$$
Proof: Choose $R > 0$. We consider the following 1-parameter subgroup of $G = SL(2,\mathbb{R})$:
$$g_t : x \longmapsto \frac{x \cos\frac{t}{2} + R \sin\frac{t}{2}}{-\frac{x}{R} \sin\frac{t}{2} + \cos\frac{t}{2}}. \tag{6}$$
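It may help to see concretely that this defines a one-parameter group. Writing $g_t$ as the matrix with entries $\cos\frac{t}{2}$, $R\sin\frac{t}{2}$, $-\frac{1}{R}\sin\frac{t}{2}$, $\cos\frac{t}{2}$, one can verify $M_s M_t = M_{s+t}$ and $\det M_t = 1$ numerically (a sketch with hypothetical names, not part of the proof):

```python
import math

def g_matrix(t, R):
    """SL(2, R) matrix implementing the map g_t of (6)."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, R * s], [-s / R, c]]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

The matrices are a rotation by $t/2$ conjugated by $\mathrm{diag}(\sqrt{R}, 1/\sqrt{R})$, which explains why the generator $H_R$ is unitarily equivalent to the conformal Hamiltonian within each subrepresentation.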
Its generator $H_R$ is within each subrepresentation of $U(G)$ unitarily equivalent to the conformal Hamiltonian $H$. Therefore, the spectrum of $A\Omega$ and $A^*\Omega$ w.r.t. $H_R$ is bounded below by $n$. Let $0 < t_0 < t_1 < 2\pi$ such that $g_{t_0}(b) = c$ and $g_{t_1}(a) = d$. We now define
$$F(z) = \begin{cases} (\,\Omega, B\, z^{-H_R} A\Omega\,) & |z| > 1 \\ (\,\Omega, A\, z^{H_R} B\Omega\,) & |z| < 1 \\ (\,\Omega, A\, \alpha_{g_t}(B)\, \Omega\,) & z = e^{it},\ t \in [t_0, t_1], \end{cases} \tag{7}$$
a function analytic in its domain of definition, and then
$$G(z) = (z - z_0)^n (z^{-1} - z_0^{-1})^n F(z), \quad z_0 = e^{\frac{i}{2}(t_0 + t_1)}. \tag{8}$$
(Confer the idea in [Fre].) At $z = 0$ and $z = \infty$ the function $G(\cdot)$ is bounded because of the bound on the spectrum of $H_R$ and can therefore be analytically continued. As an analytic function it reaches its maximum at the boundary of its domain of definition, which is the interval $[e^{it_0}, e^{it_1}]$ on the unit circle:
$$\sup |G(z)| \le \|A\|\,\|B\|\, \big| e^{it_0} - e^{\frac{i}{2}(t_0+t_1)} \big|^{2n} = \|A\|\,\|B\|\, \Big( 2 \sin\frac{t_0 - t_1}{4} \Big)^{2n}. \tag{9}$$
This leads to
$$|(\,\Omega, BA\Omega\,)| = |F(1)| = |G(1)|\, \big| 1 - e^{\frac{i}{2}(t_0+t_1)} \big|^{-2n} = |G(1)|\, \Big( 2 \sin\frac{t_0 + t_1}{4} \Big)^{-2n} \le \sup|G|\, \Big( 2 \sin\frac{t_0 + t_1}{4} \Big)^{-2n} \le \|A\|\,\|B\|\, \left( \frac{\sin\frac{t_0 - t_1}{4}}{\sin\frac{t_0 + t_1}{4}} \right)^{2n}. \tag{10}$$
Determining $t_0$ and $t_1$ we obtain
$$\lim_{R \to \infty} R\, t_0 = 2(c - b) \quad \text{and} \quad \lim_{R \to \infty} R\, t_1 = 2(d - a). \tag{11}$$
We now assume $a - b = c - d$ and find
$$\left( \frac{t_0 - t_1}{t_0 + t_1} \right)^2 = \frac{(a-b)(c-d)}{(a-c)(b-d)} =: x.$$
Since the bound on |( Ω, BAΩ )| can only depend on the conformal cross ratio x, we can drop the assumption and the theorem is proven. ✷
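The last step can be checked by elementary arithmetic: substituting the limits $R\,t_0 \to 2(c-b)$ and $R\,t_1 \to 2(d-a)$ of (11) into the squared ratio $((t_0 - t_1)/(t_0 + t_1))^2$ reproduces the cross ratio $x$ for intervals of equal length. A minimal numeric sketch (function name hypothetical):

```python
def cross_ratio_limit_check(a, b, c, d):
    """Compare ((t0 - t1)/(t0 + t1))^2, with R*t0 = 2(c - b) and R*t1 = 2(d - a)
    as in (11), against the cross ratio x = (a-b)(c-d)/((a-c)(b-d)).
    Agreement is expected for equal interval lengths b - a = d - c."""
    t0, t1 = 2 * (c - b), 2 * (d - a)          # the common factor 1/R cancels
    ratio_sq = ((t0 - t1) / (t0 + t1)) ** 2
    x = ((a - b) * (c - d)) / ((a - c) * (b - d))
    return ratio_sq, x
```

For instance, the intervals $(0,1)$ and $(3,4)$ give both quantities equal to $1/9$.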
The Construction of Pointlike Localized Fields from Conformal Haag-Kastler Nets
This subsection presents the argumentation and results of [FrJ] and [Jör3]:
The idea for the definition of conformal fields is the following: Let $A$ be a local observable,
$$A \in \bigcup_{I \in K_0} A(I), \tag{12}$$
and $P_\tau$ the projection onto an irreducible subrepresentation $\tau$ of $U$. The vector $P_\tau A\Omega$ may then be thought of as $\varphi_\tau(h)\,\Omega$ where $\varphi_\tau$ is a conformal field of dimension $n_\tau =: n$ and $h$ is an appropriate function on $\mathbb{R}$. The relation between $A$ and $h$, however, is unknown at the moment, up to the known transformation properties under $G$,
$$U(g)\, P_\tau A\Omega = \varphi_\tau(h_g^{(n)})\, \Omega \tag{13}$$
with $h_g^{(n)}(x) = (cx - a)^{2n-2}\, h\!\left( \frac{dx-b}{-cx+a} \right)$, $g = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in G$.
We may now scale the vector $P_\tau A\Omega$ by dilations $D(\lambda) = U\!\begin{pmatrix} \lambda^{1/2} & 0 \\ 0 & \lambda^{-1/2} \end{pmatrix}$ and find
$$D(\lambda)\, P_\tau A\Omega = \lambda^n\, \varphi_\tau(h_\lambda)\, \Omega \tag{14}$$
where $h_\lambda(x) = \lambda^{-1} h(\frac{x}{\lambda})$. Hence, we obtain formally for $\lambda \downarrow 0$
$$\lambda^{-n} D(\lambda)\, P_\tau A\Omega \longrightarrow \int dx\, h(x)\; \varphi_\tau(0)\, \Omega. \tag{15}$$
In order to obtain a Hilbert space vector in the limit, we smear over the group of translations $T(a) = U\!\begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}$ with some test function $f$ and obtain formally
$$\lim_{\lambda \downarrow 0} \lambda^{-n} \int da\, f(a)\, T(a)\, D(\lambda)\, P_\tau A\Omega = \int dx\, h(x)\; \varphi_\tau(f)\, \Omega. \tag{16}$$
We now interpret the left-hand side as a definition of a conformal field $\varphi_\tau$ on the vacuum, and try to obtain densely defined operators with the correct localization by defining
$$\varphi_\tau^I(f)\, A'\Omega = A'\, \varphi_\tau^I(f)\, \Omega, \quad f \in \mathcal{D}(I),\ A' \in A(I)',\ I \in K. \tag{17}$$
In order to make this formal construction meaningful, there are two problems to overcome. The first one is the fact that the limit on the left-hand side of (16) does not exist in general if AΩ is replaced by an arbitrary vector in H. This corresponds to the possibility that the function h on the right-hand side might not be integrable. We will show that after smearing the operator A with a smooth function on G, the limit is well defined. Such operators will be called regularized. The second problem is to show that the smeared field operators ϕ I τ (f ) are closable, in spite of the nonlocal nature of the projections P τ .
We omit the technical parts of [FrJ] and [Jör3] and summarize the results in a compact form and as general as possible.
Due to the positivity condition, the representation U(G) is completely reducible into irreducible subrepresentations and the irreducible components τ are up to equivalence uniquely characterized by the conformal dimension n τ ∈ R + (n τ is the lower bound of the spectrum of the conformal Hamiltonian H in the representation τ ).
Associated with each irreducible subrepresentation τ of U we find for each I ∈ K 0 a densely defined operator-valued distribution ϕ I τ on the space D(I) of Schwartz functions with support in I such that the following statements hold for all f ∈ D(I).
i) The domain of definition of $\varphi_\tau^I(f)$ is given by $A(I')\,\Omega$.

ii)
$$\varphi_\tau^I(f)\,\Omega \in P_\tau H_{\mathrm{red}} \tag{18}$$
with $P_\tau$ denoting the projector on the module of $\tau$.
iii)
$$U(\tilde g)\, \varphi_\tau^I(x)\, U(\tilde g)^{-1} = (cx + d)^{-2 n_\tau}\, \varphi_\tau^{gI}(gx) \tag{19}$$
with the covering projection $\tilde g \to g$ and $g = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{R})$, $I, gI \in K_0$.

iv) $\varphi_\tau^I(f)$ is closable.
v) The closure of ϕ I τ (f ), f ∈ D(I), is affiliated to A(I).
vi) A(I) is the smallest von Neumann algebra to which all operators ϕ I τ (f ), f ∈ D(I), are affiliated.
vii) The exchange algebra of the reduced field bundle [FRS2] and the existence of the closed field operators ϕ I τ (f ), mapping a dense set of the vacuum Hilbert space into some charged sector with finite statistics, suffice to construct closed field operators ϕ I τ,α (f ), mapping a dense set of an arbitrary charged sector α with finite statistics into some (other) charged sector with finite statistics. Here, the irreducible module τ of U(G) labels orthogonal irreducible fields defined in the same sector α .
viii) The closure of any ϕ I τ,α (f ), f ∈ D(I), is affiliated to F red (I).
ix) F red (I) is the smallest von Neumann algebra to which all operators ϕ I τ,α (f ), f ∈ D(I), are affiliated.
With the existence of pointlike localized fields we are able to give a proof of a generalized Bisognano-Wichmann property. We can identify the conformal group and the reflections as generalized modular structures in the reduced field bundle. Especially, we obtain a PCT operator on H red proving the PCT theorem for the full theory.
Moreover, the existence of pointlike localized fields gives a proof of the hitherto unproven Spin-Statistics theorem for conformal Haag-Kastler nets in 1+1 dimensions.
It was also possible to prove an operator product expansion for arbitrary local observables: For each $I \in K_0$ and each $A \in A(I)$ there is a local expansion
$$A = \sum_\tau \varphi_\tau^I(f_{\tau,A}) \tag{20}$$
into a sum over all irreducible modules $\tau$ of $U(G)$ with
$$\mathrm{supp}\, f_{\tau,A} \subset I, \tag{21}$$
which converges *-strongly on $A(I')\,\Omega$ (cf. the definition in [BrR]). Here, $I'$ denotes the complement of $I$ in $K_0$.
Canonical Construction of Wightman Fields
Starting from a chiral conformal Haag-Kastler net, pointlike localized fields have been constructed in [FrJ,Jör3]. Their smeared linear combinations are affiliated to the original net and generate it. We do not know at the moment whether these fields satisfy all Wightman axioms, since we have not found an invariant domain of definition.
In this section, we construct in a canonical manner a complete set of pointlike localized correlation functions out of the net of algebras we have been starting from. We proceed by generalizing the conformal cluster theorem to higher N-point-functions and by examining the momentum space limit of the algebraic N-point-functions at p = 0. This canonically constructed set of correlation functions can be shown to fulfill the conditions for Wightman functions (cf. [StW] and [Jos]). Hence, we can construct an associated field theory fulfilling the Wightman axioms.
We are not able to prove that these Wightman fields can be identified with the pointlike localized fields constructed in [FrJ] and [Jör3]. We do not know either how the Haag-Kastler theory, we have been starting from, can be reconstructed from the Wightman theory.
Such phenomena have been investigated by Borchers and Yngvason [BoY]. Starting from a Wightman theory, they could not rule out in general the possibility that the associated local net has to be defined in an enlarged Hilbert space.
Conformal Two-Point-Functions
First, we will determine the general form of conformal two-point-functions of local observables: It has been shown (cf. e.g. [Jör1]) that a two-point-function ( Ω, B U(x) AΩ ) of a chiral local net with translation covariance is of Lebesgue class L p for any p > 1. The Fourier transform of this two-point-function is a measure concentrated on the positive half line. Therefore, it is (with the possible exception of a trivial delta function at zero) fully determined by the Fourier transform of the commutator function ( Ω, [B, U(x) A U(x) −1 ] Ω ). Since A and B are local observables, the commutator function has compact support and an analytic Fourier transform G(p). The restriction Θ(p) G(p) of this analytic function to the positive half line is then the Fourier transform of ( Ω, B U(x) AΩ ).
In the conformally covariant case with P k AΩ = P k A * Ω = 0, k < n, the conformal cluster theorem implies that the two-point-function ( Ω, B U(x) AΩ ) decreases as x −2n . Therefore, its Fourier transform is 2n − 2 times continuously differentiable and can be written as Θ(p) p 2n−1 H(p) with an appropriate analytic function H(p).
Using this result, we are able to present a sequence of canonically scaled two-point-functions of local observables converging as distributions to the two-point-function known from conventional conformal field theory (cf. [Jör1, Reh]):
$$\lim_{\lambda \downarrow 0} \lambda^{-2n} (\,\Omega, B\, U(\lambda^{-1} x)\, A\Omega\,) = \lim_{\lambda \downarrow 0} \lambda^{-2n} \int e^{ipx}\, \Theta(p)\, (\lambda p)^{2n-1}\, H(\lambda p)\, \lambda\, dp = H(0)\, (x + i\varepsilon)^{-2n}. \tag{22}$$
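The limit (22) can be probed numerically. For $n = 1$ and a regulator $\varepsilon > 0$ one has $\int_0^\infty p^{2n-1} e^{ipx - \varepsilon p}\, dp = \Gamma(2n)\,(\varepsilon - ix)^{-2n}$, which is proportional to $(x + i\varepsilon)^{-2n}$. A crude midpoint-rule evaluation (function name hypothetical) reproduces the closed form:

```python
import cmath, math

def scaled_two_point_ft(x, n=1, eps=0.3, P=100.0, steps=200000):
    """Midpoint-rule value of the integral of p^(2n-1) e^(i p x) e^(-eps p) over
    p in (0, inf), together with the closed form Gamma(2n)/(eps - i x)^(2n);
    eps regularizes the distributional limit in (22)."""
    dp = P / steps
    total = 0j
    for k in range(steps):
        p = (k + 0.5) * dp                     # midpoint of the k-th subinterval
        total += p ** (2 * n - 1) * cmath.exp(1j * p * x - eps * p)
    total *= dp
    exact = math.gamma(2 * n) / (eps - 1j * x) ** (2 * n)
    return total, exact
```

The cutoff $P = 100$ is harmless here since the integrand decays like $e^{-\varepsilon p}$.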
Conformal Three-Point Functions
We consider the properties of chiral algebraic three-point functions
$$(\,\Omega,\, A_1\, U(x_1 - x_2)\, A_2\, U(x_2 - x_3)\, A_3\, \Omega\,). \tag{23}$$
If $F$ now denotes the Fourier transform of $(\,\Omega, A_1 U(\cdot)\, A_2 U(\cdot)\, A_3\, \Omega\,)$, we get by straightforward calculations as a first result (cf. [Jör4])
$$F(p,q) = \Theta(p)\, \Theta(q-p)\, G_+(p,q) + \Theta(q)\, \Theta(p-q)\, G_-(p,q) \tag{24}$$
with appropriate analytic functions G + and G − . In the case of conformal covariance the general form of these algebraic three-point functions is even more restricted by the following generalization of the conformal cluster theorem [FrJ]:
Theorem: Let $(A(I))_{I \in K_0}$ be a conformally covariant local net on $\mathbb{R}$. Let $a_i, b_i \in \mathbb{R}$, $i = 1, 2, 3$, and $a_1 < b_1 < a_2 < b_2 < a_3 < b_3$. Let $A_i \in A((a_i, b_i))$, $n_i \in \mathbb{N}$, $i = 1, 2, 3$, and
$$P_k A_i \Omega = P_k A_i^* \Omega = 0, \quad k < n_i. \tag{25}$$
$P_k$ here denotes the projection on the subrepresentation of $U(SL(2,\mathbb{R}))$ with conformal dimension $k$. We then have the following bound:
$$|(\,\Omega, A_1 A_2 A_3 \Omega\,)| \le \left| \frac{(a_1 - b_1) + (a_2 - b_2)}{(a_2 - a_1) + (b_2 - b_1)} \right|^{n_1 + n_2 - n_3} \left| \frac{(a_1 - b_1) + (a_3 - b_3)}{(a_3 - a_1) + (b_3 - b_1)} \right|^{n_1 + n_3 - n_2} \left| \frac{(a_2 - b_2) + (a_3 - b_3)}{(a_3 - a_2) + (b_3 - b_2)} \right|^{n_2 + n_3 - n_1} \|A_1\|\, \|A_2\|\, \|A_3\|. \tag{26}$$
If we additionally assume
$$a_1 - b_1 = a_2 - b_2 = a_3 - b_3, \tag{27}$$
we get
$$|(\,\Omega, A_1 A_2 A_3 \Omega\,)| \le r_{12}^{(n_1 + n_2 - n_3)/2}\; r_{23}^{(n_2 + n_3 - n_1)/2}\; r_{13}^{(n_1 + n_3 - n_2)/2}\; \|A_1\|\, \|A_2\|\, \|A_3\|, \tag{28}$$
with the conformal cross ratios
$$\frac{(a_i - b_i)(a_j - b_j)}{(a_i - a_j)(b_i - b_j)} =: r_{ij}, \quad i, j = 1, 2, 3. \tag{29}$$
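The cross ratios $r_{ij}$ of (29) are invariant under fractional linear transformations (each is an ordinary cross ratio of the four endpoints $a_i, b_i, a_j, b_j$), which is what makes the bound (28) conformally natural. A quick numerical check (helper names hypothetical):

```python
def cross_ratio(a1, b1, a2, b2):
    """The ratio r_ij of (29) for two intervals (a1, b1) and (a2, b2)."""
    return ((a1 - b1) * (a2 - b2)) / ((a1 - a2) * (b1 - b2))

def moebius_map(x, a, b, c, d):
    """Fractional linear map x -> (a x + b)/(c x + d), assumed nonsingular."""
    return (a * x + b) / (c * x + d)
```

Applying any such map to all four endpoints leaves the cross ratio unchanged.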
Proof: This proof follows, wherever possible, the line of argument in the proof of the conformal cluster theorem for two-point functions (cf. [FrJ]). Choose $R > 0$. Let us consider the following one-parameter subgroup of $SL(2,\mathbb{R})$:
$$g_t : x \longmapsto \frac{x \cos\frac{t}{2} + R \sin\frac{t}{2}}{-\frac{x}{R} \sin\frac{t}{2} + \cos\frac{t}{2}}. \tag{30}$$
Its generator H R is within each subrepresentation of U(SL(2, R)) unitarily equivalent to the conformal Hamiltonian H . Therefore, the spectrum of A i Ω and A * i Ω with respect to H R is bounded from below by
n i , i = 1, 2, 3 . Let 0 < t − ij < t + ij < 2π such that g t − ij (b i ) = a j(31)
and
$$g_{t^+_{ij}}(a_i) = b_j \tag{32}$$
for i, j = 1, 2, 3 , i < j . We now define
$$F(z_1, z_2, z_3) := \Big( \Omega,\; A_{i_1} \Big(\frac{z_{i_1}}{z_{i_2}}\Big)^{H_R} A_{i_2} \Big(\frac{z_{i_2}}{z_{i_3}}\Big)^{H_R} A_{i_3}\, \Omega \Big) \tag{33}$$
in a domain of definition given by
|z i 1 | < |z i 2 | < |z i 3 |(34)
with permutations $(i_1, i_2, i_3)$ of $(1, 2, 3)$. This definition can uniquely be extended to certain boundary values with $|z_j| = |z_k|$, $j, k = 1, 2, 3$, $j \ne k$: $F$ shall be continued to this boundary of its domain of definition if
$$t_{jk} := -i \log\frac{z_j}{z_k} \;\notin\; [t^-_{jk}, t^+_{jk}] + 2\pi\mathbb{Z} \tag{35}$$
or equivalently if
$$g_{t_k}([a_k, b_k]) \cap g_{t_j}([a_j, b_j]) = \emptyset, \tag{36}$$
using the notation
t i := −i log z i , i = 1, 2, 3 .(37)
Thereby, boundary points with coinciding absolute values are included in the domain of definition. The definition of F is chosen in analogy to the analytic continuation of general Wightman functions (cf., e.g., [StW, Jos]) such that the edge-of-the-wedge theorem for distributions with several variables [StW] proves F to be an analytic function: Permuting the local observables A i , i = 1, 2, 3 , we have six three-point functions
( Ω, A i 1 U(x i 1 − x i 2 ) A i 2 U(x i 2 − x i 3 ) A i 3 Ω ) .(38)
These six functions have by locality identical values on a domain
E := {(y 1 , y 2 ) ∈ R 2 | |y 1 | > c 1 , |y 2 | > c 2 , |y 1 + y 2 | > c 3 }(39)
with appropriate c 1 , c 2 , c 3 ∈ R + . Each single function can be continued analytically by the condition of positive energy to one of the six disjoint subsets in
$$U := \mathbb{R}^2 + iV := \{ (z_1, z_2) \in \mathbb{C}^2 \mid \operatorname{Im} z_1 \ne 0 \ne \operatorname{Im} z_2,\; \operatorname{Im} z_1 + \operatorname{Im} z_2 \ne 0 \}. \tag{40}$$
In this geometrical situation, the edge-of-the-wedge theorem (cf. [StW], theorem 2.14) proves the assumed analyticity of F . With the abbreviation
$$z^0_{ij} := e^{i(t^-_{ij} + t^+_{ij})/2}, \qquad i, j = 1, 2, 3, \tag{41}$$
we then define
$$G(z_1, z_2, z_3) := F(z_1, z_2, z_3) \prod_{(i,j,k) \in T(1,2,3)} \Big( \frac{z_i}{z_j} - z^0_{ij} \Big)^{(n_i+n_j-n_k)/2} \Big( \frac{z_j}{z_i} - z^0_{ji} \Big)^{(n_i+n_j-n_k)/2}, \tag{42}$$
where $T(1, 2, 3)$ denotes the set $\{(1, 2, 3), (1, 3, 2), (2, 3, 1)\}$. The added polynomial in $z_i$, $i = 1, 2, 3$, is constructed such that the degrees of the leading terms are restricted by the assumption on the conformal dimensions of the three-point function $F$. Also, using the binomial formula, it can be checked by straightforward calculations that no half-odd-integer exponents appear after multiplication of the product. Hence, at $z_i = 0$ and $z_i = \infty$, $i = 1, 2, 3$, the function $G$ is bounded because of the bound on the spectrum of $H_R$ and can therefore be analytically continued. We can find estimates on $G$ by the maximum principle for analytic functions. In order to get the estimate needed in this proof, we do not use the maximum principle for several complex variables [BoM]. Instead, we iterate the maximum principle argument used in the proof of the conformal cluster theorem [FrJ] for the single variables $z_i$, $i = 1, 2, 3$, of $G(\cdot, \cdot, \cdot)$ and derive a bound on $G(1, 1, 1)$: Applying the line of argument known from the case of the two-point functions to $G(\cdot, 1, 1)$, we get the estimate
$$|G(1, 1, 1)| \le \sup_{z_1} |G(z_1, 1, 1)| = \sup_{z_1 \in B_{\cdot,1,1}} |G(z_1, 1, 1)|. \tag{43}$$
The boundary of the domain of definition of the maximal analytic continuation of $G(\cdot, 1, 1)$ is here denoted by
$$B_{\cdot,1,1} := \{\, e^{it} \mid t \notin [t^-_{12}, t^+_{12}] \cup [t^-_{13}, t^+_{13}] + 2\pi\mathbb{Z} \,\}. \tag{44}$$
Applying this argument to G(z 1 , ·, 1), we analogously get the estimate
|G(z 1 , 1, 1)| ≤ sup z 2 |G(z 1 , z 2 , 1)| = sup z 2 ∈B z 1 ,·,1 |G(z 1 , z 2 , 1)|(45)
with B z 1 ,·,1 denoting the boundary of the domain of definition of the maximal analytical continuation of G(z 1 , ·, 1) . Applying this argument finally to G(z 1 , z 2 , ·) , we analogously get the estimate
|G(z 1 , z 2 , 1)| ≤ sup z 3 |G(z 1 , z 2 , z 3 )| = sup z 3 ∈Bz 1 ,z 2 ,· |G(z 1 , z 2 , z 3 )|(46)
with B z 1 ,z 2 ,· denoting the boundary of the domain of definition of the maximal analytical continuation of G(z 1 , z 2 , ·) . Having iterated this maximum principle argument for the single variables z i , i = 1, 2, 3 , we can combine the derived estimates and get
$$|G(1, 1, 1)| \le \sup_{\substack{t_{jk} = -i \log(z_j/z_k) \,\notin\, [t^-_{jk}, t^+_{jk}] + 2\pi\mathbb{Z} \\ j \ne k}} |G(z_1, z_2, z_3)|. \tag{47}$$
Hence, the boundary values of G have to be evaluated on the domain described by
g t k ([a k , b k ]) ∩ g t j ([a j , b j ]) = ∅(48)
with t i = −i log z i , i = 1, 2, 3 . We find the supremum with the same calculation as in the proof of the conformal cluster theorem above (cf. [FrJ]):
$$\begin{aligned} |G(1, 1, 1)| &\le \|A_1\|\,\|A_2\|\,\|A_3\| \prod_{(i,j,k) \in T(1,2,3)} \big| e^{it^-_{ij}} - e^{i(t^-_{ij}+t^+_{ij})/2} \big|^{\,n_i+n_j-n_k} \\ &= \|A_1\|\,\|A_2\|\,\|A_3\| \prod_{(i,j,k) \in T(1,2,3)} \Big| 2 \sin\frac{t^-_{ij} - t^+_{ij}}{4} \Big|^{\,n_i+n_j-n_k} \end{aligned} \tag{49}$$
This leads to another estimate:
$$\begin{aligned} \big|( \Omega, A_1 A_2 A_3\, \Omega )\big| &= |F(1, 1, 1)| = |G(1, 1, 1)| \prod_{(i,j,k) \in T(1,2,3)} \big| 1 - e^{i(t^-_{ij}+t^+_{ij})/2} \big|^{\,n_i+n_j-n_k} \\ &= |G(1, 1, 1)| \prod_{(i,j,k) \in T(1,2,3)} \Big| 2 \sin\frac{t^-_{ij} + t^+_{ij}}{4} \Big|^{\,n_i+n_j-n_k} \\ &\le \|A_1\|\,\|A_2\|\,\|A_3\| \prod_{(i,j,k) \in T(1,2,3)} \left| \frac{\sin\frac{t^-_{ij} - t^+_{ij}}{4}}{\sin\frac{t^-_{ij} + t^+_{ij}}{4}} \right|^{\,n_i+n_j-n_k} \end{aligned} \tag{50}$$
Determining t − ij and t + ij , we obtain for i, j = 1, 2, 3
$$\lim_{R\to\infty} R\, t^-_{ij} = 2(a_j - b_i) \tag{51}$$
and
$$\lim_{R\to\infty} R\, t^+_{ij} = 2(b_j - a_i), \tag{52}$$
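For the reader's convenience we spell out the elementary limit used at this step (our addition, a routine check rather than part of the original argument): since $t^\pm_{ij} \to 0$ as $R \to \infty$,

```latex
\left| \frac{\sin\frac{t^-_{ij}-t^+_{ij}}{4}}{\sin\frac{t^-_{ij}+t^+_{ij}}{4}} \right|
\;\longrightarrow\;
\left| \frac{t^-_{ij}-t^+_{ij}}{t^-_{ij}+t^+_{ij}} \right|
= \left| \frac{2(a_j-b_i) - 2(b_j-a_i)}{2(a_j-b_i) + 2(b_j-a_i)} \right|
= \left| \frac{(a_i-b_i)+(a_j-b_j)}{(a_j-a_i)+(b_j-b_i)} \right| ,
```

which is exactly the modulus of the factor appearing in the bound (26).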
and the first bound in the theorem is proven. If we now assume
$$a_1 - b_1 = a_2 - b_2 = a_3 - b_3, \tag{53}$$
we find (in the limit $R \to \infty$)
$$\left( \frac{t^-_{ij} - t^+_{ij}}{t^-_{ij} + t^+_{ij}} \right)^{2} = \frac{(a_i-b_i)\,(a_j-b_j)}{(a_i-a_j)\,(b_i-b_j)} = r_{ij}, \qquad i, j = 1, 2, 3, \tag{54}$$
and the theorem is proven. ✷

This theorem can be used to gain deeper insight into the form of the Fourier transforms of algebraic three-point functions. As in the case of the two-point functions, we proceed by transferring the decrease properties of the function in position space into regularity properties of the Fourier transform in momentum space.
In conventional conformal field theory, the three-point function with conformal dimensions n i , i = 1, 2, 3 , is known up to multiplicities as
$$f_{n_1 n_2 n_3}(x_1, x_2, x_3) = (x_1 - x_2 + i\varepsilon)^{-(n_1+n_2-n_3)} (x_2 - x_3 + i\varepsilon)^{-(n_2+n_3-n_1)} (x_1 - x_3 + i\varepsilon)^{-(n_1+n_3-n_2)} \tag{55}$$
(cf. [ChH, Reh]). Its Fourier transform
$$\tilde f_{n_1 n_2 n_3}(p, q) =: \Theta(p)\,\Theta(q)\, Q_{n_1 n_2 n_3}(p, q) \tag{56}$$
can be calculated to be a sum of the restrictions of homogeneous polynomials $Q^+_{n_1 n_2 n_3}$ and $Q^-_{n_1 n_2 n_3}$ of degree $n_1 + n_2 + n_3 - 2$ to disjoint open wedges $W_+$ and $W_-$ in the domain of positive energy (cf. [Reh]).
By the bound in the cluster theorem above, we know that a conformally covariant algebraic three-point function $( \Omega, A_1 U(x_1-x_2)\, A_2 U(x_2-x_3)\, A_3\, \Omega )$ of local observables $A_i$ with minimal conformal dimensions $n_i$, $i = 1, 2, 3$, decreases in position space at least as fast as the associated pointlike three-point function $f_{n_1 n_2 n_3}(x_1, x_2, x_3)$ known from conventional conformal field theory. Hence, the Fourier transform $F_{A_1 A_2 A_3}(p, q)$ of this algebraic three-point function has to be at least as regular in momentum space as the Fourier transform $\tilde f_{n_1 n_2 n_3}(p, q)$ of the associated pointlike three-point function known from conventional conformal field theory. Technically, we use a well-known formula from the theory of Fourier transforms,
F (Pol(X)S) = Pol( ∂ ∂Y )F S ,(57)
for arbitrary temperate distributions S and polynomials Pol(·) with a (multi-dimensional) variable X in position space and an appropriate associated differential operator ∂ ∂Y in momentum space. F denotes the Fourier transformation from position space to momentum space.
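To illustrate this formula concretely (an illustrative numerical check of ours, with the convention $(\mathcal{F}S)(k) = \int S(x)\, e^{ikx}\, dx$, for which $\mathcal{F}(x\,S) = -i\, \frac{d}{dk}\, \mathcal{F}S$), one can compare a directly computed Fourier transform of $x \cdot f$ with the derivative of the Fourier transform of $f$ for a Gaussian $f$:

```python
import numpy as np

# Illustrative check of the multiplication/differentiation rule for the
# convention (F S)(k) = integral S(x) exp(i k x) dx, for which
#   F(x * S)(k) = -i d/dk (F S)(k).
# Test function: the Gaussian f(x) = exp(-x^2/2). (Our own sketch, not
# part of the original argument.)
x = np.linspace(-20.0, 20.0, 200001)
f = np.exp(-x**2 / 2)

def ft(g, k):
    # Trapezoidal approximation of integral g(x) exp(i k x) dx
    integrand = g * np.exp(1j * k * x)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

k, h = 0.7, 1e-4
lhs = ft(x * f, k)                                   # F(x f)(k)
rhs = -1j * (ft(f, k + h) - ft(f, k - h)) / (2 * h)  # -i d/dk F(f)(k)
assert abs(lhs - rhs) < 1e-6
```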
Let now S be the conformally covariant algebraic three-point function of local observables A i with minimal conformal dimensions n i , i = 1, 2, 3 :
S := ( Ω, A 1 U(x 1 − x 2 ) A 2 U(x 2 − x 3 ) A 3 Ω )(58)
and X be a pair of two difference variables out of x i − x j , i, j = 1, 2, 3 . By the cluster theorem proved above, we can now choose an appropriate homogeneous polynomial Pol(X) of degree n 1 + n 2 + n 3 − 4 such that the product Pol(X) S is still absolutely integrable in position space. Using the formula given above, we see that Pol( ∂ ∂Y )F S is continuous and bounded in momentum space. Furthermore, we have already derived the form of the Fourier transform F of an arbitrary (truncated) algebraic three-point function in a chiral theory to be
F (p, q) = Θ(p) Θ(q − p) G + (p, q) + Θ(q) Θ(p − q) G − (p, q)(59)
with appropriate analytic functions G + and G − . Thereby, we see that in the case of conformal covariance with minimal conformal dimensions n i , i = 1, 2, 3 , the analytic function G + (G − ) can be expressed as the product of an appropriate homogeneous polynomial P + (P − ) of degree n 1 + n 2 + n 3 − 2 restricted to the wedge W + (W − ) and an appropriate analytic function H + (H − ) . Hence, we have proved that the Fourier transform F A 1 A 2 A 3 of the algebraic three-point function ( Ω,
A 1 U(x 1 − x 2 ) A 2 U(x 2 − x 3 ) A 3 Ω )
can be written as
F A 1 A 2 A 3 (p, q) = Θ(p) Θ(q) P A 1 A 2 A 3 (p, q) H A 1 A 2 A 3 (p, q)(60)
with an appropriate homogeneous function P A 1 A 2 A 3 (p, q) of degree n 1 + n 2 + n 3 − 2 and an appropriate continuous and bounded function H A 1 A 2 A 3 (p, q) .
These results suffice to control the pointlike limit of the considered correlation functions. Scaling an algebraic three-point function in a canonical manner, we construct a sequence of distributions that converges to the three-point function of conventional conformal field theory:
$$\begin{aligned} &\lim_{\lambda\downarrow 0} \lambda^{-(n_1+n_2+n_3)} \Big( \Omega,\, A_1 U\!\Big(\frac{x_1-x_2}{\lambda}\Big)\, A_2 U\!\Big(\frac{x_2-x_3}{\lambda}\Big)\, A_3\, \Omega \Big) \\ &\quad= \lim_{\lambda\downarrow 0} \lambda^{-(n_1+n_2+n_3)} \mathcal{F}_{\substack{p \to x_1-x_2 \\ q \to x_2-x_3}}\, F_{A_1 A_2 A_3}(\lambda p, \lambda q)\, \lambda^2\, dp\, dq \\ &\quad= \lim_{\lambda\downarrow 0} \lambda^{-(n_1+n_2+n_3)} \mathcal{F}_{\substack{p \to x_1-x_2 \\ q \to x_2-x_3}}\, \Theta(p)\,\Theta(q)\, \lambda^{n_1+n_2+n_3-2}\, P_{A_1 A_2 A_3}(p, q)\, H_{A_1 A_2 A_3}(\lambda p, \lambda q)\, \lambda^2\, dp\, dq \\ &\quad= (x_1-x_2+i\varepsilon)^{-(n_1+n_2-n_3)} (x_2-x_3+i\varepsilon)^{-(n_2+n_3-n_1)} (x_1-x_3+i\varepsilon)^{-(n_1+n_3-n_2)}\, H_{A_1 A_2 A_3}(0, 0). \end{aligned} \tag{61}$$
Conformal N-Point Functions
Since the notational overhead grows considerably as we come to the construction of higher N-point functions, we concentrate on qualitatively new aspects not occurring in the case of two-point functions and three-point functions. These qualitatively new aspects in the construction of higher N-point functions are related to the fact that in conventional field theory the form of higher N-point functions is not fully determined by conformal covariance. In conventional conformal field theory, conformal covariance restricts the form of correlation functions of field operators $\varphi_i(x_i)$, $i = 1, 2, \ldots, N$, with conformal dimension $n_i$ in the following manner (cf. [ChH, Reh]):
$$\Big( \Omega, \prod_{1\le i\le N} \varphi_i(x_i)\, \Omega \Big) = \prod_{1\le i<j\le N} \frac{1}{(x_j - x_i + i\varepsilon)^{c_{ij}}}\; f\big( r^{v_1 s_1}_{t_1 u_1}, \ldots, r^{v_{N-3} s_{N-3}}_{t_{N-3} u_{N-3}} \big). \tag{62}$$
Here, $f(\cdot, \ldots, \cdot)$ denotes an appropriate function depending on $N-3$ algebraically independent conformal cross ratios
$$r^{vs}_{tu} := \frac{(x_v - x_s)\,(x_t - x_u)}{(x_v - x_t)\,(x_s - x_u)}. \tag{63}$$
The exponents c ij must fulfill the consistency conditions
$$\sum_{\substack{j=1 \\ j \ne i}}^{N} c_{ij} = 2 n_i, \qquad c_{ij} = c_{ji}, \qquad 1 \le i \le N. \tag{64}$$
These conditions do not fully determine the exponents c ij in the case of N ≥ 4 . Hence, in conventional conformal field theory four-point functions and higher N-point functions are not fully determined by conformal covariance. In the case of conformal two-point functions and conformal three-point functions, our strategy to construct pointlike localized correlation functions was the following: First, we proved that the algebraic correlation functions decrease in position space as fast as the associated correlation functions in conventional field theory, which are uniquely determined by conformal covariance. Then, we transferred this property by Fourier transformation into regularity properties in momentum space. Finally, we were able to prove that the limit λ ↓ 0 of canonically scaled algebraic correlation functions converges to (a multiple of) the associated pointlike localized correlation functions in conventional conformal field theory.
In the case of four-point functions and higher N-point functions, the situation has changed and we cannot expect to be able to fully determine the form of the pointlike localized limit in this construction, since for $N \ge 4$ the correlation functions in conventional field theory are no longer uniquely determined by conformal covariance.
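To make this non-uniqueness concrete, consider the simplest case $N = 4$ with all minimal dimensions equal, $n_1 = \ldots = n_4 = n$ (our illustrative example, not taken from the original text). The symmetric ansatz

```latex
c_{12} = c_{34} = \alpha, \qquad c_{13} = c_{24} = \beta, \qquad c_{14} = c_{23} = \gamma,
\qquad \alpha + \beta + \gamma = 2n ,
```

solves the consistency conditions for every choice of $\alpha, \beta, \gamma$ on this plane: for each $i$ the three exponents meeting at $i$ are exactly $\alpha, \beta, \gamma$, so their sum is $2n$. A two-parameter family of admissible exponent sets, and hence of bounds, therefore remains.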
Beginning with the discussion of the general case with N ≥ 4 , we consider algebraic N-point functions
$$\Big( \Omega, \prod_{1\le i\le N} U(-x_i)\, A_i\, U(x_i)\, \Omega \Big) \tag{65}$$
of local observables A i with minimal conformal dimensions n i , i = 1, 2, ... , N , in a chiral theory with conformal covariance. We want to examine the pointlike limit of canonically scaled correlation functions
$$\lim_{\lambda\downarrow 0} \lambda^{-\sum_{1\le i\le N} n_i} \Big( \Omega, \prod_{1\le i\le N} U\!\Big(-\frac{x_i}{\lambda}\Big)\, A_i\, U\!\Big(\frac{x_i}{\lambda}\Big)\, \Omega \Big). \tag{66}$$
Our procedure in the construction of pointlike localized N-point functions for $N \ge 4$ will be the following: We consider all possibilities to form a set of exponents $c_{ij}$ fulfilling the consistency conditions
$$\sum_{\substack{j=1 \\ j \ne i}}^{N} c_{ij} = 2 n_i, \qquad c_{ij} = c_{ji}, \qquad i = 1, 2, 3, \ldots, N. \tag{67}$$
For each consistent set of exponents a bound on algebraic N-point functions in position space can be proved. Each single bound on algebraic N-point functions in position space can be transferred into a regularity property of algebraic N-point functions in momentum space. We can use the same techniques as in the case of three-point functions. Finally, we will control the canonical scaling limit in (66) and construct pointlike localized conformal N-point functions.
We present the following generalization of the conformal cluster theorem proved above (cf. [FrJ]) to algebraic N-point functions of local observables:
Theorem: Let $(\mathcal{A}(I))_{I\in\mathcal{K}_0}$ be a conformally covariant local net on $\mathbb{R}$. Let $a_i, b_i \in \mathbb{R}$, $i = 1, 2, 3, \ldots, N$, and $a_i < b_i < a_{i+1} < b_{i+1}$ for $i = 1, 2, 3, \ldots, N-1$. Let $A_i \in \mathcal{A}( (a_i, b_i) )$, $n_i \in \mathbb{N}$, and
$$P_k A_i \Omega = P_k A_i^* \Omega = 0, \qquad k < n_i, \quad i = 1, 2, 3, \ldots, N. \tag{68}$$
$P_k$ here denotes the projection on the subrepresentation of $U(SL(2, \mathbb{R}))$ with conformal dimension $k$. We then have for each set of exponents $c_{ij}$ fulfilling the consistency conditions
$$\sum_{\substack{j=1 \\ j \ne i}}^{N} c_{ij} = 2 n_i, \qquad c_{ij} = c_{ji}, \qquad i = 1, 2, 3, \ldots, N, \tag{69}$$
the following bound:
$$\Big| \Big( \Omega, \prod_{1\le i\le N} A_i\, \Omega \Big) \Big| \;\le\; \prod_{1\le i<j\le N} \left| \frac{(a_i-b_i)+(a_j-b_j)}{(a_j-a_i)+(b_j-b_i)} \right|^{c_{ij}} \prod_{1\le i\le N} \|A_i\|. \tag{70}$$
If we additionally assume
a 1 − b 1 = a 2 − b 2 = ... = a N − b N ,(71)
we can introduce conformal cross ratios and get
$$\Big| \Big( \Omega, \prod_{1\le i\le N} A_i\, \Omega \Big) \Big| \;\le\; \prod_{1\le i<j\le N} \left( \frac{(a_i-b_i)\,(a_j-b_j)}{(a_i-a_j)\,(b_i-b_j)} \right)^{c_{ij}/2} \prod_{1\le i\le N} \|A_i\|. \tag{72}$$
Proof: If we pay attention to the obvious modifications needed for the additional variables, we can use in this proof the assumptions, the notation, and the line of argument introduced in the proof of the cluster theorem in the case of three-point functions. We choose an arbitrary set of exponents c ij fulfilling the consistency conditions
$$\sum_{\substack{j=1 \\ j \ne i}}^{N} c_{ij} = 2 n_i, \qquad c_{ij} = c_{ji}, \qquad i = 1, 2, 3, \ldots, N. \tag{73}$$
Let R > 0 . We consider the generator H R of the following one-parameter subgroup of SL(2, R) :
$$g_t:\; x \longmapsto \frac{x \cos\frac{t}{2} + R \sin\frac{t}{2}}{-\frac{x}{R} \sin\frac{t}{2} + \cos\frac{t}{2}}. \tag{74}$$
We know that H R is within each subrepresentation of U(SL(2, R)) unitarily equivalent to the conformal Hamiltonian H. Therefore, the spectrum of A i Ω and A * i Ω with respect to H R is bounded from below by n i , i = 1, 2, ... , N . Let 0 < t − ij < t + ij < 2π such that
g t − ij (b i ) = a j(75)
and
g t + ij (a i ) = b j ,(76)
for i, j = 1, 2, ... , N , i < j . We introduce
$$F(z_1, \ldots, z_N) := \Big( \Omega, \prod_{i=1}^{N} z_{p(i)}^{-H_R}\, A_{p(i)}\, z_{p(i)}^{H_R}\, \Omega \Big) \tag{77}$$
in a domain of definition given by
|z p(1) | < |z p(2) | < ... < |z p(N ) |(78)
with permutations $( p(1), p(2), \ldots, p(N) )$ of $( 1, 2, \ldots, N )$. This definition can uniquely be extended in analogy to the case of three-point functions to boundary points with $|z_j| = |z_k|$, $j, k = 1, 2, \ldots, N$, $j \ne k$, if
$$g_{t_k}([a_k, b_k]) \cap g_{t_j}([a_j, b_j]) = \emptyset, \tag{79}$$
thereby introducing
t i := −i log z i , i = 1, 2, ... , N .(80)
The line of argument presented above in the case of three-point functions and developed for general Wightman functions in [StW, Jos] proves that this continuation is still an analytic function. We then define
$$G(z_1, \ldots, z_N) := F(z_1, \ldots, z_N) \prod_{1\le i<j\le N} \Big( \frac{z_i}{z_j} - z^0_{ij} \Big)^{c_{ij}/2} \Big( \frac{z_j}{z_i} - z^0_{ji} \Big)^{c_{ji}/2}, \tag{81}$$
using the abbreviation
$$z^0_{ij} := e^{i(t^-_{ij} + t^+_{ij})/2}, \qquad i, j = 1, 2, \ldots, N. \tag{82}$$
This function is constructed such that with the consistency conditions for $c_{ij}$ and with the bound on the spectrum of $H_R$ we get the following result in analogy to the cluster theorem for three-point functions: At the boundary points $z_i = 0$ and $z_i = \infty$, $i = 1, 2, \ldots, N$, the function $G$ is bounded and can therefore be analytically continued. As in the case of three-point functions, we get with the maximum principle for analytic functions further estimates on $G$: Iterating the well-known maximum principle argument for the single variables, one obtains
$$|G(1, \ldots, 1)| \le \sup_{B} |G(z_1, \ldots, z_N)|, \tag{83}$$
where B denotes the set of boundary points
$$B := \big\{\, |z_j| = |z_k| \;\big|\; g_{t_k}([a_k, b_k]) \cap g_{t_j}([a_j, b_j]) = \emptyset,\; j \ne k \,\big\} \tag{84}$$
with $t_i = -i \log z_i$, $i = 1, 2, \ldots, N$. The supremum of the boundary values of $G$ can be calculated in full analogy to the case of the three-point functions and to the proof of the conformal cluster theorem (cf. [FrJ]). We obtain directly:
$$\Big| \Big( \Omega, \prod_{1\le i\le N} A_i\, \Omega \Big) \Big| \;\le\; \prod_{1\le i\le N} \|A_i\| \prod_{1\le i<j\le N} \left| \frac{\sin\frac{t^-_{ij} - t^+_{ij}}{4}}{\sin\frac{t^-_{ij} + t^+_{ij}}{4}} \right|^{c_{ij}}. \tag{85}$$
This estimate converges in the limit $R \to \infty$, with
$$\lim_{R\to\infty} R\, t^-_{ij} = 2(a_j - b_i) \tag{86}$$
and
$$\lim_{R\to\infty} R\, t^+_{ij} = 2(b_j - a_i) \tag{87}$$
for i, j = 1, 2, ... , N to the first bound asserted in the theorem. If we assume
$$a_1 - b_1 = a_2 - b_2 = \ldots = a_N - b_N, \tag{88}$$
we find
$$\left( \frac{t^-_{ij} - t^+_{ij}}{t^-_{ij} + t^+_{ij}} \right)^{2} = \frac{(a_i-b_i)\,(a_j-b_j)}{(a_i-a_j)\,(b_i-b_j)} = r_{ij}, \qquad i, j = 1, 2, \ldots, N, \tag{89}$$
and get the second bound. Hence, the theorem is proven. ✷

For each consistent set of exponents $c_{ij}$, $i, j = 1, 2, 3, \ldots, N$, we have proved a different bound on conformal N-point functions of chiral local observables. Hence, we know that the algebraic N-point function
$$\Big( \Omega, \prod_{1\le i\le N} U(-x_i)\, A_i\, U(x_i)\, \Omega \Big) \tag{90}$$
decreases in position space at least as fast as the set of associated pointlike N-point functions known from conventional conformal field theory. Therefore, the Fourier transform of the algebraic N-point function has to be at least as regular in momentum space as the Fourier transforms of the associated pointlike N-point functions known from conventional conformal field theory. Technically, we follow the line of argument in the case of three-point functions and use the formula
F (Pol(X)S) = Pol ∂ ∂Y F S(91)
for arbitrary temperate distributions S and polynomials Pol(·) with a (multi-dimensional) variable X in position space and an appropriate associated differential operator ∂ ∂Y in momentum space. F denotes the Fourier transformation from position space to momentum space. Now, we choose S to be an algebraic N-point function
$$\Big( \Omega, \prod_{1\le i\le N} U(-x_i)\, A_i\, U(x_i)\, \Omega \Big) \tag{92}$$
of local observables $A_i$ with minimal conformal dimensions $n_i$, $i = 1, 2, \ldots, N$, and $X$ to be a tuple of $N-1$ algebraically independent difference variables out of $x_i - x_j$, $i, j = 1, 2, \ldots, N$.
The estimates in the cluster theorem proved above imply, that appropriate homogeneous polynomials Pol(X) of degree
$$\deg(\mathrm{Pol}) = \sum_{i=1}^{N} n_i - 2N + 2 \tag{93}$$
can be found such that the product Pol(X) S is still absolutely integrable in position space.
We then see that Pol( ∂ ∂Y )F S is continuous and bounded in momentum space. By locality and the condition of positive energy, the Fourier transform F of an arbitrary (truncated) algebraic N-point function is known to be of the form
$$F(p_1, \ldots, p_{N-1}) = G(p_1, \ldots, p_{N-1}) \prod_{i=1}^{N-1} \Theta(p_i), \tag{94}$$
where G denotes a sum of restrictions of appropriate analytic functions to subsets of momentum space (cf. the case of three-point functions in the section above). One can now proceed in analogy to the argumentation in the case of three-point functions: In a situation with conformal covariance and minimal conformal dimensions n i , i = 1, 2, ... , N , the function G can be expressed as the product of an appropriate homogeneous polynomial P of degree
$$\deg(P) = \sum_{i=1}^{N} n_i - N + 1 \tag{95}$$
and an appropriate function H , where H denotes another sum of restrictions of analytic functions to subsets of momentum space. Hence, we have proved that the Fourier transform of the algebraic N-point function
$$\Big( \Omega, \prod_{1\le i\le N} U(-x_i)\, A_i\, U(x_i)\, \Omega \Big) \tag{96}$$
can be written as
$$F(p_1, \ldots, p_{N-1}) = P(p_1, \ldots, p_{N-1})\, H(p_1, \ldots, p_{N-1}) \prod_{i=1}^{N-1} \Theta(p_i) \tag{97}$$
with an appropriate homogeneous function P of degree
$$\deg(P) = \sum_{i=1}^{N} n_i - N + 1 \tag{98}$$
and an appropriate continuous and bounded function H . Using this result, we can now show in full analogy to the procedure in the last section that by canonically scaling an algebraic N-point function we construct a sequence of distributions that converges to an appropriate pointlike localized N-point function of conventional conformal field theory:
$$\begin{aligned} &\lim_{\lambda\downarrow 0} \lambda^{-\sum_{1\le i\le N} n_i} \Big( \Omega, \prod_{1\le i\le N} U\!\Big(-\frac{x_i}{\lambda}\Big)\, A_i\, U\!\Big(\frac{x_i}{\lambda}\Big)\, \Omega \Big) \\ &\quad= \lim_{\lambda\downarrow 0} \lambda^{-\sum_{1\le i\le N} n_i}\, \mathcal{F}_{p_i \to x_i - x_{i+1}}\, F(\lambda p_1, \ldots, \lambda p_{N-1})\, \lambda^{N-1} \prod_{1\le i\le N-1} dp_i \\ &\quad= \lim_{\lambda\downarrow 0} \mathcal{F}_{p_i \to x_i - x_{i+1}}\, P(p_1, \ldots, p_{N-1})\, H(\lambda p_1, \ldots, \lambda p_{N-1}) \prod_{1\le i\le N-1} \Theta(p_i)\, dp_i \\ &\quad= \prod_{1\le i<j\le N} \frac{1}{(x_j - x_i + i\varepsilon)^{c_{ij}}}\; f\big( r^{v_1 s_1}_{t_1 u_1}, \ldots, r^{v_{N-3} s_{N-3}}_{t_{N-3} u_{N-3}} \big). \end{aligned} \tag{99}$$
Again, $f(\cdot, \ldots, \cdot)$ denotes an appropriate function depending on $N-3$ algebraically independent conformal cross ratios
$$r^{vs}_{tu} := \frac{(x_v - x_s)\,(x_t - x_u)}{(x_v - x_t)\,(x_s - x_u)}. \tag{100}$$
The exponents c ij must fulfill the consistency conditions
$$\sum_{\substack{j=1 \\ j \ne i}}^{N} c_{ij} = 2 n_i, \qquad c_{ij} = c_{ji}, \qquad 1 \le i \le N, \tag{101}$$
which do not fully determine the exponents. Hence, the general form of the pointlike localized conformal correlation functions constructed from algebraic quantum field theory has been determined to be exactly the general form of the N-point functions known from conventional conformal field theory. In both approaches conformal covariance does not fully determine the form of N-point functions for $N \ge 4$.
Wightman Axioms and Reconstruction Theorem
The most common axiomatic system for pointlike localized quantum fields is the formulation of Wightman axioms given in [StW] and [Jos]. (If braid group statistics has to be considered and the Bose-Fermi alternative does not hold in general, the classical formulation of [StW] and [Jos] has to be modified for the charged case by introducing the axiom of weak locality instead of locality [FRS1,FRS2].) The construction of pointlike localized correlation functions in this paper uses sequences of algebraic correlation functions of local observables. The algebraic correlation functions obviously fulfill positive definiteness, conformal covariance, locality, and the spectrum condition. Hence, if the sequences converge, the set of pointlike limits of algebraic correlation functions fulfills the Wightman axioms (see [StW]) by construction. By the reconstruction theorem in [StW] and [Jos], the existence of Wightman fields associated with the Wightman functions is guaranteed and this Wightman field theory is unique up to unitary equivalence.
We do not know at the moment whether the Wightman fields can be identified with the pointlike localized field operators constructed in [FrJ] from the Haag-Kastler theory. We do not know either whether the Wightman fields are affiliated to the associated von Neumann algebras of local observables and how the Haag-Kastler net we have been starting from can be reconstructed from the Wightman fields. Possibly, the Wightman fields cannot even be realized in the same Hilbert space as the Haag-Kastler net of local observables.
We do know, however, that the Wightman theory associated with the Haag-Kastler theory is non-trivial: The two-point functions of these Wightman fields are, by construction, identical with the two-point functions of the pointlike localized field operators constructed in [FrJ]. And we have already proved that those pointlike field vectors can be chosen to be non-vanishing and that the vacuum vector is cyclic for the set of all field operators localized in an arbitrary interval.
It shall be pointed out again that those pointlike fields constructed in [FrJ,Jör3] could not be proved to fulfill the Wightman axioms, since we were not able to find a domain of definition that is stable under the action of the field operators.
To summarize this paper, we state that starting from a chiral conformal Haag-Kastler theory we have found a canonical construction of non-trivial Wightman fields. The reconstruction of the original net of von Neumann algebras of local observables from the Wightman fields could not explicitly be presented, since we do not know whether the Wightman fields can be realized in the same Hilbert space as the Haag-Kastler net.
Actually, Borchers and Yngvason [BoY] have investigated similar situations and have shown that such problems can occur in quantum field theory. In [BoY] the question is discussed under which conditions a Haag-Kastler net can be associated with a Wightman theory. The condition for the locality of the associated algebra net turned out to be a property of the Wightman fields called "central positivity". Central positivity is fulfilled for Haag-Kastler nets and is stable under pointlike limits [BoY]. Hence, the Wightman fields constructed in this thesis fulfill central positivity. The possibility, however, that the local net has to be defined in an enlarged Hilbert space could not be ruled out in general by [BoY].
Furthermore, it has been proved in [BoY] that Wightman fields fulfilling generalized H-bounds (cf. [DSW]) have associated local nets of von Neumann algebras that can be defined in the same Hilbert space. The closures of the Wightman field operators are then affiliated to the associated local algebras. We could not prove generalized H-bounds for the Wightman fields constructed in this thesis. Actually, we suppose that the criterion of generalized H-bounds is too strict for general conformal (and therefore massless) quantum field theories. (Generalized) H-bounds have been proved, however, for massive theories, i.e. for models in quantum field theory with massive particles (cf. also [DrF, FrH, Sum, Buc1]).
The general form of a (truncated) chiral three-point function of local observables $A_i$, $i = 1, 2, 3$, is restricted by locality and by the condition of positive energy: its Fourier transform can be shown to be a sum of the restrictions of analytic functions to disjoint open wedges in the domain of positive energy, as in (24).
This assumption is seemingly weaker than cyclicity of $\Omega$ with respect to the algebra of local observables on $\mathbb{R}$.
Acknowledgements

This paper is one part of the author's dissertation. We would like to thank Prof. Dr. K. Fredenhagen for his confidence, constant encouragement, and the numerous inspiring discussions over the whole period of this work. I am indebted to him for many important insights I gained under his guidance. His cooperation was crucial and fruitful for this work. The financial support given by the Friedrich-Ebert-Stiftung is gratefully acknowledged.
References

[BaW] H. Baumgärtel, M. Wollenberg: Causal nets of operator algebras, Akademie-Verlag (1992)

[BoM] S. Bochner, W. Martin: Several Complex Variables, Princeton University Press (1948)

[BoY] H. J. Borchers, J. Yngvason: Positivity of Wightman functions and the existence of local nets, Comm. Math. Phys. 127 (1990) 607; Local nets and selfadjoint extensions of quantum field operators, Lett. Math. Phys. 21 (1991) 151; From quantum fields to local von Neumann algebras, Rev. Math. Phys., Special Issue (1992) 15-47

[BrR] O. Bratteli, D. W. Robinson: Operator algebras and quantum statistical mechanics I, Springer (1979)

[Buc1] D. Buchholz: On quantum fields that generate local algebras, J. Math. Phys. 31 (1990) 1839-1846

[Buc2] D. Buchholz: On the Manifestation of Particles, DESY preprint 93-155 (1993) and report in the proceedings of the Beer Sheva conference 1993

[BuF] D. Buchholz, K. Fredenhagen: Dilations and interactions, J. Math. Phys. 18 (1977) 1107-1111

[ChH] P. Christe, M. Henkel: Introduction to Conformal Invariance and Its Applications to Critical Phenomena, Springer (1993)

[DrF] W. Driessler, J. Fröhlich: The reconstruction of local observable algebras from the Euclidean Green's functions of relativistic quantum field theory, Ann. Inst. H. Poincaré 27 (1977) 221-236

[DSW] W. Driessler, S. J. Summers, E. H. Wichmann: On the connection between quantum fields and von Neumann algebras of local operators, Comm. Math. Phys. 105 (1986) 49

[For] O. Forster: Riemannsche Flächen, Springer (1977)

[Fre] K. Fredenhagen: A Remark on the Cluster Theorem, Comm. Math. Phys. 97 (1985) 461

[FrH] K. Fredenhagen, J. Hertel: Local algebras of observables and pointlike localized fields, Comm. Math. Phys. 80 (1981) 555-561

[FrJ] K. Fredenhagen, M. Jörß: Conformal Haag-Kastler Nets, Pointlike Localized Fields and the Existence of Operator Product Expansions, Comm. Math. Phys. 176 (1996) 541

[FRS1] K. Fredenhagen, K.-H. Rehren, B. Schroer: Superselection sectors with braid group statistics and exchange algebra I, Comm. Math. Phys. 125 (1989) 201

[FRS2] K. Fredenhagen, K.-H. Rehren, B. Schroer: Superselection sectors with braid group statistics and exchange algebra II, Rev. Math. Phys., Special Issue (1992) 113

[Haa] R. Haag: Local quantum physics, Springer (1992)

[Jör1] M. Jörß: Lokale Netze auf dem eindimensionalen Lichtkegel, diploma thesis, FU Berlin (1991)

[Jör2] M. Jörß: On the Existence of Pointlike Localized Fields in Conformally Invariant Quantum Physics, DESY preprint 92-156 (1992) and report in the proceedings of the Cambridge conference 1992

[Jör3] M. Jörß: The Construction of Pointlike Localized Charged Fields from Conformal Haag-Kastler Nets, DESY preprint 95-105 (1995), to appear in Lett. Math. Phys.

[Jör4] M. Jörß: Conformal Quantum Field Theory: From Haag-Kastler Nets to Wightman Fields, PhD thesis and DESY preprint 96-136 (1996)

[Jos] R. Jost: The General Theory of Quantized Fields, American Mathematical Society (1965)

[Reh] K.-H. Rehren: unpublished manuscript, University of Hamburg (1987)

[StW] R. Streater, A. S. Wightman: PCT, Spin & Statistics, and All That, Benjamin (1964)

[Sum] S. J. Summers: From algebras of local observables to quantum fields: generalized H-bounds, Helv. Phys. Acta 60 (1987) 1004
| [] |
[
"Imaging Properties of Two-Dimensional Microlenses",
"Imaging Properties of Two-Dimensional Microlenses"
] | [
"Vera N Smolyaninova \nDepartment of Physics Astronomy and Geosciences\nTowson University\n8000 York Rd21252TowsonMDUSA\n",
"Igor I Smolyaninov \nDepartment of Electrical and Computer Engineering\nUniversity of Maryland\n20742College ParkMDUSA\n",
"Alexander V Kildishev \nShool of Electrical and Computer Engineering and Birck Nanotechnology Center\nPurdue University\n47907West LafayetteINUSA\n",
"Vladimir M Shalaev \nShool of Electrical and Computer Engineering and Birck Nanotechnology Center\nPurdue University\n47907West LafayetteINUSA\n"
] | [
"Department of Physics Astronomy and Geosciences\nTowson University\n8000 York Rd21252TowsonMDUSA",
"Department of Electrical and Computer Engineering\nUniversity of Maryland\n20742College ParkMDUSA",
"Shool of Electrical and Computer Engineering and Birck Nanotechnology Center\nPurdue University\n47907West LafayetteINUSA",
"Shool of Electrical and Computer Engineering and Birck Nanotechnology Center\nPurdue University\n47907West LafayetteINUSA"
] | [] | Despite strong experimental and theoretical evidence supporting superresolution imaging based on microlenses, imaging mechanisms involved are not well understood. Based on the transformation optics approach, we demonstrate that microlenses may act as two-dimensional fisheye or Eaton lenses. An asymmetric Eaton lens may exhibit considerable image magnification, which has been confirmed experimentally. Current interest in electromagnetic metamaterials has been motivated by recent work on superlenses, cloaking and transformation optics [1][2][3]. This interest has been followed by considerable efforts aimed at introduction of metamaterial structures that could be realized experimentally. Unfortunately, it appears difficult to develop metamaterials with low-loss, broadband performance. The difficulties are especially severe in the visible frequency range where good magnetic performance is limited. On the other hand, very recently we have demonstrated that many transformation optics and metamaterial-based devices requiring anisotropic dielectric permittivity and magnetic | null | [
"https://export.arxiv.org/pdf/1006.0914v1.pdf"
] | 117,098,777 | 1006.0914 | 5f873361f45b692a6b07537543a0fa3f3f503ede |
Imaging Properties of Two-Dimensional Microlenses
Vera N Smolyaninova
Department of Physics Astronomy and Geosciences
Towson University
8000 York Rd21252TowsonMDUSA
Igor I Smolyaninov
Department of Electrical and Computer Engineering
University of Maryland
20742College ParkMDUSA
Alexander V Kildishev
School of Electrical and Computer Engineering and Birck Nanotechnology Center
Purdue University
47907West LafayetteINUSA
Vladimir M Shalaev
School of Electrical and Computer Engineering and Birck Nanotechnology Center
Purdue University
47907West LafayetteINUSA
Despite strong experimental and theoretical evidence supporting superresolution imaging based on microlenses, imaging mechanisms involved are not well understood. Based on the transformation optics approach, we demonstrate that microlenses may act as two-dimensional fisheye or Eaton lenses. An asymmetric Eaton lens may exhibit considerable image magnification, which has been confirmed experimentally. Current interest in electromagnetic metamaterials has been motivated by recent work on superlenses, cloaking and transformation optics [1][2][3]. This interest has been followed by considerable efforts aimed at the introduction of metamaterial structures that could be realized experimentally. Unfortunately, it appears difficult to develop metamaterials with low-loss, broadband performance. The difficulties are especially severe in the visible frequency range, where good magnetic performance is limited. On the other hand, very recently we have demonstrated that many transformation optics and metamaterial-based devices requiring anisotropic dielectric permittivity and magnetic
permeability could be emulated by specially designed tapered waveguides. This approach leads to low-loss, broadband performance in the visible frequency range, which is difficult to achieve by other means. We have applied this technique to broadband electromagnetic cloaking in the visible range [4]. In this paper we apply the same technique to the experimental realization of the fisheye and Eaton microlenses, which were suggested to act as ideal imaging devices even in the absence of negative refraction [5]. Realization of these microlenses using electromagnetic metamaterials would require sophisticated nanofabrication techniques. In contrast, our approach leads to a much simpler design, which involves two-dimensional (2D) imaging using a small liquid microdroplet.
Despite strong experimental and theoretical evidence supporting superresolution imaging based on microlenses and microdroplets, imaging mechanisms involved are not well understood. Imaging by surface plasmon polaritons (SPP) [6] has been proposed as the main super-resolution mechanism in imaging experiments using glycerin microdroplets on a gold film surface [7]. Resolution of the order of λ/8 has been observed in these experiments. On the other hand, magnification of near-field image components has been suggested in recent experiments with self-assembled plano-spherical nanolenses [8], which demonstrated resolution of the order of λ/4. Our analysis in terms of the effective metamaterial parameters indicates that the shape of microlenses and microdroplets provides a natural realization of the effective refractive index distribution of the fisheye and Eaton microlenses [4]. The starting point of our analysis is the dispersion law of guided modes in a tapered waveguide. In the case of a metal-coated dielectric waveguide it can be written in a simple analytical form:
$$k_x^2 + k_y^2 = \frac{n_d^2\,\omega^2}{c^2} - \frac{\pi^2 l^2}{d^2(r)}\,, \qquad (1)$$
where $n_d$ is the refractive index of the dielectric, d(r) is the waveguide thickness, and l is the transverse mode number. We assume that the thickness d of the waveguide in the z-direction changes adiabatically with radius r. A photon launched into the l-th mode of the waveguide stays in this mode as long as d changes adiabatically [9]. If we wish to emulate the refractive index distribution n(r) of either the 2D fisheye or the 2D Eaton lens:
$$n^2(r)\,\frac{\omega^2}{c^2} = k_x^2 + k_y^2\,, \qquad (2)$$
we need to produce the following profile of the microdroplet:
$$d(r) = \frac{l\lambda}{2\sqrt{n_d^2 - n^2(r)}}\,. \qquad (3)$$
This is easy to do for some particular mode l of the waveguide. Typical microdroplet/microlens profiles which emulate the fisheye lens:
$$n(r) = \frac{2}{1 + (r/R)^2}\,, \qquad (4)$$
and the Eaton lens:
$$n(r) = \sqrt{\frac{2R}{r} - 1}\,, \qquad n(r) = 1 \;\;\text{for } r > R \qquad (5)$$
are shown in Fig. 1. Real glycerin microdroplets have shapes that are somewhere in between these cases. Since the refractive index distribution in the fisheye lens is obtained via the stereographic projection of the sphere onto a plane [5], points near the droplet edge correspond to points located near the equator of the sphere. Therefore, these points are imaged into points located near the opposite droplet edge, as shown in Fig. 2(a). The Eaton lens has similar imaging properties, as shown in Fig. 2(b). We have tested this imaging mechanism using glycerin microdroplets formed on the surface of a gold film, which were illuminated near the edge using tapered fiber tips of a near-field scanning optical microscope (NSOM), as shown in Fig. 3. As expected from the numerical simulations, an image of the NSOM tip was easy to observe at the opposite edge of the microdroplet.
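As an illustrative numerical sketch (not from the original paper), the thickness profile of Eq. (3) can be evaluated for an assumed Maxwell-fisheye index distribution; the dielectric index n_d and the wavelength below are hypothetical values chosen only for illustration:

```python
import math

def fisheye_index(r, R):
    # Assumed standard Maxwell fisheye profile: n(r) = 2 / (1 + (r/R)^2)
    return 2.0 / (1.0 + (r / R) ** 2)

def droplet_thickness(n_r, n_d, wavelength, l=1):
    # Eq. (3): d(r) = l*lambda / (2*sqrt(n_d^2 - n(r)^2)); requires n_d > n(r)
    return l * wavelength / (2.0 * math.sqrt(n_d ** 2 - n_r ** 2))

R = 7e-6             # lens radius, as in Fig. 1 (fisheye, R = 7 um)
n_d = 2.5            # hypothetical dielectric index exceeding the maximum n(r) = 2
wavelength = 0.5e-6  # hypothetical vacuum wavelength

profile = [droplet_thickness(fisheye_index(r, R), n_d, wavelength)
           for r in (0.0, 2e-6, 4e-6, 6e-6)]
print(profile)  # thickness decreases away from the center, i.e. a droplet-like shape
```

Since n(r) decreases with r, the required thickness d(r) is largest at the center, which is why a sessile droplet is a natural emulation of such a lens.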
While the fisheye lens design is difficult to modify to achieve image magnification, modification of the Eaton lens is straightforward. As shown in Fig. 4, two halves of the Eaton lens having different values of the parameter R can be brought together to achieve image magnification. The image magnification in this case is M = R1/R2. Our numerical simulations for the case M = 2 are presented in Fig. 4. Since the sides of the lens play no role in imaging, the overall shape of the imaging device can be altered to achieve the shape of a "deformed droplet". Using the experimental technique described in the "Methods" section, we have created glycerin droplets with shapes that are very close to the shape of the "deformed droplet" used in the numerical simulations. Image magnification of the "deformed droplet" has been tested by moving the NSOM probe tip along the droplet edge, as shown in Fig. 5. It appears to be close to the M = 2 value predicted by the simulations (see "Methods").
In conclusion, we have demonstrated that small dielectric microlenses behave as two-dimensional imaging devices, which can be approximated by 2D fisheye and/or Eaton lenses. Superresolution imaging observed in these microlenses is consistent with the transformation optics mechanism described in ref. [5]. In addition, deformed microlenses/microdroplets were observed to exhibit image magnification, which is consistent with numerical predictions.
METHODS
In our imaging experiments the "deformed droplets" were formed in desired locations by bringing a small probe (Fig. 6(a)) wetted in glycerin into close proximity to the sample surface. The probe was prepared from a tapered optical fiber, which has an epoxy microdroplet near its apex. Bringing the probe to a surface region covered with glycerin led to glycerin microdroplet formation under the probe (Fig. 6(b)). The shape of the glycerin droplet was determined by the shape of the seed droplet of epoxy. The glycerin droplet under the probe can be moved to a desired location under visual control, using a regular microscope. Our droplet deposition procedure allowed us to form droplet shapes which were reasonably close to the shape of a magnifying Eaton lens, as shown in Fig. 5. In addition, the liquid droplet boundary may be expected to be rather smooth because of surface tension, which is essential for the proper performance of the droplet boundary as a 2D fisheye or Eaton lens.
Image magnification of the 2D magnifying Eaton lens has been measured as demonstrated in Fig. 7. Position of the NSOM tip and its image in the second frame is shown by red dots in the first frame. The ratio of the gray line lengths, which connect the NSOM tip and image locations in the two frames shown, is close to the theoretically predicted value M = 2.
Figure Captions

Fig. 1. Typical profiles of a microdroplet which emulates either the fisheye lens (R = 7 μm) or the Eaton lens (R = 5 μm) for l = 1.

Fig. 2. Numerical simulations of imaging properties of the fisheye and Eaton lenses. Points near the edge of the fisheye and Eaton lenses are imaged into opposite points. Refractive index distribution in these lenses is shown in the bottom panels.

Fig. 3. Experimental testing of the imaging mechanism of the glycerin microdroplets. The droplet is illuminated near the edge with a tapered fiber tip of a near-field scanning optical microscope (NSOM). Image of the NSOM tip is clearly seen at the opposite edge of the droplet.

Fig. 4. Numerical simulations of image magnification (M = 2) using the Eaton lens. Since the sides of the lens play no role in imaging, the overall shape of the imaging device can be altered to achieve the shape of a "deformed droplet".

Fig. 5. Experimental testing of image magnification of the "deformed droplet". The NSOM probe tip was moved along the droplet edge. Bottom row presents results of our numerical simulations in the case of one and two point sources. The shape of the "deformed droplet" used in numerical simulations closely resembles the shape of the actual droplet.

Fig. 6. The "deformed" glycerin droplets were formed in desired locations by bringing a small probe (a) wetted in glycerin into close proximity to a sample. The probe was prepared from a tapered optical fiber, which has an epoxy microdroplet near its apex. Bringing the probe to a surface region covered with glycerin led to glycerin microdroplet formation (b) under the probe in locations indicated by the arrows.
[1] J.B. Pendry, "Negative refraction makes a perfect lens", Phys. Rev. Lett. 85, 3966 (2000).
[2] J.B. Pendry, D. Schurig, D.R. Smith, "Controlling electromagnetic fields", Science 312, 1780-1782 (2006).
[3] U. Leonhardt, "Optical conformal mapping", Science 312, 1777-1780 (2006).
[4] I.I. Smolyaninov, V.N. Smolyaninova, A.V. Kildishev, and V.M. Shalaev, "Anisotropic metamaterials emulated by tapered waveguides: application to electromagnetic cloaking", Phys. Rev. Lett. 103, 213901 (2009).
[5] U. Leonhardt, "Perfect imaging without negative refraction", New J. Phys. 11, 093040 (2009).
[6] A.V. Zayats, I.I. Smolyaninov, A. Maradudin, "Nano-optics of surface plasmon-polaritons", Phys. Rep. 408, 131-314 (2005).
[7] I.I. Smolyaninov, J. Elliott, A.V. Zayats, and C.C. Davis, "Far-field optical microscopy with nanometer-scale resolution based on the in-plane image magnification by surface plasmon polaritons", Phys. Rev. Lett. 94, 057401 (2005).
[8] J.Y. Lee et al., "Near-field focusing and magnification through self-assembled nanoscale spherical lenses", Nature 460, 498-501 (2009).
[9] L.D. Landau, E.M. Lifshitz, Quantum Mechanics (Reed, Oxford, 1988).
| [] |
[
"Simulating boson sampling in lossy architectures",
"Simulating boson sampling in lossy architectures"
] | [
"Raúl García-Patrón \nCentre for Quantum Information and Communication\nEcole Polytechnique de Bruxelles\nUniversité Libre de Bruxelles\n165, 1050BrusselsCPBelgium\n",
"Jelmer J Renema \nDepartment of Physics\nClarendon Laboratory\nUniversity of Oxford\nOX1 3PUOxfordUnited Kingdom\n",
"Valery Shchesnovich \nCentro de Ciências Naturais e Humanas\nUniversidade Federal do ABC\nSanto André09210-170SPBrazil\n"
] | [
"Centre for Quantum Information and Communication\nEcole Polytechnique de Bruxelles\nUniversité Libre de Bruxelles\n165, 1050BrusselsCPBelgium",
"Department of Physics\nClarendon Laboratory\nUniversity of Oxford\nOX1 3PUOxfordUnited Kingdom",
"Centro de Ciências Naturais e Humanas\nUniversidade Federal do ABC\nSanto André09210-170SPBrazil"
] | [] | Photon losses are the strongest imperfection affecting boson sampling experiments. Despite their importance, little is known about the resilience of boson sampling to losses. In this work we show that all current architectures that suffer from an exponential decay of the transmission with the depth of the circuit, such as integrated photonic circuits or optical fibers, can be efficiently simulated classically. We prove that either the depth of the circuit is large enough that it can be simulated by thermal noise with an algorithm running in polynomial time, or it is short enough that a tensor network simulation runs in quasipolynomial time. This result suggests that in order to implement a quantum advantage experiment with single photons and linear optics we need a profound change of paradigm. | 10.22331/q-2019-08-05-169 | [
"https://arxiv.org/pdf/1712.10037v2.pdf"
] | 51,183,683 | 1712.10037 | 2625d39d2e435f140b2d0aa77ff6d05fa048248a |
Simulating boson sampling in lossy architectures
June 12, 2018
Raúl García-Patrón
Centre for Quantum Information and Communication
Ecole Polytechnique de Bruxelles
Université Libre de Bruxelles
165, 1050BrusselsCPBelgium
Jelmer J Renema
Department of Physics
Clarendon Laboratory
University of Oxford
OX1 3PUOxfordUnited Kingdom
Valery Shchesnovich
Centro de Ciências Naturais e Humanas
Universidade Federal do ABC
Santo André09210-170SPBrazil
Photon losses are the strongest imperfection affecting boson sampling experiments. Despite their importance, little is known about the resilience of boson sampling to losses. In this work we show that all current architectures that suffer from an exponential decay of the transmission with the depth of the circuit, such as integrated photonic circuits or optical fibers, can be efficiently simulated classically. We prove that either the depth of the circuit is large enough that it can be simulated by thermal noise with an algorithm running in polynomial time, or it is short enough that a tensor network simulation runs in quasipolynomial time. This result suggests that in order to implement a quantum advantage experiment with single photons and linear optics we need a profound change of paradigm.
Introduction
In 2003 Knill, Laflamme and Milburn showed that single-photon sources and linear optics are sufficient to achieve universal quantum computation [1]. A single-photon and linear-optics version of measurement-based quantum computation has also been thoroughly studied [2,3]. In both proposals, a key component to reach universality is the capability for a measurement outcome to change the gates implemented later in time, i.e., active feed-forward, a challenging experimental requirement [4]. In 2010, Aaronson and Arkhipov demonstrated that removing the feed-forward condition provides a framework, called boson sampling, that is not sufficient for universal quantum computation but that is hard to simulate on a classical computer, unless the polynomial hierarchy collapses to its third level [5]. The key idea behind their proof is the connection between the output statistics of non-interacting indistinguishable photons and the permanent [6], a quantity known to be #P-hard to compute [7,8]. As shown in Figure 1, a boson sampling device implements the interference of N single photons over an M-mode randomly selected arbitrary linear-optical interferometer and measures each output mode with a photon-counting detector. An M-mode linear-optical interferometer is built out of two-mode couplers (beamsplitters) acting on neighboring modes and single-mode phase gates. In order to generate an arbitrary linear-optical circuit, a depth proportional to M is needed [9,10].
Raúl García-Patrón: [email protected]
Since 2013 a variety of quantum optics experimental groups have implemented proof-of-principle demonstrations based on different architectures, such as reconfigurable integrated photonic circuits [11], fiber loops [12], 3D waveguides [13] or multimode fibers [14], that have the potential of being scalable and are therefore candidates for a quantum advantage demonstration. The motivation to further simplify the experimental scheme led to the proposal of Scattershot boson sampling, a sampling problem as hard to simulate as the initial boson sampling proposal. Scattershot boson sampling solves the problem of obtaining N single photons from state-of-the-art probabilistic single-photon sources by using M heralded two-mode squeezed vacuum sources, one per input mode of the boson sampling circuit [15]. It was discussed in [16] that, since scattershot boson sampling is a subclass of Gaussian boson sampling, the problem of sampling a Gaussian state in the photon-number basis is of the same complexity as the original boson sampling proposal. The counting statistics of Gaussian boson sampling are connected to the Hafnian [17], a quantity related to the number of perfect matchings in a graph, just as the permanent is related to the number of perfect matchings in a bipartite graph.
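As a small self-contained illustration (not part of the paper), the Hafnian of a symmetric matrix can be computed by summing over perfect matchings; the naive sketch below is adequate only for very small matrices:

```python
def hafnian(A):
    """Naive Hafnian: sum over perfect matchings of the index set.

    For the adjacency matrix of a graph this counts (weighted) perfect
    matchings, mirroring how the permanent counts perfect matchings of a
    bipartite graph.
    """
    n = len(A)
    if n % 2:
        return 0  # odd number of vertices: no perfect matching exists

    def rec(remaining):
        if not remaining:
            return 1
        i, rest = remaining[0], remaining[1:]
        # Pair vertex i with every other remaining vertex j.
        return sum(A[i][j] * rec([k for k in rest if k != j]) for j in rest)

    return rec(list(range(n)))

K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(hafnian(K4))  # the complete graph K4 has 3 perfect matchings
```

This brute-force recursion has super-exponential cost; it is meant only to make the combinatorial definition concrete.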
The lack of fault-tolerant error correction in quantum advantage architectures, such as boson sampling, implies that increasing the size and depth of the circuit would ultimately lead to a system that is equivalent to sampling random noise. Therefore, the existence of an opportunity window, where noise has not yet destroyed the quantum advantage but a classical algorithm, such as [18,19], can no longer simulate the system, is fundamental for a conclusive quantum advantage experiment. It is therefore of paramount importance to have a good understanding of when noise makes a quantum advantage architecture classically simulatable. Keshari et al. provided in [20] a first rigorous bound, which combined losses and dark counts of the detectors. Unfortunately, this bound is independent of the size of the system and cannot provide an answer in terms of losses independently of the detectors' dark-count rate.
Together with the indistinguishability of photons, for which an algorithm to simulate partially distinguishable photons was demonstrated recently in [21], losses are the most damaging imperfection challenging boson sampling. Despite their importance, little is known about the effect of losses, except for the fact that boson sampling remains hard in the regime of a constant number of photons lost [22], a rather limited assumption. Most boson sampling architectures, and all for which interference has been shown for more than 2 photons, are based on a planar geometry of depth proportional to the number of input systems and where the loss per coupler in the circuit is constant, leading to a law of exponential decay of the transmission with the depth of the circuit. In subsection 2.3 of this work we show that for those platforms, and for platforms with a similar exponential decay, boson sampling can be efficiently simulated classically. Therefore, for single photons and linear optics to remain competitive in the race for a quantum advantage experiment, a profound change of paradigm is needed. More precisely, we show that for those platforms either the depth of the circuit (D) is large enough (D ≥ O(log M)) that it can be simulated by thermal noise with an algorithm running in polynomial time, or the depth of the circuit is short enough (D ≤ O(log M)) that a tensor network simulation, similar in spirit to [23], runs in quasi-polynomial time. Not all optical architectures suffer from an exponential decay of the transmission; for example, free-space optics has a quadratic decay of the transmission. In paragraph 2.3.4 we extend the validity of the thermal noise simulation to this family of architectures. In addition, our result can be easily adapted to Scattershot boson sampling and to boson sampling architectures where the photon-counting detectors are replaced by Gaussian measurements [24,25], as shown in subsection 2.4.
Main result

2.1 The ideal boson sampling model
The initial boson sampling proposal concerns the interference of a multi-photon input state $|\mathbf{n}\rangle = |n_1\rangle \otimes |n_2\rangle \otimes \dots \otimes |n_N\rangle$, where $\mathbf{n}$ corresponds to a string of bits such that $|\mathbf{n}| = \sum_i n_i = N$, over an M-mode linear-optics interferometer modeled as a linear transformation of the annihilation operators
$$\hat{a}_i = \sum_j U_{ij}\,\hat{a}_j\,, \qquad (1)$$
where the unitarity of U guarantees the preservation of the total photon number. Remark that U is an $M \times M$ matrix acting on the creation operators, to which corresponds a homomorphism $\varphi(U)$ of dimension $\binom{N+M-1}{N}$ acting on the M-mode Fock space [5]. At the output we implement a measurement in the photon-number basis on each mode, where the probability of obtaining an outcome $\mathbf{z}$ reads $|\langle \mathbf{z}|\varphi(U)|\mathbf{n}\rangle|^2$. When $\mathbf{z}$ corresponds to a string of bits, the probability outcome is connected to the permanent of a submatrix of U, a crucial tool in the hardness proof of boson sampling. Interestingly, all arrangements of two-mode couplers and single-mode phase gates leading to the same global transformation U are completely equivalent, and will be experimentally indistinguishable with respect to their input/output statistics.
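To make the permanent rule concrete, here is a minimal sketch (not from the paper) for two photons on a 50:50 beamsplitter, where the vanishing permanent reproduces the Hong-Ou-Mandel suppression of coincidences:

```python
import itertools
import math

def permanent(M):
    # Naive permutation expansion; fine for the tiny matrices used here.
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

s = 1.0 / math.sqrt(2.0)
U = [[s, s],
     [s, -s]]  # a 50:50 beamsplitter unitary

# One photon in each input mode; the probability that one photon exits each
# output mode is |Perm(U)|^2 for this collision-free outcome.
p_coincidence = abs(permanent(U)) ** 2
print(p_coincidence)  # 0: indistinguishable photons bunch (Hong-Ou-Mandel)
```

For larger circuits the permanent is #P-hard, which is exactly why this brute-force expansion does not scale.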
A necessary condition in the proof of the hardness of boson sampling is the fact that U needs to be a Haar-random unitary. The proposal by Reck et al. [9] showed that any general linear-optics transformation can be achieved with a planar circuit composed of two-mode gates, using M(M-1)/2 gates distributed over 2M-3 layers and M parallel modes. A recent improvement in [10] remarkably brought this result to depth M+1, which is very close to the lower bound M obtained from a simple counting argument based on the degrees of freedom of a unitary matrix. Therefore, an architecture implementing boson sampling needs to have a depth at least equal to the number of modes, as it could in principle implement any $M \times M$ unitary U. Even if boson sampling over a planar architecture needs a depth scaling linearly with the number of modes, other non-universal circuits of shorter depth can be of interest for specific applications. Therefore, to make our result as general as possible we will consider the depth of the circuit D as an additional free parameter.
In the initial boson sampling proposal, the proof necessitates a polynomial relation between the number of photons and modes ($M = O(N^6)$). In this work we consider the generalized relation
$$N = kM^{\gamma}\,, \qquad (2)$$
where $0 < k < 1$ and $0 < \gamma < 1$. It is easy to see that $\gamma = 1/2$ corresponds to the bosonic birthday paradox ratio [26]. This ratio ensures that the input state $|\mathbf{n}\rangle$ is composed of single photons and that the probability of two bosons bunching at the output is negligible (on average over the set of Haar-random unitaries). The case $\gamma = 1/6$ is the condition in the proof of Aaronson and Arkhipov. Finally, $\gamma = 1$ corresponds to the regime where
T = z D U=VW V W U=VW a) b) c) d) µ1 µ2 µ3 µM |1i |1i |1i |1i |1i |1i |1i |1i |1imixed state σ = (1 − µ)|0 0| + µ|1 1|.
the density of photon (then given by k) remains constant while the size of the system increases, as opposed to the original boson sampling proposal where it decreases with the size of the system with a scaling M −5/6 .
Modeling losses
A lossy linear-optics circuit, as in Figure 2 a), can be mathematically modeled by a complex matrix A satisfying AA † ≤ I and transforming the annihilation operators of M input modesâ and M environment modesê aŝ
a out = Aâ in + 1 − AA †ê .(3)
A has a singular value decomposition A = VμW , where V and W are unitary matrices andμ = diag(
√ µ 1 , √ µ 2 , ..., √ µ M ) is a diagonal matrix of real values satisfying µ i ∈ [0, 1].
The singular value decomposition has a very natural interpretation, see Figure 2 b), which is that the real interferometer with losses characterized by A has an equivalent circuit composed by a lossless linear-optics transformation V , followed by M parallel set of pure-loss channels of transmission µ i each, and a final lossless linear-optics transformation W . A pure-loss channel is equivalent to a coupling interaction of transmission µ i between our physical mode i and an environmental mode, see [27] for details. The matrix A can be efficiently inferred using a simple tomographic technique that only needs two-mode interferences of classical laser light [28].
In practice, a linear-optics circuit is composed of a network of two-mode couplers and singlemode phase gates, where each layer of gates of an M -mode linear-optics circuit is given by a direct product of local 2 × 2 linear transformation and complex scalars, resulting in a M × M complex banded matrix A i of width 2. The total linear-optics circuit transformation results from the multiplication of D matrices A i , i.e.,
A = A 1 A 2 ...A D .
Similarly, as discussed for the ideal linear-optics circuit, any arrangement of two-mode couplers and single-mode phase gates leading to the same global transformation A will be experimentally indistinguishable with respect to their input/output statistic.
All currently existing architecture proposals to implement a boson sampling experiment, integrated photonic circuits, fiber-optical links and 3D-waveguides, suffer from exponential decay of the transmission with the length of the circuit. An intuitive explanation is that every photon has a constant probability of being lost per unit of length of the circuit or per layer of coupling gates. For a planar graph circuit composed of D layers of gates, where every gate has a transmission coefficient τ , we obtain that all the µ i are equal and the transmission follows an exponential decay rule µ = τ D . Becauseμ = √ τ D I commutes with any matrix, we can simplify the virtual representation of A to a layer of M identical pure-loss channels of transmission µ followed by a virtual lossless linear-optics transformation U = V W (see Figure 2 c)).
To make our proof more accessible, we first restrict to the scenario of uniform loss and later generalize the result to arbitrary circuits in Section 6, where we show that any result that holds for uniform µ can be generalized to µ max = max i µ i .
Simulating boson sampling over circuits with losses
The action of a pure-loss channel of transmission µ into a single-photon state |0 is equivalent to an erasure channel of probability µ, resulting into a mixed state σ = (1−µ)|0 0|+µ|1 1|, see Figure 2 d). Therefore, boson sampling over a realistic interferometer with uniform losses is equivalent to an ideal boson sampler over its virtual circuit U = V W where we replace each of the N singlephotons of the input state, located in the first N modes, by the state σ, leading to a global input state ρ in = σ N ⊗ |0 0| M −N . This state can be equivalently written
ρ in = (1 − µ) N N n=0 µ 1 − µ n n∈Φ (1) n|N |n n|, (4) where Φ (1)
n|N denotes the set of distributions of n photons without collisions, i.e., n being a string of bits over the first N modes inputs and the remaining M − N input modes are in a vacuum state. This last expression shares some similarities with the state ρ T = σ N th ⊗ |0 0| M −N , composed of N thermal states in the first N modes and the remaining M − N input modes in a vacuum state. A thermal state is given by the Bose-Einstein distribution
σ th = (1 − λ) ∞ x=0 λ x |x x|,(5)
where λ = n 1+ n , with n being the average number of photons. The state ρ T can be equivalently written
ρ_T = (1 − λ)^N Σ_{n=0}^{∞} λ^n Σ_{n∈Φ_{n|N}} |n⟩⟨n|,   (6)
where Φ_{n|N} is the set of all possible distributions of n photons over the first N input ports, where collisions are now allowed. This similarity can be made formal, as we show in the following section.
Simulation of lossy boson sampling with thermal noise
In section 3 we will prove the following lemma.

Lemma 1. For a boson sampling circuit of uniform losses µ satisfying the condition

µ ≤ (1/2) √(ε/N),   (7)

the trace distance between the states ρ_in and ρ_T, when setting λ = µ/(1 − µ), satisfies D(ρ_in, ρ_T) ≤ ε.
An intuitive interpretation of this result is the following. Suppose that you implement a boson sampling experiment where you inject N single photons into the circuit. After the lossy circuit you detect only k photons, each at a different detector, i.e., satisfying the bosonic birthday paradox condition. This provides an estimate of the losses of the order µ ≈ k/N. We do not know from which input each of the k surviving photons came, but the device being a boson sampler, we know each one comes from a different input mode. On the other hand, for k detected photons resulting from a thermal state input ρ_T, all possible distributions of the k photons among the N input modes are equiprobable, which allows all types of collisions. Imposing the bosonic birthday paradox condition k ∼ √N, the probability of collisions becomes negligible and both scenarios provide approximately the same answer, which justifies the scaling µ ∼ 1/√N.

Because quantum operations and measurements can only decrease the trace distance, any outcome statistics p_in and p_T, resulting from any quantum operation followed by any measurement, will also satisfy D(p_in, p_T) ≤ ε. Therefore, any classical algorithm efficiently simulating a boson sampling experiment with input state ρ_T will be ε-close to one with input ρ_in. The following lemma on the efficient classical simulation of boson sampling with thermal state inputs, which was implicit in [29], is proven in section 4.

Lemma 2. There exists a polynomial-time classical algorithm that simulates the evolution of a thermal state ρ_T over an ideal or lossy interferometer followed by measurement in the photon number basis, where the output distribution is ε-close to the ideal one and the computational cost scales as O(M µ² N log²(N/ε) / ε).
As detailed in section 4, the algorithm combines the three following well-known facts of quantum optics. Firstly, any thermal state ρ_T has a Glauber-Sudarshan P-representation as a mixture of N-mode tensor products of coherent states |α⟩ = ⊗_{i=1}^{M} |α_i⟩ according to a Gaussian distribution P(α), which can be sampled efficiently. Secondly, a linear-optical circuit characterized by a unitary matrix U transforms a tensor product of coherent states |α⟩ into another tensor product of coherent states |β⟩ = U|α⟩. Thirdly, coherent states follow a Poisson photon number distribution, which can be sampled efficiently.
The combination of both lemmas will allow us to simulate a lossy boson sampling architecture composed of D layers of gates with exponentially decaying transmission µ = τ^D, as stated in the following theorem.

Theorem 1. The statistics of N photons interfering over an M-mode linear-optics planar circuit of depth D, with uniform losses µ = τ^D and a relation between photons and modes given by eq. (2), can be approximated with trace distance error ε in polynomial time when D ≥ D*, where

D* = (γ log M + log k + 3 log 2 + log(1/ε)) / (2 log(1/τ)).   (8)
Proof. Any classical algorithm that generates a distribution p̃_T approximating the sampling from an ideal thermal state distribution p_T, where λ = µ/(1 − µ), satisfies the bound

D(p_in, p̃_T) ≤ D(p_in, p_T) + D(p_T, p̃_T) ≤ D(ρ_in, ρ_T) + D(p_T, p̃_T),   (9)

where we used the triangle inequality in the first step and, in the second, the fact that a measurement over a quantum state can only decrease the trace distance. We can now apply Lemma 1 to set the bound D(ρ_in, ρ_T) < ε/2 and Lemma 2 to bound D(p_T, p̃_T) < ε/2. It is then easy to see that the classical simulability condition D ≥ D* can be derived starting from the condition µ ≤ (1/2)√(ε/(2N)), adapted from Lemma 1, replacing the losses by µ = τ^D, taking the logarithm and replacing N by eq. (2). Therefore, when the condition D ≥ D* is satisfied, by properly selecting a thermal state ρ_T satisfying λ = µ/(1 − µ) and running the algorithm defined in section 4 and leading to Lemma 2, p̃_T provides an ε-approximation to our initial sampling distribution p_in in polynomial time.
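Eq. (8) is straightforward to evaluate numerically. The sketch below is ours and assumes that the photon-mode relation of eq. (2) has the form N = k M^γ, which is the reading used in the proof; it also checks the defining property τ^{D*} = √(ε/(8N)):

```python
import math

def critical_depth(tau, M, k, gamma, eps):
    """Critical depth D* of eq. (8): for D >= D*, the uniformly lossy
    circuit (transmission mu = tau**D, N = k * M**gamma photons) is
    epsilon-close to thermal-noise sampling."""
    return (gamma * math.log(M) + math.log(k) + 3 * math.log(2)
            + math.log(1 / eps)) / (2 * math.log(1 / tau))

# Example: 1% loss per layer, M = 10^4 modes, N = sqrt(M) photons.
D_star = critical_depth(tau=0.99, M=10_000, k=1.0, gamma=0.5, eps=0.01)
# By construction, tau**D_star equals sqrt(eps / (8 N)) with N = k * M**gamma.
```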
Simulation of shallow boson sampling circuits
In section 5 we show how one can simulate an ideal boson sampling circuit using tensor network techniques, which can be summarized in the following lemma.
Lemma 3. An ideal boson sampling circuit with N interfering photons over an M-mode linear interferometer of depth D can be simulated exactly using tensor networks with a running time O(M² (N + 1)^{8D}).
Tensor networks are a way of encoding quantum states and operating with them that has proven very successful in many-body physics [30,31,32,33,34]. Our tensor network proof is a quantum optics version, adapted from [23], of the well-known result that logarithmic-depth planar circuits of M qudits can be simulated in polynomial time [35]. It is easy to see that when the depth of the circuit scales logarithmically with the number of modes M and N satisfies equation (2), our algorithm runs in quasipolynomial time. This is due to the unbounded nature of the Hilbert space of optical modes: in order to have an exact simulation we need to fix the local dimension of each mode to be as large as the total number of photons in the circuit, which results in a quasipolynomial scaling.
Simulation of exponential decaying transmission architectures
It is now easy to see that, combining theorem 1 and lemma 3, we can classically simulate any boson sampling architecture that has an exponentially decaying transmission, for any depth of the circuit, as stated by the following theorem.

Theorem 2. The statistics of N photons interfering over an M-mode linear-optics planar circuit of depth D, with uniform losses µ = τ^D and a relation between photons and modes given by eq. (2), can be classically simulated in polynomial time when D ≥ D* and in quasipolynomial time when D ≤ D*, where

D* = (γ log M + log k + 3 log 2 + log(1/ε)) / (2 log(1/τ)).   (10)
Proof. The classical simulability under the condition D ≥ D* is a direct corollary of theorem 1. For a uniform-losses boson sampling circuit satisfying D ≤ D*, we simulate the circuit with an equivalent virtual circuit composed of an ideal boson sampler U of the same depth D preceded by M lossy channels of transmission µ = τ^D, as explained in subsection 2.1. This is equivalent to an ideal boson sampling circuit U on the input state ρ_in = σ^⊗N ⊗ (|0⟩⟨0|)^⊗(M−N). We model the initial state ρ_in by starting with N single photons and transforming each one into the vacuum state with probability 1 − µ. We then proceed with the tensor network simulation of U using lemma 3, where we just need to change the input tensor according to the random sequence of (surviving) input single photons. This leads to a quasipolynomial-time algorithm with a running time
O( M^{(4γ² / log(1/τ)) log M} ).
Algebraic decay of transmission
Not all optical architectures suffer from an exponential decay of the transmission; for example, free-space optics suffers from a decay of transmission scaling as 1/D². Suppose that a given architecture follows the algebraic decay of losses
µ = (1 + D/d)^{−β},   (11)
where d is a length scale that, together with the parameter β, models the algebraic decrease of transmission. Then theorem 1 can be adapted to the following weaker form.
Corollary 1. The statistics of N photons interfering over an M-mode linear-optics planar circuit of depth D, with algebraic losses given by eq. (11) and a relation between photons and modes given by eq. (2), can be approximated with trace distance error ε in polynomial time when D ≥ D*, where

D* = d [ (8k/ε)^{1/(2β)} M^{γ/(2β)} − 1 ].   (12)
It is not difficult to check that when the condition γ/β < 2 is satisfied, there always exists an M* such that the condition D* ≤ M − 1 holds for all M ≥ M*. This shows that any boson sampling experiment, which needs a depth D = M, on an algebraically decaying architecture satisfying γ/β < 2 will be classically simulable by a thermal-noise-sampling approximation for all M ≥ M*; this condition is met by any free-space boson sampling architecture with its depth proportional to the number of modes (D = M).
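A quick numerical check of eq. (12) (our sketch; again assuming N = k M^γ in eq. (2)). The defining property is that the transmission at depth D*, namely µ = (1 + D*/d)^{−β}, equals √(ε/(8N)):

```python
import math

def critical_depth_algebraic(d, beta, M, k, gamma, eps):
    """Critical depth D* of eq. (12) for the algebraic transmission
    law mu = (1 + D/d)**(-beta) of eq. (11)."""
    return d * ((8 * k / eps) ** (1 / (2 * beta))
                * M ** (gamma / (2 * beta)) - 1)

# Free-space-like decay (beta = 2) with gamma = 0.5 satisfies
# gamma / beta < 2, so D* grows slower than M for large M.
D_star = critical_depth_algebraic(d=1.0, beta=2, M=10_000, k=1.0,
                                  gamma=0.5, eps=0.01)
```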
Generalization to alternative boson sampling proposals
Scattershot boson sampling was presented in [15] to circumvent the main problem of current state-of-the-art non-deterministic single-photon sources: the probability of firing N photons at the same time decays exponentially. The protocol starts by generating M two-mode squeezed vacuum states,

|ψ⟩ = √(1 − λ) Σ_{n=0}^{∞} λ^{n/2} |n⟩|n⟩,   (13)
where λ is the same parameter as in the definition of a thermal state, as a two-mode squeezed vacuum state is its purification. Then we send half of each two-mode squeezed vacuum state through a boson sampling circuit, while the remaining modes are used to herald the presence of photons. By properly tuning the squeezing parameter, one can guarantee that most of the heralded sequences are collision-free, i.e., satisfy the birthday paradox condition. The price to pay is that the modes where the photons enter the circuit are completely randomized, which is not a problem for boson sampling as the circuit is in any case Haar random. Because, right after the heralding process, the setup is strictly equivalent to a traditional boson sampling device, up to the randomization of the modes where the single photons enter the interferometer, both of our simulation algorithms (thermal state sampling and tensor network simulation) can be trivially adapted. We only need to randomly generate valid heralding sequences following the distribution given by eq. (13) and, depending on the obtained heralded value, run a boson sampling simulation as detailed in subsection 2.3. The only difference is that the input photons now enter the interferometer on a random selection of N input modes.

More recently, a variant of boson sampling where photon detectors are replaced by a Gaussian measurement has been proposed [24,25]. Because quantum operations and measurements can only decrease the trace distance, the outcome statistics p_in and p_T will remain ε-close after any quantum measurement. The evolved thermal state being Gaussian, we can extend our result to this scenario by using well-known techniques for simulating Gaussian measurements on Gaussian states [36].
Proof of Lemma 1
The input state ρ_in = σ^⊗N ⊗ (|0⟩⟨0|)^⊗(M−N), resulting from sending the N initial single photons through N pure-loss channels of transmission µ, reads

ρ_in = (1 − µ)^N Σ_{n=0}^{N} (µ/(1 − µ))^n Σ_{n∈Φ^{(1)}_{n|N}} |n⟩⟨n|,   (14)

where Φ^{(1)}_{n|N} denotes the set of collision-free distributions of n photons over the first N input modes. As mentioned in subsection 2.3, this expression is strikingly similar to

ρ_T = (1 − λ)^N Σ_{n=0}^{∞} λ^n Σ_{n∈Φ_{n|N}} |n⟩⟨n|,   (15)
where λ = ⟨n⟩/(1 + ⟨n⟩), with ⟨n⟩ the average number of photons in each of the N thermal states, and Φ_{n|N} is the set of all possible distributions of n photons over the first N input ports, where collisions are now allowed. When µ < 1/2, which always holds when equation (7) is satisfied, we can fix λ = µ/(1 − µ) in ρ_in to obtain

ρ_in = (1/(1 + λ)^N) Σ_{n=0}^{N} λ^n Σ_{n∈Φ^{(1)}_{n|N}} |n⟩⟨n|.   (16)
Let us estimate the trace distance between the states in Eqs. (16) and (15):

D(ρ_T, ρ_in) = (1/2) Tr |ρ_T − ρ_in| = (1/2) Tr |ρ_T^{(n≤N)} − ρ_in| + (1/2) Tr ρ_T^{(n>N)},   (17)

where we have decomposed the thermal state into two parts according to the summation terms (i.e., with ρ_T^{(n≤N)} being the part obtained from Eq. (15) by dropping the terms with n > N from the sum) and used that ρ_T^{(n>N)} ρ_T^{(n≤N)} = ρ_T^{(n≤N)} ρ_T^{(n>N)} = ρ_T^{(n>N)} ρ_in = ρ_in ρ_T^{(n>N)} = 0, as they correspond to products of density operators acting on orthogonal subspaces. We also define
q(λ, N) = (1 − λ)^N Σ_{n=0}^{N} λ^n \binom{N+n−1}{n},   (18)
which provides the simplifications Tr ρ_T^{(n≤N)} = q(λ, N) and Tr ρ_T^{(n>N)} = 1 − q(λ, N). It is easy to prove that

D(ρ_T, ρ_in) = (1/2) [ 1/(1 + λ)^N − (1 − λ)^N ] Σ_{n=0}^{N} λ^n \binom{N}{n}
+ ((1 − λ)^N / 2) Σ_{n=0}^{N} λ^n [ \binom{N+n−1}{n} − \binom{N}{n} ]
= (1 + q(λ, N))/2 − (1 − λ²)^N,   (19)

where the first line corresponds to the collision-free terms and the second to the collision ones, for which only ρ_T contributes. One can upper-bound the two terms on the r.h.s. of Eq. (19) as
D(ρ_T, ρ_in) ≤ 1 − (1 − λ²)^N ≤ Nλ² ≤ 4Nµ²,   (20)
where we used q(λ, N) ≤ 1 in the first inequality, the easy-to-check relation 1 − Nx ≤ (1 − x)^N in the second inequality, and λ = µ/(1 − µ) ≤ 2µ when µ ≤ 1/2 in the third. Remark that µ ≤ 1/2 is a corollary of equation (7). Therefore, the condition µ ≤ (1/2)√(ε/N) implies D(ρ_T, ρ_in) ≤ ε, as stated in Lemma 1.
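The closed form of eqs. (18)-(20) can be checked numerically. The sketch below (function name ours) evaluates the exact trace distance of eq. (19) together with the final bound 4Nµ² of eq. (20):

```python
import math

def trace_distance_bound(mu, N):
    """Exact trace distance of eq. (19) between rho_in and rho_T for
    lambda = mu / (1 - mu), and the upper bound 4*N*mu**2 of eq. (20)."""
    lam = mu / (1 - mu)
    q = (1 - lam) ** N * sum(lam ** n * math.comb(N + n - 1, n)
                             for n in range(N + 1))
    exact = (1 + q) / 2 - (1 - lam ** 2) ** N
    return exact, 4 * N * mu ** 2
```

For any µ ≤ 1/2 the exact value is sandwiched between 0 and the bound, as Lemma 1 requires.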
Proof of Lemma 2
An ideal algorithm for boson sampling of thermal states was implicit in [29] and uses three properties. Firstly, any thermal state ρ_T has a Glauber-Sudarshan P-representation as a mixture of N-mode coherent states |α⟩ ≡ |α_1, ..., α_N⟩ = ⊗_{i=1}^{N} |α_i⟩ according to a Gaussian distribution
ρ_T = ∫_{C^N} p(α) |α⟩⟨α| ⊗ (|0⟩⟨0|)^⊗(M−N),   (21)
where

p(α) = Π_{i=1}^{N} (d²α_i / (π⟨n⟩)) exp(−|α_i|² / ⟨n⟩).   (22)
Secondly, a linear-optical circuit characterized by a unitary matrix U transforms a tensor product of coherent states ⊗_{i=1}^{M} |α_i⟩, where α_i = 0 for N < i ≤ M, into another tensor product of coherent states ⊗_{i=1}^{M} |β_i⟩ with amplitudes

β_i = Σ_{j=1}^{M} U_{ji} α_j.   (23)
In other words, coherent states remain in a tensor product form while evolved through a linear optical circuit. Thirdly, coherent states follow a Poisson photon number distribution
P(m, β_i) = e^{−|β_i|²} |β_i|^{2m} / m!.   (24)
Therefore, a concatenation of three stochastic processes simulates the boson sampling of thermal states. The first process generates a complex vector α following the probability distribution p(α). The second one applies the map U to the vector α, generating the output β = Uα. The third process P generates an M-dimensional vector m from β by sampling from the M-dimensional Poisson distribution P(m, β). This algorithm is an ideal one, as it assumes access to oracles that sample from Gaussian and Poisson distributions.
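A minimal sketch (ours) of this ideal three-step sampler, with numpy's Gaussian and Poisson generators standing in for the two oracles; we adopt the convention β = Uα for the mode transformation, rather than the transposed indexing of eq. (23):

```python
import numpy as np

def thermal_boson_sample(U, N, nbar, seed=None):
    """One sample of ideal thermal-state boson sampling:
    (i) draw N coherent amplitudes alpha_i from the Gaussian
        P-function of a thermal state with mean photon number nbar,
    (ii) propagate through the interferometer, beta = U @ alpha
        (the remaining M - N inputs stay in the vacuum, alpha_i = 0),
    (iii) draw photon counts from the Poisson law of eq. (24)."""
    rng = np.random.default_rng(seed)
    M = U.shape[0]
    alpha = np.zeros(M, dtype=complex)
    alpha[:N] = (rng.normal(scale=np.sqrt(nbar / 2), size=N)
                 + 1j * rng.normal(scale=np.sqrt(nbar / 2), size=N))
    beta = U @ alpha
    return rng.poisson(np.abs(beta) ** 2)
```

With U the identity, the last M − N modes always report zero photons, since they carry the vacuum.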
In order to build a realistic algorithm, we define a new three-step process, where the sampling oracles are replaced by efficient approximation algorithms. The first step consists of sampling from the discrete-variable distribution p(α^(s)), similarly as in [37]. It can be understood as a map N from the continuous α to its discretized version α^(s). The second step Ũ implements an approximation of matrix multiplication, mapping α^(s) to β^(s). Finally, the third step P̃ generates m^(s), an approximate sample from an M-dimensional Poisson distribution (24) of parameter β^(s), using a scalable number of Bernoulli trials [38]. The three subroutines are described in more detail below.
Error analysis
We want to show that the trace distance between the ideal and approximate algorithms described above satisfies D(p(m), p(m^(s))) ≤ ε, with the algorithm remaining polynomial in time.
Using the triangle inequality for the trace distance and the fact that a trace-preserving (stochastic) map, such as U, P, Ũ or P̃, can only decrease the trace distance, we obtain

D(p(m), p(m^(s))) = D( P̃ ∘ Ũ ∘ N (p(α)), P ∘ U (p(α)) )
≤ D( P̃(p(β^(s))), P(p(β^(s))) ) + D( Ũ(p(α^(s))), U(p(α^(s))) ) + D( p(α^(s)), p(α) ).   (25)
In what follows we show how one can build an efficient algorithm for each step such that each of the three terms on the r.h.s. of (25) is bounded by ε/3, leading to D(p(m), p(m^(s))) ≤ ε. It is crucial for the efficiency of the algorithm that Ũ and P̃ have finite support.
Discretization of the Gaussian distribution
To simplify the algorithm presented below, we convert the Glauber P-function into the standard complex normal distribution by setting z = α/√⟨n⟩ = √((1 − λ)/λ) α, which transforms p(α) into

p(z) = (d²z / π^N) e^{−z†z}.   (26)
We employ a cut-off of the Gaussian distribution in Eq. (26) to the product Ω_R of finite circular regions |z_k| ≤ R, for k = 1, ..., N. Each circular region is discretized by imposing a circular grid such that there is a central circle of radius 0 ≤ |z| ≤ r_1 (which is not subdivided) and each annular region r_l ≤ |z| ≤ r_{l+1}, for l = 1, ..., L (with some L), is subdivided into phase sectors of polar-angle size δφ. We choose equal δφ_l for each value of r_l by setting δφ_l = (δr_l)/r_l ≡ (r_{l+1} − r_l)/r_l, which allows setting δr_l, l = 1, ..., L, to be the same so as to have equal areas δA of the grid elements (which also sets r_1² = δA/π). With these specifications, setting the number of elementary elements of the grid to N_g, the other parameters become
δA = πR²/N_g,   δr = √(π/N_g) R,   r_1 = R/√N_g.   (27)
We will also need the largest distance ∆z = max(|∆z_l|) between two points within an element of the grid, which is bounded as

∆z = √( (r_l + δr)² + r_l² − 2(r_l + δr) r_l cos(δφ_l) )
≤ r_l δφ_l √( 1 + δr/(r_l δφ_l) + (δr/(r_l δφ_l))² )
= √3 δr = √(3π/N_g) R,   (28)

where we have used that 1 − cos(x) ≤ x²/2. Note the relation δA = ∆z²/3. Let us estimate the trace distance between the probability distribution p(α) and the R-cut and discretized Gaussian described above, p^(s)(α).
To do so, we denote by p^(R)(α) the intermediate R-cut continuous Gaussian distribution before discretization. Introducing the indicator function I_s(z) = Π_{k=1}^{N} I_{s_k}(z_k) of s = (s_1, ..., s_N), where s_k is the s-th element of the grid of dimension k and, denoting an inner point by z^(s), we can write the trace distance between the two probability distributions as

D(p^(s)(α), p(α)) ≤ D(p^(R)(α), p(α)) + D(p^(s)(α), p^(R)(α))
= (1/2) ∫_{Ω̄_R} (d²z/π^N) e^{−z†z} + (1/2) ∫_{Ω_R} (d²z/π^N) Σ_s I_s(z) | e^{−z†z} − e^{−(z^(s))† z^(s)} |,   (29)
where Ω̄_R is the region complementary to Ω_R. The first integral on the r.h.s. of Eq. (29) can easily be shown to satisfy the upper bound

D(p^(R)(α), p(α)) ≤ (1/2) [ 1 − (1 − e^{−R²})^N ].   (30)
The last integral sum on the r.h.s. of Eq. (29) can be estimated using the multidimensional mean-value theorem in the variable Z = (z_1, ..., z_N):

| e^{−z†z} − e^{−(z^(s))† z^(s)} | ≤ || ∇_Z e^{−(ẑ^(s))† ẑ^(s)} ||_2 || z − z^(s) ||_2
≤ 2 √( (ẑ^(s))† ẑ^(s) ) e^{−(ẑ^(s))† ẑ^(s)} √N ∆z,   (31)

where ẑ^(s) is some point on the straight line between z and z^(s), and we used the upper bound for the 2-norm of a (complex) row vector Z = (z_1, ..., z_N): ||Z||_2 = √(Σ_l |z_l|²) ≤ √N max(|z_l|).
With these observations, by using a Cauchy-Schwarz type inequality in the summation, the last integral sum in Eq. (29) is upper-bounded by

√N ∆z ∫_{Ω_R} (d²z/π^N) Σ_s I_s(z) √( (ẑ^(s))† ẑ^(s) ) e^{−(ẑ^(s))† ẑ^(s)}
≤ √N ∆z [ ∫_{Ω_R} (d²z/π^N) Σ_s I_s(z) (ẑ^(s))† ẑ^(s) e^{−(ẑ^(s))† ẑ^(s)} ]^{1/2} × [ ∫_{Ω_R} (d²z/π^N) Σ_s I_s(z) e^{−(ẑ^(s))† ẑ^(s)} ]^{1/2}
≤ N ∆z,   (32)
where the last two integral sums on the r.h.s. of Eq. (32) are bounded from above by the respective N-dimensional Gaussian integrals over the N-dimensional circular region composed of r_1 ≤ |z_k| ≤ R + δr for each z_k, k = 1, ..., N. The latter integrals are less than N and 1, respectively.
Combining the results of (29)-(32) we obtain the bound

D(p^(s)(α), p(α)) ≤ (N/2) e^{−R²} + N ∆z.   (33)

Therefore, choosing R = √(ln(6N/ε)) and ∆z = ε/(6N) leads to the desired bound D(p^(s)(α), p(α)) ≤ ε/3.
The algorithm
Generate N random points z_k according to the standard one-dimensional complex Gaussian distribution G(z) = (1/π) e^{−|z|²}, truncated to |z| ≤ R (with the necessary renormalization) and discretized by the circular grid of Eqs. (27) and (28), with R and ∆z as defined above. From Eq. (28) we find that one needs for this step at most

N_g = 3πR²/∆z² ≤ 108π (N²/ε²) ln(6N/ε)   (34)
grid elements. One generates z_k (i.e., samples from the grid elements) by employing an adaptation of the inverse transform sampling method to the R-cut discretized Gaussian distribution in the z-plane. By setting y = (1 − e^{−r²})/(1 − e^{−R²}) we can write the probability to have r_l ≤ |z_k| ≤ r_{l+1} as y_{l+1} − y_l = (e^{−r_l²} − e^{−r_{l+1}²})/(1 − e^{−R²}). One selects an interval [y_l, y_{l+1}] (i.e., circular region l in the z-plane) by sampling from the uniform distribution on 0 ≤ y ≤ 1. Since the circular regions r_l ≤ |z| ≤ r_{l+1} are subdivided into equally probable phase sectors of size δφ (the central circle is not subdivided), the next step is to perform a uniformly random selection of one of the phase sectors. By these two steps one selects, for k = 1, ..., N, one grid element according to the R-cut discretized Gaussian distribution. The scheme necessitates O(N) operations and the number of bits N_b required for the necessary accuracy of the uniform distribution in y scales as
N_b ∼ N_g R² ∼ 10³ N²/ε².   (35)
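The radial inverse-transform step can be sketched as follows (our illustrative simplification: points are snapped to annulus midpoints and each annulus r_l gets roughly 2πr_l/δr equal sectors, rather than reproducing the exact bookkeeping of eqs. (27)-(28)):

```python
import math
import random

def sample_rcut_gaussian(R, L, rng=random):
    """Sample one point z from the R-cut standard complex Gaussian
    (1/pi) exp(-|z|^2), discretized on L equal-width annuli."""
    delta_r = R / L
    # Radial inverse transform: y uniform on [0, 1),
    # r = sqrt(-ln(1 - y * (1 - exp(-R^2)))) lies in [0, R).
    y = rng.random()
    r = math.sqrt(-math.log(1 - y * (1 - math.exp(-R ** 2))))
    l = min(int(r / delta_r), L - 1)           # annulus index
    r_mid = (l + 0.5) * delta_r                # snap to the midpoint
    # Uniformly random phase sector (one sector for the central disc).
    n_phi = max(1, round(2 * math.pi * l))
    phi = (rng.randrange(n_phi) + 0.5) * 2 * math.pi / n_phi
    return complex(r_mid * math.cos(phi), r_mid * math.sin(phi))
```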
Matrix multiplication
The transformation β^(s) = U α^(s) can be approximated using standard numerical linear algebra within error

D( Ũ(p(α^(s))), U(p(α^(s))) ) ≤ ε/3   (36)

in O(M N) operations [41].
Approximate Poisson sampling with Bernoulli trials
Starting from the M-dimensional vector β^(s), we output m^(s) by approximating, for each β_i^(s), the Poisson sampling of parameter x = |β_i^(s)|² with a sum of n Bernoulli trials. This is justified by the bound

(1/2) Σ_{k=0}^{∞} | \binom{n}{k} p_B^k (1 − p_B)^{n−k} − P(k, x) | ≤ (1 − e^{−x}) x/n ≤ x²/n   (37)

between the probability distribution of the sum of n independent Bernoulli trials, S_n = ξ_1 + ... + ξ_n, with p_B(ξ = 1) = x/n, p_B(ξ = 0) = 1 − x/n, and the Poisson distribution P(m, x). This implies that the error of our M-dimensional output m^(s) is bounded as

D( P(β^(s)), P̃(β^(s)) ) ≤ M max|β|⁴ / n.   (38)

Therefore, the number of necessary Bernoulli trials n for the simulation of the Poisson distribution to an error ε/3 reads
n = 3M max|β|⁴ / ε ≤ (3M/ε) (λ/(1 − λ))² R⁴ ≤ (12M N µ²/ε) ln²(6N/ε),   (39)
where we used that β = U α = √(λ/(1 − λ)) U z, |U| ≤ 1, and that for µ ≤ 1/2 we have λ/(1 − λ) = µ/(1 − 2µ) ≤ 2µ.

Simulating boson sampling with tensor networks

A general state of N photons in M modes can be written as

|ψ⟩ = Σ_{n_1,...,n_M} C_{n_1,n_2,...,n_M} |n_1, ..., n_M⟩.   (40)

In full generality, simulating the evolution of this state will be hard, as the Hilbert space, of dimension \binom{N+M−1}{N}, grows exponentially if both N and M increase proportionally to each other. The idea behind tensor networks is to represent the element C_{n_1,n_2,...,n_M} as a network of M tensors with virtual degrees of freedom that contract with each other, leaving M free parameters corresponding to the physical indexes n_i. This representation provides a very intuitive picture of quantum states and allows for a very efficient encoding and manipulation when the states have a high degree of locality.
Matrix product states
In our case we are interested in the evolution of a particular example of tensor network called matrix product states,
|ψ⟩ = Σ_{n_1=0}^{d_1} Σ_{n_2=0}^{d_2} ... Σ_{n_M=0}^{d_M} B[1]_{n_1} B[2]_{n_2} ... B[M]_{n_M} |n_1, ..., n_M⟩,   (41)
where B[1]_{n_1} is a row vector of dimension χ_1, B[M]_{n_M} is a column vector of dimension χ_M, and B[i]_{n_i} for 1 < i < M is a matrix of dimension χ_i × χ_{i+1}. The physical indexes n_i take values in {0, 1, ..., d}. As shown in Figure 3 (c), one can associate to each matrix product state a one-dimensional graph where each vertex is associated to a three-index tensor B[i]_{n_i,α,β} (Figure 3 (a)) and the edges determine the contraction rule of the tensor indexes (Figure 3 (b)).
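Contracting the chain of eq. (41) for one occupation pattern yields the amplitude ⟨n_1, ..., n_M|ψ⟩ at cost O(M χ²). A minimal sketch (ours), storing the boundary tensors with dummy bonds of dimension 1:

```python
import numpy as np

def mps_amplitude(tensors, occupation):
    """Amplitude <n1,...,nM|psi> of the matrix product state of
    eq. (41); tensors[i] has shape (chi_i, d_i + 1, chi_{i+1}),
    with chi_1 = chi_{M+1} = 1."""
    v = np.ones((1,))
    for B, n in zip(tensors, occupation):
        v = v @ B[:, n, :]      # absorb site i at fixed n_i
    return v[0]

# Example: the bond-dimension-2 state (|10> + |01>) / sqrt(2).
B1 = np.zeros((1, 2, 2)); B1[0, 0, 0] = 1.0; B1[0, 1, 1] = 1.0
B2 = np.zeros((2, 2, 1)); B2[1, 0, 0] = 2 ** -0.5; B2[0, 1, 0] = 2 ** -0.5
```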
Canonical form
It is well known that any bipartite quantum state can be rewritten as

|ψ⟩ = Σ_{i=0}^{d_1} Σ_{j=0}^{d_2} c_{ij} |ij⟩ = Σ_{α=0}^{min(d_1,d_2)} λ_α |φ_α⟩|ψ_α⟩,   (42)

where the Schmidt coefficients λ_α result from the singular value decomposition c_{ij} = Σ_α U_{i,α} λ_α V_{j,α}. Every matrix product state can also be transformed into a canonical form

|ψ⟩ = Σ_{n_1,n_2,...,n_M} Γ[1]_{n_1} λ[1] Γ[2]_{n_2} λ[2] ... Γ[M]_{n_M} |n_1, n_2, ..., n_M⟩   (43)

by iteratively applying the singular value decomposition [43,44], with its graph representation shown in Figure 3
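The iterative SVD behind eq. (43) can be sketched as follows (ours; for brevity the Schmidt coefficients λ[i] are absorbed into the tensor on their right instead of being stored separately, which yields the B-form of eq. (41) rather than the Γλ canonical form):

```python
import numpy as np

def to_mps(psi, dims):
    """Decompose a state vector over modes of local dimensions `dims`
    into matrix product form by sweeping SVDs from left to right."""
    tensors, chi = [], 1
    rest = np.asarray(psi, dtype=complex)
    for d in dims[:-1]:
        rest = rest.reshape(chi * d, -1)
        U, s, Vh = np.linalg.svd(rest, full_matrices=False)
        tensors.append(U.reshape(chi, d, -1))   # left-canonical tensor
        rest = s[:, None] * Vh                  # push lambdas rightward
        chi = U.shape[1]
    tensors.append(rest.reshape(chi, dims[-1], 1))
    return tensors
```

Recontracting the returned tensors reproduces every amplitude of the input vector.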
Simulating ideal linear-optics circuits
A linear optics circuit is composed of one-mode phase gates and two-mode couplers implementing an interaction between two adjacent modes. In what follows we first explain how to update a matrix product state that undergoes the evolution of linear-optics gates, and later discuss how to sample from the final output state.
One mode gates (phase rotation)
As shown in Figure 4, a single-mode gate acting on mode i is modeled by a matrix G[i]_{n'_i,n_i}, and the site tensor is updated as

Γ̃[i]_{α,β,n'_i} = Σ_{n_i} G[i]_{(n'_i),(n_i)} Γ[i]_{α,β,n_i}.   (44)
A phase rotation θ has a matrix G[i] that is diagonal, with coefficients G[i]_{n_i,n_i} = exp(iθ n_i). Therefore, the update of a single local gate scales as O((d + 1)χ²), as G is diagonal. Notice that applying a single-mode gate won't change the Schmidt coefficients of the matrix product state, as it acts only on the physical indexes of one vertex of the graph.

Two-mode couplers

A two-mode coupler B[k,k+1] acting on modes k and k+1 is modeled by a four-leg tensor, i.e., a matrix product operator, with physical indexes n_k and n_{k+1} for the input and n'_k and n'_{k+1} for the output,

B[k,k+1] = Σ_{n'_k,n'_{k+1},n_k,n_{k+1}} C^{n'_k,n'_{k+1}}_{n_k,n_{k+1}} |n'_k, n'_{k+1}⟩⟨n_k, n_{k+1}|,   (45)

where the coefficients C^{n'_k,n'_{k+1}}_{n_k,n_{k+1}} are the well-known input-output amplitudes of a beamsplitter [45,46] (see also equation (3.9) in [5]).
In [23] an algorithm was constructed based on directly applying the unitary B[k,k+1] to the matrix product state, followed by a singular value decomposition to rebuild the canonical form of the output state, reaching a scaling O(χ³d³). In what follows we present an alternative algorithm that provides a better scaling when the bond dimension χ is larger than the physical dimension d, which is generally the case in most realistic simulations.
As shown in Figure 5 a), in order to model the evolution of modes k and k+1 under a two-mode coupler operation, we first implement a singular value decomposition of the matrix product operator B[k,k+1] with respect to the separation between the (n'_k, n_k) and (n'_{k+1}, n_{k+1}) indexes, i.e.,

B[k,k+1]_{(n'_k,n_k),(n'_{k+1},n_{k+1})} = Σ_{γ=1}^{χ_BS} X[k]_{n'_k,n_k,γ} σ[k]_γ X[k+1]_{n'_{k+1},n_{k+1},γ},   (46)

where χ_BS is the Schmidt rank of the matrix product operator. The Schmidt rank of a singular value decomposition of a matrix being upper-bounded by the largest of the two local dimensions provides the bound

χ_BS ≤ (d + 1)²,   (47)
where the running time of the matrix product operator decomposition scales as O((d + 1)⁶). As shown in Figure 5 b), the next step is to contract the tensors Γ[k] and X[k] of mode k, and Γ[k+1] and X[k+1] of mode k+1, in order to generate the tensors Γ̃[k] and Γ̃[k+1] of the state resulting after the beamsplitter transformation.
The running time of the contraction leading to the tensor Γ̃[k] scales as χ_{k−1} χ_k χ_BS (d + 1)², whereas for Γ̃[k+1] it scales as χ_{k+1} χ_k χ_BS (d + 1)², which leads to a contraction running time

T_C = O(χ² χ_BS (d + 1)²) ≤ O(χ² (d + 1)⁴).   (48)

Remark that, as shown in Figure 5, the tensors Γ̃[k] and Γ̃[k+1] are connected by two pairs of singular values, χ_k from the initial state and χ_BS from the beamsplitter matrix product operator, which can be merged into a single χ̃ satisfying χ̃_k = χ_k χ_BS, which combined with equation (47) provides the bound

χ̃ ≤ χ (d + 1)²,   (49)
which is the equivalent of Lemma 4 (i) in [35].
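The coupler update of eqs. (46)-(49) can be sketched as follows (ours). The gate tensor C[n'_k, n'_{k+1}, n_k, n_{k+1}] is supplied by the caller (we do not reconstruct the beamsplitter amplitudes of [45,46] here), and the returned site tensors share a bond enlarged by at most χ_BS ≤ (d+1)²:

```python
import numpy as np

def apply_two_mode_gate(gamma_k, gamma_k1, C):
    """Contract a two-mode gate C[nk', nk1', nk, nk1] into the site
    tensors gamma_k (chi_l, d+1, chi_m) and gamma_k1 (chi_m, d+1, chi_r)
    via the operator SVD of eq. (46)."""
    dp = C.shape[0]                                    # d + 1
    # Group (nk', nk) x (nk1', nk1) and take the SVD.
    Mat = C.transpose(0, 2, 1, 3).reshape(dp * dp, dp * dp)
    X, s, Yh = np.linalg.svd(Mat, full_matrices=False)
    keep = s > 1e-12                                   # chi_BS terms
    X = (X[:, keep] * s[keep]).reshape(dp, dp, -1)     # X[nk', nk, g]
    Y = Yh[keep, :].reshape(-1, dp, dp)                # Y[g, nk1', nk1]
    # Absorb each half into its site tensor; the shared bond becomes
    # (g, old bond), of size at most chi * (d+1)^2, cf. eq. (49).
    g_k = np.einsum('anb,mng->amgb', gamma_k, X)
    g_k = g_k.reshape(gamma_k.shape[0], dp, -1)
    g_k1 = np.einsum('bnc,gmn->gbmc', gamma_k1, Y)
    g_k1 = g_k1.reshape(-1, dp, gamma_k1.shape[2])
    return g_k, g_k1
```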
Circuit simulation
The boson sampling input state corresponds to a trivial matrix product state of bond dimension χ = 1, composed of N tensors Γ[i]_1 = δ_{n_i,1} δ_{α_{i−1},1} δ_{α_i,1}, encoding single-photon inputs, and M − N tensors Γ[i]_0 = δ_{n_i,0} δ_{α_{i−1},1} δ_{α_i,1}, encoding the vacuum inputs. After D layers of couplers, eq. (49) bounds the bond dimension of the evolved state by χ ≤ (d + 1)^{2D}.
Sampling from a matrix product state
Once the matrix product state resulting from D layers of gates has been calculated, we can sample from it following a sequential procedure explained in [47] and reproduced here for completeness. We generate one random outcome per mode at a time and exploit the chain rule

p(n_1, ..., n_M) = p(n_M | n_{M−1}, ..., n_1) ... p(n_1).   (50)

First, calculate for each of the d+1 outcomes n_1 the probability Tr[|ψ⟩⟨ψ| |n_1⟩⟨n_1| ⊗ I_{2...M}], where |n_1⟩⟨n_1| is the projector on the photon number state n_1 of mode 1 and I_{2...M} is the identity operator on modes 2 to M. This is done by contracting the matrix product state with itself, interleaved with a matrix product operator representing the measurement projector |n_1⟩⟨n_1|. Then we randomly select one of the d+1 potential outcomes ñ_1 and update our state by generating |ψ_{ñ_1}⟩ := ⟨ñ_1|ψ⟩, where the bra ⟨ñ_1| acts only on mode 1. The result of this contraction is a new, unnormalized matrix product state |ψ_{ñ_1}⟩ over M − 1 modes. Note that this new matrix product state satisfies the condition ⟨ψ_{ñ_1}|ψ_{ñ_1}⟩ = p(ñ_1). The second step now uses the state |ψ_{ñ_1}⟩. Firstly, we calculate the d+1 outcome probabilities p(n_2, ñ_1) := ⟨ψ_{ñ_1}| (|n_2⟩⟨n_2| ⊗ I_{3,...,M}) |ψ_{ñ_1}⟩ and randomly select an ñ_2 from the probability distribution p(n_2 | ñ_1) := p(n_2, ñ_1)/p(ñ_1). Secondly, we generate a new, unnormalized matrix product state |ψ_{ñ_1,ñ_2}⟩ := ⟨ñ_2|ψ_{ñ_1}⟩ over M − 2 modes. Continuing this procedure for the remaining M − 2 output modes, we end up with one sample drawn according to the probability distribution p(n_1, n_2, ..., n_M). The highest computational cost corresponds to the contraction leading to p(n_1, n_2, ..., n_M). A trivial contraction algorithm provides a running time of O(M χ⁴ (d + 1)²), which for a matrix product state resulting from D layers of couplers gives O(M² (d + 1)^{8D+2}).
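The sequential procedure above can be sketched as follows (ours; a direct, unoptimized version that keeps full left and right environments of ⟨ψ|ψ⟩ rather than exploiting the canonical form):

```python
import numpy as np

def sample_mps(tensors, seed=None):
    """Draw one occupation pattern from |<n|psi>|^2 for an MPS whose
    site tensors have shape (chi_l, d+1, chi_r), using the chain rule
    of eq. (50): sample mode 1, project, renormalize, and repeat."""
    rng = np.random.default_rng(seed)
    M = len(tensors)
    # Right environments E[i]: contraction of sites i..M-1 of <psi|psi>.
    E = [None] * (M + 1)
    E[M] = np.ones((1, 1))
    for i in range(M - 1, -1, -1):
        B = tensors[i]
        E[i] = np.einsum('anb,cnd,bd->ac', B, B.conj(), E[i + 1])
    sample, L = [], np.ones((1, 1))   # left environment of chosen outcomes
    for i, B in enumerate(tensors):
        probs = np.einsum('ac,anb,cnd,bd->n', L, B, B.conj(), E[i + 1]).real
        probs = np.clip(probs, 0, None)
        probs /= probs.sum()
        n = int(rng.choice(len(probs), p=probs))
        sample.append(n)
        Bn = B[:, n, :]
        L = np.einsum('ac,ab,cd->bd', L, Bn, Bn.conj())
    return sample
```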
Simulating an ideal logarithmic depth circuit
It is important to notice that the bond dimension scales exponentially with the depth of the circuit. If d were a constant, as in spin-system simulations, a shallow circuit satisfying a logarithmic-depth constraint as in eq. (10) would lead to a polynomial-time algorithm. In a tensor network simulation of quantum optics, the potential bunching of photons, which can all accumulate in a given mode, makes the simulation harder. In order to obtain an exact simulation of the evolution and the sampling of N input single photons over a circuit of depth D, we fix the physical dimension over the whole evolution to d = N, the total number of photons. Because N scales with the number of modes M, see eq. (2), the computational cost of contraction, storage and sampling becomes quasipolynomial in the size of the system.
Generalization to non-uniform losses
In subsection 2.1 we have shown that a lossy linear-optics interferometer is mathematically modeled by a complex linear transformation A satisfying a singular value decomposition A = V μ̄ W. The singular value decomposition has a very natural interpretation, see Figure 6 a): the real interferometer with losses characterized by A has an equivalent circuit composed of a lossless linear-optics transformation V, followed by M parallel pure-loss channels of different transmissions µ_i, and a final lossless linear-optics transformation W. The M parallel pure-loss channels of transmission µ_i can be decomposed into a concatenation of two pure-loss channels of transmissions µ = max_i µ_i and µ̃_i = µ_i/µ. Because M parallel pure-loss channels of transmission µ commute with the unitary V, we obtain the final scheme of Figure 6 b), where M parallel pure-loss channels of transmission µ are followed by the ideal circuit V, followed by M parallel pure-loss channels of transmissions µ̃_i and a final ideal circuit W. Then we can define the state ρ_in as resulting from applying the M parallel pure-loss channels of transmission µ, and the thermal state ρ_T is chosen such that λ = µ/(1 − µ), in the same way as in subsection 2.1. The only difference is that the set of operations after the pure-loss channels of transmission µ is no longer an ideal interferometer U but an M-mode quantum channel L resulting from concatenating an ideal interferometer V, M parallel pure-loss channels of transmission µ̃_i, and a final ideal interferometer W.
Generalizing lemma 1
Because applying a quantum operation L can only make the trace distance decrease, similarly to a measurement, it is straightforward to generalize Lemma 1 by replacing the uniform losses µ by µ = max_i µ_i.
Generalizing lemma 2
Additionally, the thermal state sampling algorithm of section 4 can be easily adapted. It is a well-known fact in quantum optics that the action of a pure-loss channel of transmission µ_i on a coherent state |α⟩ outputs a weaker coherent state |√µ_i α⟩. Therefore, the evolution of an input multimode coherent state |α⟩ can be easily computed by implementing the matrix multiplication β = Aα. Once the output multimode coherent state has been determined, the sampling from the Poisson distribution is done as before.
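For non-uniform losses the coherent-state propagation collapses to a single matrix multiplication through A. A minimal sketch (ours), reading the decomposition as A = V diag(√µ_i) W with µ_i the intensity transmissions:

```python
import numpy as np

def lossy_coherent_evolution(V, mus, W, alpha):
    """Output amplitudes beta = A @ alpha of a multimode coherent state
    |alpha> sent through A = V diag(sqrt(mu_i)) W: a pure-loss channel
    of transmission mu maps |alpha> to |sqrt(mu) alpha>."""
    A = V @ np.diag(np.sqrt(mus)) @ W
    return A @ alpha
```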
Generalizing lemma 3
The adaptation of the tensor-network simulation is slightly more involved. Let us write A_{i,j} for the coupler acting on input modes i and i+1 in layer j of couplers. Every A_{i,j} decomposes into a unitary, followed by two independent pure-loss channels, and a final unitary. Because every pure-loss channel can be seen as a coupling interaction with an environmental mode, a circuit with losses can be transformed into an ideal lossless circuit by doubling the number of couplers and adding two ancillary modes per lossy coupler. We can then place all the ancillary modes interacting with mode i between input modes i and i+1, i.e., D of them below each input mode for a circuit of depth D. For a circuit of depth D there are at most 3D gates acting on each mode, each with a range of at most D. As detailed in [35], one can transform a gate of range D into 2D nearest-neighbor gates. Therefore, our initial lossy circuit of depth D = M becomes a circuit with M² modes and 6M² nearest-neighbor gates. This leads to a less favorable scaling of the computational cost of contraction, storage and sampling, which nevertheless remains quasi-polynomial in M. This last algorithm is certainly not optimal, and we are convinced that more elaborate choices can improve the simulation of linear-optical circuits with non-uniform losses.
Conclusion
The vast majority of currently existing architecture proposals for implementing a boson sampling experiment suffer from an exponential decay of the transmission with the length of the circuit. We have shown that for those platforms, boson sampling and most of its variants can be efficiently simulated classically by sampling thermal noise with an algorithm running in polynomial time. More precisely, we have shown that either the depth of the circuit (denoted D) is large enough (D ≥ O(log M)) that the device can be simulated by thermal noise with a polynomial-time algorithm, or the depth is shallow enough (D ≤ O(log M)) that a tensor-network simulation runs in quasi-polynomial time.
This result suggests that in order to implement a quantum advantage experiment with single photons and linear optics we need a profound change of paradigm. One possibility would be to shift to platforms where the transmission does not decrease exponentially, for which our algebraic-decay result does not hold. Another option would be to prove the hardness of novel boson sampling architectures beyond the planar circuit architecture. A promising route would be to reduce the depth of the circuit to the shallow regime while maintaining the complexity by moving to a lattice structure. A potential candidate would be an adaptation to quantum optics of the recent result on the complexity of a finite-time quench over a 2D spin lattice [50].
We discussed how the potential bunching of photons makes the tensor-network simulation of quantum optical systems more involved than its finite-spin counterpart. One could potentially restore the polynomial scaling of logarithmic-depth circuits observed for finite systems by designing an ε-approximate algorithm that truncates every mode to a finite size. To our knowledge, this is a non-trivial result that is certainly worth pursuing in future research.
This work is only a first step in the study of boson sampling under losses, and further results may restrict its quantum advantage regime even further. Finally, an interesting open question is whether our proof can be adapted to other technological platforms that are candidates for a quantum advantage test.
After the completion of this article we learned about Ref. [51] that obtains a similar result using very different techniques.
Figure 1: In a boson sampling device, N single-mode photons are sent over an M-mode randomly selected linear-optical circuit composed of M layers of two-mode coupling gates and detected at the output with photon-counting detectors. The circuit being selected randomly from the Haar measure, we can place the N photons over the first N modes without loss of generality.
Theorem 1. The statistics p_in of N photons interfering over an M-mode linear-optics planar circuit of depth D, transmission τ per layer of gates, and a relation between photons and modes given by N = kM^γ (Eq. (2)) can be approximated with trace-distance error ε in polynomial time for D ≥ D*, where D* reads [expression not recovered].

Theorem 2. The statistics of N photons interfering over an M-mode linear-optics planar circuit of depth D, transmission τ per layer of gates, and a relation between photons and modes given by N = kM^γ (Eq. (2)) can be approximated with trace-distance error ε in polynomial time for D ≥ D* and in quasi-polynomial time for D ≤ D*, where D* reads [expression not recovered].
For l = 1, ..., M we approximately sample from the Poisson distribution P(m_l, β_l^(s)) of Eq. (24) via independent Bernoulli trials. To determine the number of Bernoulli trials needed to achieve a given error, we can use the trace-distance bound of [42].
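To make the Bernoulli-trial approximation concrete, here is a sketch that uses Le Cam's inequality as the error bound (the paper instead invokes the bound of [42]; the mean and target error below are illustrative assumptions):

```python
import math

import numpy as np

rng = np.random.default_rng(2)

lam = 2.5    # Poisson mean |beta_l|^2 for one output mode (illustrative)
eps = 1e-3   # target total-variation (trace-distance) error

# Le Cam's inequality bounds the total-variation distance between
# Binomial(n, lam/n) and Poisson(lam) by lam**2 / n, so
# n >= lam**2 / eps independent Bernoulli trials suffice for error eps.
n = math.ceil(lam ** 2 / eps)

# One approximate Poisson(lam) sample as the sum of n independent
# Bernoulli(lam/n) trials, i.e. a Binomial(n, lam/n) draw.
sample = rng.binomial(n, lam / n)
```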
Equation (40): a state of M bosonic modes with at most N total photons reads |ψ⟩ = Σ_{n:|n|=N} C_{n_1,n_2,...,n_M} |n_1, n_2, ..., n_M⟩.

Figure 3: (a) B^[i]_{α,β} is a rank-three tensor, represented by a vertex i with three edges: the physical index n_i and two virtual indices α and β. (b) The tensor contraction of B^[i] and B^[i+1] along their shared index (dashed line) is an operation equivalent to matrix multiplication along that index. (c) Graphical representation of a quantum state |ψ⟩ corresponding to a matrix product state of four tensors, as defined in Eq. (41). (d) Every matrix product state can be transformed into a canonical form in which a diagonal matrix λ of Schmidt coefficients is assigned to every edge between two vertices. The matrices λ^[i] are diagonal and contain the Schmidt coefficients of the bipartition of modes (1, ..., i) versus (i+1, ..., M). The Schmidt rank of link k is χ_k, and χ = max{χ_k} is called the bond dimension; the total number of parameters, and thus the storage cost, of such a matrix product state scales as O(M(d+1)χ²).

Figure 4: (a) A single-mode phase rotation acting on mode i is modeled by a matrix (tensor) G^[i]_{n'_i, n_i} that transforms the input physical index n_i into the output physical index n'_i. (b) The tensor Γ^[i] with virtual indices α, β and physical index n_i is transformed into Γ'^[i] by contracting the physical index: Γ'^[i]_{α,β,n'_i} = Σ_{n_i} G^[i]_{n'_i, n_i} Γ^[i]_{α,β,n_i}.

Figure 5: (a) The action of a two-mode coupler on modes k and k+1 of a matrix product state in canonical form is first obtained by a singular-value decomposition of the coupler's matrix product operator. (b) Second, we contract the tensors Γ^[k] and X^[k] of mode k, and Γ^[k+1] and X^[k+1] of mode k+1, giving Γ̃^[k] and Γ̃^[k+1] respectively. (c) Finally, we relabel the two sets of singular values λ and γ into a new label λ̃ of the resulting matrix product state.

The simulation starts from a matrix product state encoding the vacuum inputs. For every layer of couplers we apply in parallel the matrix product updates detailed in subsections 5.2.2 and 5.2.1. The bond dimension scales with the circuit depth D as O((d+1)^{2D}), the storage space for the tensors as O(M(d+1)^{4D+1}), and the cost of contracting the matrix product state as O(M(d+1)^{4(D+1)}), where the leading order corresponds to the contractions of the last layer of gates.

Figure 6: (a) The real optical circuit is indistinguishable from a circuit composed of a lossless linear-optics transformation V, followed by M parallel pure-loss channels of transmissions µ_i, and a final lossless linear-optics transformation W. (b) This is equivalent to M parallel pure-loss channels of transmission µ = max µ_i (here µ = µ_1 without loss of generality), followed by the ideal circuit V, then M parallel pure-loss channels of transmissions µ̃_i, and a final ideal circuit W.
Acknowledgements
References

E. Knill, R. Laflamme, and G. J. Milburn, A scheme for efficient quantum computation with linear optics, Nature 409, 46 (2001).
P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn, Linear optical quantum computing with photonic qubits, Rev. Mod. Phys. 79, 135 (2007).
T. Rudolph, Why I am optimistic about the silicon-photonic route to quantum computing, arXiv:1607.08535 (2016).
R. Prevedel, P. Walther, F. Tiefenbacher, P. Böhi, R. Kaltenbaek, T. Jennewein, and A. Zeilinger, High-speed linear optics quantum computing using active feed-forward, Nature 445, 65-69 (2007).
S. Aaronson and A. Arkhipov, The Computational Complexity of Linear Optics, Theory of Computing 9, 143 (2013).
D. Caianiello, On quantum field theory I: explicit solution of Dyson's equation in electrodynamics without use of Feynman graphs, Nuovo Cimento 10, 1634 (1953).
L. G. Valiant, The complexity of computing the permanent, Theoretical Computer Science 8, 189-201 (1979).
S. Aaronson, A linear-optical proof that the permanent is #P-hard, Proc. R. Soc. A 467, 3393 (2011).
M. Reck, A. Zeilinger, H. J. Bernstein, and P. Bertani, Experimental realization of any discrete unitary operator, Phys. Rev. Lett. 73, 58 (1994).
W. R. Clements, P. C. Humphreys, B. J. Metcalf, W. S. Kolthammer, and I. A. Walmsley, An Optimal Design for Universal Multiport Interferometers, arXiv:1603.08788 (2016).
J. B. Spring, B. J. Metcalf, P. C. Humphreys et al., Boson sampling on a photonic chip, Science 339, 798 (2013); M. A. Broome, A. Fedrizzi, S. Rahimi-Keshari et al., Photonic boson sampling in a tunable circuit, ibid. 339, 794 (2013); M. Tilmann, B. Dakić, R. Heilmann, S. Nolte et al., Experimental boson sampling, Nat. Photonics 7, 540 (2013).
Yu He, X. Ding, Z.-E. Su et al., Time-Bin-Encoded Boson Sampling with a Single-Photon Device, Phys. Rev. Lett. 118, 190501 (2017).
A. Crespi, R. Osellame, R. Ramponi et al., Integrated multimode interferometers with arbitrary designs for photonic boson sampling, Nat. Photonics 7, 545 (2013).
H. Defienne, M. Barbieri, I. A. Walmsley, B. J. Smith, and S. Gigan, Two-photon quantum walk in a multimode fiber, Science Advances 2, 1501054 (2016).
A. P. Lund, A. Laing, S. Rahimi-Keshari, T. Rudolph, J. L. O'Brien, and T. C. Ralph, Boson Sampling from a Gaussian State, Phys. Rev. Lett. 113, 100502 (2014).
R. Garcia-Patron, Comment #36 to the post "Scattershot BosonSampling: A new approach to scalable BosonSampling experiments" on Scott Aaronson's blog (2013).
C. S. Hamilton, R. Kruse, L. Sansoni, S. Barkhofen, C. Silberhorn, and I. Jex, Gaussian Boson Sampling, Phys. Rev. Lett. 119, 170501 (2017).
A. Neville, C. Sparrow, R. Clifford, E. Johnston, P. M. Birchall, A. Montanaro, and A. Laing, No imminent quantum supremacy by boson sampling, Nature Physics 13, 1153 (2017).
P. Clifford and R. Clifford, The Classical Complexity of Boson Sampling, arXiv:1706.01260 (2017).
S. Rahimi-Keshari, T. C. Ralph, and C. M. Caves, Sufficient Conditions for Efficient Classical Simulation of Quantum Optics, Phys. Rev. X 6, 021039 (2016).
J. J. Renema, A. Menssen, W. R. Clements et al., Efficient algorithm for boson sampling with partially distinguishable photons, arXiv:1707.02793.
S. Aaronson and D. J. Brod, Boson Sampling with Lost Photons, Phys. Rev. A 93, 012335 (2016).
K. Temme and P. Wocjan, Efficient Computation of the Permanent of Block Factorizable Matrices, arXiv:1208.6589.
A. P. Lund, S. Rahimi-Keshari, and T. C. Ralph, Exact boson sampling using Gaussian continuous-variable measurements, Phys. Rev. A 96, 022301 (2017).
L. Chakhmakhchyan and N. J. Cerf, Boson sampling with Gaussian measurements, Phys. Rev. A 96, 032326 (2017).
A. Arkhipov and G. Kuperberg, The bosonic birthday paradox, Geometry & Topology Monographs 18, 1 (2012).
V. Giovannetti, A. S. Holevo, and R. Garcia-Patron, A solution of the Gaussian optimizer conjecture, Comm. Math. Phys. 334, 1553 (2015).
S. Rahimi-Keshari, M. A. Broome, R. Fickler, A. Fedrizzi, T. C. Ralph, and A. G. White, Direct characterization of linear-optical networks, Optics Express 21, 13450 (2013).
S. Rahimi-Keshari, A. P. Lund, and T. C. Ralph, What Can Quantum Optics Say about Computational Complexity Theory?, Phys. Rev. Lett. 114, 060501 (2015).
S. R. White, Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett. 69, 2863 (1992).
S. Östlund and S. Rommer, Thermodynamic Limit of Density Matrix Renormalization, Phys. Rev. Lett. 75, 3537 (1995).
F. Verstraete, D. Porras, and J. I. Cirac, Density Matrix Renormalization Group and Periodic Boundary Conditions: A Quantum Information Perspective, Phys. Rev. Lett. 93, 227205 (2004).
U. Schollwöck, The density-matrix renormalization group, Rev. Mod. Phys. 77, 259 (2005).
U. Schollwöck, The density-matrix renormalization group in the age of matrix product states, Ann. Phys. 326, 96 (2011).
R. Jozsa, On the simulation of quantum circuits, arXiv:quant-ph/0603163.
V. Veitch, C. Ferrie, D. Gross, and J. Emerson, Negative quasi-probability as a resource for quantum computation, New J. Phys. 14, 113011 (2012).
G. Marsaglia and T. A. Bray, A Convenient Method for Generating Normal Variables, SIAM Rev. 6, 260 (1964).
A. D. Barbour and P. Hall, Topics in Poisson approximation, Math. Proc. Camb. Philos. Soc. 95, 473 (1984).
S. M. Barnett, Quantum Information (Oxford University Press, 2009), Chapter 4.
R. Glauber, Quantum Theory of Optical Coherence (Wiley-VCH Verlag GmbH & Co. KGaA, 2007).
D. Knuth, The Art of Computer Programming, Volume 2 (Third Edition, Addison-Wesley, 1997).
A. D. Barbour and P. Hall, On the rate of Poisson convergence, Math. Proc. Camb. Philos. Soc. 95, 473 (1984).
G. Vidal, Efficient Classical Simulation of Slightly Entangled Quantum Computations, Phys. Rev. Lett. 91, 147902 (2003).
D. Pérez-García, F. Verstraete, M. Wolf, and J. I. Cirac, Matrix Product State Representations, Quantum Inf. Comp. 7, 401 (2007).
R. A. Campos, B. E. A. Saleh, and M. C. Teich, Quantum-mechanical lossless beam splitter: SU(2) symmetry and photon statistics, Phys. Rev. A 40, 1371 (1989).
M. Kim, W. Son, V. Buzek, and P. L. Knight, Entanglement by a beam splitter: Nonclassicality as a prerequisite for entanglement, Phys. Rev. A 65, 032323 (2002).
M. Lubasch et al., Tensor network states in time-bin quantum optics, Phys. Rev. A 97, 062304 (2018).
F. Verstraete, V. Murg, and J. I. Cirac, Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems, Adv. Phys. 57, 143 (2008).
R. Orús, A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States, Ann. Phys. 349, 117-158 (2014).
J. Bermejo-Vega, D. Hangleiter, M. Schwarz, R. Raussendorf, and J. Eisert, Architectures for quantum simulation showing a quantum speedup, Phys. Rev. X 8, 021010 (2018).
M. Oszmaniec and D. J. Brod, Classical simulation of photonic linear optics with lost particles, arXiv:1801.06166 (2018).
PRE-TRAINING VIA DENOISING FOR MOLECULAR PROPERTY PREDICTION

Sheheryar Zaidi, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro Sanchez-Gonzalez, Peter Battaglia, Razvan Pascanu, Jonathan Godwin
University of Oxford; DeepMind

* Equal contribution. § Work done during an internship at DeepMind.

arXiv:2206.00133 · DOI: 10.48550/arxiv.2206.00133 · https://export.arxiv.org/pdf/2206.00133v2.pdf

Abstract: Many important problems involving molecular property prediction from 3D structures have limited data, posing a generalization challenge for neural networks. In this paper, we describe a pre-training technique based on denoising that achieves a new state-of-the-art in molecular property prediction by utilizing large datasets of 3D molecular structures at equilibrium to learn meaningful representations for downstream tasks. Relying on the well-known link between denoising autoencoders and score-matching, we show that the denoising objective corresponds to learning a molecular force field - arising from approximating the Boltzmann distribution with a mixture of Gaussians - directly from equilibrium structures. Our experiments demonstrate that using this pre-training objective significantly improves performance on multiple benchmarks, achieving a new state-of-the-art on the majority of targets in the widely used QM9 dataset. Our analysis then provides practical insights into the effects of different factors - dataset sizes, model size and architecture, and the choice of upstream and downstream datasets - on pre-training.

1 Note that PCQM4Mv2 is a new version of PCQM4M that now offers 3D structures.
INTRODUCTION
The success of the best performing neural networks in vision and natural language processing (NLP) relies on pre-training the models on large datasets to learn meaningful features for downstream tasks (Dai & Le, 2015;Simonyan & Zisserman, 2014;Devlin et al., 2018;Brown et al., 2020;Dosovitskiy et al., 2020). For molecular property prediction from 3D structures (a point cloud of atomic nuclei in R 3 ), the problem of how to similarly learn such representations remains open. For example, none of the best models on the widely used QM9 benchmark use any form of pre-training (e.g. Klicpera et al., 2020a;Liu et al., 2022b;Schütt et al., 2021;Thölke & De Fabritiis, 2022), in stark contrast with vision and NLP. Effective methods for pre-training could have a significant impact on fields such as drug discovery and material science.
In this work, we focus on the problem of how large datasets of 3D molecular structures can be utilized to improve performance on downstream molecular property prediction tasks that also rely on 3D structures as input. We address the question: how can one exploit large datasets like PCQM4Mv2, 1 that contain over 3 million structures, to improve performance on datasets such as DES15K that are orders of magnitude smaller? Our answer is a form of self-supervised pre-training that generates useful representations for downstream prediction tasks, leading to state-of-the-art (SOTA) results. The contributions of our work are summarized as follows:
• We investigate a simple and effective method for pre-training via denoising in the space of 3D structures with the aim of improving downstream molecular property prediction from such 3D structures. Our denoising objective is shown to be related to learning a specific force field.
• Our experiments demonstrate that pre-training via denoising significantly improves performance on multiple challenging datasets that vary in size, nature of task, and molecular composition. This establishes that denoising over structures successfully transfers to molecular property prediction, setting, in particular, a new state-of-the-art on 10 out of 12 targets in the widely used QM9 dataset. Figure 1 illustrates performance on one of the targets in QM9.
• We make improvements to a common GNN, in particular showing how to apply Tailored Activation Transformation (TAT) (Zhang et al., 2022) to Graph Network Simulators (GNS), which is complementary to pre-training and further boosts performance.
• We analyze the benefits of pre-training by gaining insights into the effects of dataset size, model size and architecture, and the relationship between the upstream and downstream datasets.
RELATED WORK
Pre-training of GNNs. Various recent works have formulated methods for pre-training using graph data (Liu et al., 2021b;Hu et al., 2020a;Xie et al., 2021;Kipf & Welling, 2016), rather than 3D point clouds of atom nuclei as in this paper. Approaches based on contrastive methods rely on learning representations by contrasting different views of the input graph Veličković et al., 2019;You et al., 2020;Liu et al., 2021a), or bootstrapping (Thakoor et al., 2021). Autoregressive or reconstruction-based approaches, such as ours, learn representations by requiring the model to predict aspects of the input graph (Hu et al., 2020a;Rong et al., 2020;Liu et al., 2019). Most methods in the current literature are not designed to handle 3D structural information, focusing instead on 2D graphs. The closest work to ours is GraphMVP (Liu et al., 2021a), where 3D structure is treated as one view of a 2D molecule for the purpose of upstream contrastive learning. Their work focuses on downstream tasks that only involve 2D information, while our aim is to improve downstream models for molecular property prediction from 3D structures. After the release of this pre-print, similar ideas have been studied by Jiao et al. (2022) and Liu et al. (2022a).
Denoising, representation learning and score-matching. Noise has long been known to improve generalization in machine learning (Sietsma & Dow, 1991; Bishop, 1995). Denoising autoencoders have been used to effectively learn representations by mapping corrupted inputs to original inputs (Vincent et al., 2008; 2010). Specific to GNNs (Scarselli et al., 2009; Bronstein et al., 2017), randomizing input graph features has been shown to improve performance (Hu et al., 2020a; Sato et al., 2021). Applications to physical simulation also involve corrupting the state with Gaussian noise. Our work builds on Noisy
Nodes (Godwin et al., 2022), which incorporates denoising as an auxiliary task to improve performance, indicating the effectiveness of denoising for molecular property prediction (cf. Section 3.2.2). Denoising is also closely connected to score-matching (Vincent, 2011), which has become popular for generative modelling (Song & Ermon, 2019;Ho et al., 2020;Hoogeboom et al., 2022;Xu et al., 2022;Shi et al., 2021). We also rely on this connection to show that denoising structures corresponds to learning a force field.
Equivariant neural networks for 3D molecular property prediction. Recently, the dominant approach for improving molecular property prediction from 3D structures has been through the design of architectures that incorporate roto-translational inductive biases into the model, such that the outputs are invariant to translating and rotating the input atomic positions. A simple way to achieve this is to use roto-translation invariant features as inputs, such as inter-atomic distances (Schütt et al., 2017;Unke & Meuwly, 2019), angles (Klicpera et al., 2020b;a;Shuaibi et al., 2021;Liu et al., 2022b), or the principal axes of inertia (Godwin et al., 2022). There is also broad literature on equivariant neural networks, whose intermediate activations transform accordingly with roto-translations of inputs thereby naturally preserving inter-atomic distance and orientation information. Such models can be broadly categorized into those that are specifically designed for molecular property prediction (Thölke & De Fabritiis, 2022;Schütt et al., 2021;Batzner et al., 2021;Anderson et al., 2019;Miller et al., 2020) and general-purpose architectures (Satorras et al., 2021;Finzi et al., 2020;Hutchinson et al., 2021;Thomas et al., 2018;Kondor et al., 2018;Brandstetter et al., 2021). Our pre-training technique is architecture-agnostic, and we show that it can be applied to enhance performance in both a GNN-based architecture (Sanchez-Gonzalez et al., 2020) and a Transformer-based one (Thölke & De Fabritiis, 2022). We conjecture that similar improvements will hold for other models.
METHODOLOGY
PROBLEM SETUP
Molecular property prediction consists of predicting scalar quantities given the structure of one or more molecules as input. Each data example is a labelled set specified as follows: we are provided with a set of atoms $S = \{(a_1, p_1), \ldots, (a_{|S|}, p_{|S|})\}$, where $a_i \in \{1, \ldots, 118\}$ and $p_i \in \mathbb{R}^3$ are the atomic number and 3D position respectively of atom $i$ in the molecule, alongside a label $y \in \mathbb{R}$.
We assume that the model, which takes S as input, is any architecture consisting of a backbone, which first processes S to build a latent representation of it, followed by a vertex-level or graph-level "decoder", that returns per-vertex predictions or a single prediction for the input respectively.
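Concretely, a single labelled example can be stored as parallel arrays of atomic numbers and positions plus a scalar target (a hypothetical illustration; the molecule and values below are made up, not taken from a dataset):

```python
import numpy as np

# One labelled example: atoms as (atomic number, position) pairs plus a scalar y.
atoms = np.array([8, 1, 1])                      # a_i in {1, ..., 118}: O, H, H
positions = np.array([[0.00, 0.00, 0.00],        # p_i in R^3
                      [0.96, 0.00, 0.00],
                      [-0.24, 0.93, 0.00]])
y = -76.4                                        # scalar label, e.g. an energy

S = list(zip(atoms.tolist(), positions.tolist()))  # the set S = {(a_i, p_i)}
```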
PRE-TRAINING VIA DENOISING
Given a dataset of molecular structures, we pre-train the network by denoising the structures, which operates as follows. Let $\mathcal{D}_{\text{structures}} = \{S_1, \ldots, S_n\}$ denote the upstream dataset of equilibrium structures, and let $\mathrm{GNN}_\theta$ denote a graph neural network with parameters $\theta$ which takes $S \in \mathcal{D}_{\text{structures}}$ as input and returns per-vertex predictions $\mathrm{GNN}_\theta(S) = (\hat{\epsilon}_1, \ldots, \hat{\epsilon}_{|S|})$. The precise parameterization of the models we consider in this work is described in Section 3.3 and Appendix A.
Starting with an input molecule $S \in \mathcal{D}_{\text{structures}}$, we perturb it by adding i.i.d. Gaussian noise to its atomic positions $p_i$. That is, we create a noisy version of the molecule:

$$\tilde{S} = \{(a_1, \tilde{p}_1), \ldots, (a_{|S|}, \tilde{p}_{|S|})\}, \quad \text{where } \tilde{p}_i = p_i + \sigma \epsilon_i \text{ and } \epsilon_i \sim \mathcal{N}(0, I_3). \tag{1}$$
The noise scale σ is a tuneable hyperparameter (an interpretation of which is given in Section 3.2.1). We train the model as a denoising autoencoder by minimizing the following loss with respect to θ:
$$\mathbb{E}_{p(S, \tilde{S})} \left\| \mathrm{GNN}_\theta(\tilde{S}) - (\epsilon_1, \ldots, \epsilon_{|S|}) \right\|^2. \tag{2}$$
The distribution $p(S, \tilde{S})$ corresponds to sampling a structure $S$ from $\mathcal{D}_{\text{structures}}$ and adding noise to it according to Equation (1). Note that the model predicts the noise, not the original coordinates. Next, we motivate denoising as our pre-training objective for molecular modelling.
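The pre-training objective in Equations (1)-(2) can be sketched in a few lines of NumPy (an illustrative sketch, not the paper's JAX implementation; `model` stands in for $\mathrm{GNN}_\theta$):

```python
import numpy as np

rng = np.random.default_rng(0)

def denoising_loss(model, positions, sigma):
    """Corrupt positions with i.i.d. Gaussian noise (Eq. 1) and regress
    the per-atom noise vectors (Eq. 2)."""
    eps = rng.standard_normal(positions.shape)   # eps_i ~ N(0, I_3)
    noisy = positions + sigma * eps              # tilde-p_i = p_i + sigma * eps_i
    pred = model(noisy)                          # per-atom noise predictions
    return np.mean(np.sum((pred - eps) ** 2, axis=-1))

positions = rng.standard_normal((5, 3))          # 5 atoms in R^3
sigma = 0.1

# An "oracle" that recovers the injected noise exactly attains (near-)zero loss.
oracle = lambda noisy: (noisy - positions) / sigma
loss = denoising_loss(oracle, positions, sigma)
```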
DENOISING AS LEARNING A FORCE FIELD
Datasets in quantum chemistry are typically generated by minimizing expensive-to-compute interatomic forces with methods such as density functional theory (DFT) (Parr & Weitao, 1994). We speculate that learning this force field would give rise to useful representations for downstream tasks, since molecular properties vary with forces and energy. Therefore, a reasonable pre-training objective would be one that involves learning the force field. Unfortunately, this force field is either unknown or expensive to evaluate, and hence it cannot be used directly for pre-training. An alternative is to approximate the data-generating force field with one that can be cheaply evaluated and use it to learn good representations, an approach we outline in this section. Using the well-known link between denoising autoencoders and score-matching (Vincent, 2011; Song & Ermon, 2019), we can show that the denoising objective in Equation (2) is equivalent to learning a particular force field directly from equilibrium structures with some desirable properties. For clarity, in this subsection we condition on and suppress the atom types and molecule size in our notation, specifying a molecular structure by its coordinates $x \in \mathbb{R}^{3N}$ (with $N$ as the size of the molecule).
From the perspective of statistical physics, a structure $x$ can be treated as a random quantity sampled from the Boltzmann distribution $p_{\text{physical}}(x) \propto \exp(-E(x))$, where $E(x)$ is the (potential) energy of $x$. According to $p_{\text{physical}}$, low-energy structures have a high probability of occurring. Moreover, the per-atom forces are given by $\nabla_x \log p_{\text{physical}}(x) = -\nabla_x E(x)$, which is referred to as the force field. Our goal is to learn this force field. However, both the energy function $E$ and distribution $p_{\text{physical}}$ are unknown, and we only have access to a set of equilibrium structures $x_1, \ldots, x_n$ that locally minimize the energy $E$. Since $x_1, \ldots, x_n$ are then local maxima of the distribution $p_{\text{physical}}$, our main approximation is to replace $p_{\text{physical}}$ with a mixture of Gaussians centered at the data:

$$p_{\text{physical}}(\tilde{x}) \approx q_\sigma(\tilde{x}) := \frac{1}{n} \sum_{i=1}^{n} q_\sigma(\tilde{x} \mid x_i), \quad \text{where we define } q_\sigma(\tilde{x} \mid x_i) = \mathcal{N}(\tilde{x}; x_i, \sigma^2 I_{3N}).$$

This approximation captures the fact that $p_{\text{physical}}$ will have local maxima at the equilibrium structures, varies smoothly with $\tilde{x}$, and is computationally convenient. Learning the force field corresponding to $q_\sigma(\tilde{x})$ now yields a score-matching objective:

$$\mathbb{E}_{q_\sigma(\tilde{x})} \left\| \mathrm{GNN}_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x}) \right\|^2. \tag{3}$$
As shown by Vincent (2011), and recently applied to generative modelling (Song & Ermon, 2019; Ho et al., 2020; Shi et al., 2021; Xu et al., 2022), this objective is equivalent to the denoising objective. Specifically, defining $q_0(x) = \frac{1}{n} \sum_{i=1}^{n} \delta(x = x_i)$ to be the empirical distribution and $q_\sigma(x, \tilde{x}) = q_\sigma(\tilde{x} \mid x) q_0(x)$, the objective in Equation (3) is equivalent to:

$$\mathbb{E}_{q_\sigma(x, \tilde{x})} \left\| \mathrm{GNN}_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \right\|^2 = \mathbb{E}_{q_\sigma(x, \tilde{x})} \left\| \mathrm{GNN}_\theta(\tilde{x}) - \frac{x - \tilde{x}}{\sigma^2} \right\|^2. \tag{4}$$

We notice that the RHS corresponds to the earlier denoising loss in Equation (2) (up to a constant factor of $1/\sigma$ applied to $\mathrm{GNN}_\theta$ that can be absorbed into the network). To summarize, denoising equilibrium structures corresponds to learning the force field that arises from approximating the distribution $p_{\text{physical}}$ with a mixture of Gaussians. Note that we can interpret the noise scale $\sigma$ as being related to the sharpness of $p_{\text{physical}}$ or $E$ around the local maxima $x_i$. We also remark that the equivalence between Equation (3) and the LHS of Equation (4) does not require $q_\sigma(\tilde{x} \mid x_i)$ to be a Gaussian distribution (Vincent, 2011), and other choices will lead to different denoising objectives, which we leave as future work. See Appendix B for technical caveats and details.
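The closed-form score used on the RHS of Equation (4), $\nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) = (x - \tilde{x})/\sigma^2$, can be verified against a finite-difference gradient of the Gaussian log-density (an illustrative NumPy check, not part of the paper's code):

```python
import numpy as np

def log_q(x_tilde, x, sigma):
    """Log-density of N(x_tilde; x, sigma^2 I) over d dimensions."""
    d = x_tilde.size
    return (-0.5 * np.sum((x_tilde - x) ** 2) / sigma**2
            - 0.5 * d * np.log(2.0 * np.pi * sigma**2))

def numerical_score(x_tilde, x, sigma, h=1e-6):
    """Central-difference gradient of log_q with respect to x_tilde."""
    g = np.zeros_like(x_tilde)
    for i in range(x_tilde.size):
        e = np.zeros_like(x_tilde)
        e[i] = h
        g[i] = (log_q(x_tilde + e, x, sigma) - log_q(x_tilde - e, x, sigma)) / (2 * h)
    return g

rng = np.random.default_rng(1)
x = rng.standard_normal(6)                    # "clean" structure coordinates
sigma = 0.3
x_tilde = x + sigma * rng.standard_normal(6)  # noisy structure
analytic = (x - x_tilde) / sigma**2           # closed-form score, i.e. Eq. (4)'s target
```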
NOISY NODES: DENOISING AS AN AUXILIARY LOSS
Recently, Godwin et al. (2022) also applied denoising as an auxiliary loss to molecular property prediction, achieving significant improvements on a variety of molecular datasets. In particular, their approach, called Noisy Nodes, consisted of augmenting the usual optimization objective for predicting $y$ with an auxiliary denoising loss. They suggested two explanations for why Noisy Nodes improves performance. First, the presence of a vertex-level loss discourages oversmoothing (Chen et al., 2019; Cai & Wang, 2020) of vertex/edge features after multiple message-passing layers (a common problem plaguing GNNs), because successful denoising requires diversity amongst vertex features in order to match the diversity in the noise targets $\epsilon_i$. Second, they argued that denoising can aid representation learning by encouraging the network to learn aspects of the input distribution.
The empirical success of Noisy Nodes indicates that denoising can indeed result in meaningful representations. Since Noisy Nodes incorporates denoising only as an auxiliary task, the representation learning benefits of denoising are limited to the downstream dataset on which it is used as an auxiliary task. Our approach is to apply denoising as a pre-training objective on another large (unlabelled) dataset of structures to learn higher-quality representations, which results in better performance.

Figure 2: Each bar represents one molecular composition (e.g. one carbon atom, two oxygen atoms). Right: Percentage of elements appearing in QM9, DES15K, OC20 that also appear in PCQM4Mv2.
GNS AND GNS-TAT
The main two models we consider in this work are Graph Net Simulator (GNS) (Sanchez-Gonzalez et al., 2020), which is a type of GNN, and a better-performing variant we contribute called GNS-TAT. GNS-TAT makes use of a recently published network transformation method called Tailored Activation Transforms (TAT) (Zhang et al., 2022), which has been shown to prevent certain degenerate behaviors at initialization in deep MLPs/convnets that are reminiscent of oversmoothing in GNNs (and are also associated with training difficulties). While GNS is not by default compatible with the assumptions of TAT, we propose a novel GNN initialization scheme called "Edge-Delta" that makes it compatible by initializing to zero the weights that carry "messages" from vertices to edges. This marks the first application of TAT to any applied problem in the literature. See Appendix A for details.
EXPERIMENTS
The goal of our experimental evaluation in this section is to answer the following questions. First, does pre-training a neural network via denoising improve performance on the downstream task compared to training from a random initialization? Second, how does the benefit of pre-training depend on the relationship between the upstream and downstream datasets? Our evaluation involves four realistic and challenging molecular datasets, which vary in size, compound compositions (organic or inorganic) and labelling methodology (DFT-or CCSD(T)-generated), as described below.
DATASETS AND TRAINING SETUP
Datasets. First, the main dataset we use for pre-training is PCQM4Mv2 (Nakata & Shimazaki, 2017), which contains 3.4 million organic molecules, specified by their 3D structures at equilibrium calculated using DFT. The molecules in PCQM4Mv2 contain only one label; however, the labels are not used, as denoising only requires the structures. The large scale and diversity of PCQM4Mv2 make it well-suited for pre-training via denoising. Second, as a dataset for fine-tuning, we use QM9 (Ramakrishnan et al., 2014), which contains around 130,000 small organic molecules and is widely used as a molecular property prediction benchmark (Klicpera et al., 2020a; Fuchs et al., 2020; Satorras et al., 2021; Finzi et al., 2020; Hutchinson et al., 2021; Schütt et al., 2021; Thölke & De Fabritiis, 2022; Godwin et al., 2022). Each molecule is specified by its structure alongside 12 associated molecular property labels. Third, Open Catalyst 2020 (OC20) (Chanussot et al., 2021) is a recent large benchmark of interacting surfaces and adsorbates relevant to catalyst discovery. OC20 contains various tasks, such as predicting the relaxed state energy from an initial high-energy structure (IS2RE). We explore different combinations of upstream and downstream tasks as described in Section 4.3. Lastly, DES15K (Donchev et al., 2021) is a small dataset we use for fine-tuning, which contains around 15,000 dimer geometries (i.e. molecule pairs) with non-covalent molecular interactions. Each pair is labelled with its interaction energy computed using the gold-standard CCSD(T) method (Bartlett & Musiał, 2007). CCSD(T) is usually both more expensive and accurate than DFT, which is used for all aforementioned datasets. See Appendix D for further details and a discussion about the choice of using DFT-generated structures for pre-training.
Figure 2 (right) shows what percentage of elements appearing in each of QM9, OC20 and DES15K also appear in PCQM4Mv2. Whereas QM9 is fully covered by PCQM4Mv2, we observe that DES15K has less element overlap with PCQM4Mv2 and less than 30% of elements in OC20 are contained in PCQM4Mv2. This is owing to the fact that surface molecules in OC20 are inorganic lattices, none of which appear in PCQM4Mv2. This suggests that we can expect the least transfer from PCQM4Mv2 to OC20. We also compare PCQM4Mv2 and QM9 in terms of the molecular compositions, i.e. the number of atoms of each element, that appear in each. Due to the presence of isomers, both datasets contain multiple molecules with the same composition. For each molecular composition in QM9, Figure 2 (left) shows its frequency in both QM9 and PCQM4Mv2. We observe that most molecular compositions in QM9 also appear in PCQM4Mv2. We also remark that since pre-training is self-supervised using only unlabelled structures, test set contamination is not possible; in fact, PCQM4Mv2 does not have most of the labels in QM9.
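The element-overlap statistic in Figure 2 (right) is simple set arithmetic; the element sets below are toy stand-ins, not the actual dataset contents:

```python
def element_overlap(downstream_elements, upstream_elements):
    """Percentage of the downstream dataset's elements that also occur upstream."""
    downstream, upstream = set(downstream_elements), set(upstream_elements)
    return 100.0 * len(downstream & upstream) / len(downstream)

upstream = {"H", "C", "N", "O", "F", "S", "Cl"}   # hypothetical PCQM4Mv2-like set
qm9_like = {"H", "C", "N", "O", "F"}              # fully covered organic elements
oc20_like = {"H", "C", "O", "Pt", "Cu", "Fe",     # mostly inorganic lattice elements
             "Zn", "Ag", "Ir", "Ti"}
```

On these toy sets, the QM9-like elements are 100% covered while the OC20-like ones are only partially covered, mirroring the qualitative pattern in Figure 2 (right).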
Training setup. GNS/GNS-TAT were implemented in JAX (Bradbury et al., 2018) using Haiku and Jraph (Hennigan et al., 2020). All experiments were averaged over 3 seeds. Detailed hyperparameter and hardware settings can be found in Appendices E and F.
RESULTS ON QM9
We evaluate two variants of our model on QM9 in Table 1: GNS-TAT with Noisy Nodes trained from a random initialization versus from pre-trained parameters. Pre-training is done on PCQM4Mv2 via denoising. For best performance on QM9, we found that using atom type masking and prediction during pre-training additionally helped (Hu et al., 2020a). We fine-tune a separate model for each of the 12 targets, as usually done on QM9, using a single pre-trained model. This is repeated for three seeds (including pre-training). Following customary practice, hyperparameters, including the noise scale for denoising during pre-training and fine-tuning, are tuned on the HOMO target and then kept fixed for all other targets. We first observe that GNS-TAT with Noisy Nodes performs competitively with other models and significantly improves upon GNS with Noisy Nodes, revealing the benefit of the TAT modifications. Utilizing pre-training then further improves performance across all targets, achieving a new state-of-the-art compared to prior work for 10 out of 12 targets. Interestingly, for the electronic spatial extent target $R^2$, we found GNS-TAT to perform worse than other models, which may be due to the optimal noise scale being different from that of other targets.

Figure 3: Left: Validation performance curves on the OC20 IS2RE task (ood_both split). See Table 8 for a comparison to other models in the literature. Right: Test performance curves for predicting interaction energies of dimer geometries in the DES15K dataset. "PT" and "NN" stand for pre-training and Noisy Nodes respectively.
RESULTS ON OC20
Next, we consider the Open Catalyst 2020 benchmark focusing on the downstream task of predicting the relaxed energy from the initial structure (IS2RE). We compared GNS with Noisy Nodes trained from scratch versus using pre-trained parameters. We experimented with two options for pre-training:
(1) pre-training via denoising on PCQM4Mv2, and (2) pre-training via denoising on OC20 itself. For the latter, we follow Godwin et al.'s [2022] approach of letting the denoising target be the relaxed structure, while the perturbed input is a random interpolation between the initial and relaxed structures with added Gaussian noise -this corresponds to the IS2RS task with additional noise. As shown in Figure 3 (left), pre-training on PCQM4Mv2 offers no benefit for validation performance on IS2RE, however pre-training on OC20 leads to considerably faster convergence but the same final performance. The lack of transfer from PCQM4Mv2 to OC20 is likely due to the difference in nature of the two datasets and the small element overlap as discussed in Section 4.1 and Figure 2 (right). On the other hand, faster convergence from using parameters pre-trained on OC20 suggests that denoising learned meaningful features. Unsurprisingly, the final performance is unchanged since the upstream and downstream datasets are the same in this case, so pre-training with denoising is identical to the auxiliary task of applying Noisy Nodes. The performance achieved is also competitive with other models in the literature as shown in Table 8.
RESULTS ON DES15K
In our experiments so far, all downstream tasks were based on DFT-generated datasets. While DFT calculations are more expensive than using neural networks, they are relatively cheap compared to even higher quality methods such as CCSD(T) (Bartlett & Musiał, 2007). In this section, we evaluate how useful pre-training on DFT-generated structures from PCQM4Mv2 is when fine-tuning on the recent DES15K dataset, which contains higher quality CCSD(T)-generated interaction energies. Moreover, unlike QM9, inputs from DES15K are systems of two interacting molecules and the dataset contains only around 15,000 examples, rendering it more challenging. We compare the test performance on DES15K achieved by GNS-TAT with Noisy Nodes when trained from scratch versus using pre-trained parameters from PCQM4Mv2. As a baseline, we also include pre-training on PCQM4Mv2 using 2D-based AttrMask (Hu et al., 2020a) by masking and predicting atomic numbers. Figure 3 (right) shows that using Noisy Nodes significantly improves performance compared to training from scratch, with a further improvement resulting from using pre-training via denoising. AttrMask underperforms denoising since it likely does not fully exploit the 3D structural information. Importantly, this shows that pre-training by denoising structures obtained through relatively cheap methods such as DFT can even be beneficial when fine-tuning on more expensive and smaller downstream datasets. See Appendix G.1 for similar results on another architecture.

Figure 4 (partial caption): Validation performance curves on the OC20 S2EF task (ood_both split) for different model sizes. "PT" and "NN" stand for pre-training and Noisy Nodes respectively.
ANALYSIS
PRE-TRAINING A DIFFERENT ARCHITECTURE
To explore whether pre-training is beneficial beyond GNS/GNS-TAT, we applied pre-training via denoising to the TorchMD-NET architecture (Thölke & De Fabritiis, 2022). TorchMD-NET is a transformer-based architecture whose layers maintain per-atom scalar features $x_i \in \mathbb{R}^F$ and vector features $v_i \in \mathbb{R}^{3 \times F}$, where $F$ is the feature dimension, that are updated in each layer using a self-attention mechanism. We implemented denoising by using gated equivariant blocks (Weiler et al., 2018; Schütt et al., 2021) applied to the processed scalar and vector features. The resulting vector features are then used as the noise prediction.

As shown in Figure 4 (left), pre-training improves the downstream performance for all dataset sizes. The difference in test MAE also grows as the downstream training data reduces. Second, we assess the effect of varying the amount of pre-training data while fixing the downstream dataset size for both GNS and GNS-TAT, as shown in Figure 4 (middle). For both models, we find that downstream performance generally improves as upstream data increases, with saturating performance for GNS-TAT. More upstream data can yield better-quality representations.
VARYING MODEL SIZE
We study the benefit of pre-training as models are scaled up on large downstream datasets. Recall that the S2EF dataset in OC20 contains around 130 million DFT evaluations for catalytic systems, providing three orders of magnitude more training data than QM9. We compare the performance of four GNS models with sizes ranging from 10 million to 1.2 billion parameters, scaled up by increasing the hidden layer sizes in the MLPs. Each is pre-trained via denoising using the trajectories provided for the IS2RE/IS2RS tasks as described in Section 4.3. We also compare this to a 130-million-parameter variant of GNS trained from scratch. As shown in Figure 4 (right), the pre-trained models continue to benefit from larger model sizes. We also observe that pre-training is beneficial, as the model trained from scratch underperforms in comparison: the 130-million-parameter model trained from scratch is outperformed by a pre-trained model of less than half the size.

PRE-TRAINING IMPROVES FORCE PREDICTION

As shown in Section 3.2.1, denoising structures corresponds to learning an approximate force field directly from equilibrium structures. We explore whether pre-training via denoising would therefore also improve models trained to predict atomic forces. We compare the performance of TorchMD-NET for force prediction on the MD17 (aspirin) dataset with and without pre-training on PCQM4Mv2. Table 3 shows that pre-training improves force prediction. We also assess the effect of pre-training on the OC20 dataset for force prediction in Appendix G.3, similarly finding an improvement due to pre-training.

FREEZING PRE-TRAINED PARAMETERS

Finally, we perform an experiment to assess how useful the features learned by pre-training are if they are not fine-tuned for the downstream task but kept fixed instead. Specifically, on the HOMO target in QM9, we freeze the backbone of the model and fine-tune only the decoder (cf. Appendix A). To evaluate this, we compare it to using random parameters from initialization for the model's backbone, which allows us to isolate how useful the pre-trained features are. As described in Appendix A, the decoder is a simple module involving no message-passing. Figure 5 shows that only training the decoder while keeping the pre-trained parameters fixed results in a test MAE of 40 meV, which is worse than fine-tuning the entire model but substantially better than the >100 meV test MAE resulting from training the decoder when the remaining parameters are randomly initialized. This suggests that the features learned by denoising are more discriminative for downstream prediction than random features. We note that training only the decoder is also substantially faster than training the entire network: one batch on a single V100 GPU takes 15 ms, which is 50× faster than one batch using 16 TPUs for the full network.
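The frozen-backbone setup can be sketched as a masked parameter update: gradients are applied only to the decoder's parameter group while the pre-trained backbone stays fixed (our hypothetical illustration; the parameter-group names are made up, not the paper's training code):

```python
import numpy as np

def train_step(params, grads, lr, trainable):
    """Gradient step that updates only the parameter groups named in
    `trainable`, leaving the rest (e.g. a pre-trained backbone) frozen."""
    return {name: p - lr * grads[name] if name in trainable else p
            for name, p in params.items()}

params = {"backbone": np.ones(3), "decoder": np.ones(3)}
grads = {"backbone": np.full(3, 0.5), "decoder": np.full(3, 0.5)}
new_params = train_step(params, grads, lr=0.1, trainable={"decoder"})
```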
LIMITATIONS & FUTURE WORK
We have shown that pre-training can significantly improve performance for various tasks. One additional advantage of pre-trained models is that they can be shared in the community, allowing practitioners to fine-tune models on their datasets. However, unlike vision and NLP, molecular networks vary widely and the community has not yet settled on a "standard" architecture, making pre-trained weights less reusable. Moreover, the success of pre-training inevitably depends on the relationship between the upstream and downstream datasets. In the context of molecular property prediction, understanding what aspects of the upstream data distribution must match the downstream data distribution for transfer is an important direction for future work. More generally, pre-training models on large datasets incurs a computational cost. However, our results show that pre-training for 3D molecular prediction does not require the same scale as large NLP and vision models. We discuss considerations on the use of compute and broader impact in Appendix C.
CONCLUSION
We investigated pre-training neural networks by denoising in the space of 3D molecular structures. We showed that denoising in this context is equivalent to learning a force field, motivating its ability to learn useful representations and shedding light on successful applications of denoising in other works (Godwin et al., 2022). This technique enabled us to utilize existing large datasets of 3D structures for improving performance on various downstream molecular property prediction tasks, setting a new SOTA in some cases such as QM9. More broadly, this bridges the gap between the utility of pre-training in vision/NLP and molecular property prediction from structures. We hope that this approach will be particularly impactful for applications of deep learning to scientific problems.
A ARCHITECTURAL DETAILS
A.1 STANDARD GNS

As our base model architecture we chose a Graph Net Simulator (GNS) (Sanchez-Gonzalez et al., 2020), which consists of an ENCODER which constructs a graph representation from the input S, a PROCESSOR of repeated message-passing blocks that update the latent graph representation, and a DECODER which produces predictions. Our implementation follows Godwin et al.'s [2022] modifications to enable molecular and graph-level property predictions, which have been shown to achieve strong results across different molecular prediction tasks without relying on problem-specific features.
In the ENCODER, we represent the set of atoms S = {(a 1 , p 1 ), (a 2 , p 2 ), . . . , (a |S| , p |S| )} as a directed graph G = (V, E) where V = {v 1 , v 2 , . . . , v |S| } and E = {e i,j } i,j are the sets of "featurized" vertices and edges, respectively. Edges e i,j ∈ E are constructed whenever the distance between the i-th and j-th atoms is less than the connectivity radius R cut , in which case we connect v i and v j with a directed edge e i,j from i to j that is a featurization of the displacement vector p j − p i . Meanwhile, for the i-th atom, v i is given by a learnable vector embedding of the atomic number a i .
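The ENCODER's cutoff-based graph construction can be sketched as follows (an illustrative NumPy implementation of the rule described above; the O(n²) double loop and names are ours, not the paper's code):

```python
import numpy as np

def build_edges(positions, r_cut):
    """Construct directed edges e_{i,j} whenever ||p_j - p_i|| < r_cut (i != j),
    each featurized here by the raw displacement vector p_j - p_i."""
    n = len(positions)
    senders, receivers, feats = [], [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            disp = positions[j] - positions[i]
            if np.linalg.norm(disp) < r_cut:
                senders.append(i)
                receivers.append(j)
                feats.append(disp)
    return np.array(senders), np.array(receivers), np.array(feats)

pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [5.0, 0.0, 0.0]])
senders, receivers, feats = build_edges(pos, r_cut=2.0)
```

For this toy geometry, only the first two atoms fall within the cutoff of each other, yielding one directed edge in each direction.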
The PROCESSOR consists of $L$ message-passing steps that produce intermediate graphs $G_1, \ldots, G_L$ (with the same connectivity structure as the initial one). Each of these steps computes the sum of a shortcut connection from the previous graph, and the application of an Interaction Network (Battaglia et al., 2016). Interaction Networks first update each edge feature by applying an "edge update function" to a combination of the existing feature and the features of the two connected vertices. They then update each vertex feature by applying a "vertex update function" to a combination of the existing feature and the (new) edge features of incoming edges. In GNS, edge update functions are 3-hidden-layer fully-connected MLPs, using a "shifted softplus" ($\mathrm{ssp}(x) = \log(0.5e^x + 0.5)$) activation function, applied to the concatenation of the relevant edge and vertex features, followed by a layer normalization layer. Vertex update functions are similar, but are applied to the concatenation of the relevant vertex feature and the sum over relevant edge features.
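The shifted softplus activation can be checked directly; it is an ordinary softplus shifted down so that it passes through the origin (a small NumPy illustration, not the paper's code):

```python
import numpy as np

def ssp(x):
    # shifted softplus used in the GNS update MLPs: ssp(x) = log(0.5*e^x + 0.5)
    return np.log(0.5 * np.exp(x) + 0.5)

# The shift makes the activation vanish at the origin: ssp(0) = log(1) = 0.
zero_val = ssp(0.0)

# Algebraically, log(0.5*e^x + 0.5) = log(1 + e^x) - log(2),
# i.e. softplus(x) shifted down by log(2).
x = np.linspace(-5.0, 5.0, 11)
softplus_shifted = np.log1p(np.exp(x)) - np.log(2.0)
```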
In our implementation of GNS we applied the same PROCESSOR in sequence three times (with shared parameters), with the output of each being decoded to produce a prediction and corresponding loss value. The loss for the whole model is then given by the average of these. (Test-time predictions are meanwhile computed using only the output of the final PROCESSOR.)
The DECODER is responsible for computing graph-level and vertex-level predictions from the output of each PROCESSOR. Vertex-level predictions, such as noise as described in Section 3.2, are decoded using an MLP applied to each vertex feature. Graph-level predictions (e.g. energies) are produced by applying an MLP to each vertex feature, aggregating the result over vertices (via a sum), and then applying another MLP to the result.

A.2 GNS WITH TAILORED ACTIVATION TRANSFORMATION (GNS-TAT)

Tailored Activation Transformation (TAT) (Zhang et al., 2022) is a method for initializing and transforming neural networks to make them easier to train, and is based on a similar method called Deep Kernel Shaping (DKS) (Martens et al., 2021). TAT controls the propagation of "q values", which are initialization-time approximations to dimension-normalized squared norms of the network's layer-wise activation vectors, and "c values", which are cosine similarities between such vectors (for different inputs). In other words, q values approximate $\|z(x)\|^2 / \dim(z(x))$, where $z(x)$ denotes a layer's output as a function of the network's input $x$, and c values approximate $z(x)^\top z(x') / (\|z(x)\| \, \|z(x')\|)$, where $x'$ is another possible network input. In standard deep networks, c values will converge to a constant value $c_\infty \in [0, 1]$, so that "geometric information" is lost, which leads to training difficulties (Martens et al., 2021). DKS/TAT prevents this convergence through a combination of careful weight initialization and transformations to the network's activation functions and sum/average layers.
Oversmoothing (Chen et al., 2019; Cai & Wang, 2020; Rong et al., 2019; Zhou et al., 2020; Yang et al., 2020; Zhao & Akoglu, 2020; Do et al., 2021) is a phenomenon observed in GNN architectures where vertex/edge features all converge to approximately the same value with depth, and is associated with training difficulties. It is reminiscent of how, when $c_\infty = 1$, feature vectors will converge with depth to a constant input-independent vector in standard deep networks. It therefore seems plausible that applying TAT to GNNs may help with the oversmoothing problem and thus improve training performance.
Unfortunately, the GNS architecture violates two key assumptions of TAT. Firstly, the sums over edge features (performed in the vertex update functions) violate the assumption that all sum operations must be between the outputs of linear layers with independently sampled initial weights. Secondly, GNS networks have multiple inputs for which information needs to be independently preserved and propagated to the output, while DKS/TAT assumes a single input (or multiple inputs whose representations evolve independently in the network).
To address these issues we introduce a new initialization scheme called "Edge-Delta", which initializes to zero the weights that multiply incoming vertex features in the edge update functions (and treats these weights as absent for the purpose of computing the initial weight variance). This approach is inspired by the use of the "Delta initialization" (Balduzzi et al., 2017;Xiao et al., 2018) for convolutional networks in DKS/TAT, which initializes filter weights of the non-central locations to zero, thus allowing geometric information, in the form of c values, to propagate independently for each location in the feature map. When using the Edge-Delta initialization, edge features propagate independently of each other (and of vertex features), through what is essentially a standard deep residual network (with edge update functions acting as the residual branches), which we will refer to as the "edge network".
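A minimal sketch of the Edge-Delta idea for a single edge-update weight matrix (our hypothetical illustration; the layer shapes and names are assumptions, not the paper's implementation): the block multiplying the incoming vertex features starts at zero, and the fan-in used for scaling counts only the edge-feature block.

```python
import numpy as np

def edge_delta_init(rng, n_edge_feat, n_vertex_feat, n_out):
    """First edge-update weight matrix under "Edge-Delta": Gaussian fan-in
    init for the edge-feature rows, zeros for the (sender + receiver)
    vertex-feature rows, which are excluded from the fan-in count."""
    w_edge = rng.standard_normal((n_edge_feat, n_out)) / np.sqrt(n_edge_feat)
    w_vertex = np.zeros((2 * n_vertex_feat, n_out))   # sender + receiver features
    return np.concatenate([w_edge, w_vertex], axis=0)

rng = np.random.default_rng(0)
w = edge_delta_init(rng, n_edge_feat=8, n_vertex_feat=4, n_out=16)
```

At initialization, multiplying a concatenated [edge, sender, receiver] feature vector by `w` therefore depends only on the edge features, so edge features propagate independently through the edge network.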
Given the use of Edge-Delta we can then apply TAT to GNS as follows. First, we replace GNS's activation functions with TAT's transformed Leaky-ReLU activation functions (or "Tailored ReLUs"), which we compute with TAT's η parameter set to 0.8, and its "subnetwork maximizing function" defined on the edge network. We also replace each sum involving shortcut connections with weighted sums, whose weights are $0.9$ and $\sqrt{1 - 0.9^2}$ for the shortcut and non-shortcut branches respectively. We retain the use of layer normalization layers in the edge/vertex update functions, but move them to before the first fully-connected layer, as this seems to give the best performance. As required by TAT, we use a standard Gaussian fan-in initialization for the weights, and a zero initialization for the biases, with Edge-Delta used only for the first linear layer of the edge update functions. Finally, we replace the sum used to aggregate vertex features in the DECODER with an average. See Figures 6 and 7 for an illustration of these changes.
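The shortcut weights are chosen so that q values are preserved at initialization, since $0.9^2 + (\sqrt{1 - 0.9^2})^2 = 1$: a weighted sum of two independent unit-variance branches again has unit variance. A quick NumPy check of this design choice (illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
shortcut = rng.standard_normal(100_000)      # shortcut branch, variance ~1
residual = rng.standard_normal(100_000)      # non-shortcut branch, variance ~1

# Weighted residual sum with weights 0.9 and sqrt(1 - 0.9^2).
out = 0.9 * shortcut + np.sqrt(1.0 - 0.9**2) * residual
out_var = out.var()                          # should be close to 1
```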
We experimented with an analogous "Vertex-Delta" initialization, which initializes to zero weights in the vertex update functions that multiply summed edge features, but found that Edge-Delta gave the best results. This might be because the edge features, which encode distances between vertices (and are best preserved with the Edge-Delta approach), are generally much more informative than the vertex features in molecular property prediction tasks. We also ran informal ablation studies, and found that each of our changes to the original GNS model contributed to improved results, with the use of Edge-Delta and weighted shortcut sums being especially important.
B DENOISING AS LEARNING A FORCE FIELD
We specify a molecular structure as $x = (x^{(1)}, \ldots, x^{(N)}) \in \mathbb{R}^{3N}$, where $x^{(i)} \in \mathbb{R}^3$ is the coordinate of atom $i$. Let $E(x)$ denote the total (potential) energy of $x$, such that $-\nabla_x E(x)$ are the forces on the atoms. As discussed in Section 3.2.1, learning the force field, i.e. the mapping $x \mapsto -\nabla_x E(x)$, is a reasonable pre-training objective. Furthermore, learning the force field can be viewed as score-matching if we define the distribution $p_{\text{physical}}(x) \propto \exp(-E(x))$ and observe that the score of $p_{\text{physical}}$ is the force field:

$$\nabla_x \log p_{\text{physical}}(x) = -\nabla_x E(x).$$
However, a technical caveat is that p physical is an improper probability density, because it cannot be normalized due to the translation invariance of E. Writing the translation of a structure as
x + t := (x (1) + t, . . . , x (N ) + t) where t ∈ R 3 is a constant vector, we have E(x + t) = E(x).
This implies that the normalizing constant $\int_{\mathbb{R}^{3N}} p_{\text{physical}}(x)\,dx$ diverges to infinity. To remedy this, we can restrict ourselves to the $(3N-3)$-dimensional subspace $V := \{x \in \mathbb{R}^{3N} \mid \sum_i x^{(i)} = 0\} \subseteq \mathbb{R}^{3N}$ consisting of the mean-centered structures, over which $p_{\text{physical}}$ can be defined as a normalizable distribution.
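The translation invariance above is easy to check on any energy that depends only on interatomic distances; the toy potential below is our own illustration, not the DFT energy used in the paper:

```python
import numpy as np

def pairwise_energy(x):
    # Toy energy depending only on interatomic distances, hence
    # translation invariant: E(x + t) = E(x) for any t in R^3.
    # x has shape (N, 3).
    n = len(x)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            e += 1.0 / np.linalg.norm(x[i] - x[j])  # toy repulsive term
    return e
```

Because only differences x[i] - x[j] enter, adding a constant vector t to every atom leaves the energy unchanged, which is exactly why the normalizing constant over all of R^{3N} diverges.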
Proceeding similarly to Section 3.2.1, let $x_1, \ldots, x_n \in V$ be a set of mean-centered equilibrium structures. For any $\tilde{x} \in V$, we now approximate

$$p_{\text{physical}}(\tilde{x}) \approx q_\sigma(\tilde{x}) := \frac{1}{n} \sum_{i=1}^{n} q_\sigma(\tilde{x} \mid x_i),$$

where the Gaussian distributions $q_\sigma(\tilde{x} \mid x_i)$ are defined on $V$ as:

$$q_\sigma(\tilde{x} \mid x_i) = \frac{1}{(2\pi\sigma^2)^{(3N-3)/2}} \exp\left(-\frac{1}{2\sigma^2} \|\tilde{x} - x_i\|^2\right).$$
For convenience, we have expressed structures as vectors in the ambient space $\mathbb{R}^{3N}$; however, they are restricted to lie in the smaller space $V$. Note that the normalizing constant accounts for the fact that $V$ is $(3N-3)$-dimensional. As before, we define $q_0(x) = \frac{1}{n}\sum_{i=1}^{n} \delta(x = x_i)$ to be the empirical distribution and $q_\sigma(\tilde{x}, x) = q_\sigma(\tilde{x} \mid x)\,q_0(x)$. The score-matching objective is given by:
$$J_1(\theta) = \mathbb{E}_{q_\sigma(\tilde{x})}\left[\left\|\mathrm{GNN}_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x})\right\|^2\right], \tag{5}$$
where the expectation is now over $V$. As shown by Vincent (2011), minimizing the objective above is equivalent to minimizing the following objective:

$$J_2(\theta) = \mathbb{E}_{q_\sigma(\tilde{x}, x)}\left[\left\|\mathrm{GNN}_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x)\right\|^2\right]. \tag{6}$$
This is recognized as a denoising objective, because $\nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) = (x - \tilde{x})/\sigma^2$. A practical implication of this analysis is that the noise $(x - \tilde{x})/\sigma^2 \in V$ should be mean-centered, which is intuitive since it is impossible to predict a translational component in the noise.
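The recipe implied by this analysis can be sketched numerically; the helper below is our own illustrative code (not the paper's implementation): draw Gaussian noise, project it onto V by subtracting its mean, and return the denoising target (x − x̃)/σ², i.e. the conditional score:

```python
import numpy as np

def noisy_sample(x, sigma, rng):
    # x: mean-centered equilibrium structure of shape (N, 3).
    eps = rng.normal(scale=sigma, size=x.shape)
    eps -= eps.mean(axis=0, keepdims=True)   # project the noise onto V
    x_noisy = x + eps                        # stays mean-centered
    target = (x - x_noisy) / sigma ** 2      # equals -eps / sigma^2
    return x_noisy, target
```

The model is then regressed onto `target`, and because the noise has no translational component, neither does the regression target.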
We include a proof of the equivalence between Equations (5) and (6) for completeness:

Proposition 1 (Vincent (2011)). The minimization objectives $J_1(\theta)$ and $J_2(\theta)$ are equivalent.
Proof. We first observe:

$$J_1(\theta) = \mathbb{E}_{q_\sigma(\tilde{x})}\left[\|\mathrm{GNN}_\theta(\tilde{x})\|^2\right] - 2\,\mathbb{E}_{q_\sigma(\tilde{x})}\left[\langle \mathrm{GNN}_\theta(\tilde{x}), \nabla_{\tilde{x}} \log q_\sigma(\tilde{x}) \rangle\right] + C_1,$$
$$J_2(\theta) = \mathbb{E}_{q_\sigma(\tilde{x})}\left[\|\mathrm{GNN}_\theta(\tilde{x})\|^2\right] - 2\,\mathbb{E}_{q_\sigma(\tilde{x}, x)}\left[\langle \mathrm{GNN}_\theta(\tilde{x}), \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \rangle\right] + C_2,$$
where $C_1, C_2$ are constants independent of $\theta$. Therefore, it suffices to show that the middle terms on the RHS are equal. Since expectations over $q_\sigma(\tilde{x})$ and $q_\sigma(\tilde{x}, x)$ are restricted to $V \subseteq \mathbb{R}^{3N}$, we apply a change of basis to write them as integrals against the $(3N-3)$-dimensional Lebesgue measure. Pick an orthonormal basis $\{v_1, \ldots, v_{3N-3}\} \subseteq \mathbb{R}^{3N}$ for $V$ and let $P_V = [v_1, \ldots, v_{3N-3}] \in \mathbb{R}^{3N \times (3N-3)}$ be the projection matrix, so $z = P_V^\top x$ expresses a mean-centered structure $x$ in terms of the coordinates of the chosen basis for $V$. Noting that $P_V$ has orthonormal columns and that it yields a bijection between $V$ and $\mathbb{R}^{3N-3}$, we calculate:
$$\begin{aligned}
\mathbb{E}_{q_\sigma(\tilde{x})}\left[\langle \mathrm{GNN}_\theta(\tilde{x}), \nabla_{\tilde{x}} \log q_\sigma(\tilde{x}) \rangle\right]
&= \int_{\mathbb{R}^{3N-3}} q_\sigma(P_V \tilde{z})\, \langle \mathrm{GNN}_\theta(P_V \tilde{z}), \nabla \log q_\sigma(P_V \tilde{z}) \rangle \, d\tilde{z} \\
&= \int_{\mathbb{R}^{3N-3}} q_\sigma(P_V \tilde{z}) \left\langle \mathrm{GNN}_\theta(P_V \tilde{z}), \frac{\nabla q_\sigma(P_V \tilde{z})}{q_\sigma(P_V \tilde{z})} \right\rangle d\tilde{z} \\
&= \int_{\mathbb{R}^{3N-3}} \langle \mathrm{GNN}_\theta(P_V \tilde{z}), \nabla q_\sigma(P_V \tilde{z}) \rangle \, d\tilde{z} \\
&= \int_{\mathbb{R}^{3N-3}} \left\langle \mathrm{GNN}_\theta(P_V \tilde{z}), \frac{1}{n} \sum_{i=1}^{n} \nabla q_\sigma(P_V \tilde{z} \mid x_i) \right\rangle d\tilde{z} \\
&= \int_{\mathbb{R}^{3N-3}} \left\langle \mathrm{GNN}_\theta(P_V \tilde{z}), \frac{1}{n} \sum_{i=1}^{n} q_\sigma(P_V \tilde{z} \mid x_i)\, \nabla \log q_\sigma(P_V \tilde{z} \mid x_i) \right\rangle d\tilde{z} \\
&= \int_{\mathbb{R}^{3N-3}} \frac{1}{n} \sum_{i=1}^{n} q_\sigma(P_V \tilde{z} \mid x_i)\, \langle \mathrm{GNN}_\theta(P_V \tilde{z}), \nabla \log q_\sigma(P_V \tilde{z} \mid x_i) \rangle \, d\tilde{z} \\
&= \mathbb{E}_{q_\sigma(\tilde{x}, x)}\left[\langle \mathrm{GNN}_\theta(\tilde{x}), \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \rangle\right].
\end{aligned}$$
C BROADER IMPACT
Who may benefit from this work? Molecular property prediction works towards a range of applications in materials design, chemistry, and drug discovery. Wider use of pre-trained models may accelerate progress in a similar manner to how pre-trained language or image models have enabled practitioners to avoid training on large datasets from scratch. Pre-training via denoising is simple to implement and can be immediately adopted to improve performance on a wide range of molecular property prediction tasks. As research converges on more standardized architectures, we expect shared pre-trained weights will become more common across the community.
Potential negative impact and ethical considerations. Pre-training models on large structure datasets incurs additional computational cost when compared to training a potentially smaller model with less capacity from scratch. Environmental mitigation should be taken into account when pre-training large models (Patterson et al., 2021). However, the computational cost of pre-training can and should be offset by sharing pre-trained embeddings when possible. Moreover, in our ablations of upstream dataset sizes for GNS-TAT, we observed that training on a subset of PCQM4Mv2 was sufficient for strong downstream performance. In future work, we plan to investigate how smaller subsets with sufficient diversity can be used to minimize computational requirements, e.g. by requiring fewer gradient steps.
D DATASETS
PCQM4Mv2. The main dataset we use for pre-training is PCQM4Mv2 (Nakata & Shimazaki, 2017) (license: CC BY 4.0), which contains 3,378,606 organic molecules, specified by their 3D structures at equilibrium (atom types and coordinates) calculated using DFT. Molecules in PCQM4Mv2 have around 30 atoms on average and vary in terms of their composition, with the dataset containing 22 unique elements in total. Unlike e.g. QM9, which contains 12 labels per molecule, the molecules in PCQM4Mv2 contain only one label; we do not use these labels in any case, as denoising only requires the structures.
QM9. QM9 (Ramakrishnan et al., 2014) (license: CC BY 4.0) is a dataset of approximately 130,000 small organic molecules containing up to nine heavy atoms (C, N, O, F), specified by their structures. Each molecule has 12 different labels corresponding to different molecular properties, such as the highest occupied molecular orbital (HOMO) energy and internal energy, which we use for fine-tuning.
OC20. Open Catalyst 2020 (Chanussot* et al., 2021) (OC20, license: CC Attribution 4.0) is a recent large benchmark containing trajectories of interacting surfaces and adsorbates that are relevant to catalyst discovery and optimization. This dataset contains three tasks: predicting the relaxed state energy from the initial structure (IS2RE), predicting the relaxed structure from the initial structure (IS2RS) and predicting the energy and forces given the structure at any point in the trajectory (S2EF). For IS2RE and IS2RS, there are 460,000 training examples, where each data point is a trajectory of a surface-adsorbate molecule pair starting with a high-energy initial structure that is relaxed towards a low-energy, equilibrium structure. For S2EF, there are 113 million examples of (non-equilibrium) structures with their associated energies and per-atom forces.
DES15K. DES15K (Donchev et al., 2021) (license: CC0 1.0) is a small dataset containing around 15,000 interacting molecule pairs, specifically dimer geometries with non-covalent molecular interactions. Each pair is labelled with the associated interaction energy computed using the coupled-cluster method with single, double, and perturbative triple excitations (CCSD(T)) (Bartlett & Musiał, 2007), which is widely regarded as the gold-standard method in electronic structure theory.
Usage of DFT-generated structures for pre-training. The structures in PCQM4Mv2 are obtained using DFT calculations, which in principle could have also been used to generate labels for molecular properties in DFT-generated downstream datasets, such as QM9. However, there are multiple reasons why denoising remains a desirable pre-training objective in such settings. First, although there is a computational cost for generating datasets such as PCQM4Mv2, it is now openly available and part of our aim is to understand how to leverage such datasets for tasks on other datasets (analogous to how ImageNet is expensive to build, but once it is available, it is important to understand how it can improve downstream performance on other datasets). Second, even if PCQM4Mv2 contained all the labels in QM9, pre-training via denoising structures allows one to pre-train a single, label-agnostic model which can be individually fine-tuned on any of the targets in QM9. This is substantially cheaper than per-target pre-training, and the resulting pre-trained model is also re-useable for differing needs and downstream tasks.
In other settings where the downstream dataset is generated using more expensive methods than DFT, pre-training via denoising DFT-relaxed structures can also be helpful and has a clear benefit, as shown by our experiment on the CCSD(T)-generated dataset DES15K where denoising on PCQM4Mv2 improves performance for a downstream task involving a "higher" level of theory such as CCSD(T). Generally, we emphasize that the methodology of denoising structures can be applied to any dataset of structures (regardless of whether they are computed using DFT or not). We hope that denoising will be useful in the future for learning representations by pre-training on structures obtained through other methods such as experimental data (in which case labeling may be expensive) and databases generated by other models such as AlphaFold (where only structures are available) (Jumper et al., 2021).
E EXPERIMENT SETUP AND COMPUTE RESOURCES
Below, we list details on our experiment setup and hardware resources used.
GNS & GNS-TAT. GNS-TAT training for QM9, PCQM4Mv2 and DES15K was done on a cluster of 16 TPU v3 devices, with evaluation on a single V100 device. GNS training for OC20 was done on 8 TPU v4 devices, with the exception of the 1.2 billion parameter variant of the model, which was trained on 64 TPU v4 devices. Pre-training on PCQM4Mv2 was executed for 3·10^5 gradient updates (approximately 1.5 days of training). Fine-tuning experiments were run until convergence for QM9 (10^6 gradient updates, taking approximately 2 days) and DES15K (10^5 gradient updates, taking approximately 4 hours), and stopped after 5·10^5 gradient updates on OC20 (2.5 days) to minimize hardware use (the larger models keep benefiting from additional gradient updates).
TorchMD-NET. We implemented denoising for TorchMD-NET on top of the open-source code of Thölke & De Fabritiis (2022). 6 Models were trained on QM9 using data parallelism over two NVIDIA RTX 2080Ti GPUs. Pre-training on PCQM4Mv2 was done using three GPUs to accommodate the larger molecules while keeping the batch size approximately the same as for QM9. All hyperparameters except the learning rate schedule were kept fixed at the defaults. Pre-training took roughly 24 hours, whereas fine-tuning took around 16 hours.
Hyperparameter optimization. We note that effective pre-training via denoising requires sweeping noise values, as well as loss coefficients for denoising and atom type recovery. For GNS/GNS-TAT, we relied on the hyperparameters published by Godwin et al. (2022), but determined new noise values for pre-training and fine-tuning by tuning over a grid of approximately 5 values each on PCQM4Mv2 and QM9 (for the HOMO target). We used the same values for DES15K without modification. We also ran a similar number of experiments to determine cosine cycle parameters for learning rates.
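The swept loss coefficients combine a position-denoising term with a masked atom-type recovery term; the sketch below is our own hypothetical re-implementation of such a combined loss (the function name and exact masking convention are ours, not from the released code):

```python
import numpy as np

def combined_pretraining_loss(pos_pred, pos_target, type_logits, type_labels,
                              mask, pos_coef=1.0, type_coef=4.0):
    # Position-denoising MSE plus cross-entropy atom-type recovery on the
    # masked atoms, weighted by the swept loss coefficients.
    pos_loss = np.mean((pos_pred - pos_target) ** 2)
    # numerically stable log-softmax over atom-type logits
    logits = type_logits - type_logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    ce = -log_probs[np.arange(len(type_labels)), type_labels]
    type_loss = (ce * mask).sum() / max(mask.sum(), 1)  # masked atoms only
    return pos_coef * pos_loss + type_coef * type_loss
```

The coefficients default here to the values reported in Table 4 (position loss 1.0, atom-type loss 4.0), but both would be swept in practice.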
F HYPERPARAMETERS
We report the main hyperparameters used for GNS and GNS-TAT below.

In addition to the experiments involving GNS-TAT on DES15K in Section 4.4, we also consider the performance of TorchMD-NET on DES15K with and without pre-training on PCQM4Mv2. As shown in Table 7, TorchMD-NET outperforms GNS-TAT when trained from scratch, and pre-training then yields a further boost in performance, as with GNS-TAT.

Table 8 compares the performance of our models with various other architectures proposed in prior work. GNS yields SOTA performance on each of the four validation sets for the direct IS2RE task. As discussed in Section 4.3 and shown in Figure 3 (left), the model pre-trained on OC20 itself achieves SOTA performance with faster convergence than all other GNS variants. Note that since we pre-train on OC20 itself, the pre-trained model performs equally well at convergence as the model trained from scratch with noisy nodes (cf. Section 4.3).

Recall that in Section 5.4 we explored whether pre-training also improves force prediction models, given the link between denoising and learning forces as described in Section 3.2.1. In this section, we consider a second experiment for force models. The OC20 dataset contains a force prediction task (S2EF), where a model is trained to predict point-wise energy and forces (each point being a single DFT evaluation during a relaxation trajectory) from a given 3D structure. Tables 9 and 10 show the performance of GNS when trained from scratch vs. pre-trained via denoising on equilibrium structures in OC20, as described in Section 4.3. We show two metrics for measuring force prediction performance on each of the four validation datasets: mean absolute error (lower is better) and cosine similarity (higher is better). We observe that the pre-trained model improves upon the model trained from scratch for both metrics and all four validation datasets, with improvements of up to 15%.
Figure 1: GNS-TAT pre-trained via denoising on PCQM4Mv2 outperforms prior work on QM9.

Figure 2: Left: Frequency of compositions of molecules appearing in QM9 overlayed with the corresponding frequency in PCQM4Mv2.

Figure 4: Left: Impact of varying the downstream dataset size for the HOMO target in QM9 with GNS-TAT. Middle: Impact of varying the upstream dataset size for the HOMO target in QM9.

Figure 5: Training only the decoder results in significantly better performance when using pre-trained features rather than random ones.

Figure 6: Diagram showing the edge update for a single step t of the PROCESSOR. Left: Edge update for GNS. Right: Edge update for GNS-TAT (with modifications shown in red).

Figure 7: Diagram showing the vertex update for a single step t of the PROCESSOR. Left: Vertex update for GNS. Right: Vertex update for GNS-TAT (with modifications shown in red).
Table 1: Results on QM9 comparing the performance of GNS-TAT + Noisy Nodes (NN) with and without pre-training on PCQM4Mv2 (averaged over three seeds) with other baselines.

| Target | Unit | SchNet | E(n)-GNN | DimeNet++ | SphereNet | PaiNN | TorchMD-NET | GNS + NN | GNS-TAT + NN | Pre-trained GNS-TAT + NN |
|---|---|---|---|---|---|---|---|---|---|---|
| µ | D | 0.033 | 0.029 | 0.030 | 0.027 | 0.012 | 0.011 | 0.025 | 0.021 | 0.016 |
| α | a0^3 | 0.235 | 0.071 | 0.043 | 0.047 | 0.045 | 0.059 | 0.052 | 0.047 | 0.040 |
| HOMO | meV | 41.0 | 29.0 | 24.6 | 23.6 | 27.6 | 20.3 | 20.4 | 17.3 | 14.9 |
| LUMO | meV | 34.0 | 25.0 | 19.5 | 18.9 | 20.4 | 18.6 | 17.5 | 17.1 | 14.7 |
| ∆ | meV | 63.0 | 48.0 | 32.6 | 32.3 | 45.7 | 36.1 | 28.6 | 25.7 | 22.0 |
| R² | a0^2 | 0.07 | 0.11 | 0.33 | 0.29 | 0.07 | 0.033 | 0.70 | 0.65 | 0.44 |
| ZPVE | meV | 1.700 | 1.550 | 1.210 | 1.120 | 1.280 | 1.840 | 1.160 | 1.080 | 1.018 |
| U0 | meV | 14.00 | 11.00 | 6.32 | 6.26 | 5.85 | 6.15 | 7.30 | 6.39 | 5.76 |
| U | meV | 19.00 | 12.00 | 6.28 | 7.33 | 5.83 | 6.38 | 7.57 | 6.39 | 5.76 |
| H | meV | 14.00 | 12.00 | 6.53 | 6.40 | 5.98 | 6.16 | 7.43 | 6.42 | 5.79 |
| G | meV | 14.00 | 12.00 | 7.56 | 8.00 | 7.35 | 7.62 | 8.30 | 7.41 | 6.90 |
| cv | cal/(mol K) | 0.033 | 0.031 | 0.023 | 0.022 | 0.024 | 0.026 | 0.025 | 0.022 | 0.020 |
Table 2: Performance of TorchMD-NET with Noisy Nodes and pre-training on PCQM4Mv2.

| Method | HOMO | LUMO |
|---|---|---|
| TorchMD-NET | 22.0 ± 0.6 | 18.7 ± 0.4 |
| + Noisy Nodes | 18.1 ± 0.1 | 15.6 ± 0.1 |
| + Pre-training | 15.6 ± 0.1 | 13.2 ± 0.2 |

Table 3: Performance of TorchMD-NET for force prediction on MD17 (aspirin).

| Method | Test MAE |
|---|---|
| TorchMD-NET | 0.268 ± 0.003 |
| + Pre-training | 0.222 ± 0.003 |
Table 4: GNS-TAT hyperparameters for pre-training on PCQM4Mv2.

| Parameter | Value or description |
|---|---|
| Gradient steps | 3·10^5 |
| Optimizer | Adam with warm up and 1-cycle cosine decay schedule |
| β1 | 0.9 |
| β2 | 0.95 |
| Warm up steps | 10^4 |
| Warm up start learning rate | 10^-5 |
| Warm up max learning rate | 10^-4 |
| Cosine min learning rate | 10^-7 |
| Cosine cycle length | 5·10^5 |
| Loss type | Mean squared error |
| Batch size | Dynamic to max edge/vertex/graph count |
| Max vertices in batch | 256 |
| Max edges in batch | 9216 |
| Max graphs in batch | 8 |
| Distance featurization | Bessel first kind (r_min = 0, σ = 1.0) |
| Max edges per vertex | 20 |
| MLP number of layers | 3 |
| MLP hidden sizes | 1024 |
| Activation | Tailored ReLU (with negative slope chosen using TAT) |
| Message passing layers | 10 |
| Block iterations | 3 |
| Vertex/edge latent vector sizes | 512 |
| Decoder aggregation | Mean |
| Position noise | Gaussian (µ = 0, σ = 0.02) |
| Parameter update | Exponentially moving average (EMA) smoothing |
| EMA decay | 0.9999 |
| Position loss coefficient | 1.0 |
| Atom type mask probability | 0.75 |
| Atom type loss coefficient | 4.0 |
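The warm up and 1-cycle cosine decay schedule listed above can be sketched as follows (a hypothetical re-implementation matching the reported hyperparameters; whether the warm up is linear is our assumption):

```python
import math

def learning_rate(step, warmup_steps=10_000, warmup_start=1e-5,
                  peak=1e-4, cosine_min=1e-7, cycle_length=500_000):
    # Linear warm up from warmup_start to peak over warmup_steps, then a
    # single cosine decay from peak down to cosine_min over cycle_length.
    if step < warmup_steps:
        return warmup_start + (step / warmup_steps) * (peak - warmup_start)
    t = min((step - warmup_steps) / cycle_length, 1.0)
    return cosine_min + 0.5 * (peak - cosine_min) * (1.0 + math.cos(math.pi * t))
```

The schedule peaks at 10^-4 when the warm up ends and bottoms out at the cosine minimum once the cycle completes.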
Table 5: GNS-TAT hyperparameters for fine-tuning on QM9 and DES15K.

| Parameter | Value or description |
|---|---|
| Gradient steps | 10^6 QM9 / 10^5 DES15K |
| Optimizer | Adam with warm up and 1-cycle cosine decay schedule |
| β1 | 0.9 |
| β2 | 0.95 |
| Warm up steps | 10^4 |
| Warm up start learning rate | 10^-5 |
| Warm up max learning rate | 10^-4 |
| Cosine min learning rate | 3·10^-7 |
| Cosine cycle length | 10^6 QM9 / 10^5 DES15K |
| Loss type | Mean squared error |
| Batch size | Dynamic to max edge/vertex/graph count |
| Max vertices in batch | 256 |
| Max edges in batch | 3072 |
| Max graphs in batch | 8 |
| Distance featurization | Bessel first kind (r_min = 0, σ = 1.0) |
| Max edges per vertex | 20 |
| MLP number of layers | 3 |
| MLP hidden sizes | 1024 |
| Activation | Tailored ReLU (with negative slope chosen using TAT) |
| Message passing layers | 10 |
| Block iterations | 3 |
| Vertex/edge latent vector sizes | 512 |
| Decoder aggregation | Mean |
| Position noise | Gaussian (µ = 0, σ = 0.05) |
| Parameter update | Exponentially moving average (EMA) smoothing |
| EMA decay | 0.9999 |
| Position loss coefficient | 0.01 |
| Atom type mask probability | 0.0 |
| Atom type loss coefficient | 0.0 |
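The exponentially moving average (EMA) parameter update with decay 0.9999 listed in the hyperparameter tables amounts to the following rule (an illustrative sketch over a flat parameter dict; real code would map this over nested parameter trees):

```python
def ema_update(ema_params, new_params, decay=0.9999):
    # Blend the running average towards the freshly optimized parameters;
    # with decay = 0.9999, each step moves the average by only 0.01%.
    return {k: decay * ema_params[k] + (1.0 - decay) * new_params[k]
            for k in ema_params}
```

The smoothed copy changes slowly, and it is common practice for evaluation to use the EMA weights rather than the raw optimized ones.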
Table 6: GNS hyperparameters for OC20.

| Parameter | Value or description |
|---|---|
| Gradient steps | 5·10^5 |
| Optimizer | Adam with warm up and 1-cycle cosine decay schedule |
| β1 | 0.9 |
| β2 | 0.95 |
| Warm up steps | 5·10^5 |
| Warm up start learning rate | 10^-5 |
| Warm up max learning rate | 10^-4 |
| Cosine min learning rate | 5·10^-6 |
| Cosine cycle length | 5·10^6 |
| Loss type | Mean squared error |
| Batch size | Dynamic to max edge/vertex/graph count |
| Max vertices in batch | 1024 |
| Max edges in batch | 12800 |
| Max graphs in batch | 10 |
| Distance featurization | Gaussian (µ = 0, σ = 0.5) |
| Max edges per vertex | 20 |
| MLP number of layers | 3 |
| MLP hidden sizes | 1024 |
| Activation | Shifted softplus |
| Message passing layers | 5 |
| Block iterations | 5 |
| Vertex/edge latent vector sizes | 512 |
| Decoder aggregation | Sum |
| Position noise | Gaussian (µ = 0, σ = 0.2) |
| Parameter update | Exponentially moving average (EMA) smoothing |
| EMA decay | 0.9999 |
| Position loss coefficient | 1.0 |
| Atom type mask probability | 0.0 |
| Atom type loss coefficient | 0.0 |

G ADDITIONAL EXPERIMENTAL RESULTS

G.1 PERFORMANCE OF TORCHMD-NET ON DES15K
Table 7: Performance of TorchMD-NET with and without pre-training for interaction energy prediction on DES15K.

| Model | Test MAE (kcal/mol) |
|---|---|
| TorchMD-NET | 0.721 |
| + Pre-training on PCQM4Mv2 | 0.406 |

G.2 COMPARISON OF OC20 IS2RE PRE-TRAINING PERFORMANCE WITH OTHER ARCHITECTURES
Table 8: Comparison of different variants of GNS with other baseline architectures on IS2RE prediction for OC20 (validation MAE for IS2RE).

| Model | ID | OOD Adsorbate | OOD Catalyst | OOD Both | Average |
|---|---|---|---|---|---|
| DimeNet++ | 0.5636 | 0.7127 | 0.5612 | 0.6492 | 0.6217 |
| GemNet | 0.5561 | 0.7342 | 0.5659 | 0.6964 | 0.6382 |
| SphereNet | 0.5632 | 0.6682 | 0.5590 | 0.6190 | 0.6024 |
| SEGNN | 0.5310 | 0.6432 | 0.5341 | 0.5777 | 0.5715 |
| GNS | 0.5233 | 0.6295 | 0.5202 | 0.5617 | 0.5587 |
| GNS + NN | 0.4196 | 0.4900 | 0.4316 | 0.4282 | 0.4424 |
| GNS + NN (PT on PCQM4Mv2) | 0.4135 | 0.4856 | 0.4245 | 0.4245 | 0.4370 |
| GNS + NN (PT on OC20) | 0.4164 | 0.4836 | 0.4267 | 0.4237 | 0.4376 |

G.3 PRE-TRAINING VIA DENOISING FOR FORCE PREDICTION ON OC20
Table 9: Force prediction on OC20 by MAE (lower is better).

| Validation Dataset | GNS | Pre-trained GNS |
|---|---|---|
| ID | 0.0332 | 0.0282 |
| OOD Adsorbate | 0.0366 | 0.0314 |
| OOD Catalyst | 0.0360 | 0.0335 |
| OOD Both | 0.0406 | 0.0382 |
Table 10: Force prediction on OC20 by cosine similarity (higher is better).

| Validation Dataset | GNS | Pre-trained GNS |
|---|---|---|
| ID | 0.4845 | 0.5517 |
| OOD Adsorbate | 0.4730 | 0.5414 |
| OOD Catalyst | 0.4553 | 0.4983 |
| OOD Both | 0.4417 | 0.4849 |
An earlier version of this dataset without any 3D structures, called PCQM4M, was used for supervised pre-training (Ying et al., 2021), but to our knowledge, this is the first time the 3D structures from v2 have been used, and in a self-supervised manner.
GitHub repository: https://github.com/shehzaidi/pre-training-via-denoising.
4 Note that Edge-Delta initialization is compatible with TAT, since for the purposes of q/c value propagation, zero-initialized connections in the network can be treated as absent.

5 For the purposes of computing the subnetwork maximizing function, we ignore the rest of the network and just consider the edge network. While the layer normalization layer (which we move before the MLP) technically depends on the vertex features, this dependency can be ignored as long as the q values of these features are 1 (which will be true given the complete set of changes we make to the GNS architecture).
Available on GitHub at: https://github.com/torchmd/torchmd-net.
Brandon M. Anderson, T. Hy, and R. Kondor. Cormorant: Covariant molecular neural networks. In NeurIPS, 2019.

David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The shattered gradients problem: If resnets are the answer, then what is the question? In International Conference on Machine Learning, pp. 342-350. PMLR, 2017.

Rodney J. Bartlett and Monika Musiał. Coupled-cluster theory in quantum chemistry. Rev. Mod. Phys., 79:291-352, Feb 2007. doi: 10.1103/RevModPhys.79.291. URL https://link.aps.org/doi/10.1103/RevModPhys.79.291.

P. Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and K. Kavukcuoglu. Interaction networks for learning about objects, relations and physics. ArXiv, abs/1612.00222, 2016.

P. Battaglia, Jessica B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, A. Santoro, R. Faulkner, Çaglar Gülçehre, H. Song, A. J. Ballard, J. Gilmer, George E. Dahl, Ashish Vaswani, Kelsey R. Allen, Charlie Nash, Victoria Langston, Chris Dyer, N. Heess, Daan Wierstra, P. Kohli, M. Botvinick, Oriol Vinyals, Y. Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks. ArXiv, abs/1806.01261, 2018.

Simon Batzner, T. Smidt, L. Sun, J. Mailoa, M. Kornbluth, N. Molinari, and B. Kozinsky. Se(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. ArXiv, abs/2101.03164, 2021.

Charles M. Bishop. Training with noise is equivalent to tikhonov regularization. Neural Computation, 7:108-116, 1995.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik Bekkers, and Max Welling. Geometric and physical quantities improve e(3) equivariant message passing. arXiv preprint arXiv:2110.02905, 2021.

Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877-1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.

Chen Cai and Yusu Wang. A note on over-smoothing for graph neural networks. CoRR, abs/2006.13318, 2020. URL https://arxiv.org/abs/2006.13318.

Lowik Chanussot*, Abhishek Das*, Siddharth Goyal*, Thibaut Lavril*, Muhammed Shuaibi*, Morgane Riviere, Kevin Tran, Javier Heras-Domingo, Caleb Ho, Weihua Hu, Aini Palizhati, Anuroop Sriram, Brandon Wood, Junwoong Yoon, Devi Parikh, C. Lawrence Zitnick, and Zachary Ulissi. Open catalyst 2020 (oc20) dataset and community challenges. ACS Catalysis, 2021. doi: 10.1021/acscatal.0c04525.

Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. CoRR, abs/1909.03211, 2019. URL http://arxiv.org/abs/1909.03211.

Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/7137debd45ae4d0ab9aa953017286b20-Paper.pdf.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.

Tien Huu Do, Duc Minh Nguyen, Giannis Bekoulis, Adrian Munteanu, and N. Deligiannis. Graph convolutional neural networks with node transition probability-based message passing and dropnode regularization. Expert Syst. Appl., 174:114711, 2021.

Alexander G. Donchev, Andrew G. Taube, Elizabeth Decolvenaere, Cory Hargus, Robert T. McGibbon, Ka-Hei Law, Brent A. Gregersen, Je-Luen Li, Kim Palmo, Karthik Siva, Michael Bergdorf, John L. Klepeis, and David E. Shaw. Quantum chemical benchmark databases of gold-standard dimer interaction energies. Scientific Data, 8, 2021.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020. URL https://arxiv.org/abs/2010.11929.

Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In International Conference on Machine Learning, pp. 3165-3176. PMLR, 2020.

F. Fuchs, Daniel E. Worrall, Volker Fischer, and M. Welling. Se(3)-transformers: 3d roto-translation equivariant attention networks. ArXiv, abs/2006.10503, 2020.

Jonathan Godwin*, Thomas Keck*, Peter Battaglia, Victor Bapst, Thomas Kipf, Yujia Li, Kimberly Stachenfeld, Petar Veličković, and Alvaro Sanchez-Gonzalez. Jraph: A library for graph neural networks in jax, 2020. URL http://github.com/deepmind/jraph.

Jonathan Godwin, Michael Schaarschmidt, Alexander L. Gaunt, Alvaro Sanchez-Gonzalez, Yulia Rubanova, Petar Veličković, James Kirkpatrick, and Peter Battaglia. Simple GNN regularisation for 3d molecular property prediction and beyond. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=1wVvweK3oIb.

Tom Hennigan, Trevor Cai, Tamara Norman, and Igor Babuschkin. Haiku: Sonnet for JAX, 2020. URL http://github.com/deepmind/dm-haiku.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020.

Emiel Hoogeboom, Victor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d, 2022. URL https://arxiv.org/abs/2203.17003.

Weihua Hu, Bowen Liu, Joseph Gomes, M. Zitnik, Percy Liang, V. Pande, and J. Leskovec. Strategies for pre-training graph neural networks. arXiv: Learning, 2020a.

Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun. GPT-GNN: generative pre-training of graph neural networks. CoRR, abs/2006.15437, 2020b. URL https://arxiv.org/abs/2006.15437.

Michael J. Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, and Hyunjik Kim. Lietransformer: Equivariant self-attention for lie groups. In International Conference on Machine Learning, pp. 4533-4543. PMLR, 2021.

Rui Jiao, Jiaqi Han, Wenbing Huang, Yu Rong, and Yang Liu. Energy-motivated equivariant pretraining for 3d molecular graphs. arXiv preprint arXiv:2207.08824, 2022.
Highly accurate protein structure prediction with AlphaFold. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, A A Simon, Andrew J Kohl, Andrew Ballard, Bernardino Cowie, Stanislav Romera-Paredes, Rishub Nikolov, Jonas Jain, Trevor Adler, Stig Back, David Petersen, Ellen Reiman, Michal Clancy, Martin Zielinski, Michalina Steinegger, Tamas Pacholska, Sebastian Berghammer, David Bodenstein, Silver, 10.1038/s41586-021-03819-2.20Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Andrew W Senior596NatureJohn Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Andrew J Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature, 596 (7873):583-589, 2021. doi: 10.1038/s41586-021-03819-2. 20
Variational graph auto-encoders. ArXiv. Thomas Kipf, Max Welling, abs/1611.07308Thomas Kipf and Max Welling. Variational graph auto-encoders. ArXiv, abs/1611.07308, 2016. 2
Fast and uncertainty-aware directional message passing for non-equilibrium molecules. CoRR, abs. Johannes Klicpera, Shankari Giri, Johannes T Margraf, Stephan Günnemann, 35Johannes Klicpera, Shankari Giri, Johannes T. Margraf, and Stephan Günnemann. Fast and uncertainty-aware directional message passing for non-equilibrium molecules. CoRR, abs/2011.14115, 2020a. URL https://arxiv.org/abs/2011.14115. 1, 3, 5
Directional message passing for molecular graphs. ArXiv, abs. Johannes Klicpera, Janek Groß, Stephan Günnemann, 2020b. 303123Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. ArXiv, abs/2003.03123, 2020b. 3
Covariant compositional networks for learning graphs. CoRR, abs/1801.02144. Risi Kondor, Hy Truong Son, Horace Pan, Brandon M Anderson, Shubhendu Trivedi, Risi Kondor, Hy Truong Son, Horace Pan, Brandon M. Anderson, and Shubhendu Trivedi. Covariant compositional networks for learning graphs. CoRR, abs/1801.02144, 2018. URL http:// arxiv.org/abs/1801.02144. 3
N-gram graph: Simple unsupervised representation for graphs, with applications to molecules. Shengchao Liu, F Mehmet, Yingyu Demirel, Liang, NeurIPS. Shengchao Liu, Mehmet F. Demirel, and Yingyu Liang. N-gram graph: Simple unsupervised representation for graphs, with applications to molecules. In NeurIPS, 2019. 2
Pretraining molecular graph representation with 3d geometry. CoRR, abs/2110.07728, 2021a. Shengchao Liu, Hanchen Wang, Weiyang Liu, Joan Lasenby, Hongyu Guo, Jian Tang, Shengchao Liu, Hanchen Wang, Weiyang Liu, Joan Lasenby, Hongyu Guo, and Jian Tang. Pre- training molecular graph representation with 3d geometry. CoRR, abs/2110.07728, 2021a. URL https://arxiv.org/abs/2110.07728. 2
Molecular geometry pretraining with se(3)-invariant denoising distance matching. Shengchao Liu, Hongyu Guo, Jian Tang, arXiv:2206.13602arXiv preprintShengchao Liu, Hongyu Guo, and Jian Tang. Molecular geometry pretraining with se(3)-invariant denoising distance matching. arXiv preprint arXiv:2206.13602, 2022a. 2
Spherical message passing for 3d molecular graphs. Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, Shuiwang Ji, International Conference on Learning Representations. 13Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3d molecular graphs. In International Conference on Learning Representations, 2022b. URL https://openreview.net/forum?id=givsRXsOt9r. 1, 3
Graph self-supervised learning: A survey. CoRR, abs/2103.00111, 2021b. Yixin Liu, Shirui Pan, Ming Jin, Chuan Zhou, Feng Xia, Philip S Yu, Yixin Liu, Shirui Pan, Ming Jin, Chuan Zhou, Feng Xia, and Philip S. Yu. Graph self-supervised learning: A survey. CoRR, abs/2103.00111, 2021b. URL https://arxiv.org/abs/2103. 00111. 2
Rapid training of deep neural networks without skip connections or normalization layers using deep kernel shaping. James Martens, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, Samuel S Schoenholz, James Martens, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. Rapid training of deep neural networks without skip connections or normalization layers using deep kernel shaping, 2021. URL https://arxiv. org/abs/2110.01765. 16
Relevance of rotationally equivariant convolutions for predicting molecular properties. Kurt Benjamin, Mario Miller, Tess E Geiger, Frank Smidt, Noé, arXiv:2008.08461arXiv preprintBenjamin Kurt Miller, Mario Geiger, Tess E Smidt, and Frank Noé. Relevance of rotationally equivariant convolutions for predicting molecular properties. arXiv preprint arXiv:2008.08461, 2020. 3
Pubchemqc project: A large-scale first-principles electronic structure database for data-driven chemistry. Maho Nakata, Tomomi Shimazaki, 10.1021/acs.jcim.7b0008328481528Journal of Chemical Information and Modeling. 57619Maho Nakata and Tomomi Shimazaki. Pubchemqc project: A large-scale first-principles electronic structure database for data-driven chemistry. Journal of Chemical Information and Modeling, 57 (6):1300-1308, 2017. doi: 10.1021/acs.jcim.7b00083. URL https://doi.org/10.1021/ acs.jcim.7b00083. PMID: 28481528. 5, 19
Density-Functional Theory of Atoms and Molecules. G Robert, Yang Parr, Weitao, Oxford University PressUSARobert G. Parr and Yang Weitao. Density-Functional Theory of Atoms and Molecules. Oxford University Press, USA, 1994. ISBN 0195092767. URL http://www.amazon.com/ Density-Functional-Molecules-International-Monographs-Chemistry/ dp/0195092767/ref=sr_1_1?ie=UTF8&s=books&qid=1279096906&sr=1-1. 3
Carbon emissions and large neural network training. David A Patterson, Joseph Gonzalez, Quoc V Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R So, Maud Texier, Jeff Dean, abs/2104.10350CoRRDavid A. Patterson, Joseph Gonzalez, Quoc V. Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. CoRR, abs/2104.10350, 2021. URL https://arxiv.org/abs/2104.10350. 19
Learning mesh-based simulation with graph networks. T Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, P Battaglia, abs/2010.03409ArXiv. 2T. Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and P. Battaglia. Learning mesh-based simula- tion with graph networks. ArXiv, abs/2010.03409, 2020. 2
Quantum chemistry structures and properties of 134 kilo molecules. R Ramakrishnan, Pavlo O Dral, M Rupp, O A Von Lilienfeld, Scientific Data. 119R. Ramakrishnan, Pavlo O. Dral, M. Rupp, and O. A. von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 1, 2014. 5, 19
The truly deep graph convolutional networks for node classification. CoRR, abs/1907.10903. Yu Rong, Wenbing Huang, Tingyang Xu, Junzhou Huang, Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. The truly deep graph convolutional networks for node classification. CoRR, abs/1907.10903, 2019. URL http://arxiv.org/ abs/1907.10903. 16
Self-supervised graph transformer on large-scale molecular data. Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, Junzhou Huang, Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data, 2020. URL https://arxiv. org/abs/2007.02835. 2
Alvaro Sanchez-Gonzalez, N Heess, Jost Tobias Springenberg, J Merel, Martin A Riedmiller, R Hadsell, P Battaglia, abs/1806.01242Graph networks as learnable physics engines for inference and control. Alvaro Sanchez-Gonzalez, N. Heess, Jost Tobias Springenberg, J. Merel, Martin A. Riedmiller, R. Hadsell, and P. Battaglia. Graph networks as learnable physics engines for inference and control. ArXiv, abs/1806.01242, 2018. 2
Learning to simulate complex physics with graph networks. Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, Peter Battaglia, PMLRProceedings of the 37th International Conference on Machine Learning. Hal Daumé III and Aarti Singhthe 37th International Conference on Machine Learning11915Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 8459-8468. PMLR, 13-18 Jul 2020. URL http://proceedings.mlr.press/v119/sanchez-gonzalez20a.html. 2, 3, 5, 15
Random features strengthen graph neural networks. R Sato, Makoto Yamada, Hisashi Kashima, SDM. R. Sato, Makoto Yamada, and Hisashi Kashima. Random features strengthen graph neural networks. In SDM, 2021. 2
E(n) equivariant graph neural networks. Emiel Victor Garcia Satorras, Max Hoogeboom, Welling, 35Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks, 2021. 3, 5
The graph neural network model. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, Gabriele Monfardini, 10.1109/TNN.2008.2005605.2IEEE Transactions on Neural Networks. 201Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. doi: 10.1109/TNN.2008.2005605. 2
Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, A Tkatchenko, K Müller, NIPS. Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, A. Tkatchenko, and K. Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. In NIPS, 2017. 3
Equivariant message passing for the prediction of tensorial properties and molecular spectra. Kristof Schütt, Oliver Unke, Michael Gastegger, PMLRInternational Conference on Machine Learning. Kristof Schütt, Oliver Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In International Conference on Machine Learning, pp. 9377-9388. PMLR, 2021. 1, 3, 5, 8
Learning gradient fields for molecular conformation generation. Chence Shi, Shitong Luo, Minkai Xu, Jian Tang, International Conference on Machine Learning. 34Chence Shi, Shitong Luo, Minkai Xu, and Jian Tang. Learning gradient fields for molecular conformation generation. In International Conference on Machine Learning, 2021. 3, 4
Muhammed Shuaibi, Adeesh Kolluru, Abhishek Das, Aditya Grover, Anuroop Sriram, Zachary Ulissi, C Lawrence Zitnick, arXiv:2106.09575Rotation invariant graph neural networks using spin convolutions. arXiv preprintMuhammed Shuaibi, Adeesh Kolluru, Abhishek Das, Aditya Grover, Anuroop Sriram, Zachary Ulissi, and C Lawrence Zitnick. Rotation invariant graph neural networks using spin convolutions. arXiv preprint arXiv:2106.09575, 2021. 3
Creating artificial neural networks that generalize. J Sietsma, Robert J F Dow, Neural Networks. 42J. Sietsma and Robert J. F. Dow. Creating artificial neural networks that generalize. Neural Networks, 4:67-79, 1991. 2
Very deep convolutional networks for large-scale image recognition. Karen Simonyan, Andrew Zisserman, Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2014. URL https://arxiv.org/abs/1409.1556. 1
Generative Modeling by Estimating Gradients of the Data Distribution. Yang Song, Stefano Ermon, Curran Associates IncRed Hook, NY, USAYang Song and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribu- tion. Curran Associates Inc., Red Hook, NY, USA, 2019. 1, 3, 4
Improved techniques for training score-based generative models. Yang Song, Stefano Ermon, Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20. the 34th International Conference on Neural Information Processing Systems, NIPS'20Red Hook, NY, USACurran Associates Inc. ISBN 978171382954634Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546. 3, 4
Infograph: Unsupervised and semisupervised graph-level representation learning via mutual information maximization. Fan-Yun Sun, Jordan Hoffman, Vikas Verma, Jian Tang, International Conference on Learning Representations. Fan-Yun Sun, Jordan Hoffman, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi- supervised graph-level representation learning via mutual information maximization. In Interna- tional Conference on Learning Representations, 2019. 2
Bootstrapped representation learning on graphs. Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Rémi Munos, Petar Velickovic, Michal Valko, abs/2102.06514Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Rémi Munos, Petar Velickovic, and Michal Valko. Bootstrapped representation learning on graphs. CoRR, abs/2102.06514, 2021. URL https://arxiv.org/abs/2102.06514. 2
Philipp Thölke, Gianni De Fabritiis, arXiv:2202.02541Torchmd-net: Equivariant transformers for neural network based molecular potentials. 820arXiv preprintPhilipp Thölke and Gianni De Fabritiis. Torchmd-net: Equivariant transformers for neural network based molecular potentials. arXiv preprint arXiv:2202.02541, 2022. 1, 3, 5, 8, 20
Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. Nathaniel Thomas, Tess Smidt, Steven M Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, Patrick Riley, abs/1802.08219CoRRNathaniel Thomas, Tess Smidt, Steven M. Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. CoRR, abs/1802.08219, 2018. URL http://arxiv.org/abs/1802.08219. 3
Physnet: A neural network for predicting energies, forces, dipole moments, and partial charges. T Oliver, Markus Unke, Meuwly, 10.1021/acs.jctc.9b00181Journal of Chemical Theory and Computation. 156Oliver T. Unke and Markus Meuwly. Physnet: A neural network for predicting energies, forces, dipole moments, and partial charges. Journal of Chemical Theory and Computation, 15(6):3678-3693, May 2019. ISSN 1549-9626. doi: 10.1021/acs.jctc.9b00181. URL http://dx.doi.org/10. 1021/acs.jctc.9b00181. 3
Deep Graph Infomax. Petar Veličković, William Fedus, William L Hamilton, Pietro Liò, Yoshua Bengio, R Devon Hjelm, International Conference on Learning Representations. Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep Graph Infomax. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rklz9iAcKQ. 2
A connection between score matching and denoising autoencoders. Pascal Vincent, 10.1162/NECO_a_00142Neural Computation. 23718Pascal Vincent. A connection between score matching and denoising autoencoders. Neural Computa- tion, 23(7):1661-1674, 2011. doi: 10.1162/NECO_a_00142. 1, 3, 4, 18
Extracting and composing robust features with denoising autoencoders. Pascal Vincent, H Larochelle, Yoshua Bengio, Pierre-Antoine Manzagol, ICML '08. Pascal Vincent, H. Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML '08, 2008. 2
Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Pascal Vincent, H Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, J. Mach. Learn. Res. 112Pascal Vincent, H. Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371-3408, 2010. 2
3d steerable cnns: Learning rotationally equivariant features in volumetric data. Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, Taco Cohen, In NeurIPS. 8Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. In NeurIPS, 2018. 8
Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks. Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, Jeffrey Pennington, International Conference on Machine Learning. 16Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolu- tional neural networks. In International Conference on Machine Learning, pp. 5393-5402, 2018. 16
Self-supervised learning of graph neural networks: A unified review. CoRR, abs/2102.10757. Yaochen Xie, Zhao Xu, Zhengyang Wang, Shuiwang Ji, Yaochen Xie, Zhao Xu, Zhengyang Wang, and Shuiwang Ji. Self-supervised learning of graph neural networks: A unified review. CoRR, abs/2102.10757, 2021. URL https://arxiv.org/abs/ 2102.10757. 2
Geodiff: A geometric diffusion model for molecular conformation generation. Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, Jian Tang, International Conference on Learning Representations. Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon, and Jian Tang. Geodiff: A geometric diffusion model for molecular conformation generation. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=PzcvxEMzvQC. 3, 4
Revisiting" over-smoothing" in deep gcns. Chaoqi Yang, Ruijie Wang, Shuochao Yao, Shengzhong Liu, Tarek Abdelzaher, arXiv:2003.1366316arXiv preprintChaoqi Yang, Ruijie Wang, Shuochao Yao, Shengzhong Liu, and Tarek Abdelzaher. Revisiting" over-smoothing" in deep gcns. arXiv preprint arXiv:2003.13663, 2020. 16
Do transformers really perform bad for graph representation? In NeurIPS. Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu, Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform bad for graph representation? In NeurIPS, 2021. 5
Graph contrastive learning with augmentations. Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen, Advances in Neural Information Processing Systems. H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. LinCurran Associates, Inc33Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 5812- 5823. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/ 2020/file/3fe230348e9a12c13120749e3f9fa4cd-Paper.pdf. 2
Deep learning without shortcuts: Shaping the kernel with tailored rectifiers. Guodong Zhang, Aleksandar Botev, James Martens, International Conference on Learning Representations. 516Guodong Zhang, Aleksandar Botev, and James Martens. Deep learning without shortcuts: Shaping the kernel with tailored rectifiers. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=U0k7XNTiFEq. 2, 5, 16
L Zhao, Leman Akoglu, Pairnorm: Tackling oversmoothing in gnns. ArXiv, abs/1909.12223. 16L. Zhao and Leman Akoglu. Pairnorm: Tackling oversmoothing in gnns. ArXiv, abs/1909.12223, 2020. 16
Effective training strategies for deep graph neural networks. CoRR, abs. Kuangqi Zhou, Yanfei Dong, Wee Sun Lee, Bryan Hooi, Huan Xu, Jiashi Feng, Kuangqi Zhou, Yanfei Dong, Wee Sun Lee, Bryan Hooi, Huan Xu, and Jiashi Feng. Effective training strategies for deep graph neural networks. CoRR, abs/2006.07107, 2020. URL https: //arxiv.org/abs/2006.07107. 16
| [
"https://github.com/shehzaidi/pre-training-via-denoising.",
"https://github.com/torchmd/torchmd-net.",
"http://github.com/google/jax.",
"http://github.com/deepmind/jraph.",
"http://github.com/deepmind/dm-haiku."
] |
Efficient query evaluation techniques over large amount of distributed linked data

Eleftherios Kalogeros, Manolis Gergatsoulis, Matthew Damigos
Database & Information Systems Group (DBIS), Department of Archives, Library Science and Museology, Laboratory on Digital Libraries and Electronic Publishing, Ionian University, Ioannou Theotoki 72, 49100 Corfu, Greece

Christos Nomikos
Department of Computer Science and Engineering, University of Ioannina, P.O. Box 1186, 45110 Ioannina, Greece

arXiv:2209.05359v1 [cs.DB] 12 Sep 2022

Abstract. As RDF becomes more widely established and the amount of linked data is rapidly increasing, the efficient querying of large amounts of data becomes a significant challenge. In this paper, we propose a family of algorithms for querying large amounts of linked data in a distributed manner. These query evaluation algorithms are independent of the way the data is stored, as well as of the particular implementation of the query evaluation. We then use the MapReduce paradigm to present a distributed implementation of these algorithms and experimentally evaluate them, although the algorithms could be straightforwardly translated into other distributed processing frameworks. We also investigate and propose multiple query decomposition approaches for Basic Graph Patterns (a subclass of SPARQL queries) that are used to improve the overall performance of distributed query answering. A deep analysis of the effectiveness of these decomposition algorithms is also provided.

…communication cost is minimized using the techniques of the approach proposed in [13]. Such an approach could be used for answering conjunctive SPARQL queries [48,56,68]. A method for answering SPARQL Basic Graph Patterns using a traditional multi-way join in MapReduce, instead of multiple individual joins, is also presented in [45], where certain joining keys are selected to avoid unnecessary iterations. This approach can be used for every type of partitioning of the RDF data.

SHARD [58] is built on top of Hadoop and uses the Hadoop distributed file system (HDFS) to store data in native text files. It uses subject hash partitioning to decompose the RDF data graph; all the triples with the same subject are stored in the same line of the text file. For the execution, one MapReduce job is created for every query triple, while an additional job is used, at the end, to remove duplicate results and apply the required projection.
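The subject hash partitioning used by SHARD can be illustrated with a small sketch (a toy illustration of the idea, not SHARD's actual code; the triple representation and fragment count are our own assumptions): each triple is routed to the fragment determined by hashing its subject, so all triples of a given subject are co-located.

```python
from collections import defaultdict

def subject_hash_partition(triples, num_fragments):
    """Route each (s, p, o) triple to fragment hash(s) % num_fragments,
    so all triples sharing a subject land in the same fragment."""
    fragments = defaultdict(list)
    for s, p, o in triples:
        fragments[hash(s) % num_fragments].append((s, p, o))
    return fragments

triples = [
    ("Article1", "rdf:type", "Article"),
    ("Article1", "dc:creator", "Alice"),
    ("Article2", "rdf:type", "Article"),
]
frags = subject_hash_partition(triples, 4)
# All Article1 triples live in exactly one fragment:
homes = {i for i, ts in frags.items() for (s, _, _) in ts if s == "Article1"}
assert len(homes) == 1
```

Because of this co-location, subject-star joins can be answered inside a single fragment; only joins across different subjects require shuffling data between nodes.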
Hence, assuming an n-triple query pattern, n + 1 jobs are required, and the whole data graph is scanned n times.

HadoopRDF [35] uses the predicate hash partitioning method to distribute the data graph, similar to the vertical partitioning approach applied by SW-Store [10]. In general, the number of data fragments is equal to the number of distinct predicates. The query evaluation is performed through a sequence of MapReduce jobs and is optimized using a heuristic and a greedy approach.

CliqueSquare [29] presents a method that generates highly parallelizable query plans for BGP queries, which rely on n-ary equality joins with a minimum number of MapReduce stages. CliqueSquare uses a data partitioning scheme that permits first-level joins to be evaluated locally at each node: the triples that share the same value in subject, predicate or object are placed on the same node. This partitioning ensures that queries sharing the same variable (like star queries) can be evaluated locally.

H2RDF [54] uses Apache HBase [6] to store data triples. Three RDF indices on subject, predicate and object (the spo, pos and osp combinations) are materialized and stored in HBase in the form of key-value pairs. Different strategies are used to execute joins and answer the given query. H2RDF+ [53] extends H2RDF by considering three more indices (ops, osp and sop). Furthermore, the MapReduce Merge Join algorithm is used to join query triples that share the same variable, and the MapReduce Sort-Merge Join algorithm is used for joining the intermediate results.

PigSPARQL [61] is yet another approach, which uses a Hadoop-based implementation of vertical partitioning with the data stored in HDFS. It implements a translation from SPARQL to Pig Latin [51].
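The predicate-based (vertical) partitioning applied by HadoopRDF and SW-Store, discussed above, can be sketched as follows (a minimal illustration under our own assumed triple format, not the systems' actual storage code): one fragment per distinct predicate, each holding only (subject, object) pairs, so the number of fragments equals the number of distinct predicates.

```python
from collections import defaultdict

def vertical_partition(triples):
    """One fragment per distinct predicate, storing (subject, object) pairs."""
    fragments = defaultdict(list)
    for s, p, o in triples:
        fragments[p].append((s, o))
    return dict(fragments)

triples = [
    ("Article1", "dc:creator", "Alice"),
    ("Article2", "dc:creator", "Bob"),
    ("Article1", "dc:date", "2020"),
]
frags = vertical_partition(triples)
assert set(frags) == {"dc:creator", "dc:date"}  # fragments == distinct predicates
assert frags["dc:creator"] == [("Article1", "Alice"), ("Article2", "Bob")]
```

With this layout, a query triple with a bound predicate only needs to scan the single fragment for that predicate, rather than the whole data graph.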
In the system RAPID+ [41], an alternative query algebra, called the Nested Triple Group Algebra, is used as an extension of Apache Pig to improve the performance of SPARQL query processing over MapReduce.

The authors in [34] proposed a graph partitioning scheme which resembles the s-decomposition partitioning defined in this work. In particular, the data is partitioned in such a way that the vertices that are relatively close to each other […] and the values of missing border nodes emitted by the mappers and reducers of Phase 1 directly to the mappers of Phase 2 (see Example 16): (n2, Article1), (n2, Article2), (n2, Article3)

DOI: 10.1016/j.is.2023.102194 · arXiv:2209.05359 · https://export.arxiv.org/pdf/2209.05359v1.pdf
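The (n2, Article1), (n2, Article2), (n2, Article3) pairs above are bindings of a border node produced in one MapReduce phase and consumed by the mappers of the next. A minimal sketch of this hand-over (our own illustration, not the paper's actual algorithm; the variable name n2 comes from the excerpt, while the predicates and node contents are assumptions):

```python
def phase1_emit_bindings(local_triples, predicate, border_var="n2"):
    """Phase 1: emit (border_var, value) for each object matching the
    query triple's predicate on this node."""
    return [(border_var, o) for (_, p, o) in local_triples if p == predicate]

def phase2_map(bindings, local_triples):
    """Phase 2 mapper: keep local triples whose subject was bound to the
    border variable in Phase 1."""
    values = {v for (_, v) in bindings}
    return [t for t in local_triples if t[0] in values]

node_a = [("Alice", "dc:creator_of", "Article1"), ("Bob", "dc:creator_of", "Article2")]
node_b = [("Article1", "dc:date", "2020"), ("Article3", "dc:date", "2021")]
bindings = phase1_emit_bindings(node_a, "dc:creator_of")
assert phase2_map(bindings, node_b) == [("Article1", "dc:date", "2020")]
```

Shipping only the border-node bindings between phases, rather than full intermediate results, is what keeps the cross-phase communication small.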
Keywords: Linked Data, Graph Querying, Big Data, Map-Reduce, Distributed Processing, Cloud Computing, Semantic Web
As RDF becomes more widely established and the amount of linked data is rapidly increasing, the efficient querying of large amounts of data becomes a significant challenge. In this paper, we propose a family of algorithms for querying large amounts of linked data in a distributed manner. These query evaluation algorithms are independent of the way the data is stored, as well as of the particular implementation of the query evaluation. We then use the MapReduce paradigm to present a distributed implementation of these algorithms and experimentally evaluate them, although the algorithms could be straightforwardly translated into other distributed processing frameworks. We also investigate and propose multiple query decomposition approaches for Basic Graph Patterns (a subclass of SPARQL queries) that are used to improve the overall performance of distributed query answering. A deep analysis of the effectiveness of these decomposition algorithms is also provided.

arXiv:2209.05359v1 [cs.DB] 12 Sep 2022
Introduction
Linked data has become a widely-established approach for publishing and sharing semantically meaningful information through distributed and interrelated data, and RDF is the standard model linked data is built upon. As RDF data grows rapidly, the efficient querying of large amounts of linked data becomes a significant challenge in many business and research areas, such as bioinformatics, cheminformatics, and digital libraries [11,52].
Both centralized (e.g., [65,66]) and distributed (e.g., [35,58]) processing of RDF data have been extensively investigated in the past, with SPARQL [64,55] mainly used as the query language. To process large amounts of RDF data in a distributed manner, parallel processing frameworks are employed [11,16,19]. Apache Hadoop [5] (the open-source alternative of Google's MapReduce [25]), Spark [72,8] and Flink [4] are three widely-used programming frameworks for distributed processing. Although Apache Spark and Flink typically outperform Hadoop/MapReduce, mainly due to in-memory processing, from an algorithmic perspective they all handle distributed processing in a similar manner; i.e., they define workflows of tasks running in parallel and determine the way the data is reshuffled in order to be properly and efficiently joined.
In addition to the query evaluation approaches, a variety of effective storage schemes has been used to improve query answering over RDF data, such as the use of relational databases (e.g., [22,66]) and NoSQL databases (e.g., [71]). In distributed environments, the proper partitioning of the RDF data into a distributed repository (either a file system or a distributed NoSQL database) can significantly improve the performance of query answering [21,27,38]. Following this approach, most of the distributed methods and systems utilize efficient partitioning of the data across a cluster of machines in order to ensure efficient query processing, by minimizing the communication cost and improving parallel execution [21]. To take advantage of the selected partitioning during query answering, certain approaches for decomposing the given query and creating a proper query plan (consisting of multiple steps of distributed processing) have been proposed [26,29].
In this work, we present three distributed evaluation algorithms for querying large amounts of RDF data. The main idea behind these algorithms is the following: (a) the data graph is decomposed into a set of (possibly overlapping) data graph segments stored in different nodes of a cluster of commodity machines; (b) the query graph Q is also decomposed into a set of (possibly overlapping) subqueries; (c) the subqueries are applied to each data graph segment, in isolation, and intermediate results are computed; (d) the intermediate results are appropriately combined to obtain the answers of the query Q. Note that the algorithms are independent of the way the data is stored as well as of the particular implementation of the query evaluation. We then use the MapReduce paradigm to present a distributed implementation of these algorithms and experimentally evaluate them, although the algorithms could be straightforwardly translated into Spark jobs and/or Flink dataflows. This paper consolidates our previous work presented in [37,28,50] into a single unified framework for the distributed evaluation of Basic Graph Pattern queries (a subclass of SPARQL queries), and extends this framework by proposing multiple query decomposition algorithms that could be used by a wide variety of query evaluation approaches. In particular, we investigate decomposition approaches (a) which are based on producing subqueries of special forms (with or without replication of query triples), and (b) which take into account certain replication of the distributed data.
The paper is organized as follows. In Section 2, related work is presented and discussed. In Section 3, the framework of our work is defined. More specifically, after presenting some preliminary definitions in Subsection 3.1, we introduce the concept of (data and query) graph decomposition in Subsection 3.2. Then, we define the concept of partial embeddings in Subsection 3.3 and distinguish special forms of queries in Subsection 3.4.
In Section 4, we present three query evaluation approaches. More specifically, in Subsection 4.1, we present a query evaluation approach which is based on the concept of partial embeddings. In Subsection 4.2, we present an approach which is based on the decomposition of queries into subqueries of a specific form called generalized star queries. In Subsection 4.3, we present an approach which is based on the idea that replication in data decomposition can be taken into account to efficiently answer queries. Finally, in Subsection 4.4, we present a set of query decomposition algorithms.
In Section 5, we present a set of query evaluation algorithms, which implement the approaches presented in Section 4. Experimental evaluation results of the algorithms are presented in Section 6. Finally, Section 7 concludes the paper.
Related work
The problem of efficiently querying linked data has been widely investigated, both for centralized (single-machine) environments [65,66] (e.g., systems such as RDF-3X [47,49] and Hexastore [70]) and for distributed ones (e.g., [35,58]). Processing large amounts of linked data on a single machine has significant limitations, since it lacks scalability [21]. To handle this problem, a variety of distributed methods for storing and processing linked data has been proposed [71]. Most of the approaches proposed in the literature to handle the scalability of answering SPARQL queries over big linked datasets [11] focus on two aspects: distributed storage of linked data and distributed processing of SPARQL queries. Typically, the proper partitioning of the data into a distributed repository (either a file system or a distributed NoSQL database) can significantly improve the performance of query answering [21,27,38] (e.g., Random Partitioning [28,50], Hash Partitioning [58,26,23], Graph Partitioning [34,39], and Semantic Partitioning [43]).
To process and query large amounts of linked data, distributed processing frameworks, such as MapReduce [5] or Apache Spark [8], are used. Apart from these approaches, there is a noteworthy amount of related work focusing on utilizing distributed NoSQL databases [57,18,60,42] to store the linked data graph and answer the given queries. In these cases, the query evaluation is achieved either by translating the given SPARQL query into the query language supported by the NoSQL database [44], or by using a distributed processing framework to implement the overall query execution plan [29] (in such a case, the NoSQL database is mainly used as a storage layer ensuring proper data partitioning).
In this context, Afrati et al. [15] proposed an approach for optimizing joins in MapReduce by choosing the appropriate map-key and shares. This approach is extended in [12] to data graphs, where the cost of evaluating queries on data graphs using one round of Map-Reduce is investigated, and an approach of translating the query patterns into conjunctive queries is proposed. The communication cost is minimized using the techniques of the approach proposed in [13]. Such an approach could be used for answering conjunctive SPARQL queries [48,56,68]. A method of answering SPARQL Basic Graph Patterns using a traditional multi-way join in MapReduce, instead of multiple individual joins, is also presented in [45], where certain joining keys are selected to avoid unnecessary iterations. This approach can be used for every type of partitioning of the RDF data.

SHARD [58] is built on top of Hadoop, and uses the Hadoop distributed file system (HDFS) to store data in native text files. It uses subject hash partitioning to decompose the RDF data graph; all the triples with the same subject are stored in the same line of the text file. For the execution, one MapReduce job is created for every query triple, while an additional job is used, at the end, to remove duplicated results and apply the required projection. Hence, assuming an n-triple query pattern, n + 1 jobs are required, and the whole data graph is scanned n times.

HadoopRDF [35] uses a predicate hash partition method to distribute the data graph, similar to the vertical partitioning approach applied by SW-Store [10]. In general, the number of data fragments is equal to the number of distinct predicates. The query evaluation is performed through a sequence of MapReduce jobs and is optimized using a heuristic and a greedy approach.

CliqueSquare [29] presents a method that generates highly parallelizable query plans for BGP queries, which rely on n-ary equality joins with a minimum number of MapReduce stages. CliqueSquare uses a data partitioning scheme that permits first-level joins to be evaluated locally at each node. The triples that share the same value in subject, predicate or object are placed on the same node. This partition ensures that queries sharing the same variable (like star queries) can be evaluated locally.

H2RDF [54] uses Apache HBase [6] to store data triples. Three RDF indices on subject, predicate and object (spo, pos and osp combinations) are materialized and stored in HBase in the form of key-value pairs. Different strategies are used to execute joins and answer the given query. H2RDF+ [53] extends H2RDF by considering three more indices (ops, osp and sop). Furthermore, a MapReduce Merge Join algorithm is used to join query triples that share the same variable, and the MapReduce Sort-Merge Join algorithm is used for joining the intermediate results.

PigSPARQL [61] is yet another approach which uses a Hadoop-based implementation of vertical partitioning of the data stored in HDFS. It implements a translation from SPARQL to Pig Latin [51]. In the system RAPID+ [41], an alternative query algebra, called the Nested Triple Group Algebra, is used as an extension of Apache Pig to improve the performance of SPARQL query processing over MapReduce.

The authors in [34] proposed a graph partitioning scheme, which resembles the s-decomposition partitioning defined in this work. In particular, the data is partitioned in such a way that vertices that are relatively close to each other are included in the same segment. In this context, the following main methodologies are investigated and proposed: the directed and the undirected n-hop guarantee. The former focuses on initially partitioning the vertices and then assigning the triple-paths of length n that start from a vertex already included in the segment. The latter is similar to the former but considers any undirected path of length n. In both cases, a graph partitioner tool based on METIS is used for partitioning the vertices of the RDF graph into disjoint partitions so that the minimum number of edges is cut. The queries are also decomposed in such a way that the generated subqueries can be computed locally, in each cluster node. MapReduce is used for the joins of the intermediate results of subqueries. Although the s-decomposition partitioning approach is similar to the 1-hop undirected guarantee (or hash partitioning), the n-hop guarantee of the data graph may cause data explosion, especially in coherent data graphs, if n > 2.
SHAPE [43] proposes a semantic hash partitioning which is based on the similarity of the URI hierarchy of the vertices: vertices with the same URI prefixes are placed in the same partition. After a simple hash partitioning is applied, a replication of only the necessary triples is performed, using k-hop semantic hash partitioning and context-aware filters. The system also uses an RDF-3X triple store in each data node. Query processing and the joins of the intermediate results are based on MapReduce.
The papers [37,50,28] focus both on decomposing queries and on partitioning the RDF data; the data is stored in MySQL and MapReduce is the framework used to evaluate the queries. SPARQL-to-SQL translation is used for query processing, and MapReduce is used to apply the joins. D-SPARQ [44] uses the document database MongoDB [9] to store and index data using subject hash partitioning. A single MapReduce job is then used to import the data into the document database and to collect statistical information for a query optimization process based on join reordering. All triples sharing the same subject value are stored in the same (JSON) document.
Another MapReduce-based approach is Sempala [62], which applies SPARQL-to-SQL translation on top of Hadoop. It uses Impala [7] as a distributed SQL processing engine. Sempala uses a unified vertical partitioning (a single property table) in order to boost star-shaped queries.
In [69], the authors proposed a MapReduce algorithm, called StarMR, which is based on star decomposition, for answering subgraph matching queries. StarMR is improved with two optimization strategies: the first applies an RDF property filtering approach, and the second postpones any Cartesian product operation. The RDF graph is stored as a distributed adjacency list.
[36] uses a partitioning method over the predicate value and the type of objects to store the RDF data. Query processing is performed using MapReduce, and the proposed algorithm applies a number of jobs that depends on the form of the given query.
Apache Spark and Flink have also been used to improve the performance of SPARQL query evaluation over big RDF data [16]. SPARQLGX [31] uses a vertical partitioning approach, where the triples are partitioned according to their predicate values. The query evaluation is performed by initially filtering, in each segment, the triples matching a query triple, and then by applying a sequence of join operations through a query plan generated according to predefined statistics. The authors in [19] propose an approach for translating SPARQL queries into Apache Flink [4] programs for querying RDF data, and investigate the semantic correspondence between Apache Flink's subset of transformations and the SPARQL Algebra operators.
S2RDF [63] also proposes a vertical-like partitioning, called Extended Vertical Partitioning (ExtVP), which is based on semi-join reductions (i.e., a certain number of semi-joins are applied between the vertical partitioning tables and their results are materialized for improving the overall performance). To evaluate queries over ExtVP, a certain partitioning of the query triples is applied (in order to achieve parallel/local computation) and Spark SQL is utilized.
HAQWA [23] proposes a hash-based partitioning over the subject values of the RDF triples. This ensures local computation of subject-centric star queries (a subclass of generalized star queries). To extend the supported queries, the query is decomposed into subqueries and missing triples of each subquery are replicated. The overall computation process is managed through a Spark application.
The authors in [46] analyze the query evaluation plan of a BGP expression on Spark and propose a join plan for efficiently evaluating BGPs over a large RDF graph. Considering an initial hash-based partitioning of the data (e.g., the triples are partitioned by their subject), the authors propose a hybrid method to find a query plan. The approach uses a cost-driven combination of partitioned/cascade and broadcast joins over Spark.
Apart from the previous approaches, it is worth mentioning S2X [59], Spar(k)ql [30] and [40], which focus on evaluating SPARQL queries using the Spark GraphX library. Sparklify [67] applies a SPARQL-to-SQL rewriter for translating SPARQL queries into Spark executable code.
In [33], a property table scheme is built on top of the HBase storage system and a vertical partitioning scheme on top of the Cassandra storage system. Query processing is based on SPARQL query translation to SparkSQL for both the HBase and the Cassandra storage schemas.
As mentioned previously, multiple approaches that use NoSQL platforms to store RDF data and answer SPARQL queries have been proposed in the literature [38]. Representative examples include the distributed systems Rya [57], AMADA [18], MAPSIN [60] and CumulusRDF [42], which use NoSQL [24] databases to store RDF data and provide efficient query processing using three different indices, SPO, POS and OSP (S for subject, P for predicate and O for object values). More specifically, Rya uses Apache Accumulo [2], AMADA uses Amazon DynamoDB [1], MAPSIN uses Apache HBase, and CumulusRDF uses Apache Cassandra [3]. In MAPSIN (Map-Side Index Nested loop join), joins are performed in the map phase, so the shuffle and reduce phases are not required. The proposed algorithm is optimized for the efficient processing of multi-way joins.
Definition of the framework
Preliminaries
Let U so and U p be two countably infinite disjoint sets of URI references, L be a countably infinite set of (plain) literals 1 and V be a countably infinite set of variables. In the following, we define two types of graphs, data graphs and query graphs. The former describes the data model and the latter determines the form of the query expressions over the stored data.
Definition 1. A triple (s, p, o) ∈ U so × U p × (U so ∪ L) is called a data triple.
In a data triple t = (s, p, o), s is called the subject, p the predicate and o the object of t. A data graph G is a non-empty set of data triples. A data graph G′ is a subgraph of a data graph G if G′ ⊆ G.
Definition 2. A triple (s, p, o) ∈ (U so ∪ V ) × U p × (U so ∪ L ∪ V ) is called a query triple.
In a query triple q = (s, p, o), s is called the subject, p the predicate and o the object of q. A query graph (or simply a query) Q is a non-empty set of query triples. The output pattern O(Q) of a query graph Q is the tuple (X 1 , . . . , X n ), with n ≥ 0, of all the variables appearing in Q. A query Q is said to be a Boolean query if n = 0. A query graph Q′ is a subquery of a query graph Q if Q′ ⊆ Q.
Definition 3. Let G be a data or query graph. A directed path (or simply path) in G is a sequence of triples (v 0 , p 1 , v 1 ), (v 1 , p 2 , v 2 ), . . . , (v n−1 , p n , v n ) in G,
where n ≥ 1. The length of the path is the number n of triples in the path. A finite directed path always has a start node which corresponds to the subject of its first triple, and an end node which is the object of its last triple. A cycle is a path in which the start node and the end node are the same. A path with no repeated nodes (i.e. without cycles) is called a simple path.
The set of nodes of a data graph G (resp. a query graph Q), denoted by N (G) (resp. N (Q)), is the set of elements of U so ∪ L (resp. U so ∪ L ∪ V ) that occur in the triples of G (resp. Q). The set of edge labels of a data graph G (resp. a query graph Q), denoted by E(G) (resp. E(Q)), is the set of elements of U p that occur in the triples of G (resp. Q). Finally, the set of variables in a query Q is denoted by V(Q).
Notice that the data graphs defined above correspond to ground RDF graphs defined in [32]. Notice also that query graphs correspond to Basic Graph Patterns (BGP) SPARQL queries. In this paper, we do not allow queries with variables in the place of predicates. However, the query evaluation algorithms proposed in this paper can be easily extended to allow such variables.
Data and query graphs are graphically represented as follows: a node (subject or object) which is a URI or a variable is represented as a rounded rectangle, while an object which is a literal is represented by a rectangle. Each triple (s, p, o) is represented by a labeled edge s −p→ o connecting the nodes s and o.

1 In this paper we do not consider typed literals.
In this paper, we use strings with initial lowercase letters to represent elements in U p (i.e., URIs corresponding to predicates), while strings with initial uppercase letters denote elements in U so (i.e., URIs corresponding to objects and subjects). Literals are represented as strings enclosed in double quotes. Finally, we assume that variables are represented by strings whose first symbol is the question mark symbol (?).

Example 1. Fig. 1(a) depicts a data graph showing information about three journal papers, their authors and the relationships between the authors. Fig. 1(b) shows a query graph.

Definition 4. A (total) embedding of a query graph Q in a data graph G is a total mapping e : N (Q) → N (G) with the following properties:
1. For each node v ∈ N (Q), if v is not a variable then e(v) = v.
2. For each triple (v 1 , p, v 2 ) ∈ Q, the triple (e(v 1 ), p, e(v 2 )) is in G.
The tuple (e(X 1 ), . . . , e(X n )), where (X 1 , . . . , X n ) is the output pattern of Q, is said to be an answer to the query Q.
Example 2. Fig. 2 depicts an embedding of the query graph Q in the data graph G, where Q and G are the graphs appearing in Fig. 1. The answer obtained by this embedding is (?A, ?W, ?T ) = (Article2, Person3, "Title2"). Notice that a second embedding exists, giving the answer (?A, ?W, ?T ) = (Article2, Person2, "Title2").
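Total embeddings as in Definition 4 can be enumerated by simple backtracking over the query triples. The following Python sketch is illustrative only (the triple representation and function names are our own, not part of the paper); it treats strings starting with '?' as variables:

```python
def is_var(n):
    return isinstance(n, str) and n.startswith("?")

def embeddings(query, data):
    """Enumerate all total embeddings (Definition 4) of `query` in `data`.
    Both are collections of (subject, predicate, object) triples."""
    def extend(triples, e):
        if not triples:
            yield dict(e)          # all query triples mapped: e is total
            return
        (s, p, o), rest = triples[0], triples[1:]
        for (ds, dp, do) in data:
            if dp != p:
                continue
            e2, ok = dict(e), True
            for qn, dn in ((s, ds), (o, do)):
                if is_var(qn):
                    # bind the variable, or check an existing binding
                    if e2.setdefault(qn, dn) != dn:
                        ok = False
                        break
                elif qn != dn:     # non-variable nodes must match exactly
                    ok = False
                    break
            if ok:
                yield from extend(rest, e2)
    yield from extend(list(query), {})
```

Each yielded dictionary binds the query variables and thus corresponds to one answer of the query.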
Data and query graph decomposition
A crucial problem, when we use a cluster of computer nodes to evaluate queries, is how to distribute the data over the computers of the cluster, as well as how to compute the queries on the distributed data. In this section we define the concept of data and query graph decomposition.

Definition 5. A data (resp. query) graph decomposition of a data (resp. query) graph F is an m-tuple of data graphs D F = (F 1 , . . . , F m ), where m ≥ 1, such that:
1. F i ⊆ F , for i = 1, . . . , m, and
2. ⋃ i F i = F .
Each data (resp. query) graph F i in a data (resp. query) graph decomposition is called a data (resp. query) graph segment. When, in a data/query graph decomposition, it also holds that F i ∩ F j = ∅ for all pairs i, j with 1 ≤ i < j ≤ m, i.e. the data (resp. query) graph segments are pairwise disjoint, then the data (resp. query) graph decomposition is said to be non-redundant, and the graph (resp. query) segments obtained form a partition of the triples of the data (resp. query) graph F , called an m-triple partition of F . Notice that F i ≠ ∅ for i = 1, . . . , m, since, by Definitions 1 and 2, a data/query graph is non-empty.
It should be noted that, in a data or a query graph decomposition, a triple is (in general) allowed to participate in multiple data or query graph segments. At first sight, this redundancy seems to burden the system with the extra cost of storing and managing or evaluating more data. However, as it is shown in subsequent sections (see for example Section 4.3), if it is used appropriately it may lead to more efficient computation of the query answers, due to proper parallelization of the query execution. Definition 6. Let D F = (F 1 , . . . , F m ), with m ≥ 1, be a data (resp. query) graph decomposition of a data graph F , and F i , F j , with i = j, be two data (resp. query) graph segments in D F . A border node v of F i and F j , is a node that belongs to N (F i )∩N (F j )−L. By B(F i , F j ) we denote the set of border nodes of F i and F j , while, by B(F i ), we denote the set (1≤j≤m)∧(j =i) B(F i , F j ). Finally, by B(F ) we denote the set of all border nodes of F i.e. B(F ) = 1≤i≤m B(F i ).
Notice that, according to the above definition, literals that occur in more than one segment in D F are not considered to be border nodes.
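As an illustration of Definitions 5 and 6, the following Python sketch (helper names are our own; a subject-hash partition is just one possible non-redundant decomposition) builds an m-triple partition of a data graph and computes the border nodes of each segment, excluding a given set of literals:

```python
def hash_partition(graph, m):
    """Non-redundant decomposition: each triple goes to exactly one of
    at most m segments, chosen by hashing its subject."""
    segs = [set() for _ in range(m)]
    for (s, p, o) in graph:
        segs[hash(s) % m].add((s, p, o))
    return [g for g in segs if g]      # drop empty segments

def nodes(graph):
    """N(G): subjects and objects occurring in the graph's triples."""
    return {n for (s, _, o) in graph for n in (s, o)}

def border_nodes(segments, literals=frozenset()):
    """B(F_i) for each segment: nodes shared with some other segment,
    excluding literals (Definition 6)."""
    borders = []
    for i, gi in enumerate(segments):
        b = set()
        for j, gj in enumerate(segments):
            if i != j:
                b |= nodes(gi) & nodes(gj)
        borders.append(b - set(literals))
    return borders
```

The partition is non-redundant by construction, since each triple is assigned to exactly one segment.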
Definition 7. Let D Q = (Q 1 , . . . , Q n ), where n ≥ 1, be a query decomposition of a query graph Q. A node n ∈ B(Q) is said to be a common border node if n ∈ B(Q i ) for each Q i in D Q . The set of common border nodes in Q is denoted as CB(Q).
Example 3. A data graph decomposition D G (more specifically a 3-triple partition) of the data graph G of Fig. 1(a) appears in Fig. 3. The dark nodes correspond to the border nodes between the data graph segments, that is:
B(G 1 ) = {Person4, Article1}
B(G 2 ) = {Person4, Article1, Article2}
B(G 3 ) = {Article1, Article2}.
A decomposition D Q of a query Q into 3 query graph segments (subqueries) Q 1 , Q 2 , and Q 3 is illustrated in Fig. 4. The border nodes between the query graph segments are:
B(Q 1 ) = {n1, n2} = {?P 1, ?A} B(Q 2 ) = {n2, n3} = {?A, ?P 2} B(Q 3 ) = {n1, n3} = {?P 1, ?P 2}.
while the set CB(Q) of common border nodes in Q is empty. Notice that the query decomposition appearing in Fig. 4

Example 4. Consider the query graph Q appearing in the left part of Fig. 4. Q represents the query: "Find an article (variable ?A) and its title (variable ?T ) published in Journal1, which has as authors a person (variable ?P 1) and his supervisor (variable ?P 2)". It is easy to see that the evaluation of this query on the data graph G depicted in Fig. 3 returns the answers:
Answer 1: (?P 1, ?A, ?P 2, ?T ) = (Person4, Article1, Person1, "Title1").
Answer 2: (?P 1, ?A, ?P 2, ?T ) = (Person2, Article2, Person3, "Title2").
Notice, however, that we cannot evaluate Q on a single data graph segment in D G depicted in Fig. 3. Instead, all these graph segments are needed in order to compute the answers to this specific query Q, as each of them contains part of the data needed to answer the query Q.
Query decomposition will prove very useful for query evaluation in the subsequent sections. The general idea behind the algorithms that will be presented is that, in order to find the answers to a query Q, it suffices to decompose Q into a tuple of subqueries (Q 1 , . . . , Q m ), find the answers (or partial answers) of Q 1 , . . . , Q m , and then appropriately combine these answers to construct the answers to the query Q.
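The decompose/evaluate/combine idea can be sketched as follows. This is an illustrative Python outline, not one of the paper's algorithms; `eval_subquery` stands for any black-box procedure that returns the (partial) answers of a subquery on a data graph segment as variable-binding dictionaries:

```python
def answer_by_decomposition(subqueries, segments, eval_subquery):
    """Evaluate each subquery on every data graph segment, then join the
    mutually compatible partial results (one per subquery) into answers."""
    per_subquery = []
    for sq in subqueries:
        results = []
        for seg in segments:                 # parallelizable step
            results.extend(eval_subquery(sq, seg))
        per_subquery.append(results)
    answers = [{}]                           # start from the empty mapping
    for results in per_subquery:
        # keep only combinations that agree on shared variables
        answers = [{**a, **r} for a in answers for r in results
                   if all(a.get(k, v) == v for k, v in r.items())]
    return answers
```

The inner loop over segments is exactly the part that a MapReduce (or Spark/Flink) implementation distributes.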
Partial embeddings
When a query Q is evaluated over a data graph segment G i of a data graph G, it is likely that no embedding of Q in G i exists. However, this does not necessarily mean that there is no embedding of Q in G at all. Instead, it is possible that "part" of an embedding of Q in G has images in G i , while other "parts" of the embedding have images in other data graph segments of G. Then, to obtain the embedding of Q in G, we have to combine appropriately these "partial embeddings". This situation is formalized as follows:
Definition 8. A partial embedding of a query graph Q in a data graph G is a partial mapping e : N (Q) → N (G) such that for every node v ∈ N (Q) for which e(v) is defined, the following properties hold:
1. if v is not a variable, then e(v) = v.
2. if v is a variable, then there exists a node u ∈ N (Q) for which e(u) is defined and an edge label p ∈ E(Q), such that (v, p, u) ∈ Q and (e(v), p, e(u)) ∈ G, or (u, p, v) ∈ Q and (e(u), p, e(v)) ∈ G.
A partial embedding is said to be non-trivial if there exists a triple (v 1 , p, v 2 ) ∈ Q such that both e(v 1 ) and e(v 2 ) are defined and the triple (e(v 1 ), p, e(v 2 )) belongs to G. In other words, a non-trivial partial embedding is a partial embedding that maps at least one edge of Q in G.
In essence, a partial embedding represents a mapping from a subset of nodes and edges of Q to a given data graph G. In other words, partial embeddings represent partial answers to Q, provided that, they can be appropriately "combined" with other "compatible" partial embeddings to give complete answers to the query Q.
The intuition behind Condition (2) is that when e(v) is defined for a variable v of a query Q, then there is a triple t in Q such that the variable is either the subject or the object of t, and t is mapped, through e, to a triple in the data graph G. Notice that, as we will prove in the next section, no answers are lost by imposing this condition to the definition of partial embeddings, while it substantially restricts the search space for computing partial embeddings.
It is easy to see that a total embedding e of Q in G is also a partial embedding of Q in G. Moreover, a total embedding of a subquery of Q corresponds to a partial embedding of Q.

Definition 9. Two partial mappings e 1 : D 1 → R 1 and e 2 : D 2 → R 2 are said to be compatible if for every node v ∈ D 1 ∩ D 2 such that e 1 (v) and e 2 (v) are defined, it is e 1 (v) = e 2 (v).

Definition 10. Let e 1 : D 1 → R 1 and e 2 : D 2 → R 2 be two compatible partial mappings. The join of e 1 and e 2 is the partial mapping e : D 1 ∪ D 2 → R 1 ∪ R 2 defined as follows:

e(v) = e 1 (v), if e 1 (v) is defined;
e(v) = e 2 (v), if e 2 (v) is defined and e 1 (v) is undefined;
e(v) is undefined, if both e 1 (v) and e 2 (v) are undefined.

Note that the above definitions apply also to total embeddings, as they are partial mappings. Notice also that, in the first case of Definition 10, e 2 (v) may be defined or not; if it is defined, then the compatibility of the two partial mappings (embeddings) implies that e 2 (v) = e 1 (v). It is trivial to prove that the join of two compatible partial embeddings is a partial embedding and that the join operation is commutative and associative. Therefore, we can refer to the partial embedding resulting from the join of n mutually compatible partial embeddings without ambiguity.
It should also be noted that if Q′ is a subquery of a query Q and e is a total embedding of Q′ in a data graph G, then e is a partial embedding of Q in G.
Example 5. The mappings e 1 , with e 1 (?A) = Article2 and e 1 (?T ) = Title2, and e 2 , with e 2 (?A) = Article2 and e 2 (?W ) = Person3, are partial embeddings of the query graph of Fig. 1(b) in the data graph of Fig. 1(a). e 1 and e 2 are compatible, and their join is the partial embedding e 3 , with e 3 (?A) = Article2, e 3 (?T ) = Title2 and e 3 (?W ) = Person3.
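Representing partial mappings as Python dictionaries, compatibility (Definition 9) and the join (Definition 10) can be sketched as follows (illustrative code, not from the paper):

```python
def compatible(e1, e2):
    """Definition 9: the two partial mappings agree on every node
    on which both are defined."""
    return all(e2[v] == x for v, x in e1.items() if v in e2)

def join(e1, e2):
    """Definition 10: the join of two compatible partial mappings.
    e1 takes precedence where both are defined (compatibility makes
    the choice irrelevant)."""
    assert compatible(e1, e2)
    joined = dict(e2)
    joined.update(e1)
    return joined
```

With e1 = {'?A': 'Article2', '?T': 'Title2'} and e2 = {'?A': 'Article2', '?W': 'Person3'}, as in Example 5, join(e1, e2) yields the mapping binding all three variables.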
Special forms of queries
In this subsection we define several forms of queries. We begin by defining the path queries:
Definition 11. A query Q is said to be a path query of length n, with n ≥ 1, if it is of the form (v 0 , p 1 , v 1 ), (v 1 , p 2 , v 2 ), . . . , (v n−1 , p n , v n ).
We now define the generalized star queries as follows:
Definition 12. A query Q is called a generalized star query if there exists a node c ∈ N (Q), called the central node of Q and denoted as C(Q), such that for every triple t = (u, p, v) ∈ Q it is either u = c or v = c.
We now define three special forms of generalized star queries, namely subject star queries, object star queries and subject-object star queries (s-query, o-query, and so-query for short, respectively).
Definition 13. A generalized star query Q is said to be a subject star query (resp. object star query) if for every triple t ∈ Q the central node c of Q is the subject (resp. object) of t.
Definition 14. A generalized star query Q is said to be a subject-object star query if for every triple t ∈ Q, the central node c of Q is either the subject or the object of t, and there is a triple t′ ∈ Q such that c is the subject of t′.
The interest in the above special forms of queries lies in that these queries are easier to evaluate (they are evaluated more efficiently and are amenable to parallelization) than general query graphs. Besides, as we will see in the next section, we can easily decompose a huge data graph into a set of graph segments such that a query (as those defined above) can be evaluated independently on each graph segment. These special classes of queries have the following property: for every query graph Q there exists a (non-redundant) decomposition into s-queries (or o-queries, or so-queries, or path queries of length 1). This follows trivially from the fact that every query that consists of a single triple belongs to each of these classes of queries.
The algorithms for query evaluation proposed in this paper are based on these observations. In the following sections we will use two of the above classes of queries, namely, generalized star queries and subject-object star queries (so-star queries).
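A generalized star query and its subclasses (Definitions 12-14) can be recognized by testing every node as a candidate central node. The sketch below is illustrative Python (the naming is our own); it returns the kind of star query together with its central node, or (None, None) if the query is not a generalized star query:

```python
def classify_star(query):
    """Classify a query per Definitions 12-14. Returns (kind, center),
    where kind is 'subject', 'object' or 'subject-object', or
    (None, None) if the query is not a generalized star query."""
    nodes = {n for (s, _, o) in query for n in (s, o)}
    for c in nodes:
        # c is a central node iff every triple touches it (Definition 12)
        if all(s == c or o == c for (s, _, o) in query):
            as_subject = any(s == c for (s, _, o) in query)
            as_object = any(o == c for (s, _, o) in query)
            if as_subject and as_object:
                return ("subject-object", c)
            return ("subject", c) if as_subject else ("object", c)
    return (None, None)
```

Note that the sketch reports 'subject-object' only when the center occurs in both positions, although by Definition 14 every subject star query trivially qualifies as a subject-object star query as well.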
Query evaluation approaches
In this section we present three procedures for query evaluation. All procedures are based on the idea that the query is decomposed into a set of subqueries, which are evaluated on data segments obtained by decomposing the data set using various approaches. Finally, we present algorithms for decomposing a query into so-queries.
Query evaluation using partial embeddings
The query evaluation approach presented below, called QEJPE-algorithm, is based on the idea of computing (possibly in a distributed manner) partial embeddings of subqueries (query graph segments) of a query Q over data segments of a decomposed data graph G and combining these partial embeddings to obtain (total) embeddings of the initial query Q in the data graph G. To narrow down the search space for finding partial embeddings we introduce the concept of useful partial embeddings:
Definition 15. Let D G = (G 1 , . . . , G m ), with m ≥ 1, be a data graph decomposition of a data graph G and let e be a partial embedding of a query graph Q in some G i . Then e is called a useful partial embedding of Q in G i if the following conditions hold:

1. e is non-trivial.
2. e is defined for all the nodes in N (Q) ∩ N (G i ).
3. for each triple (v, p, u) ∈ Q, if e(v) is defined and e(v) ∉ (B(G i ) ∪ L), then e(u) is also defined and (e(v), p, e(u)) is a triple in G i .
4. for each triple (v, p, u) ∈ Q, if e(u) is defined and e(u) ∉ (B(G i ) ∪ L), then e(v) is also defined and (e(v), p, e(u)) is a triple in G i .
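To make the four conditions concrete, here is a minimal Python sketch (not from the paper; the triple-set representation, the `?`-prefix convention for variables, and the explicit `border`/`literals` arguments are our own assumptions):

```python
def is_variable(n):
    """Our convention: query variables are strings starting with '?'."""
    return isinstance(n, str) and n.startswith("?")

def is_useful(e, Q, Gi, border, literals):
    """Check Definition 15 for a partial embedding e (a dict from query
    nodes to data nodes) of query Q in the data graph segment Gi."""
    if not e:                                        # condition 1: non-trivial
        return False
    nodes_Q = {n for (s, p, o) in Q for n in (s, o)}
    nodes_Gi = {n for (s, p, o) in Gi for n in (s, o)}
    if not all(v in e for v in nodes_Q & nodes_Gi):  # condition 2
        return False
    for (v, p, u) in Q:                              # conditions 3 and 4
        if v in e and e[v] not in border | literals:
            if u not in e or (e[v], p, e[u]) not in Gi:
                return False
        if u in e and e[u] not in border | literals:
            if v not in e or (e[v], p, e[u]) not in Gi:
                return False
    return True

Q = {("?A", "hasAuthor", "?W")}
Gi = {("Article1", "hasAuthor", "Person1")}
assert is_useful({"?A": "Article1", "?W": "Person1"}, Q, Gi, set(), set())
assert not is_useful({}, Q, Gi, set(), set())                  # trivial
assert not is_useful({"?A": "Article1"}, Q, Gi, set(), set())  # violates condition 3
assert is_useful({"?A": "Article1"}, Q, Gi, {"Article1"}, set())  # image at a border node
```

The last assertion illustrates the intended use: a binding that stops at a border node may still be completed by a compatible partial embedding from a neighbouring segment.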
Notice that, according to the above definition, if v is a non-variable node of the query graph Q that maps to a non-border node of G i , then the second property implies that e(v) is defined, and the third and fourth properties enforce every triple that contains v to be mapped in G i . More generally, the edges which start from or end at a node that maps to a non-border node in a data graph segment G i should also have images that belong entirely to G i ; otherwise the partial embedding cannot be used to construct a query answer.

Lemma 1. Let D G = (G 1 , . . . , G m ), with m ≥ 1, be a (redundant or non-redundant) data graph decomposition of a data graph G and let Q be a query graph. Then the following statements are equivalent:
1. e is a total embedding of Q in G.
2. there exist mutually compatible useful partial embeddings e 1 , . . . , e k of Q in G i1 , . . . , G ik , respectively, for some i 1 , . . . , i k with 1 ≤ i 1 < . . . < i k ≤ m, that satisfy the following properties:
(a) for every triple (v, p, u) ∈ Q there exists some j for which e j (v), e j (u) are defined and (e j (v), p, e j (u)) ∈ G ij .
(b) the join of e 1 , . . . , e k is e.
Proof. Assume that (1) holds, that is, e is an embedding of Q in G. Let Q i = {(v, p, u) ∈ Q | (e(v), p, e(u)) ∈ G i }, 1 ≤ i ≤ m, and let I be the set of indices for which Q i is non-empty, that is, I = {i | Q i ≠ ∅}. Since the query graph Q is non-empty, I must also be non-empty. Suppose that |I| = k and let i 1 , . . . , i k be the elements of I in increasing order.
For every j, 1 ≤ j ≤ k, define the following mapping e j from Q to G ij :

e j (v) = v, if v is a non-variable node in N (Q) ∩ N (G ij );
e j (v) = e(v), if v is a variable node in N (Q ij );
e j (v) is undefined, otherwise.

It is not hard to see that e j is a partial embedding of Q in G ij and that the join of e 1 , . . . , e k is exactly e. Thus, property (2b) holds. In order to prove that property (2a) holds, consider a triple (v, p, u) ∈ Q. Then, (v, p, u) ∈ Q ij for some j, which implies that v, u ∈ N (Q ij ) and e(v), e(u) ∈ N (G ij ). From the definition of e j it follows that e j (v) = e(v) and e j (u) = e(u), which implies that (e j (v), p, e j (u)) = (e(v), p, e(u)), which is in G ij by the definition of Q ij .
It remains to prove that e j is useful. The fact that e j is non-trivial is straightforward, since Q ij is non-empty. Moreover, it obviously satisfies condition (2) of Definition 15.
In order to prove that e j satisfies condition (3) of Definition 15, consider a triple (v, p, u) ∈ Q such that e j (v) is defined and e j (v) ∉ (B(G ij ) ∪ L). Since e is an embedding of Q in G, it must be (e(v), p, e(u)) ∈ G. Moreover, e(v) = e j (v), which implies that e(v) is neither a border node of G ij nor an element of L. Therefore, e(v) appears only in G ij , which implies that (e(v), p, e(u)) must be a triple in G ij . Hence, (v, p, u) ∈ Q ij , which implies that u ∈ N (Q ij ) and, by the definition of e j , it is e j (u) = e(u) (i.e. e j (u) is defined). The fact that (e j (v), p, e j (u)) is a triple in G ij is now clear, since it equals (e(v), p, e(u)). The proof for condition (4) of Definition 15 is similar.
For the other direction, assume that (2) holds. We first show that e (the join of e 1 , . . . , e k ) is a total mapping from N (Q) to N (G). Notice that e 1 , . . . , e k are compatible. Suppose that v ∈ N (Q), that is, v appears in some triple of the form (u, p, v) or (v, p, u) in Q. Then, e j (v) is defined for some j (by property (2a)), which implies (using the definition of join) that e(v) is also defined. We next show that e is an embedding of Q in G. Let v be a non-variable element in N (Q). From the definition of join, it must be e(v) = e j (v) for some j. Since e j is a useful partial embedding, it is e j (v) = v. Therefore, it holds e(v) = v.
Finally, consider a triple (v, p, u) ∈ Q. By property (2a), there exists some j such that (e j (v), p, e j (u)) ∈ G ij which implies that (e(v), p, e(u)) ∈ G (since e(v) = e j (v), e(u) = e j (u), and G ij ⊆ G).
□
Lemma 2. Let D Q = (Q 1 , . . . , Q n ), with n ≥ 1, be a query decomposition of a query graph Q and let G be a data graph. Then e is a total embedding of Q in G if and only if there exist mutually compatible total embeddings e 1 , . . . , e n of Q 1 , . . . , Q n in G such that the join of e 1 , . . . , e n is e.
Proof. For the one direction, assume that e is a total embedding of Q in G. For every i define e i to be the restriction of e to N (Q i ) (that is, e i : N (Q i ) → N (G), with e i (v) = e(v)). Obviously e i is a total mapping. Furthermore, for every non-variable element v ∈ N (Q i ) it is e i (v) = e(v) = v, and for every triple (v 1 , p, v 2 ) ∈ Q i it is (e i (v 1 ), p, e i (v 2 )) = (e(v 1 ), p, e(v 2 )) ∈ G, which implies that e i is actually an embedding of Q i in G. Moreover, for every i, j with i ≠ j, if v ∈ N (Q i ) ∩ N (Q j ) then it is e i (v) = e j (v) = e(v), which implies that e i and e j are compatible. Therefore, the join e′ of e 1 , . . . , e n exists. It remains to show that e′ = e. Consider an arbitrary v ∈ N (Q). Then v appears in some triple t ∈ Q. Since D Q is a decomposition of Q, there exists some i such that t ∈ Q i . Thus, v ∈ N (Q i ), which implies that e i (v) is defined. From the definition of join, it follows that e′ (v) = e i (v), which implies e′ (v) = e(v).
For the other direction, assume that e 1 , . . . , e n are compatible total embeddings of Q 1 , . . . , Q n in G and let e be their join. Using the same argument as above, we can prove that for every v ∈ N (Q) there exists some i such that e i (v) is defined, which implies that e(v) is also defined. Thus, e is a total mapping. We next show that e is an embedding of Q in G. Consider any non-variable element v ∈ N (Q). Since e is total, e(v) is defined. From the definition of join, there must be some i such that e(v) = e i (v), which implies e(v) = v (since e i is an embedding).
Finally, let (v 1 , p, v 2 ) be a triple in Q. Since D Q is a decomposition of Q, (v 1 , p, v 2 ) belongs to some Q i . Since e i is a total embedding of Q i in G, it holds (e i (v 1 ), p, e i (v 2 )) ∈ G, which implies (e(v 1 ), p, e(v 2 )) ∈ G. □

Theorem 3. Let D Q = (Q 1 , . . . , Q n ), with n ≥ 1, be a query decomposition of a query graph Q and D G = (G 1 , . . . , G m ), with m ≥ 1, be a data graph decomposition of a data graph G. Then the following statements are equivalent:

1. e is a total embedding of Q in G.
2. for every j, with 1 ≤ j ≤ n, there exist useful partial embeddings e j,1 , . . . , e j,kj of Q j in G ij,1 , . . . , G ij,kj for some i j,1 , . . . , i j,kj with 1 ≤ i j,1 < . . . < i j,kj ≤ m that satisfy the following properties:
(a) for every j, with 1 ≤ j ≤ n, and every triple (v, p, u) ∈ Q j there exists some ℓ such that e j,ℓ (v), e j,ℓ (u) are defined and (e j,ℓ (v), p, e j,ℓ (u)) ∈ G ij,ℓ .
(b) for every j 1 , j 2 , ℓ 1 , ℓ 2 , with 1 ≤ j 1 ≤ j 2 ≤ n, 1 ≤ ℓ 1 ≤ k j1 , 1 ≤ ℓ 2 ≤ k j2 , the partial embeddings e j1,ℓ1 and e j2,ℓ2 are compatible.
(c) the join of e j,ℓ for all j ∈ {1, . . . , n} and all ℓ ∈ {1, . . . , k j } is e.
Proof. For the one direction, assume that (1) holds, that is, e is an embedding of Q in G. From Lemma 2 we conclude that there are mutually compatible total embeddings e 1 , . . . , e n of Q 1 , . . . , Q n in G, such that the join of e 1 , . . . , e n is e. Now, from Lemma 1 we conclude that, for each Q j , there exist mutually compatible useful partial embeddings e j,1 , . . . , e j,kj of Q j in G ij,1 , . . . , G ij,kj such that property (a) holds and the join of e j,1 , . . . , e j,kj is e j . In order to show that property (b) holds, suppose, for the sake of contradiction, that e j1,ℓ1 and e j2,ℓ2 are not compatible, for some j 1 , j 2 , ℓ 1 , ℓ 2 . Then, there exists some v such that e j1,ℓ1 (v) ≠ e j2,ℓ2 (v). Since e j1 (v) = e j1,ℓ1 (v) and e j2 (v) = e j2,ℓ2 (v), the total embeddings e j1 and e j2 must also be incompatible, which is a contradiction. Therefore, property (b) holds. Finally, property (c) holds since for all j, the join of e j,1 , . . . , e j,kj is e j and the join of e 1 , . . . , e n is e.
For the other direction, assume that (2) holds. From Lemma 1, it follows that for every j, the join e j of the partial embeddings e j,1 , . . . , e j,kj is a total embedding of Q j in G. Moreover, the resulting embeddings e 1 , . . . , e n are mutually compatible, since we have assumed that e j1,ℓ1 and e j2,ℓ2 are compatible for all j 1 , j 2 , ℓ 1 , ℓ 2 . Now, from Lemma 2 it follows that the join e of e 1 , . . . , e n is a total embedding of Q in G.
□
Theorem 3 implies a generic query evaluation strategy, named Query Evaluation by Joining Partial Embeddings (QEJPE), which consists of four steps. The strategy assumes an arbitrary decomposition of the data graph G into a tuple D G of data graph segments G 1 , . . . , G m , with m ≥ 1, stored in a cluster of computer nodes.
Step 1: Decompose the query Q into a tuple D Q of subqueries Q 1 , . . . , Q n , with n ≥ 1.
Step 2: Compute all possible useful partial embeddings of each subquery Q j over each data graph segment G i of G.
Step 3: For each subquery Q j , collect all the partial embeddings of Q j obtained in Step 2 and join them to get total embeddings of Q j .
Step 4: To construct the total embeddings (i.e. answers) of Q, join the total embeddings obtained in Step 3 by using one embedding for each subquery, in all possible ways.
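Steps 3 and 4 both reduce to joining mutually compatible embeddings. A minimal Python sketch of these two operations (our own representation, not the paper's: an embedding is a dict from query nodes to data nodes):

```python
def compatible(e1, e2):
    """Two embeddings are compatible if they agree on every shared node."""
    return all(e2[v] == x for v, x in e1.items() if v in e2)

def join(embeddings):
    """Join a list of embeddings into one mapping, or return None
    if some pair is incompatible (in which case no join exists)."""
    result = {}
    for e in embeddings:
        if not compatible(result, e):
            return None
        result.update(e)
    return result

e1 = {"?A": "Article1", "?W": "Person1"}   # embedding of one subquery
e2 = {"?A": "Article1", "?J": "Journal2"}  # embedding of another subquery
assert join([e1, e2]) == {"?A": "Article1", "?W": "Person1", "?J": "Journal2"}
assert join([e1, {"?A": "Article2"}]) is None   # the two disagree on ?A
```

In a distributed setting, `compatible` is the test applied when partial results from different segments or subqueries meet, and `join` produces the combined binding.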
Notice that the above generic query evaluation strategy has several interesting properties: a) it is independent of the way the data graph is decomposed and the way the data graph segments obtained by this decomposition are stored in the nodes of the cluster, b) it is independent of the way the query graph is decomposed, and c) it is independent of the algorithm used to compute (partial) embeddings.
In Subsection 5.4, we present an implementation of this strategy on a cluster of commodity computers based on the Map-Reduce programming framework.
Query evaluation by decomposing queries into generalized stars
In this section we present another approach, called the eval-STARS algorithm, for evaluating queries over linked data. The algorithm is based on assumptions similar to those on which the QEJPE-algorithm, presented in Section 4.1, is based. The main difference is that we now require the subqueries obtained from the decomposition of a user query Q to be in the form called generalized star queries. Besides, the algorithm is based on the evaluation of total embeddings of the subqueries instead of partial embeddings.
Recall that, as we proved in Lemma 2, in order to compute the answers to a given query in a data graph G, we can decompose the query into a tuple of subqueries, compute the embeddings of the subqueries in G (which may be a more efficient task due to the simpler or special form of the subqueries) and then join these embeddings to obtain the desired result. However, given a target class of queries C, it may not always be possible to decompose an arbitrary query Q into subqueries that belong to C. For example, if C is the class of path queries of length 3, in other words if the subqueries must be of the form {(u, p, v), (v, p′, w), (w, p′′, z)}, then it can be proved that it is not possible to decompose every user query into a set of path queries of length 3. Nevertheless, if the target class C is the class of generalized star queries, then for every query Q there exists a (non-redundant) decomposition of Q into a tuple of generalized star subqueries. This trivially follows from the fact that every query that consists of a single triple is also a generalized star query (with either the subject or the object being the central node). We next present a more general result, relating the decomposition of a query graph Q into generalized star subqueries to the node covers of this query graph.
Definition 16. Let Q be a query graph. A set of nodes V ⊆ N (Q) − L is called a node cover of Q if for every triple (s, p, o) ∈ Q, it holds that either s ∈ V or o ∈ V .

Lemma 4. Let Q be a query graph and V = {v 1 , . . . , v k } be a node cover of Q. For each v i ∈ V define the generalized star query Q vi = {t ∈ Q | t = (s, p, v i )} ∪ {t ∈ Q | t = (v i , p, o) and o ∉ V }. Then D Q = (Q v1 , . . . , Q vk ) is a non-redundant decomposition of Q.
Proof. It is easy to see that (Q v1 , . . . , Q vk ) forms a decomposition of Q since:
(1) by construction Q vi ⊆ Q, for i = 1, . . . , k, and
(2) ∪ i Q vi = Q, since for every triple t = (s, p, o) ∈ Q, either s ∈ V or o ∈ V . If o ∈ V then, by construction, t ∈ Q o . Otherwise (i.e. if s ∈ V and o ∉ V ), t ∈ Q s .
We will now prove (by contradiction) that D Q = (Q v1 , . . . , Q vk ) is non-redundant. Assume that D Q is redundant. Then there exists a triple t = (s, p, o) ∈ Q such that t belongs to two different subqueries in D Q . It is easy to see that these subqueries must be Q s and Q o , with s, o ∈ V . However, since t ∈ Q s , then, by construction, o ∉ V , which contradicts the fact that s, o ∈ V . □
Therefore, if a set of nodes is a node cover of a query Q, then its elements are the central nodes of the generalized star subqueries in a non-redundant decomposition of Q. It turns out that the converse also holds.
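The construction of Lemma 4 is direct to implement; a sketch over our own triple-set representation, with a small hypothetical example query:

```python
def star_decomposition(Q, cover):
    """Lemma 4: for a node cover V of Q, each v in V gets the star
    Q_v = {triples with object v} ∪ {triples with subject v whose
    object is not in V}; the result is a non-redundant decomposition."""
    return {v: {t for t in Q if t[2] == v} |
               {t for t in Q if t[0] == v and t[2] not in cover}
            for v in cover}

Q = {("?A", "hasAuthor", "?W"), ("?A", "year", "2008"), ("?W", "name", "?N")}
D = star_decomposition(Q, {"?A", "?W"})        # {?A, ?W} is a node cover of Q
assert set().union(*D.values()) == Q           # the subqueries cover Q
assert sum(len(q) for q in D.values()) == len(Q)  # non-redundant: no triple repeated
```

The two assertions check exactly the two halves of the lemma: the stars jointly cover Q, and no triple is assigned to more than one star.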
Lemma 5. Let Q be a query graph, let D Q = (Q 1 , . . . , Q k ) be a decomposition of Q such that Q 1 , . . . , Q k are generalized star queries and let c 1 , . . . , c k be their central nodes. Then {c 1 , . . . , c k } is a node cover of Q.
Proof. It immediately follows from Definitions 5, 12 and 16. □

Example 6. In Fig. 5 we see a decomposition of the query Q into three generalized star queries Q 1 , Q 2 and Q 3 , which is obtained by the construction of Lemma 4, using the node cover {n 4 , n 2 , n 5 } of Q.

Following the discussion above, we can specialize the generic query evaluation strategy (QEJPE strategy) presented in Subsection 4.1, obtaining in this way a new algorithm called the eval-STARS algorithm. As in the case of the QEJPE strategy, we assume an arbitrary decomposition of the data graph G into a tuple D G of data graph segments G 1 , . . . , G m , with m ≥ 1, stored in a cluster of computer nodes.
The eval-STARS algorithm consists of the following steps:
Step 1: Decompose the query Q into a tuple of generalized star subqueries D Q = (Q 1 , . . . , Q n ), with n ≥ 1.
Step 2: Compute all possible embeddings of each triple in Q over each data graph segment G i of G.
Step 3: For each subquery Q j , collect the embeddings of all the triples in Q j and join compatible embeddings in all possible ways to compute the total embeddings of Q j in G.
Step 4: To construct the total embeddings (i.e. answers) of Q, join the total embeddings obtained in Step 3 by using one embedding for each subquery, in all possible ways.
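Step 2 reduces to matching a single query triple against a data segment. A possible sketch (our own representation and `?`-prefix variable convention):

```python
def is_variable(n):
    return isinstance(n, str) and n.startswith("?")

def triple_embeddings(t, segment):
    """All embeddings of a single query triple t in a data segment
    (Step 2 of eval-STARS): the predicate must match exactly, a constant
    must match itself, and a variable binds to the data node."""
    qs, qp, qo = t
    result = []
    for (s, p, o) in segment:
        if p != qp:
            continue
        e, ok = {}, True
        for qn, dn in ((qs, s), (qo, o)):
            if is_variable(qn):
                if e.get(qn, dn) != dn:   # e.g. (?X, p, ?X) requires s == o
                    ok = False
                e[qn] = dn
            elif qn != dn:
                ok = False
        if ok:
            result.append(e)
    return result

G1 = {("Article1", "hasAuthor", "Person1"), ("Article2", "hasAuthor", "Person1")}
assert len(triple_embeddings(("?A", "hasAuthor", "?W"), G1)) == 2
assert triple_embeddings(("Article1", "hasAuthor", "?W"), G1) == [{"?W": "Person1"}]
```

Step 3 then joins the compatible per-triple embeddings within each star, and Step 4 joins the resulting per-star embeddings across subqueries.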
Note that the eval-STARS algorithm applies two query decomposition processes. Initially, the given query is decomposed into generalized star queries (Step 1 of eval-STARS) and each star query is further decomposed (Step 2 of eval-STARS) into its triples. On the contrary, QEJPE applies a single decomposition (Step 1 of QEJPE). Following this stepwise approach of two decompositions, we in fact achieve the construction of the total embeddings in two phases, where each phase gathers the compatible partial embeddings and joins them together (i.e., it applies Lemma 2 twice). Although this extra decomposition could be thought of as a redundant step, in parallel computation (see Section 5) such an approach brings a significant performance improvement and facilitates the distribution of both the intermediate data and the computation.
Query evaluation by data decomposition using replication
In this section we propose a query evaluation approach, called QE-with-Redundancy, which uses a specific form of replication in the data graph decomposition to efficiently answer queries. More specifically:
(a) Data graphs are decomposed into data graph segments in which replication of the data triples is allowed. Data triples are replicated in such a way that all the answers to a special form of queries, namely subject-object star queries, can be obtained from a single data segment. The partition of the data graph is specified by an arbitrary partition of the data nodes, while data segments consist of the in- and out-edges of each block of nodes. Therefore, triples containing nodes that are in two different blocks occur in both segments of the data graph corresponding to these blocks. This redundancy, as we show, ensures that the subject-object star subqueries can be easily evaluated over each segment, independently. (b) Each query posed by the user is decomposed into a tuple of subject-object star subqueries.
In the evaluation strategy presented in this section, our aim is to construct the embeddings of a query Q in a data graph G by appropriately combining embeddings (i.e. joining compatible embeddings) of so-subqueries of Q over the proper subgraphs of G.
The following lemma refers to the compatibility of embeddings:
Lemma 6. Let D Q = (Q 1 , . . . , Q n ), with n ≥ 1, be a query decomposition of a query graph Q and D G = (G 1 , . . . , G m ), with m ≥ 1, be a data graph decomposition of a data graph G. Let e Qi and e Qj be two embeddings of the subqueries Q i and Q j respectively, with 1 ≤ i ≠ j ≤ n, on two (not necessarily different) graph segments G k and G l in D G . Let B(Q i , Q j ) be the border nodes of Q i , Q j . Then e Qi and e Qj are compatible if and only if for each node v ∈ B(Q i , Q j ), it holds that e Qi (v) = e Qj (v).
Proof. It immediately follows from Definitions 9 and 6.
□
In the following definition we present a decomposition scheme for a data graph G, called star-oriented decomposition (or simply s-decomposition).
Definition 17. Let G be a data graph and let N P = (N 1 , . . . , N m ) be a partition of the nodes in N (G) − L. The star-oriented decomposition (or s-decomposition for short) of G based on N P is the tuple of graphs D G = (G 1 , . . . , G m ), with m ≥ 1, where for each i, with 1 ≤ i ≤ m, G i = {t | t = (s, p, o) and t ∈ G and (s ∈ N i or o ∈ N i )}. The subgraphs G 1 , . . . , G m are called s-graph segments. A node in N (G i ) − L − N i is called a replicated node in G i . A replicated triple t = (s, p, o) in an s-graph segment G i is a data triple in G i such that either s or o is a replicated node.
In the following, the set of replicated nodes in an s-graph segment G i is denoted by R N (G i ). The set of replicated nodes of a data graph G is R N (G) = ∪ i R N (G i ). Similarly, the set of replicated triples in an s-graph segment G i is denoted by R t (G i ). Finally, the set of replicated triples of a data graph G is R t (G) = ∪ i R t (G i ).

Example 7. Fig. 6 shows an s-decomposition of the data graph G of Fig. 1(a), which is based on the following partition of the set of nodes in N (G) − L:
N 1 = {Article1, Article3, Journal2, Person4}
N 2 = {Person1, Person2, Person3}
N 3 = {Article2, Journal1}
The grey colored nodes in the segments G 1 , G 2 , and G 3 correspond to the nodes in N 1 , N 2 and N 3 , respectively, while the pink colored nodes are the replicated nodes. Finally, the dashed lines in the graph segments correspond to replicated data triples. Consider now the query graph Q appearing in the right part of Fig. 1. It is easy to see that we cannot obtain the solution described in Example 1 by finding an embedding of Q in a single graph segment of G appearing in Fig. 6 (as such an embedding does not exist).
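Definition 17 translates directly into code; a sketch over our triple-set representation (the example graph is a small hypothetical fragment, not the paper's Fig. 1):

```python
def s_decomposition(G, node_partition):
    """Definition 17: segment G_i contains every triple incident to a node
    of block N_i; triples whose subject and object fall in different
    blocks are replicated in both corresponding segments."""
    return [{(s, p, o) for (s, p, o) in G if s in Ni or o in Ni}
            for Ni in node_partition]

G = {("Article1", "hasAuthor", "Person1"),
     ("Article1", "publishedIn", "Journal2"),
     ("Person1", "name", "Smith")}                  # "Smith" is a literal
segments = s_decomposition(G, [{"Article1", "Journal2"}, {"Person1"}])
# The cross-block triple is replicated in both segments:
assert ("Article1", "hasAuthor", "Person1") in segments[0]
assert ("Article1", "hasAuthor", "Person1") in segments[1]
assert segments[0] | segments[1] == G               # the segments cover G
```

Note that the partition ranges only over N (G) − L, so the literal "Smith" belongs to no block yet its triple still lands in the segment of its subject.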
The following lemma presents some interesting properties of the star-oriented decomposition of a data graph.
Lemma 7. Let D G = (G 1 , . . . , G m ), with m ≥ 1, be an s-decomposition of a data graph G based on the partition N P = (N 1 , . . . , N m ) of the nodes in N (G) − L. Then the following hold:

1. (N (G i ) − L) ⊇ N i , for each i, with 1 ≤ i ≤ m.
2. ∪ i≤m N (G i ) = N (G).
3. ∪ i≤m G i = G.
4. Consider a node s ∈ R N (G i ). Then for each triple t = (s, p, o) ∈ G i it holds that o ∈ N i and t ∈ R t (G i ).
5. Consider a node o ∈ R N (G i ). Then for each triple t = (s, p, o) ∈ G i it holds that s ∈ N i and t ∈ R t (G i ).
6. For each node v ∈ R N (G i ), with 1 ≤ i ≤ m, there exists an index j, with 1 ≤ j ≤ m and i ≠ j, such that v ∈ N j .
7. For each triple t ∈ R t (G i ), with 1 ≤ i ≤ m, there exists an index j, with 1 ≤ j ≤ m and i ≠ j, such that t ∈ G j .
Proof.
Proof of 1: It immediately follows from Definition 17.
Proof of 2: Let v be a node in ∪ i≤m N (G i ). Then, there exists a triple (s, p, o) ∈ G i , for some i, such that v = s or v = o. Since G i ⊆ G, we have (s, p, o) ∈ G, which implies that s, o ∈ N (G). Therefore, v ∈ N (G). Now let v be a node in N (G). Then, there exists a triple (s, p, o) ∈ G, such that v = s or v = o. Since s is the subject of this triple, it must be s ∈ N (G) − L, which implies that s ∈ N i , for some i. Therefore, (s, p, o) ∈ G i and thus s, o ∈ N (G i ). Consequently, v ∈ ∪ i≤m N (G i ).
Proof of 3: From Definition 17 we conclude that ∪ i≤m G i ⊆ G. To prove the inverse, let t = (s, p, o) be a triple in G. Then s ∈ (N (G) − L). Thus s ∈ N i for some i with 1 ≤ i ≤ m. Hence, by construction of the s-segments, t ∈ G i and therefore t ∈ ∪ i≤m G i . Therefore, G ⊆ ∪ i≤m G i .
Proof of 4: It immediately follows from Definition 17.
Proof of 5: It immediately follows from Definition 17.
Proof of 6: As v ∈ R N (G i ), from Definition 17 we conclude that v ∉ N i . But as v is a node in N (G) − L, the node v must belong to another set N j , with j ≠ i, of the partition of the nodes in N (G) − L.
Proof of 7: Assume that t is of the form t = (s, p, o). As t ∈ R t (G i ), from Definition 17 we see that either s or o is a replicated node in R N (G i ). Assume that the replicated node is s. Then, from (6) we conclude that there is an index j ≠ i such that s ∈ N j . Then, from Definition 17, we conclude that t ∈ G j . In a similar way we reach the same conclusion by assuming that o is the replicated node.
□
The following theorem relates the embeddings of the so-queries obtained from graph segments to the embeddings of the query Q on the graph G.
Theorem 8. Let D Q = (Q 1 , . . . , Q n ), with n ≥ 1, be a query decomposition of a query graph Q, such that each Q i , with 1 ≤ i ≤ n, is an so-query. Let also D G = (G 1 , . . . , G m ), with m ≥ 1, be an s-decomposition of a data graph G. Then e is a total embedding of Q in G if and only if e is the join of e 1 , . . . , e n , where e 1 , . . . , e n are mutually compatible embeddings such that for each i, with 1 ≤ i ≤ n, e i is a total embedding of Q i in some segment G j , where 1 ≤ j ≤ m.
Proof. For the one direction, assume that e is a (total) embedding of Q in G. For every i, with 1 ≤ i ≤ n, define e i to be the restriction of e to Q i (that is, e i : N (Q i ) → N (G), with e i (v) = e(v) for every node v ∈ N (Q i )). Obviously e i is a total mapping. Furthermore, for every non-variable element v ∈ N (Q i ) it is e i (v) = e(v) = v, and for every triple (v 1 , p, v 2 ) ∈ Q i it is (e i (v 1 ), p, e i (v 2 )) = (e(v 1 ), p, e(v 2 )) ∈ G, which implies that e i is actually an embedding of Q i in G.

As Q i is an so-query, let C(Q i ) be the central node of Q i and e(C(Q i )) ∈ N (G) the image of C(Q i ) in G. From Definition 17 we conclude that e(C(Q i )) ∈ N j , for some N j with 1 ≤ j ≤ m, and that e i is an embedding of Q i in G j .

We next prove that the embeddings e i , with 1 ≤ i ≤ n, are mutually compatible and that their join is e. By construction, for every i, j with i ≠ j, if v ∈ N (Q i ) ∩ N (Q j ) then it is e i (v) = e j (v) = e(v), which implies that e i and e j are compatible. Therefore, the join e′ of e 1 , . . . , e n exists. It remains to show that e′ = e. Consider an arbitrary v ∈ N (Q). Then v appears in some triple t ∈ Q. Since D Q is a decomposition of Q, there exists some i such that t ∈ Q i . Thus, v ∈ N (Q i ), which implies that e i (v) is defined. From the definition of join, it follows that e′ (v) = e i (v), which implies e′ (v) = e(v).
For the other direction, assume that for each i, with 1 ≤ i ≤ n, there is an embedding e i of the subquery Q i in some graph segment G j . Assume also that e 1 , . . . , e n are mutually compatible embeddings and let e be their join. We will prove that e is an embedding of Q in G. It is easy to see that for every v ∈ N (Q) there exists some i such that e i (v) is defined, which implies that e(v) is also defined. Thus, e is a total mapping. We next show that e is an embedding of Q in G. Consider any non-variable element v ∈ N (Q). Since e is total, e(v) is defined. From the definition of join, there must be some i such that e(v) = e i (v), which implies e(v) = v (since e i is an embedding).
Finally, let (v 1 , p, v 2 ) be a triple in Q. Since D Q is a decomposition of Q, (v 1 , p, v 2 ) belongs to some Q i . Since e i is a total embedding of Q i in an s-segment of G, it is also a total embedding of Q i in G. Thus, (e i (v 1 ), p, e i (v 2 )) ∈ G, which implies (e(v 1 ), p, e(v 2 )) ∈ G. □
The above theorem suggests the following strategy for the evaluation of a query Q on a data graph G, called QE-with-Redundancy. The QE-with-Redundancy strategy assumes a star-oriented decomposition of the data graph G. To obtain such a decomposition we assume an arbitrary partition N P = (N 1 , . . . , N m ), with m ≥ 1, of the nodes in N (G) − L. Then we decompose the data graph G into a tuple of graph segments D G = (G 1 , . . . , G m ), such that D G is a star-oriented decomposition of G (as defined in Definition 17).
The QE-with-Redundancy strategy consists of the following steps:
Step 1: Decompose the query Q into a tuple of queries D Q = (Q 1 , . . . , Q n ), with n ≥ 1, such that each query in D Q is a subject-object star query.
Step 2: Compute all possible embeddings of each subquery in D Q on every segment in D G .
Step 3: Compute the embeddings of Q on G by joining compatible embeddings of the subqueries Q 1 , . . . , Q n .
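Putting Steps 2 and 3 together, the following compact Python sketch evaluates each so-subquery on every segment by brute force and then joins one embedding per subquery in all compatible ways (all names and the representation are our own; duplicates caused by triple replication are filtered out):

```python
from itertools import product

def is_variable(n):
    return isinstance(n, str) and n.startswith("?")

def embeddings(query, graph):
    """All total embeddings of a (small) query in a graph, by brute force."""
    results = [{}]
    for (qs, qp, qo) in query:
        new = []
        for e in results:
            for (s, p, o) in graph:
                if p != qp:
                    continue
                ext, ok = dict(e), True
                for qn, dn in ((qs, s), (qo, o)):
                    if is_variable(qn):
                        if ext.get(qn, dn) != dn:
                            ok = False
                        ext[qn] = dn
                    elif qn != dn:
                        ok = False
                if ok:
                    new.append(ext)
        results = new
    return results

def join_all(embs):
    """Join embeddings, or None if some pair is incompatible."""
    out = {}
    for e in embs:
        for k, v in e.items():
            if out.get(k, v) != v:
                return None
            out[k] = v
    return out

def qe_with_redundancy(subqueries, segments):
    """Steps 2-3: evaluate each so-subquery on every segment, then join
    one embedding per subquery, in all compatible ways."""
    per_sub = [[e for seg in segments for e in embeddings(q, seg)]
               for q in subqueries]
    answers = []
    for combo in product(*per_sub):
        j = join_all(combo)
        if j is not None and j not in answers:   # drop replication duplicates
            answers.append(j)
    return answers

segments = [{("Article1", "hasAuthor", "Person1"),
             ("Article1", "publishedIn", "Journal2")},
            {("Article1", "hasAuthor", "Person1"),   # replicated triple
             ("Person1", "name", "Smith")}]
answers = qe_with_redundancy([{("?A", "hasAuthor", "?W")},
                              {("?W", "name", "?N")}], segments)
assert answers == [{"?A": "Article1", "?W": "Person1", "?N": "Smith"}]
```

Note that the replicated triple makes the first subquery match on both segments, yet the deduplication step keeps a single answer, as Theorem 8 guarantees.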
It is important to note that the algorithm is independent of the choice of the specific partition N P of the nodes in N (G) − L used for the data graph decomposition, as well as of the specific query decomposition strategy (employed in Step 1).
Query decomposition algorithms
In this section, we present and analyze algorithms for decomposing queries into a set of so-subqueries. In the previous subsections, we assumed that the queries are decomposed into a set of subqueries, but we have not yet discussed any algorithm for finding such a decomposition. Although the algorithms presented in Sections 4.1, 4.2, and 4.3 can be used to evaluate a query on a single machine, they are designed to be efficiently applied in a distributed environment, as we will see in the next sections. In this context, we focus on decomposition algorithms that can boost parallelization. Furthermore, although the QEJPE algorithm (Section 4.1) is quite generic and can support every query decomposition, the algorithms presented in this section aim to take advantage of the special so-query decomposition, which can be used in the evaluation algorithms eval-STARS (Section 4.2) and QE-with-Redundancy (Section 4.3).
Intuitively, the decomposition approach followed can affect the efficiency of the overall query evaluation process, since an appropriate algorithm can significantly reduce the amount of data transferred through the network (i.e., the amount of intermediate results). For example, in the extreme scenario where we decompose the query in Figure 2 so that each edge defines a different subquery, it is easy to see that all 6 edges with predicate "hasAuthor" are mapped by the edge-subquery {(?A, hasAuthor, ?W )}; hence, 6 embeddings result from Step 2 of the QE-with-Redundancy algorithm and are passed to Step 3. If, however, we decompose the query Q in such a way that at least one constant (i.e., non-variable node) is included in each subquery, the number of embeddings found in Step 2 and used in Step 3 is significantly reduced; e.g., consider the query {(?A, hasAuthor, ?W ), (?A, year, 2008)}, or Q itself. Practically, the more constants each subquery has, the fewer embeddings are found, since the constants filter out useless embeddings (i.e., partial embeddings that surely cannot be used in Step 3 to construct a total embedding). Steps 3 and 4 of the eval-STARS algorithm operate similarly. The following proposition proves this statement.
Proposition 9. Let Q 1 and Q 2 be two generalized star queries, such that Q 2 ⊆ Q 1 and each triple in the set (Q 1 − Q 2 ) is either of the form (C, p, c) or of the form (c, p, C), where C = C(Q 1 ) = C(Q 2 ), p is a predicate, and c is not a variable. Then, for every data graph G the set of answers of Q 1 over G is a subset of the set of answers of Q 2 over G, and n e1 ≤ n e2 , where n ei is the number of embeddings of Q i over G, with i = 1, 2.
The proof of the previous proposition is straightforwardly given by expressing both queries as conjunctive queries and checking containment of the corresponding conjunctive queries [20,14].
To maximize the number of constants in each subquery, one could come up with the following simple decomposition algorithm (called naive algorithm). Let Q be a query.
Step 1: For each node n in N (Q), we construct the star query Q n such that Q n includes all the edges in Q of either the form (n, p, m) or the form (m, p, n), where m ∈ N (Q). Let S Q be the set including all the subqueries constructed by this process.
Step 2: We, then, remove from S Q the subqueries that are not so-queries.
It is easy to see that the remaining subqueries in S Q form a decomposition D Q of Q that can be used in both QE-with-Redundancy and eval-STARS algorithms.
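A sketch of the naive algorithm (our own representation; since the exact definition of an so-query is given earlier in the paper and not reproduced in this section, the Step 2 filter is taken here as a parameter):

```python
def naive_decomposition(Q, is_so_query):
    """Naive algorithm. Step 1: build one candidate star per query node
    from all of its adjacent edges. Step 2: keep only those candidates
    that are so-queries (test supplied by the caller)."""
    stars = {}
    for (s, p, o) in Q:
        for n in (s, o):
            stars.setdefault(n, set()).add((s, p, o))
    return [q for q in stars.values() if is_so_query(q)]

Q = {("?A", "hasAuthor", "?W"), ("?A", "year", "2008")}
D = naive_decomposition(Q, lambda q: True)   # permissive filter, for illustration
assert set().union(*D) == Q                  # the candidate stars cover Q
assert Q in D                                # the star centered at ?A is Q itself
```

As the sketch makes visible, each query edge contributes to two candidate stars (one per endpoint), which is the source of the large number of subqueries discussed above.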
Proposition 10. Given a query Q, the naive algorithm produces a decomposition D Q of Q such that each query in D Q is an so-query.
The proof of Proposition 10 is straightforward, since each edge of Q will be at least in the subquery centered at its subject. Furthermore, it is easy to see that the naive algorithm produces a query decomposition that maximizes the number of constants in each star subquery, since each subquery is constructed from a query node along with all of its adjacent edges. This algorithm, however, produces a quite large number of subqueries, since in the worst-case scenario one subquery is obtained for each query edge (the subject of each query edge may introduce a new subquery), and it does not limit the number of variables in each subquery, which might impact the overall performance of the query evaluation, as we will see in the next sections.
In the following sections, we study two additional parameters, the number of variables in each subquery and the number of subqueries in the decomposition, in order to provide an effective decomposition approach. In particular, in Section 4.4.1, we present an algorithm that decomposes a query in a way that the number of variables does not exceed a given threshold. Section 4.4.2 discusses multiple algorithms that aim to reduce the number of subqueries in the decomposition.
Subqueries with a limited number of variables
In this section, we present a decomposition algorithm that aims to keep the number of variables in each subquery as low as possible. As Proposition 9 shows, by decomposing into so-subqueries with a large number of constants we can achieve significant improvement in the overall performance of both the QE-with-Redundancy and eval-STARS algorithms. However, we might not have the same result if we choose star subqueries with a large number of variables. To see what the impact of the number of variables on the overall query evaluation process is, we start our analysis with an example.
Consider a query Q over a data graph G, decomposed into D = {Q 1 , Q 2 }. Notice that although the answers of both Q and Q 2 are empty, there are 9 total embeddings from Q 1 to G, giving 9 answers. Looking at the algorithms eval-STARS (Steps 4 and 5) and QE-with-Redundancy (Steps 3 and 4), it is worth further decomposing Q 1 into two subqueries Q 11 , Q 12 , one for each edge, instead of keeping Q 1 in D. In particular, if we replace Q 1 in D with Q 11 and Q 12 , the number of embeddings found in Step 4 of eval-STARS and passed to Step 5 (resp., found in Step 3 of QE-with-Redundancy and passed to Step 4) is 6, instead of 9 in the case where we use Q 1 .
In the previous example, we saw that the presence of multiple variables in a subquery might increase the number of embeddings of this subquery in both the QE-with-Redundancy and eval-STARS algorithms. Especially if we apply these algorithms in a distributed environment, as we will see in the next sections, the performance of each algorithm can be affected significantly, since the communication cost might increase tremendously due to the large number of embeddings transferred through the network.
In this context, we present the min-res decomposition algorithm, which finds a decomposition by keeping the number of variables in each subquery to at most 2. One could wonder how we came up with the threshold of 2. Typically, we want to keep the number of variables in each subquery as low as possible. If we set the threshold to one variable, however, we miss edges consisting of two variables; i.e., we cannot find a valid decomposition of every given query.
The min-res algorithm decomposes a query Q into a set of so-subqueries such that each subquery has at most two variables. It also allows replication of triples that contain at most one variable, and maximizes the number of "constraints" (triples that do not increase the number of variables in the query) in each subquery containing variables. As for the subqueries that do not contain any variable, the algorithm constructs maximal subqueries without redundant constraints. Intuitively, the algorithm performs as follows. Let Q be a query. Initially, for each edge t of two variables in Q, it constructs an so-query Q s having the subject s of t as central node. All the adjacent edges of s in Q such that s is their only variable are added to the subquery. In each construction step, the possibility of getting an so-query Q o , whose central node is the object of t, is also considered, and the query with the maximum number of edges between Q s and Q o is finally selected. It is easy to see that the subqueries constructed in this step include 2 variables. Next, the algorithm constructs the subqueries that have a single variable as central node. These subqueries have at least one edge whose subject is the central variable. Then, the remaining query triples give so-subqueries whose central node is not a variable. Each of the subqueries constructed in this step has at most a single variable, which is not the central node. Notice here that the min-res algorithm constructs subqueries with two variables only if those variables are used by an edge in Q. Note also that in each of the aforementioned steps, we build an so-subquery by initially selecting an edge from a set (e.g., the set of edges having two variables). The order in which the edges are selected might give different decompositions. Here, we consider an arbitrary ordering of the edges included in each set.
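To make the first step concrete, here is a minimal Python sketch of how a two-variable seed edge is grown into a star at its subject or its object, keeping whichever side collects more constraint edges. The representation (variables as strings starting with '?') and the helper names are our own assumptions, not the paper's notation:

```python
def is_var(n):
    # Assumption of this sketch: variables are strings starting with '?'.
    return n.startswith("?")

def grow_star(center, seed, query):
    """Star at `center`: the seed edge plus every adjacent edge whose
    other endpoint introduces no new variable (a 'constraint' edge)."""
    star = [seed]
    seed_vars = {n for n in (seed[0], seed[2]) if is_var(n)}
    for t in query:
        s, _, o = t
        if t is seed or center not in (s, o):
            continue
        other = o if s == center else s
        if not is_var(other) or other in seed_vars:
            star.append(t)
    return star

def minres_seed_star(seed, query):
    """For a seed edge with two variables, keep the larger of the
    subject-centered and object-centered stars, as min-res does."""
    subj_star = grow_star(seed[0], seed, query)
    obj_star = grow_star(seed[2], seed, query)
    return max(subj_star, obj_star, key=len)

q = [("?x", "p", "?y"), ("?x", "q", "c1"),
     ("?y", "r", "c2"), ("?y", "s", "c3")]
star = minres_seed_star(q[0], q)
assert len(star) == 3 and all("?y" in (t[0], t[2]) for t in star)
```

In the toy query, the object-centered star at ?y gathers two constraint edges against one for ?x, so it is selected; each resulting star has at most two variables.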
The following proposition shows that the min-res algorithm produces a decomposition of a query into a set of so-queries.
Proposition 11. Considering a query Q, the min-res algorithm produces a decomposition D Q of Q such that each query in D Q is an so-query.
Proof. Consider the sets T sub−obj , T sub , T obj , T c of edges as defined in the min-res algorithm. To prove that R is a decomposition of Q, we need to show that (1) each query Q i in R is a subquery of Q, and (2) ∪ Q i ∈R Q i = Q. The first condition is straightforward since each edge of Q i is an edge in T sub−obj ∪ T sub ∪ T obj ∪ T c , which equals Q. To show the second condition, we need to prove that each edge t of Q is included in at least one subquery in R. Since the algorithm uses all the edges in T sub−obj ∪ T sub ∪ T obj ∪ T c to construct the subqueries, we have that t is included in at least one subquery in R. Besides, it is easy to see that, by construction, all queries in D Q are so-queries. 2
Example 8. Consider the query Q depicted in Figure 5. Figure 7 illustrates a decomposition of Q produced by the min-res algorithm. In particular, we initially select the edge t2 and construct Q 1 . Similarly, the queries Q 2 , Q 3 and Q 4 are given by selecting the edges t3, t4 and t5, respectively, which also have two variables. Q 5 and Q 6 are then constructed by selecting the corresponding edges of Q. Note here that the edge t6 is replicated in multiple subqueries, as the min-res algorithm requires, since it can reduce the number of intermediate answers through the constant "2008".
Reducing the number of subqueries
Unlike the min-res algorithm, which minimizes the number of variables in each subquery, in this section we investigate algorithms that keep the number of so-subqueries as low as possible and select so-subqueries with high degree. As we will see in the following, there are settings where the number of queries in the decomposition affects the overall performance of the query evaluation, since a large number of subqueries might increase the amount of intermediate results.
The following example presents such a case. Example 9. Consider the query Q and the data graph G depicted in Figure 5 and Figure 2, respectively. It is easy to see that there is a single total embedding from Q to G. Consider the two decompositions D 1 Q and D 2 Q illustrated in Figure 7 and Figure 8, respectively. As we saw in Example 8, D 1 Q is produced by the min-res algorithm. Counting now the embeddings found for the subqueries of each decomposition over G, we have that there are 12 embeddings, in total, from queries in D 1 Q to G, while D 2 Q gives 10 embeddings. Hence, we can see that although each subquery in D 1 Q has a minimum number of variables, the total number of embeddings is high, due to the large number of subqueries. 2
To construct a decomposition with a minimum number of subqueries, we follow an approach based on the naive algorithm. In particular, considering a query Q, a simple algorithm, called the min-subquery decomposition algorithm, that computes a decomposition with a minimum number of so-subqueries is given as follows.
Step 1: We initially apply the naive algorithm and get a decomposition D N .
Step 2: Then, we construct the set S including all the subsets D of D N such that the queries in D cover all the edges of Q; i.e., ∪ Q i ∈D Q i = Q.
Step 3: Finally, we find the sets in S with the minimum number of subqueries and output one of them.
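Steps 2 and 3 amount to a minimum set cover over the naive stars. A brute-force Python sketch (our own representation; it enumerates subsets by increasing size, so the first covering subset found is minimal):

```python
from itertools import combinations

def min_subquery(stars, query_edges):
    """Exhaustively search subsets of the naive stars and return a
    smallest subset whose stars together cover every query edge
    (Steps 2-3 of the min-subquery algorithm)."""
    edges = set(query_edges)
    for k in range(1, len(stars) + 1):
        for subset in combinations(stars, k):
            if set().union(*(set(s) for s in subset)) >= edges:
                return list(subset)
    return stars  # unreachable if the stars form a decomposition

stars = [[("a", "p", "b")],
         [("a", "p", "b"), ("a", "q", "c")],
         [("c", "r", "d")]]
q = [("a", "p", "b"), ("a", "q", "c"), ("c", "r", "d")]
best = min_subquery(stars, q)
assert len(best) == 2   # the 1-star and 3-star subsets cannot cover q
```

The exhaustive search makes the exponential cost of this algorithm, noted below, directly visible: all subsets of D N may be inspected in the worst case.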
It is easy to see that the min-subquery algorithm returns a decomposition with a minimum number of so-subqueries. Note that there might be multiple decompositions that minimize the number of so-subqueries. Proposition 12. Considering a query Q, the min-subquery algorithm produces a decomposition D Q of Q such that each query in D Q is an so-query and D Q has the minimum number of so-subqueries among all the decompositions of Q consisting of so-subqueries.
The proof of the previous proposition follows from Proposition 10. The last step of the algorithm also ensures that the output has the minimum number of so-subqueries.
Although the min-subquery algorithm returns a minimal decomposition, it applies an exhaustive search over the search space, and the resulting decomposition has high redundancy (i.e., there are triples that are included in two subqueries). Especially if the replicated edges include variables, as we saw in the previous section, the amount of intermediate results could affect the overall evaluation time. To overcome these issues, we focus on an efficient approach that constructs a decomposition based on the nodes' degree. In particular, we focus on selecting first the subqueries containing as many triples as possible. In addition, each query triple is included in a unique so-subquery (i.e., redundancy is not allowed in the query decomposition). The decomposition algorithm following this approach is called max-degree. Intuitively, in each step it finds an so-query with maximum degree and removes its edges from the remaining so-stars. The algorithm stops once all the query edges are covered. Note that the max-degree algorithm does not aim to minimize the number of subqueries in the decomposition. An example describing such a case is illustrated in Figure 9. Notice that D 1 and D 2 are two decompositions of Q, where D 1 is the result of the max-degree algorithm and D 2 is the result of the min-subquery algorithm. On the other hand, there are queries where the results of both algorithms match. Such an example is illustrated in Figure 8.
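The greedy core of max-degree can be sketched as follows. This is a simplification under our own representation: each candidate so-star is a set of triples, and the re-check that a residual star still forms an so-query is omitted:

```python
def max_degree(stars):
    """Greedily pick the largest remaining star, output it, and remove
    its edges from every other candidate, so no edge is replicated."""
    stars = [set(s) for s in stars]
    result = []
    while any(stars):
        best = max(stars, key=len)
        result.append(sorted(best))
        stars = [s - best for s in stars if s is not best]
    return result

stars = [[("a", "p", "b"), ("a", "q", "c"), ("a", "r", "d")],
         [("c", "s", "e"), ("a", "q", "c")],
         [("e", "t", "f")]]
res = max_degree(stars)
assert [len(r) for r in res] == [3, 1, 1]
assert sum(len(r) for r in res) == 5   # each edge covered exactly once
```

The shared edge ("a", "q", "c") is claimed by the largest star and deleted from the second candidate, which is exactly the no-replication behavior described above.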
As we mentioned above, the max-degree decomposition algorithm does not apply any edge replication (no redundant edges are allowed). Lack of replication might improve the performance of finding the embeddings of a subquery, since fewer edges are checked in order to find an embedding. However, as we saw in Proposition 9, replicating edges that add more constraints to a subquery might decrease the total number of embeddings of the subqueries; hence, it may also decrease the communication cost in a distributed execution. Taking this into account, we present a modification of the max-degree algorithm, called max-degree-with-redundancy, which replicates edges with constants. The max-degree-with-redundancy algorithm is given by replacing the function ReconstructS Q in the max-degree algorithm with the function ReconstructS Q Redundancy, which is defined as follows.
ReconstructS Q Redundancy(S Q , R, T Covered )
//Find the next so-subquery and update both the result R and the set T Covered of covered edges.
begin
  //select a maximal so-query in S Q
  select a (n, Q′) ∈ S Q such that |Q′| is maximal among all elements in S Q ;
  //add triples that have already been covered and do not add any new variable to the subquery found
  Q″ = Q′ ∪ {t = (n1, p, n2) | t ∈ (Q − Q′), and either n1 = n and n2 ∉ V(Q) or n2 = n and n1 ∉ V(Q)};
  R = R ∪ {Q″};                       // ... add Q″ to the result and ...
  T Covered = T Covered ∪ Q″;         // ... add its triples to T Covered .
  S old Q = S Q − {(n, Q′)};
  return S old Q , R, T Covered ;
end.
Comparing the max-degree and max-degree-with-redundancy algorithms, we can easily see that the function ReconstructS Q Redundancy, used in the max-degree-with-redundancy algorithm to construct each subquery and add it to the resulting set R, builds each subquery Q″ from Q′ (where Q′ is the query that ReconstructS Q would construct in the max-degree algorithm) by adding all the query triples of Q that either start or end at the central node of Q′ and do not have a variable in the other node; i.e., the triples having constants in the non-central node are replicated and reused. On the contrary, the max-degree algorithm (function ReconstructS Q ) does not replicate any edge during the construction of the result.
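The redundancy step itself is small; a Python sketch (our own helper name and representation, with variables as '?'-prefixed strings) of adding back the covered constant-endpoint triples around a star's central node:

```python
def is_var(n):
    # Assumption of this sketch: variables are strings starting with '?'.
    return n.startswith("?")

def add_constant_constraints(star, center, query):
    """Replicate the query triples adjacent to the star's central node
    whose other endpoint is a constant: they add selectivity to the
    subquery but introduce no new variables."""
    extra = [t for t in query
             if t not in star
             and ((t[0] == center and not is_var(t[2]))
                  or (t[2] == center and not is_var(t[0])))]
    return star + extra

query = [("?x", "p", "?y"), ("?x", "year", "2008"), ("c", "q", "?x")]
star = [("?x", "p", "?y")]
enriched = add_constant_constraints(star, "?x", query)
assert len(enriched) == 3   # both constant-endpoint triples are replicated
```

By Proposition 9, the enriched star can only have fewer (or equally many) embeddings than the original one, which is the point of the replication.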
Proposition 13. Considering a query Q, the results of both the max-degree and max-degree-with-redundancy algorithms are decompositions of Q that include only so-queries.
Proof. By construction, all subqueries produced by both algorithms are so-queries. Besides, as all query triples are used, the algorithms produce decompositions of Q.
2
As we mentioned above, the main difference between the max-degree and max-degree-with-redundancy algorithms is that in the latter we replicate edges that have constants in the nodes adjacent to the central node. As Proposition 14 shows, the decomposition produced by the max-degree-with-redundancy algorithm might reduce the number of embeddings exchanged between the last two steps of the evaluation algorithms eval-STARS and QE-with-Redundancy, compared to the corresponding decomposition produced by the max-degree algorithm.
Proposition 14. Let Q be a query and D M be a decomposition of Q produced by the max-degree algorithm. Then, there is a decomposition D R of Q produced by the max-degree-with-redundancy algorithm such that the following hold:

• there is a one-to-one mapping µ from D M to D R such that P M ⊆ µ(P M ) for every P M ∈ D M ; and

• for each data graph G and every query P in D M , the number of embeddings of µ(P ) over G is less than or equal to the number of embeddings of P over G.
Proof. Let Q be a query and D M be a decomposition of Q produced by the max-degree algorithm. We now need to prove that the max-degree-with-redundancy algorithm can produce a decomposition of Q which satisfies the aforementioned properties. Each subquery Q′ in D M is constructed by the function ReconstructS Q ; specifically, once it is constructed, it is inserted into the resulting set R (which eventually equals D M ). Let us consider that, instead of returning the subquery Q′ into the result R, we return the subquery Q″ = Q′ ∪ {t = (n 1 , p, n 2 ) | t ∈ (Q − Q′), and either n 1 = n and n 2 ∉ V(Q) or n 2 = n and n 1 ∉ V(Q)}, where n is the central node of Q′. Since R simply stores the resulting subqueries and is not used in any other step of the algorithm, such a modification does not affect the construction of the subqueries. It is easy to verify that the modified function is given by the function ReconstructS Q Redundancy, and the modified algorithm is the max-degree-with-redundancy algorithm. Let also D R be the result of the modified algorithm (i.e., the final set R returned by the algorithm); hence, D R is the result of the max-degree-with-redundancy algorithm.

According to the previous modification, for each query Q′ in D M there is a query Q″ in D R such that Q″ = Q′ ∪ {t = (n 1 , p, n 2 ) | t ∈ (Q − Q′), and either n 1 = n and n 2 ∉ V(Q) or n 2 = n and n 1 ∉ V(Q)}, where n is the central node of Q′. Hence, there is a one-to-one mapping µ from D M to D R such that µ(Q′) = Q″ and Q′ ⊆ Q″, which proves the first condition of the proposition. Furthermore, Proposition 9 and the construction of Q″ from Q′ imply that, for each data graph G, the number of embeddings of Q″ over G is less than or equal to the number of embeddings of Q′ over G, which means that the second property is also satisfied. Hence, the decomposition D R satisfies both properties of Proposition 14.
2
As we have seen, both the max-degree and max-degree-with-redundancy algorithms iterate over the maximal so-subqueries found by the FindInitMaxSoQueries function, from the queries of maximum degree to the queries of minimum degree, and remove the triples covered in the previous iterations. In each iteration, if the query resulting from removing the covered triples is not an so-query, then both algorithms ignore this query and continue to the next iteration. Let Q R be the set of the remaining triples in such cases. Note that Q R will be covered in the next iterations, but the number of iterations might increase due to the triples that do not form an so-query in some iterations. To reduce the number of iterations, we can construct so-queries from the triples in Q R by adding to Q R a triple that makes it an so-query. Such a triple t′ is found in the set of covered triples. In addition, to avoid replicating triples that add variables to a query, we remove the triple t′ from the so-query that was constructed in the previous iterations. Such an approach might reshape the so-queries constructed in the previous iterations. A decomposition algorithm following this approach, called max-degree-with-reshaping, differs from the previous algorithms in the following step:

//i.e., remove the covered triples adjacent to n that add a variable to Q′
T = {t | t = (n, p, o) ∈ Q, t ∈ T Covered and o ∈ V(Q′)};
If S − T is an so-query then S = S − T
else begin
  S = S − T ∪ {t′}, where t′ is a triple in T ;
  replace F by F − {t′} in R, where F is the so-query in R that contains t′;
end

Proposition 15. Considering a query Q, the results of the max-degree-with-reshaping algorithm are decompositions of Q that include only so-queries.

Proof. By construction, all subqueries produced by the algorithm are so-queries. Besides, as all query triples are used, the algorithm produces decompositions of Q. 2
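The reshaping step hinges on testing whether a set of triples still forms an so-query. A minimal Python sketch of such a test (our own helper, not taken from the algorithm listing): a triple set is an so-query iff some node occurs in every triple, as subject or object.

```python
def center_of(star):
    """Return a node occurring in every triple of `star` (as subject or
    object), i.e. a valid central node, or None if the triples no longer
    form an so-query."""
    common = None
    for s, _, o in star:
        ends = {s, o}
        common = ends if common is None else common & ends
    return next(iter(common)) if common else None

assert center_of([("a", "p", "b"), ("a", "q", "c")]) == "a"
assert center_of([("a", "p", "b"), ("c", "q", "d")]) is None
```

With this test, the reshaping branch above simply checks `center_of(S - T) is not None` before deciding whether to borrow back a covered triple t′.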
Distributed query evaluation algorithms using MapReduce
In this section, we present a set of distributed algorithms implementing the query evaluation approaches presented in Section 4. These algorithms take advantage of the computational power provided by the MapReduce framework.
The MapReduce framework
MapReduce is a programming model for processing large datasets in a distributed manner. It is based on the definition of two functions, the Map and the Reduce function. The storage layer for the MapReduce framework is a Distributed File System (DFS), such as the Hadoop Distributed File System (HDFS), and is characterized by the block/chunk size (the chunk size, which is larger than the block size in conventional file systems, is typically 16-128MB in most DFSs) and the replication of chunks in relatively independent locations to ensure availability. Creating a MapReduce job is straightforward. Briefly, the user defines the two functions, which run on each cluster node in isolation. The map function is applied on one or more files in the DFS and produces [key, value] pairs. This process is called a Map process/task. The nodes that run the Map processes are called Mappers, and may run multiple tasks over different input files. The master controller is responsible for routing the pairs to the Reducers (i.e., the nodes that apply the reduce function on the pairs) so that all pairs with the same key initialize a single reduce process, called a reduce task. The reduce tasks apply the reduce function on the input pairs and produce [key, value] pairs, which are stored in the DFS. This procedure describes one MapReduce step. Furthermore, the output of a reducer can be set as the input of a map function, which gives the user the flexibility to create pipelines of multiple steps.
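A MapReduce step as described above can be simulated in a few lines of Python. This is an in-memory sketch of the map-shuffle-reduce cycle, not of any particular framework's API:

```python
from collections import defaultdict

def mapreduce(records, map_fn, reduce_fn):
    """One MapReduce step: apply map_fn to every record, group the
    emitted (key, value) pairs by key (the shuffle), then apply
    reduce_fn to each group."""
    groups = defaultdict(list)
    for rec in records:
        for k, v in map_fn(rec):
            groups[k].append(v)           # shuffle: same key, same reducer
    return {k: reduce_fn(k, vs) for k, vs in groups.items()}

# Word count, the canonical MapReduce example.
docs = ["a b a", "b c"]
out = mapreduce(docs,
                lambda d: [(w, 1) for w in d.split()],
                lambda k, vs: sum(vs))
assert out == {"a": 2, "b": 2, "c": 1}
```

Chaining two such calls, with the output dictionary of one step fed as the records of the next, models the multi-step pipelines mentioned above.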
Overall methodology
Before describing the MapReduce query evaluation algorithms, we focus on presenting the main patterns used to construct them. In particular, the algorithms presented in the upcoming sections are based on the following patterns:
1. Data graph decomposition: The data graph G is decomposed into a set of data segments according to a given decomposition approach. The data graph segments are stored in the nodes of a cluster of commodity computers.

2. Storage of the data graph segments: A generic methodology for storing the data graph segments is used. This approach stores the RDF data in simple text files in N-triple format. Each file also includes the set of border nodes of the segment represented by the triples in the file. Although the segments are stored in simple text files, relational, NoSQL and graph databases could be used instead for storing the corresponding segments. Especially, the use of multiple relational databases to store the data segments can facilitate the implementation of certain algorithms, but it has a significant impact on scalability and fault tolerance.

3. Query graph decomposition: The query graph Q is decomposed into a tuple of subqueries (Q 1 , . . . , Q n ), with n ≥ 1, according to the principles specified in the definition of the corresponding algorithm.

4. Implementing the query evaluation algorithm: The proposed query evaluation algorithms are implemented in the MapReduce programming framework. In general, the implementation of each algorithm consists of a preprocessing phase followed by two MapReduce phases (see next section).
Preprocessing Phase
As mentioned earlier, all the query evaluation algorithms presented in the subsequent sections include a preprocessing phase, where the setting is prepared. In particular, the preprocessing phase accepts a query Q posed by the user and decomposes it into a tuple of subqueries (Q 1 , . . . , Q n ), with n ≥ 1, following the decomposition principles determined by the specific query evaluation algorithm. These subqueries are broadcast or distributed to the mappers of the first MapReduce phase of the query evaluation algorithm.
The preprocessing phase also constructs some auxiliary structures and emits them to the mappers/reducers that implement the algorithm. To define these structures, we assume an enumeration n 1 , n 2 , . . . , n |N (Q)| of the nodes of the query Q, so that n 1 , n 2 , . . . , n |B(Q)| are the border nodes of Q and n |B(Q)|+1 , . . . , n |N (Q)| are the non-border nodes of Q. We denote by I the function that gives the index of a node in N (Q) with respect to the above enumeration (that is, for every x ∈ N (Q) it holds that x = n I(x) ). We also denote by I nb the corresponding function for the non-border nodes of Q.

Consider now that a query prototype is assigned to each (sub)query Q i . Each item in the tuples of the prototype has either the value '+' to denote the presence of the corresponding border node/non-border node/triple in Q i , or the value '-' to denote the absence of that node or triple.
We also construct a set called Missing Border Nodes (MBN) as follows: M BN = {(b i , Q j ) | b i ∈ B(Q) and b i ∉ N (Q j )}. An element (b i , Q j ) in MBN denotes that the border node b i of Q does not appear among the nodes of the subquery Q j of Q.
Based on the idea of the query prototype, we can represent a partial or total embedding e of a (sub)query in a similar way, i.e., as a triple of tuples of the form (BorderN odeV alues, N onBorderN odeV alues, T riplesM atched). More specifically, BorderN odeV alues stores the images of the border nodes of the query through the (partial) embedding, while N onBorderN odeV alues stores the images of the non-border nodes of the query. The star symbol ('*') is placed in the corresponding node place if no image of that node is defined in e. Finally, T riplesM atched keeps track of the triples of the query that have images in the data graph through the (partial) embedding e (by putting a '+' sign or a '-' sign in the corresponding place of T riplesM atched).
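For illustration, the prototype and embedding representations can be encoded as plain tuples in Python. The concrete values below mirror the total embedding (1) of Example 11; the totality check is our own reading of the '+'/'*' conventions:

```python
# Prototype of a subquery over a query with border nodes (n1, n2, n3),
# non-border nodes (n4, n5) and triples (t1, ..., t5).
prototype_Q1 = (("+", "+", "-"),              # border nodes present in Q1
                ("-", "+"),                   # non-border nodes present in Q1
                ("+", "-", "-", "-", "+"))    # triples present in Q1

# An embedding in the same layout; '*' marks a node with no image.
embedding = (("Person4", "Article1", "*"),
             ("*", "Title1"),
             ("+", "-", "-", "-", "+"))

def is_total(prototype, emb):
    """Total iff every node/triple marked '+' in the prototype has an
    image ('*'-free node value, '+' in TriplesMatched) in the embedding."""
    nodes_ok = all(p != "+" or v != "*"
                   for pt, vt in zip(prototype[:2], emb[:2])
                   for p, v in zip(pt, vt))
    triples_ok = all(p != "+" or m == "+"
                     for p, m in zip(prototype[2], emb[2]))
    return nodes_ok and triples_ok

assert is_total(prototype_Q1, embedding)
```

An embedding missing a value for a '+'-marked node, or a match for a '+'-marked triple, fails the check and is only partial.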
QEJPE-algorithm
In this section we present an implementation of the query evaluation algorithm (QEJPE-algorithm) presented in Subsection 4.1. The implementation is based on the MapReduce programming framework. Besides the general assumptions on which all algorithms are based, the present algorithm relies on the following specific assumptions:
1. In this algorithm, both the decomposition of the data graph G and the decomposition of the query graph Q may be redundant or non-redundant. The query graph Q is decomposed into a tuple of arbitrary subqueries (Q 1 , . . . , Q n ), with n ≥ 1. 2. The implementation of the algorithm consists of a preprocessing phase followed by two map-reduce phases:
(a) In the first map-reduce phase the subqueries are applied to each graph segment, in isolation, and intermediate results are computed. More specifically, the mappers of phase 1 compute useful (partial or total) embeddings of the subqueries, by applying each subquery to each specific graph segment. Then the reducers of phase 1 combine (i.e., join) the partial embeddings to compute the total embeddings of each subquery. Notice that the total embeddings of the subqueries are, in general, partial embeddings of the query Q to the graph G. (b) In the second map-reduce phase, the embeddings of the subqueries are combined appropriately to produce the embeddings of the query Q on the graph G. More specifically, the mappers of phase 2 fill the missing border nodes in each subquery embedding using the values obtained from the embeddings of the other subqueries. Then, the reducers of phase 2 construct the embeddings of the query Q by combining compatible embeddings, one for each subquery.
The preprocessing phase
In the preprocessing phase the user's query Q is decomposed into a tuple of subqueries (Q 1 , . . . , Q n ), with n ≥ 1, and the auxiliary structures presented in Subsection 5.2 are constructed. The preprocessing phase emits these structures with key the pair (subqueryID, SegmentID) to the mappers of Phase 1, except for the M BN list, which is emitted directly to the reducers of Phase 1.
Example 10. Consider the query Q appearing in Fig. 4 and assume that the subqueries Q 1 , Q 2 , and Q 3 are constructed in the preprocessing phase. Assume also that the numbering function has numbered the nodes and the edges of Q as shown in Fig. 4. Then, it is easy to see that B(Q) = {n1, n2, n3} while N (Q) − B(Q) = {n4, n5}. Finally, the list of triples is (t1, t2, t3, t4, t5). It is thus easy to see that the query prototypes for the subqueries Q 1 , Q 2 and Q 3 are:
Phase 1 of the QEJPE-algorithm
The mapper of phase 1 gets as input a subquery Q i and a graph segment G j and evaluates Q i on G j , obtaining in this way all useful (total and partial) embeddings. These embeddings are emitted to the reducers of Phase 1 with key the subquery ID of Q i . The procedure for the Mapper of Phase 1 is given below.

Example 11. (Continued from Example 10). Some embeddings of the subqueries Q 1 , Q 2 and Q 3 (see Fig. 4) in the segments G 1 , G 2 and G 3 (see Fig. 3), computed by the corresponding mappers and emitted with key the subquery ID, appear below. More specifically, a total embedding evaluated and emitted by the mapper working on (Q 1 , G 1 ) is:
(1) key = Q1, value = (<Person4,Article1,*>, <*,"Title1">, <+, , , ,+>)
The Mapper working on (Q 2 , G 1 ) computes and emits the partial embedding:
(2) key = Q2, value = (<*,Article1,Person4>, <*,*>, < , ,+, , >)

Among the embeddings obtained and emitted by the Mapper working on (Q 1 , G 2 ) is the following (partial) embedding:

The Mapper working on (Q 3 , G 2 ) computes and emits the following total embeddings:

The Mapper working on (Q 2 , G 3 ) emits the following partial embeddings:
2
It is important to note that the procedure for mapper1 does not determine a specific method for the computation of the useful (partial) embeddings of the subqueries. This means that every algorithm that can compute all partial embeddings can be used in a specific implementation of mapper1. Moreover, mapper1 is independent of the way the data graph is stored.
A Reducer of Phase 1 receives all useful (partial) embeddings of a subquery Q i , whose ID is the key of the reducer, in all graph segments G 1 , . . . , G m of G. A reducer: (a) computes all total embeddings of Q i in G and emits them to the mappers of Phase 2 with key the subquery ID, and (b) finds all border node values from the total embeddings of Q i that are missing from the total embeddings of other subqueries and emits them with the appropriate subquery IDs as keys. (An embedding is total if it has images for all (border and non-border) nodes and triples of the subquery, i.e., for all nodes and triples of the subquery the '+' sign appears in the corresponding place of the query prototype. In the presentation of the procedures, the following abbreviations are used: bnv stands for BorderNodeValues, nbnv for NonBorderNodeValues, and tm for TriplesMatched.) The reducer is defined as follows:

Example 12. (Continued from Example 11). Among the total embeddings of Q 1 constructed and emitted by the reducer with key Q1 are:
(1) => key = Q1, value = (<Person4,Article1,*>, <*,"Title1">)

The Reducer for key Q2 constructs and emits the total embeddings for Q 2 :

Some key-value pairs produced and emitted (as above) by this mapper are:

key = (Person4,Article1,Person1), value = (Q3, <*,*>)
key = (Person2,Article2,Person3), value = (Q3, <*,*>)
2
In each reducer of phase 2, the embeddings (one for each subquery in (Q 1 , . . . , Q n )) are joined to construct the final answers of Q. The reducer of phase 2 is given below:

Combining (i.e., joining) these embeddings, the reducer returns the answer:

Some improvements to the proposed algorithm are as follows:
1. Note that, in order to obtain all total embeddings of a subquery Q i in reducer1, it suffices to combine partial embeddings obtained from different data graph segments. However, such provenance information is not emitted by mapper1 in its present form. It is, however, easy to adapt the QEJPE-algorithm so that mapper1 emits this information to reducer1, and reducer1 takes it into account to construct the total embeddings of the subqueries more efficiently.

2. Notice that, as we can see in Example 12, several instances of reducer1 may emit the same values, either embeddings or missing node values, to mapper2. This is due to the fact that the same embedding or the same candidate missing node value may be found and emitted by several reducers. Thus, a specific instance of mapper2 may receive the same value multiple times, which may lead to the construction of the same embedding several times. A possible optimization is to eliminate redundant values from the list E of embeddings and the list V of candidate values for missing nodes that an instance of mapper2 receives, before the computation of the embeddings that will be emitted to reducer2.
eval-STARS algorithm
In this section we present a MapReduce-based implementation of the eval-STARS query evaluation algorithm presented in Subsection 4.2. The algorithm is based on assumptions similar to those on which the QEJPE-algorithm is based. The main difference is that in the eval-STARS algorithm, a query Q posed by the user is decomposed into a tuple of queries (Q 1 , . . . , Q n ), with n ≥ 1, of a specific form, called generalized star queries. The query decomposition may be redundant or non-redundant.
The implementation of the algorithm consists of a preprocessing phase followed by two map-reduce phases: The first map-reduce phase takes advantage of the generalized star form of the sub-queries and focuses on evaluating the generalized star subqueries over the input segments. The results of the subqueries are emitted to the second phase, which combines them properly in order to produce the answers of the initial query.
In the preprocessing phase the user's query Q is decomposed into a tuple of generalized star subqueries (Q 1 , . . . , Q n ), with n ≥ 1, and the auxiliary structures presented in Subsection 5.2 are constructed. The preprocessing phase emits these structures with key the pair (subqueryID, SegmentID) to the mappers of Phase 1.
Example 15. Consider the query graph appearing in the left part of Fig. 5, which is decomposed into the three generalized star subqueries Q 1 , Q 2 , and Q 3 appearing in the right part of the same figure. The list M BN = [(n 2 , Q 1 ), (n 3 , Q 3 )] is also constructed in the preprocessing phase. 2
Phase 1 of the algorithm
The first phase of the algorithm computes the embeddings of the generalized star subqueries Q 1 , . . . , Q n in G.
In Phase 1 each mapper gets as input a generalized star subquery Q i , a graph segment G j and the M BN list. Let c i = C(Q i ) be the central node of Q i (recall that this node appears in every triple of Q i ). The operation of the mapper is divided into two parts.
Part 1: The mapper computes the embeddings of each triple of Q i in G j that map the central node c i to a border node or to a literal, and emits the results to appropriate reducers. More specifically, let t = (s, p, o) be a triple that belongs to subquery Q i and let e be an embedding of t into G j such that e(c i ) ∈ B(G j ). If the central node of Q i is s then the mapper emits a pair (key, value), where key = (Q i , e(s)) and value = (o, e(o)). Otherwise (i.e., if the central node of Q i is o) then key = (Q i , e(o)) and value = (s, e(s)).
Notice that embeddings of triples in Q i that map c i to different nodes of G j are incompatible and cannot be joined to obtain an embedding of Q i . Since the value of c i is included in the key, incompatible embeddings of triples are emitted to different reducers, while compatible embeddings are emitted to the same reducer.
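A sketch of this Part 1 emission logic in Python, with our own data layout (single-triple embeddings as dicts from query nodes to graph nodes); the key carries the image of the central node, so compatible triple embeddings meet at the same reducer while incompatible ones are kept apart:

```python
def part1_emit(qi_id, center, triples, embeddings, border_nodes):
    """For each single-triple embedding mapping the central node to a
    border node, emit ((Qi, image of center), (other node, its image))."""
    out = []
    for (s, _, o), e in zip(triples, embeddings):
        if e[center] not in border_nodes:
            continue                      # handled locally in Part 2
        other = o if s == center else s
        out.append(((qi_id, e[center]), (other, e[other])))
    return out

# Toy data shaped after the emissions of Example 16 below.
triples = [("n1", "creator", "c"), ("c", "title", "n6")]
embs = [{"n1": "Person4", "c": "Article1"},
        {"c": "Article1", "n6": "Title1"}]
pairs = part1_emit("Q1", "c", triples, embs, {"Article1"})
assert pairs == [(("Q1", "Article1"), ("n1", "Person4")),
                 (("Q1", "Article1"), ("n6", "Title1"))]
```

Both emissions share the key ("Q1", "Article1"), so the reducer responsible for that key can join them into an embedding of the whole star.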
Part 2: This part of mapper1 computes all the embeddings of Q_i into G_j which map the central node of Q_i to a non-border and non-literal node of G_j. Notice that if for some embedding e of Q_i in G the value of c_i is a non-border and non-literal node of G_j (i.e., e(c_i) ∈ (N(G_j) − (B(G_j) ∪ L))), then e(v) ∈ G_j for every node v ∈ N(Q_i). This means that e is an embedding of Q_i into G_j and it can be computed locally, i.e., no other data graph segments are needed to compute e.
The computation of the embeddings of Q_i into G_j which map c_i to a non-border node of G_j can be achieved either by adding an appropriate conjunct to Q_i, or by computing all the embeddings of Q_i in G_j and then removing those that assign border nodes to c_i. The embeddings computed in the second part of the mapper are directly emitted to the mappers of Phase 2 (rather than to the reducers of Phase 1). Similarly, the values of missing border nodes are emitted to the mappers of Phase 2.
Example 16. (Continued from Example 15). In this example, we assume that the query graph Q and its generalized star subqueries are those appearing in Fig. 5, while the data graph G and the graph segments obtained by decomposing G are those appearing in Fig. 3. Below, we see the application of mapper1 on the pairs of subqueries and graph segments. Applying mapper1 on (Q_1, G_1) results in the emission (see Part 1 of the procedure for mapper1) of the following (key, value) pairs to reducer1:
key = (Q1, Article1), value = (n1, Person4) (embedding of t1)
key = (Q1, Article1), value = (n6, "Title1") (embedding of t7)
No key value pairs are emitted to Phase 2 (see Part 2 of the procedure for mapper1).
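The Part 2 filter (keep only the embeddings whose central value is neither a border node nor a literal, so they are complete locally) can be sketched as follows; embeddings are assumed to be dicts from query nodes to data values, and the function name is hypothetical.

```python
def local_total_embeddings(embeddings, center, borders, literals):
    """Part 2 of mapper1: keep only the embeddings of the star subquery whose
    central value is neither a border node nor a literal; such embeddings lie
    entirely inside the current segment and go straight to Phase 2."""
    return [e for e in embeddings
            if e[center] not in borders and e[center] not in literals]
```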
Concerning the reducer of Phase 1: for each key (Q_i, v) the corresponding reducer computes all the embeddings of Q_i that map the central node c_i of Q_i to v. The input to this reducer is a list of pairs of the form (n_k, u), where n_k is a node of Q_i different from c_i and u is a possible value for n_k in an embedding of Q_i in G. Suppose that n_k1, n_k2, ..., n_km are the non-central nodes in Q_i. From the above we see that the list for the non-central node n_7 is empty. Thus, these values cannot be used to construct a valid embedding for the query Q_2. Therefore, nothing is emitted to the next phase from this reducer. The reducer with key (Q_2, Article2) receives the following list of values: The reducer with key (Q_3, Person4) receives the following list of values:
[(n2, Article1), (n2, Article3)].
Based on this values it constructs the following lists (corresponding to the values of the non-central nodes n 1 and n 2 of the subquery Q 3 ):
L[1] = [ ] L[2] = [Article1, Article3]
From the above we see that the list for the non-central node n_1 is empty. Thus, these values cannot be used to construct a valid embedding for the query Q_3. Therefore, nothing is emitted to the next phase from this reducer. Finally, the reducer of Phase 2 that receives an embedding for each of the three subqueries joins them and constructs the unique embedding of Q in G:
(<Person2, Article2, Journal1>, <Article1, Person3, Title1, "2008">)
The remaining 11 reducers do not return any answer (they do not receive embeddings for at least one subquery).
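The core of the Phase 1 reducer described above — collect the candidate values per non-central node, give up if some list is empty, otherwise take the cartesian product of the lists — can be sketched in Python as follows (illustrative names; embeddings are represented as node-to-value dicts rather than the (bnv, nbnv) tuples of Subsection 5.3):

```python
import itertools

def reducer1_join(center_node, center_value, values, noncentral_nodes):
    """Core of the Phase-1 reducer: collect the candidate values received for
    each non-central node and, if every list is non-empty, build one embedding
    per element of their cartesian product."""
    candidates = {n: [] for n in noncentral_nodes}
    for node, v in values:
        if node in candidates:
            candidates[node].append(v)
    if any(not c for c in candidates.values()):
        return []  # some non-central node got no value: no embedding exists
    embeddings = []
    for combo in itertools.product(*(candidates[n] for n in noncentral_nodes)):
        e = dict(zip(noncentral_nodes, combo))
        e[center_node] = center_value
        embeddings.append(e)
    return embeddings
```

As in the example above, an empty candidate list for any non-central node (e.g., n_1 for the key (Q_3, Person4)) makes the reducer emit nothing.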
Discussion
Due to the specific form into which the user query Q is decomposed, namely generalized star queries, the eval-STARS algorithm can compute the embeddings of the subqueries of Q more efficiently than the QEJPE-algorithm computes its partial embeddings.
Notice also that mapper1 computes and emits directly to mapper2 total embeddings of the subqueries that map their central nodes to non-border nodes of the data graph segment. This is also an advantage of the eval-STARS algorithm compared with the QEJPE-algorithm.
QE-with-Redundancy algorithm
In this section we present an implementation of the QE-with-Redundancy query evaluation algorithm presented in Subsection 4.3, based on the MapReduce programming framework. Recall that, for the implementation of the algorithm, we assume a star-oriented decomposition (s-decomposition) of the data graph G and a (possibly redundant) decomposition of the query Q posed by the user into a set of subject-object star subqueries {Q_1, ..., Q_n}, with n ≥ 1. The implementation of the algorithm consists of a preprocessing phase followed by one and a half Map-Reduce phases. The first phase of our algorithm takes advantage of the star form of the subqueries and focuses on evaluating the star subqueries over the input segments. The results of the subqueries are emitted to the second phase, which combines them properly in order to produce the answers of the initial query.
The preprocessing phase
In the preprocessing phase the user's query Q is decomposed into a set of so-queries {Q_1, ..., Q_n}, with n ≥ 1, and the auxiliary structures presented in Subsection 5.2 are constructed. The preprocessing phase emits the above to the mappers of Phase 1 with key the pair (subqueryID, SegmentID).
Example 20. To present the QE-with-Redundancy algorithm, we will use again the query Q and its decomposition into three so-queries presented in Fig. 5. The query prototypes, the MBN list and the tuple of common border nodes appearing in these subqueries are the same as in Example 15. Concerning the data graph decomposition, we will use the data graph segments (s-segments) obtained by decomposing the data graph G as presented in Fig. 6. Query Q_2 has no embeddings in segment G_2; hence nothing is emitted in this case. The following two embeddings of Q_2 into G_3 are computed by the algorithm:
e'2 = (<Person2, Article2, Journal1>, <*, *, *, "2008">)
e'3 = (<Person3, Article2, Journal1>, <*, *, *, "2008">)
As above, based on these embeddings as well as on the content of the MBN list and CB(Q), mapper1 also emits to mapper2 the following key value pairs:
key = (Q2, Person2), value = (<Person2, Article2, Journal1>, <*, *, *, "2008">)
key = (Q1, Person2), value = (n2, Article2)
key = (Q3, Person2), value = (n3, Journal1)
key = (Q2, Person3), value = (<Person3, Article2, Journal1>, <*, *, *, "2008">)
key = (Q1, Person3), value = (n2, Article2)
key = (Q3, Person3), value = (n3, Journal1)
The following embedding of Q_3 into G_1 is computed by the algorithm: The following embeddings of Q_3 into G_2 are computed by the algorithm:
e"'1 = (<Person4, Article1, *>, <*, Person1, *, *>)
e"'2 = (<Person2, Article2, *>, <*, Person3, *, *>)
As above, based on these embeddings as well as on the content of the MBN list and CB(Q), mapper1 also emits to mapper2 the following key value pairs:
key = (Q3, Person4), value = (<Person4, Article1, *>, <*, Person1, *, *>)
key = (Q1, Person4), value = (n2, Article1)
key = (Q3, Person2), value = (<Person2, Article2, *>, <*, Person3, *, *>)
key = (Q1, Person2), value = (n2, Article2)
Query Q_3 has no embeddings in segment G_3; hence nothing is emitted by this mapper. Each mapper in Phase 2 gets as input all the embeddings of a specific subquery Q_i which have the same values for the nodes in CB(Q); moreover, for each border node that does not occur in Q_i, it gets as input the values assigned to this node by the embeddings of the other subqueries. It fills in the missing border node values using the corresponding values in the input, and emits the resulting embeddings to the reducers of Phase 2. The key is the tuple of the border node values, which implies that two embeddings are emitted to the same reducer if and only if they are compatible. One of the mappers gets the embedding with border-node tuple (<Person1, *, Journal1>), but it does not get any value for the missing border nodes. As V = ∅, no ground instances of (<Person1, *, Journal1>) can be found. Therefore this mapper does not emit (key, value) pairs to reducer2. The mapper applied for the key (Q_1, Person2) gets the value: This mapper constructs the instance <Person2, Article2, Journal1> of bnv and emits the following (key, value) pair to reducer2:
key = <Person2, Article2, Journal1>, value = (Q1, <Article1, *, Title1, *>)
The mapper applied for the key (Q_1, Person3) gets the values:
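The instantiation step performed by the Phase 2 mapper — filling every '*' position of a border-node tuple with the candidate values received for it — can be sketched as follows (hypothetical names; candidates maps a position index to its candidate values):

```python
import itertools

def instantiate(bnv, candidates):
    """Generate all ground instances of a border-node tuple by filling each
    '*' position with every candidate value recorded for that position; if a
    '*' position has no candidates, no instance exists (the V = empty case)."""
    options = []
    for i, v in enumerate(bnv):
        if v != '*':
            options.append([v])
        else:
            vals = candidates.get(i, [])
            if not vals:
                return  # no value for a missing border node
            options.append(vals)
    for combo in itertools.product(*options):
        yield tuple(combo)
```

For (<Person2, *, Journal1>) with candidate Article2 for the missing position this yields the instance <Person2, Article2, Journal1>; with no candidates (the V = ∅ case) it yields nothing.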
Discussion
The QE-with-Redundancy algorithm has several advantages compared with the QEJPE-algorithm and the eval-STARS algorithm. Notice that the QE-with-Redundancy algorithm is implemented using one and a half Map-Reduce phases, while the QEJPE-algorithm and the eval-STARS algorithm are implemented using two Map-Reduce phases. Another advantage of the QE-with-Redundancy algorithm is that, due to the replication of the data triples in the decomposition of the data graph and the special form of the subqueries into which the user query Q is decomposed, namely subject-object star queries, all the answers to a subject-object star query can be obtained from a single data segment.
On the other hand, due to the replication of the data triples in the decomposition of the data graph, multiple occurrences of the same embedding, as well as multiple instances of members of the MBN list, may be produced and emitted in Phase 1 of the algorithm.
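One simple way to remove such duplicates before Phase 2, assuming embeddings are represented as node-to-value dicts, is the following illustrative sketch (not part of the algorithm as stated):

```python
def dedup(embeddings):
    """Drop duplicate embeddings that arise because replicated triples let
    the same embedding be discovered in more than one s-segment."""
    seen, unique = set(), []
    for e in embeddings:  # each embedding is a dict node -> value
        key = frozenset(e.items())
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique
```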
Experimental results
In this section, we present a set of experiments performed over a cluster of 10 virtual machines, and analyze the outcomes. Each cluster node has the following characteristics: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz. The data graph was partitioned using three approaches: random edge partition, vertex partition, and METIS. In particular, the random edge partition was implemented by randomly adding each edge into a file such that each file had approximately 450,000 triples. We also stored information about the border nodes in each file. The vertex partition essentially implements the s-decomposition approach defined in Subsection 4.3. The last partitioning approach used is METIS [39], employed in order to minimize the number of border nodes in each file. Note that the random edge and METIS partitioning approaches were used to evaluate queries using the QEJPE-algorithm and eval-STARS algorithms, while the vertex partition was used to evaluate the QE-with-Redundancy algorithm.
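A toy sketch of the random edge partition with border-node bookkeeping (illustrative only; the actual implementation wrote roughly 450,000 triples per file and stored the border information alongside each file):

```python
import random

def random_edge_partition(triples, num_parts, seed=0):
    """Randomly assign each triple (edge) to one of num_parts segments and
    record the border nodes, i.e. nodes whose incident edges end up in more
    than one segment."""
    rng = random.Random(seed)
    parts = [[] for _ in range(num_parts)]
    seen_in = {}  # node -> set of segment indices it appears in
    for t in triples:
        i = rng.randrange(num_parts)
        parts[i].append(t)
        for node in (t[0], t[2]):  # subject and object
            seen_in.setdefault(node, set()).add(i)
    borders = {n for n, segs in seen_in.items() if len(segs) > 1}
    return parts, borders
```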
In the implementation of each algorithm, we used the library RDFLib 6 to pose the subqueries over the data segments in each MapReduce task. In particular, in order to find the partial embeddings in each MapReduce task, we parse the data segment and load it into an in-memory structure using RDFLib. Then, we use the query evaluation mechanism of the library to query the loaded segment and find the corresponding partial embeddings. Although the usage of RDFLib facilitates the evaluation of subqueries and provides an efficient evaluation tool in each task, there is an overhead due to the loading of each data segment, which is around 40 seconds per segment. Note that the loading time does not include the transfer time of each segment, nor the time that it takes each task to be initialized.
We conducted several types of experiments to investigate both the performance of the query evaluation algorithms proposed in this paper and the impact of the query decomposition algorithms on the overall query evaluation. In the following, we initially analyze the scalability of each query evaluation algorithm, in terms of both the size of the dataset and the number of cluster nodes. Then, we analyze how the query evaluation algorithms perform on widely-used query pattern types and different partitions of the data graph (the ones mentioned above). Finally, we analyze how the overall performance of query evaluation is affected by the type of query decomposition selected.
Scalability
In this section, we investigate the scalability of the QEJPE-algorithm, eval-STARS and QE-with-Redundancy algorithms. In particular, we conducted a set of experiments to analyze how the query evaluation algorithms perform in terms of both the size of the input dataset and the number of compute nodes in the cluster.
Initially, we selected three queries of different types from the WatDiv Benchmark and evaluated them using each of the algorithms over each of the D2-D4 datasets. The queries selected are illustrated in Table 2, along with the number of subqueries generated per algorithm. For each query, the type of the query and the number of the resulting tuples for each dataset are included in the table as well. Table 3 summarizes the execution time of each query, per evaluation algorithm and dataset, where L, S and F represent the Linear, Star and Snowflake queries selected, respectively. Figures 10a, 10b and 10c graphically show the execution time per dataset and evaluation algorithm, for each query. Figure 10d illustrates the average execution time for each dataset and each algorithm. Looking at the experimental results, we can see that although the amount of data in each dataset is doubled (i.e., D3 and D4 have around 100% more triples than D2 and D3, respectively), the growth rate of the execution time remains less than 30% on average, which shows that each algorithm scales well in terms of the size of the dataset 7 . To evaluate the scalability in terms of the size of the cluster (i.e., the number of compute nodes), we proceeded as follows. We evaluated, over 3 cluster settings, the 3 queries described in Table 2 using each algorithm over the dataset D4. In particular, the first setting had 4 compute nodes (NodeManagers), the second had 7 compute nodes and the last one utilized all the 10 available compute nodes. Then, we executed the evaluation algorithms in each setting. The execution times are summarized in Table 4. Figures 11a, 11b and 11c illustrate the execution time in terms of the size of the cluster per algorithm for each type of query. As we can see, the algorithms scale well in terms of the number of compute nodes; i.e., the execution time decreases as the number of compute nodes increases.
Comparison of query evaluation algorithms
In this section, we present the outcomes of the experiments performed in order to compare the performance of the query evaluation algorithms proposed in this work, i.e., the QEJPE-algorithm, eval-STARS and QE-with-Redundancy algorithms. The evaluation was performed by applying all the algorithms to a variety of queries over the dataset D4 described in the previous sections. We used multiple queries from the WatDiv Benchmark, covering all the proposed query types (Linear, Star, Snowflake, and Complex). Table 5 summarizes the queries used in this experiment, along with the corresponding characteristics of each query, e.g., number of triples, number of variables, number of resulting tuples over the dataset D4, and the number of subqueries generated by the query decomposition.
The QEJPE-algorithm and eval-STARS algorithms were also tested over two data partitioning approaches, random partitioning and METIS, while the QE-with-Redundancy algorithm was only evaluated over vertex-partitioned data (due to the requirements of the algorithm). The execution time for each query is included in Table 6, where the execution time is given in minutes followed by seconds (i.e., Minutes:Seconds). The average execution time for each algorithm and each query type, per data partitioning approach, is illustrated in Table 7 and graphically presented in Figure 12. Note that when evaluating the majority of the Snowflake and Complex queries using the QEJPE-algorithm, the cluster reached its memory limits (14.5GB for all YARN containers on a node) and did not manage to provide any result. As we can see in the experimental results, QEJPE is more efficient for Linear queries than for Star queries. In addition, METIS outperforms the random partition for both Star and Linear queries.
Queries L2 and L4 from the WatDiv Benchmark (Linear query type) were evaluated with all the algorithms, and the mean execution times in seconds of these queries are presented in Figure 12a and in Table 7. The QE-with-Redundancy algorithm performs better than the QEJPE-algorithm and the eval-STARS algorithm. The eval-STARS algorithm performs better than the QEJPE-algorithm using both the METIS and the random partition.
Queries S3 and S5 were used to evaluate Star type queries. The QE-with-Redundancy algorithm performs better than the QEJPE-algorithm and the eval-STARS algorithm, while eval-STARS performs better than the QEJPE-algorithm. Both eval-STARS and the QEJPE-algorithm perform better with the METIS partition than with the random partition.
In the case of Snowflake queries, queries F1 and F4 were executed. The experimental results show that the QEJPE-algorithm is not efficient for this type of queries. The QE-with-Redundancy algorithm again performs better than the eval-STARS algorithm. The eval-STARS algorithm produced almost the same results for the random and the METIS partition.
The Complex type queries behaved similarly to the Snowflake queries. Query C3 was executed; the QEJPE-algorithm was not efficient, and the QE-with-Redundancy algorithm performed better than the eval-STARS algorithm. For this type of query, the eval-STARS algorithm performed better using the random partition rather than the METIS partition.
Query Decomposition Algorithms Evaluation
In this section, we experimentally analyze how the selection of the query decomposition algorithm can affect the overall query evaluation performance. We focus on the three main query decomposition algorithms proposed in Subsection 4.4, i.e., min-res, max-degree, and max-degree-with-reshaping. To perform this experiment, we decomposed multiple queries using the aforementioned decomposition algorithms and evaluated them using a single evaluation algorithm over a single dataset.
In particular, we initially used a query template (i.e., a query graph structure) over the WatDiv data model and generated six different queries by assigning variables and constants to the nodes. The queries Q1-Q6 that were constructed are depicted in Figure 13, where the white-colored nodes represent variables and the dark-colored nodes represent constants. We also constructed an additional complex query Q7, over the WatDiv data model, asking for certain edges of the data graph multiple times. We then decomposed the queries Q1-Q6 using the different decomposition algorithms and evaluated them using the QE-with-Redundancy algorithm and the dataset D4. For query Q7, the smaller dataset D1 was used to overcome memory limitations due to the large number of results. The execution time for each query and each decomposition algorithm is illustrated in Table 8, along with the number of subqueries produced by each decomposition algorithm and the number of resulting tuples. The execution time per query and algorithm is graphically presented in Figure 14.
Analyzing the execution time of the queries per decomposition algorithm (Table 8 and Figure 14), we can easily see that for queries Q1 and Q2 the max-degree and max-degree-with-reshaping decompositions are evaluated faster, while for queries Q4-Q6 the performance of the min-res algorithm improves compared with the max-degree and max-degree-with-reshaping algorithms. Query Q3 is an exception, since the decomposition produced by all three algorithms is the same.
Comparing now the execution times of max-degree and max-degree-with-reshaping, these algorithms produced similar decompositions. Hence, as we can see in Table 8 and Figure 14, their execution times for the majority of the queries are very close.
Looking, however, at the execution time of query Q7, the decomposition produced by min-res outperforms the decompositions given by max-degree and max-degree-with-reshaping. To analyze this result in more detail, notice that since the node ?x1 has a high degree, it gives a subquery with multiple variables in the decompositions produced by max-degree and max-degree-with-reshaping. In addition, the variables ?x2, ?x3, and ?x4 in both subqueries map to the same data nodes and increase significantly the number of intermediate results (compared with the number of the corresponding data edges mapped by these variables in all the embeddings). On the other hand, min-res handles such a case better, since it does not allow subqueries having more than 2 variables to be generated.
Conclusions
In this paper, we presented a set of distributed query evaluation algorithms that are independent of the storage and data distribution approaches. These algorithms could also be implemented in various distributed processing frameworks. We also presented a set of query decomposition approaches and analysed their advantages and disadvantages. Evaluating the proposed algorithms, we showed that each problem instance (data and query graph) might benefit from a different decomposition algorithm and/or evaluation approach.
As future work, we aim to investigate the proper methods for storing data in order to further improve our algorithms. Investigation of the usage of certain NoSQL databases with the appropriate indices is also considered, in order to optimize the query plans used to combine the results of the generalized star subqueries in the last approach presented. Furthermore, we aim to analyze additional query decomposition approaches, focusing on finding an optimal query decomposition for every different setting. An additional topic for further investigation is how our approach could be extended to support query evaluation over dynamic RDF data. Finally, improvements of our algorithms using in-memory processing frameworks, such as Apache Spark and Flink, are also considered for further investigation.
Bibliography
Figure 1: (a) A data graph, and (b) a query graph.

Figure 2: An embedding of the query graph Q in the data graph G.

Figure 3: 3-triple partition of the data graph G of Fig. 1(a).

Figure 4: Query decomposition.

Figure 5: Query decomposition into star queries.

Figure 6: An s-decomposition of the data graph G of Fig. 1.
Consider the simple query Q = {(c, p1, ?X), (c, p2, ?Y), (?X, p3, ?Y)} and the data graph G = {(c, p1, c11), (c, p1, c12), (c, p1, c13), (c, p2, c21), (c, p2, c22), (c, p2, c23)}, where p1, p2, p3 are predicates and c, cij are either URIs or literals. Suppose now a decomposition D = {Q1, Q2} of Q, such that Q1 = {(c, p1, ?X), (c, p2, ?Y)} and Q2 = {(?X, p3, ?Y)}.
min-res(Q)   // Q a query.
// The min-res function returns a decomposition R of Q consisting of so-subqueries of Q
begin
  T_sub-obj = {t ∈ Q | t = (s, p, o) and s, o ∈ V(Q)};         // subject and object are variables
  T_sub = {t ∈ Q | t = (s, p, o) and s ∈ V(Q) and o ∉ V(Q)};   // only the subject is a variable
  T_obj = {t ∈ Q | t = (s, p, o) and o ∈ V(Q) and s ∉ V(Q)};   // only the object is a variable
  T_c = {t ∈ Q | t = (s, p, o) and s, o ∉ V(Q)};               // subject and object are non-variables
  R = ∅;
  foreach t = (s, p, o) ∈ T_sub-obj do   // select a maximal so-query centered at the subject of t
  begin                                  // by adding triples that do not add variables
    Q_s = {t} ∪ {t' | t' ∈ T_sub and t' = (s, p', c)} ∪ {t' | t' ∈ T_obj and t' = (c', p', s)};
        // Q_s is an so-query with central node s
    S = {t' | t' ∈ T_sub and t' = (o, p', c)};
    if S = ∅ then Q_o = ∅
    else Q_o = {t} ∪ S ∪ {t' | t' ∈ T_obj and t' = (c', p', o)};   // Q_o is an so-query with central node o
    if |Q_s| ≥ |Q_o| then Q' = Q_s else Q' = Q_o;
    R = R ∪ {Q'};
  end
  T_sub = T_sub − {t | t ∈ Q' and Q' ∈ R};   // remove from T_sub the triples used so far
  T_obj = T_obj − {t | t ∈ Q' and Q' ∈ R};   // remove from T_obj the triples used so far
  while T_sub ≠ ∅ do   // for each member of T_sub construct an so-query
  begin
    extract a triple t = (s, p, o) from T_sub;
    Q' = {t} ∪ {t' | t' ∈ T_sub and t' = (s, p', c)} ∪ {t' | t' ∈ T_obj and t' = (c', p', s)};
    if |Q'| = 1 then   // no other triple has s as object or subject
    begin
      S = {t' | t' ∈ T_c and t' = (o, p', c)};
      if S ≠ ∅ then Q' = {t} ∪ S ∪ {t' | t' ∈ T_c and t' = (c', p', o)};
    end
    T_sub = T_sub − {t' | t' ∈ Q'};   // remove from T_sub the triples used in Q'
    T_obj = T_obj − {t' | t' ∈ Q'};   // remove from T_obj the triples used in Q'
    R = R ∪ {Q'};
  end
  foreach t = (s, p, o) ∈ T_obj do   // for each member of T_obj construct an so-query
  begin
    Q' = {t} ∪ {t' | t' ∈ T_c and t' = (s, p', c)} ∪ {t' | t' ∈ T_c and t' = (c', p', s)};
    R = R ∪ {Q'};
  end
  T_c = T_c − {t | t ∈ Q' and Q' ∈ R};
  while T_c ≠ ∅ do   // select a maximal so-query centered at the subject or the object
  begin
    extract a triple t = (s, p, o) from T_c;
    Q_s = {t} ∪ {t' | t' ∈ T_c and t' = (s, p', c)} ∪ {t' | t' ∈ T_c and t' = (c', p', s)};
    S = {t' | t' ∈ T_c and t' = (o, p', c)};
    if S = ∅ then Q_o = ∅ else Q_o = {t} ∪ S ∪ {t' | t' ∈ T_c and t' = (c', p', o)};
    if |Q_s| ≥ |Q_o| then Q' = Q_s else Q' = Q_o;
    R = R ∪ {Q'};
    T_c = T_c − {t' | t' ∈ Q'};   // remove from T_c the triples used in Q'
  end
  return R;
end.

Figure 7: Min-res query decomposition.
Figure 8: Min-subquery decomposition.
max-degree(Q)   // Q a query.
// The max-degree function returns a decomposition R of Q consisting of so-subqueries
begin
  R = ∅;
  N = N(Q) − L;   // the non-literal nodes
  S_Q = FindInitMaxSoQueries(N);
  T_Covered = ∅;
  while S_Q ≠ ∅ do
  begin
    S_Q^old, R, T_Covered = ReconstructS_Q(S_Q, R, T_Covered);
    S_Q = ∅;
    foreach (m, S) ∈ S_Q^old do
    begin
      S' = S − T_Covered;
      if there is a triple with subject m in S' then S_Q = S_Q ∪ {(m, S')};
    end
  end
  return R;
end.
FindInitMaxSoQueries(N)   // N is the set of the non-literal nodes of a query
begin
  S_Q = ∅;
  foreach n ∈ N do   // S_Q contains all pairs (n, S(n)) where n ∈ N and
  begin              // S(n) is the maximal so-query with n as central node
    if there is a triple (n, p, o) ∈ Q then
    begin
      S(n) = {t | t ∈ Q and (t = (n, p, o) or t = (s, p', n))};
      S_Q = S_Q ∪ {(n, S(n))};
    end
  end
  return S_Q;
end.

ReconstructS_Q(S_Q, R, T_Covered)
// Find the next so-subquery and update both the result R and the set T_Covered of covered triples
begin
  // select a maximal so-query in S_Q
  select a (n, Q') ∈ S_Q such that |Q'| is maximal among all elements in S_Q;
  R = R ∪ {Q'};                 // ... add Q' to the result and ...
  T_Covered = T_Covered ∪ Q';   // ... add its triples to T_Covered
  S_Q^old = S_Q − {(n, Q')};
  return S_Q^old, R, T_Covered;
end.
Figure 9: Min-subquery vs. max-degree decomposition.
max-degree-with-reshaping(Q)   // Q a query.
// The function returns a decomposition R of Q consisting of so-subqueries of Q
begin
  R = ∅;
  N = N(Q) − L;   // the non-literal nodes
  S_Q = FindInitMaxSoQueries(N);
  T_Covered = ∅;
  while S_Q ≠ ∅ do
  begin
    select a (n, Q') ∈ S_Q s.t. ∀(m, Q'') ∈ (S_Q − {(n, Q')}) it holds NC(Q') ≥ NC(Q'');
        // the function NC(Q) returns the number of not-yet-covered triples in Q
    S = Q' − {t | t = (s, p, n) ∈ Q' and t ∈ T_Covered and s ∈ V(Q')};
    ... the query in R containing t';   // notice that F − {t'} is also an so-query
    end
    R = R ∪ {S};                 // ... add S to the set of subqueries of Q ...
    T_Covered = T_Covered ∪ S;   // ... add the triples of S to T_Covered
    S_Q^old = S_Q − {(n, Q')};
    S_Q = ∅;
    foreach (m, S) ∈ S_Q^old do   // reconstruct S_Q by removing the queries whose triples are ...
    begin                         // ... completely covered by the so-queries already constructed
      if S − T_Covered ≠ ∅ then S_Q = S_Q ∪ {(m, S)};
    end
  end
  return R;
end.
N(Q) − B(Q) to {1, ..., |N(Q) − B(Q)|}, with I_nb(x) = I(x) − |B(Q)|. Similarly, we assume an enumeration t_1, t_2, ..., t_|Q| of the triples in Q. Using the above enumeration functions we now define the concept of a query prototype. A query prototype is a triple of tuples of the form:

(BorderNodeFlags, NonBorderNodeFlags, TripleFlags)

where BorderNodeFlags is a tuple of |B(Q)| items, one item for each border node in B(Q). Similarly, NonBorderNodeFlags is a tuple of |N(Q) − B(Q)| items, one for each non-border node in N(Q) − B(Q). Finally, the tuple TripleFlags has |Q| items, one for each triple in Q.
Q1: (<+,+, >, < ,+>, <+, , , ,+>)
Q2: (< ,+,+>, <+, >, < , ,+,+, >)
Q3: (<+, ,+>, < , >, < ,+, , , >)
while the list of missing border nodes is MBN = [(n1, Q2), (n2, Q3), (n3, Q1)].
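The flag tuples above can be computed mechanically from a subquery and fixed enumerations of the border nodes, non-border nodes and triples; the Python sketch below is illustrative, with hypothetical triples standing in for t_1, ..., t_5:

```python
def make_prototype(subquery, border_enum, nonborder_enum, triple_list):
    """Build the (BorderNodeFlags, NonBorderNodeFlags, TripleFlags) prototype
    of a subquery: '+' marks a border node, non-border node or triple that
    occurs in the subquery, ' ' one that does not."""
    nodes = set()
    for (s, p, o) in subquery:
        nodes.update((s, o))
    bflags = tuple('+' if n in nodes else ' ' for n in border_enum)
    nbflags = tuple('+' if n in nodes else ' ' for n in nonborder_enum)
    tflags = tuple('+' if t in subquery else ' ' for t in triple_list)
    return (bflags, nbflags, tflags)
```

With border enumeration (n1, n2, n3), non-border enumeration (n4, n5) and a subquery containing the first and last triple, this reproduces the shape of the prototype shown for Q1.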
mapper1((Qi, Gj), (GjData, subqueryInfo))
// (Qi, Gj): Qi is the ID of a subquery, Gj is the ID of a data segment
// GjData: the content of the data graph segment Gj
// subqueryInfo: prototypes / border & non-border nodes / triples of Q
begin
  compute E = {e | e is a useful partial embedding of Qi in GjData};
  foreach e ∈ E do emit([Qi, e]);
end.
(3) key = Q1, value = (<Person2,Article2,*>, <*,*>, <+, , , , >)
Among the embeddings obtained and emitted by the Mapper working on (Q2, G2) are the (partial) embeddings:
(4) key = Q2, value = (<*,Article1,Person1>, <*,*>, < , ,+, , >)
(5) key = Q2, value = (<*,Article2,Person3>, <*,*>, < , ,+, , >)
(6) key = Q3, value = (<Person4,*,Person1>, <*,*>, < ,+, , , >)
(7) key = Q3, value = (<Person2,*,Person3>, <*,*>, < ,+, , , >)
The Mapper working on (Q1, G3) emits the partial embedding:
(8) key = Q1, value = (<*,Article2,*>, <*,"Title2">, < , , , ,+>)
(9) key = Q2, value = (<*,Article1,*>, <Journal1,*>, < , , ,+, >)
(10) key = Q2, value = (<*,Article2,*>, <Journal1,*>, < , , ,+, >)
reducer1(Qi, values)
// Qi: a subquery ID
// values: contains the list of the embeddings for Qi and the MBN list
begin
  collect in a list Fi the total embeddings of Qi appearing in values
    or obtained by joining compatible partial embeddings in values;
  if Fi is empty then EXIT;   // there is no solution for the subquery Qi
                              // and thus for the original query Q
  extract the MBN list from values;
  foreach embedding e = (bnv, nbnv, tm) in Fi do
  begin
    emit([Qi, (bnv, nbnv)]);   // emit a total embedding with key the subquery ID Qi
    for i = 1 to |bnv| do
      if (bnv[i] != '*') then
        foreach (ni, Qj) in MBN do emit([Qj, (ni, bnv[i])]);
  end
end.
(3)+(8) => key = Q1, value = (<Person2,Article2,*>, <*,"Title2">)
Taking into account the contents of the MBN list:
MBN = [(n1, Q2), (n2, Q3), (n3, Q1)]
the reducer also emits the following missing border node values:
key = Q2, value = (1,Person2)
key = Q2, value = (1,Person4)
key = Q3, value = (2,Article1)
key = Q3, value = (2,Article2), ...
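The compatibility test and join used when combining partial embeddings can be sketched over flattened value tuples, where '*' marks an undefined position (illustrative helper names):

```python
def compatible(e1, e2):
    """Two partial embeddings (tuples with '*' for undefined positions) are
    compatible if their defined values never conflict."""
    return all(a == b or a == '*' or b == '*' for a, b in zip(e1, e2))

def join(e1, e2):
    """Merge two compatible partial embeddings into one."""
    return tuple(a if a != '*' else b for a, b in zip(e1, e2))
```

For instance, joining the flattened tuples of the partial embeddings (3) and (8) fills in the missing title value while leaving the still-unknown positions as '*'.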
(4)+(9) => key = Q2, value = (<*,Article1,Person1>, <Journal1,*>)
(5)+(10) => key = Q2, value = (<*,Article2,Person3>, <Journal1,*>)
and the following values for missing border nodes: ...

Phase 2 of the QEJPE-algorithm

Each Mapper in Phase 2 manipulates the embeddings of a specific subquery. It fills in their missing border node values using values from the embeddings of other subqueries that have been emitted by reducer1 based on the information in the MBN list, and emits the resulting embeddings to the reducers of Phase 2 (the key is the tuple of the border node values). The mapper of Phase 2 is given below:

mapper2(Qi, values)
// Qi: the ID of a subquery
// values: a list E of the parts (bnv, nbnv) of the total embeddings of Qi and
//         a list V of pairs (i, v), where v is a candidate value for bnv[i]
begin
  foreach embedding e = (bnv, nbnv) in E do
    foreach instance bnv' of bnv using the values in V do
      emit([bnv', (Qi, nbnv)]);
end.

Example 13. (Continued from Example 12). The mapper with key Q1 receives:
E = [(<Person4,Article1,*>, <*,"Title1">), (<Person2,Article2,*>, <*,"Title2">), ...]
V = [(3,Person1), (3,Person3), ...]
This mapper produces instances of the border node tuples in E by replacing the '*' in the 3rd place with a value in V. Among the key-value pairs obtained and emitted in this way are:
key = (Person4,Article1,Person1), value = (Q1, <*,"Title1">)
key = (Person2,Article2,Person3), value = (Q1, <*,"Title2">)
The input of the mapper with key Q2 is:
E = [(<*,Article1,Person1>, <Journal1,*>), (<*,Article2,Person3>, <Journal1,*>), ...]
V = [(1,Person2), (1,Person4), ...]
Some of the instances that this mapper produces and emits are:
key = (Person4,Article1,Person1), value = (Q2, <Journal1,*>)
key = (Person2,Article2,Person3), value = (Q2, <Journal1,*>)
The input of the mapper with key Q3 is:
E = [(<Person4,*,Person1>, <*,*>), (<Person2,*,Person3>, <*,*>)]
V = [(2,Article1), (2,Article2)]
reducer2(key, values)
// key: a tuple of border node values
// values: pairs of the form (Qi, partial embedding for the non-border nodes)
begin
  foreach join of compatible embeddings obtained by using one embedding for each subquery do
    emit the result produced by this join;
end.

Example 14. (Continued from Example 13). The reducer with key (Person4, Article1, Person1) receives the list:
[(Q1, <*,"Title1">), (Q2, <Journal1,*>), (Q3, <*,*>)]
Notice that no other reducer returns a solution (as they do not receive embeddings for all subqueries). This can be verified by considering all possible embeddings of all subqueries, which do not appear, for space reasons, in the examples of this subsection.

Discussion

The QEJPE-algorithm computes the answers to the given query Q correctly, independently of a) the data graph partitioning, b) the way the graph segments are stored, c) the query graph decomposition, and d) the algorithm used for calculating intermediate (partial) results.
Fig. 5.

The border nodes are B(Q) = {n1, n2, n3}, while the non-border nodes are N(Q) - B(Q) = {n4, n5, n6, n7}. The query prototypes are the following:
Q1: (<+, ,+>, <+, ,+, >, <+, , , , , ,+,+>)
Q2: (<+,+,+>, < , , ,+>, < ,+, , ,+,+, , >)
Q3: (<+,+, >, < ,+, , >, < , ,+,+, , , , >)
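The prototypes above can be derived mechanically: position k of each tuple carries '+' exactly when the k-th border node, non-border node, or triple of Q occurs in the subquery. A minimal Python sketch (the node and triple sets of Q1 are read off the text; `marks` is a helper name of our choosing):

```python
def marks(members, universe):
    """'+' at position k iff the k-th element of universe belongs to members."""
    return tuple('+' if x in members else ' ' for x in universe)

border = ["n1", "n2", "n3"]              # B(Q)
non_border = ["n4", "n5", "n6", "n7"]    # N(Q) - B(Q)
triples = list(range(1, 9))              # the 8 triples of Q, by index

# Node and triple sets of Q1, read off its prototype in the text.
q1_nodes = {"n1", "n3", "n4", "n6"}
q1_triples = {1, 7, 8}

proto_q1 = (marks(q1_nodes, border),
            marks(q1_nodes, non_border),
            marks(q1_triples, triples))
print(proto_q1)
# (('+', ' ', '+'), ('+', ' ', '+', ' '), ('+', ' ', ' ', ' ', ' ', ' ', '+', '+'))
```

The output matches the prototype of Q1 listed above; the prototypes of Q2 and Q3 follow in the same way from their node and triple sets.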
mapper1((Qi, Gj), (GjData, B(GjData), SubqueryInfo, MBN))
// (Qi, Gj): Qi is the ID of a subquery; Gj is the ID of a data segment
// GjData: the content of the data graph segment Gj
// B(GjData): the set of border nodes of Gj
// SubqueryInfo: prototypes/border & non-border nodes/triples of Q
// MBN: the list of missing border nodes
begin
   Let ci = C(Qi);
   % Part 1
   foreach triple t = (ci, p, o) in Qi do
   begin
      compute E = {e | e is an embedding of t in GjData and e(ci) ∈ B(GjData)};
      foreach embedding e in E do
         emit([(Qi, e(ci)), (o, e(o))]);
   end
   foreach triple t = (s, p, ci) in Qi do
   begin
      compute E = {e | e is an embedding of t in GjData and e(ci) ∈ (B(GjData) ∪ L)};
      foreach embedding e in E do
         emit([(Qi, e(ci)), (s, e(s))]);
   end
   % Part 2
   compute E = {e | e is an embedding of Qi in GjData and e(ci) ∉ (B(GjData) ∪ L)};
   foreach embedding e = (bnv, nbnv) in E do
   begin
      emitToSecondPhase([Qi, (bnv, nbnv)]); // i.e. to the mapper of Phase 2
      for k = 1 to |bnv| do
         if (bnv[k] != '*') then
            foreach (n_k, Qj) in MBN do
               emitToSecondPhase([Qj, (n_k, bnv[k])]);
   end
end.
Qi. Then, for every j = 1, ..., m, the reducer constructs a set L[kj] of all possible values for node n_kj. More specifically, for each element (x1, x2, ..., xm) of the cartesian product L[k1] × L[k2] × ... × L[km], it constructs an embedding e = (bnv, nbnv) of Qi in G, such that e(ci) = v and e(n_kj) = xj, and emits (Qi, (bnv, nbnv)) (see Subsection 5.3 for the representation of an embedding). Moreover, if at least one embedding of Qi has been found, reducer1 emits the values of missing border nodes.

reducer1((Qi, v), values)
// Qi: a subquery ID
// v: the value of the central node of Qi
// values: contains (i) a list of pairs (x, u), where x is a non-central node
//         of Qi and u is a candidate image of x, and (ii) the MBN list.
begin
   % Part 1
   allNonEmpty = true;
   foreach non-central node x in Qi do
   begin
      L[I(x)] = {u | (x, u) ∈ values};
      if L[I(x)] = ∅ then allNonEmpty = false;
   end
   % Part 2
   if allNonEmpty = true then // i.e. there are values for all non-central nodes of Qi
   begin
      create an embedding with undefined values: (bnv, nbnv) = (<*, ..., *>, <*, ..., *>);
      ci = C(Qi);
      L[I(ci)] = {v};
      if ci is a border node then bnv[I(ci)] = v;
      else nbnv[I_nb(ci)] = v;
      E = {(bnv, nbnv)};
      foreach non-central node x in Qi do
      begin
         E' = ∅;
         foreach e in E do
            foreach u in L[I(x)] do
            begin
               create a copy e' = (bnv', nbnv') of e;
               if x is a border node then bnv'[I(x)] = u;
               else nbnv'[I_nb(x)] = u;
               insert (bnv', nbnv') in E';
            end
         E = E';
      end
      foreach embedding e = (bnv, nbnv) in E do
         emit([Qi, (bnv, nbnv)]);
      foreach (x, Qj) in MBN do
         if x is a node in Qi then
            foreach u in L[I(x)] do
               emit([Qj, (x, u)]);
   end
end.

Example 17. (Continued from Example 16). The reducer with key (Q1, Article1) receives the following list of values:
[(n1, Person4), (n6, "Title1"), (n1, Person1), (n1, Person2), (n3, Journal1)].
Notice that, as we can conclude from the sub-query prototypes appearing in Example 15, the border nodes of Q are B(Q) = {n1, n2, n3}, while the non-border nodes are N(Q) - B(Q) = {n4, n5, n6, n7}. Besides, from Fig. 5, we see that the central node of Q1 is n4, while its non-central nodes are n1, n3 and n6. Finally, the MBN list is MBN = [(n2, Q1), (n3, Q3)]. Taking into account the above, the reducer1 with key (Q1, Article1) concludes, by applying Part 1 of the procedure, that it has received values for all non-central nodes of Q1. More specifically, reducer1 constructs the lists L[I(n1)] = {Person1, Person2, Person4}, L[I(n3)] = {Journal1} and L[I(n6)] = {"Title1"}, containing the values for the non-central nodes n1, n3 and n6 respectively. Combining these values, as well as the value Article1 of the central node n4, reducer1 in Part 2 constructs and emits the following (key, value) pairs (that represent embeddings of Q1):
key = Q1, value = (<Person1,*,Journal1>, <Article1,*,Title1,*>)
key = Q1, value = (<Person2,*,Journal1>, <Article1,*,Title1,*>)
key = Q1, value = (<Person4,*,Journal1>, <Article1,*,Title1,*>)
Besides, reducer1, based on the MBN list, emits the following:
key = Q3, value = (n3, Journal1)
The reducer with key (Q2, Article1) receives the following list of values:
[(n1, Person4), (n1, Person1), (n1, Person2), (n3, Journal1)].
Based on these values it constructs the following lists (corresponding to the values of the non-central nodes n1, n3 and n7 of the subquery Q2):
Combining these values, as well as the value Article2 of the central node n2, Part 2 of reducer1 constructs and emits the following (key, value) pairs (that represent embeddings of Q2):
key = Q2, value = (<Person2,Article2,Journal1>, <*,*,*,"2008">)
key = Q2, value = (<Person3,Article2,Journal1>, <*,*,*,"2008">)
Besides, reducer1, based on the MBN list, emits the following:
key = Q1, value = (n2, Article2)
key = Q3, value = (n3, Journal1)
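Part 2 of reducer1 is, in essence, a cartesian product of the candidate-value lists. The following Python sketch reproduces the three embeddings emitted for key (Q1, Article1) in Example 17 (the `candidates` dictionary and the node indexing are our own modelling assumptions, not the paper's code):

```python
from itertools import product

# Candidate values per non-central node of Q1 (Example 17), plus the
# central node n4 fixed to the reducer key 'Article1'.
candidates = {
    "n1": ["Person1", "Person2", "Person4"],
    "n3": ["Journal1"],
    "n6": ["Title1"],
    "n4": ["Article1"],   # central node: exactly one value
}

border = ["n1", "n2", "n3"]
non_border = ["n4", "n5", "n6", "n7"]

# Cartesian product of the candidate lists, one embedding per combination;
# nodes outside the subquery stay undefined ('*').
embeddings = []
nodes = list(candidates)
for values in product(*candidates.values()):
    binding = dict(zip(nodes, values))
    bnv = tuple(binding.get(n, "*") for n in border)
    nbnv = tuple(binding.get(n, "*") for n in non_border)
    embeddings.append((bnv, nbnv))

for e in sorted(embeddings):
    print(e)
```

Since only n1 has several candidates, the product yields exactly the three embeddings listed in Example 17.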
2

5.5.2. Phase 2 of the algorithm

Phase 2 of the algorithm is similar to Phase 2 of the QEJPE-algorithm presented in Subsection 5.4.3. The input of each mapper of Phase 2 consists of all the embeddings of a specific subquery Qi. Besides, for each border node that does not occur in Qi, the mapper gets as input the values assigned to this node by the embeddings of the other subqueries. These values are sent by the mappers and reducers of Phase 1 based on the MBN list. Based on its input, each mapper of Phase 2 fills in the missing border node values using the corresponding input values, and emits the resulting embeddings to the reducers of Phase 2, using as key the tuple of the border node values. This means that two embeddings are emitted to the same reducer if and only if they are compatible.

mapper2(Qi, values)
// Qi: the ID of a subquery
// values: a set E of the parts (bnv, nbnv) of the total embeddings of Qi
//         and a set V of pairs (n_k, v), where v is a candidate value for bnv[k]
begin
   foreach embedding e = (bnv, nbnv) in E do
      foreach instance bnv' of bnv using the values in V do
         emit([bnv', (Qi, nbnv)]);
end.

Example 18. (Continued from Example 17). The mapper that works for the subquery Q1 (i.e. the key is Q1) gets a list of values that contains the embeddings of Q1 in G:
(<Person1,*,Journal1>, <Article1,*,Title1,*>)
(<Person2,*,Journal1>, <Article1,*,Title1,*>)
(<Person4,*,Journal1>, <Article1,*,Title1,*>)

reducer2(key, values)
// key: a tuple of border node values
// values: pairs of the form (Qi, partial embedding for non-border nodes)
begin
   foreach join obtained by using one embedding for each subquery do
      Emit the result produced by this join;
end.

Example 19. (Continued from Example 18). The reducer with key <Person2,Article2,Journal1> receives the following list of values:
[(Q1, <Article1,*,Title1,*>), (Q2, <*,*,*,"2008">), (Q3, <*,Person3,*,*>)]
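The compatibility condition used here ("two embeddings are emitted to the same reducer if and only if they are compatible") can be stated concretely: two border tuples are compatible when they agree on every position where both are defined. A small illustrative sketch (the function names are ours):

```python
# Two partial border-node tuples are compatible if they agree on every
# position where both are defined ('*' means undefined). Unifying them
# yields the instantiated tuple used as the reducer key in Phase 2.
def compatible(t1, t2):
    return all(a == b for a, b in zip(t1, t2) if a != "*" and b != "*")

def unify(t1, t2):
    return tuple(a if a != "*" else b for a, b in zip(t1, t2))

b1 = ("Person2", "*", "Journal1")         # from an embedding of Q1
b2 = ("Person2", "Article2", "Journal1")  # from an embedding of Q2

assert compatible(b1, b2)
print(unify(b1, b2))  # ('Person2', 'Article2', 'Journal1')
```

Because the reducer key is the fully instantiated border tuple, incompatible embeddings can never meet at the same reducer.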
2

5.6.2. Phase 1 of the algorithm

The first phase of the algorithm computes the embeddings of the so-queries Q1, ..., Qn in G locally in each star graph segment Gj of G. Each mapper in Phase 1 gets as input an s-graph segment Gj, an so-query Qi, the MBN list, and the tuple CB(Q), and computes the embeddings of Qi into Gj. The embeddings computed are directly emitted to the mappers of Phase 2. Similarly, the values of the nodes in MBN are emitted to the mappers of Phase 2. Notice that the instances of the nodes in CB(Q) take part in the keys of the (key, value) pairs emitted to the mappers of Phase 2.

mapper1((Qi, Gj), (GjData, SubqueryInfo, MBN, CB(Q)))
// (Qi, Gj): Qi/Gj is the ID of a subquery/data segment
// GjData: the content of the data graph segment Gj
// SubqueryInfo: prototypes of the subqueries of Q
// MBN: the list of missing border nodes
// CB(Q): the tuple of common border nodes of Q
begin
   compute E = {e | e is an embedding of Qi in GjData};
   foreach embedding e = (bnv, nbnv) in E do
   begin
      if (MBN != []) then
      begin
         emitToMapper2([(Qi, e(CB(Q))), (bnv, nbnv)]);
         for k = 1 to |bnv| do
            if (bnv[k] != '*') then
               foreach (n_k, Qj) in MBN do
                  emitToMapper2([(Qj, e(CB(Q))), (n_k, bnv[k])]);
      end
      else emitToReducer2([bnv, (Qi, nbnv)]);
   end
end.

Example 21. (Continued from Example 20). This example shows the results obtained by the application of mapper1 on the pairs (Qi, Gj), where Qi is an so-query and Gj is a graph segment.
The following three embeddings of Q1 into G1 are computed by the algorithm:
e1 = (<Person1, *, Journal1>, <Article1, *, Title1, *>)
e2 = (<Person2, *, Journal1>, <Article1, *, Title1, *>)
e3 = (<Person4, *, Journal1>, <Article1, *, Title1, *>)
For e1 the algorithm emits the following (key, value) pair to mapper2:
key = (Q1, Person1), value = (<Person1, *, Journal1>, <Article1, *, Title1, *>)
Besides, based on the MBN list and the CB(Q), which in the preprocessing phase have been computed as MBN = [(n2, Q1), (n3, Q3)] and CB(Q) = {n1}, mapper1 also emits to mapper2 the following (key, value) pair:
key = (Q3, Person1), value = (n3, Journal1)
Similarly, mapper1 also emits the following (key, value) pairs based on the embeddings e2 and e3:
key = (Q1, Person2), value = (<Person2, *, Journal1>, <Article1, *, Title1, *>)
key = (Q3, Person2), value = (n3, Journal1)
key = (Q1, Person4), value = (<Person4, *, Journal1>, <Article1, *, Title1, *>)
key = (Q3, Person4), value = (n3, Journal1)
Concerning query Q1, there are no embeddings in segments G2 and G3. Thus nothing is emitted by the corresponding mappers. The following embedding of Q2 into G1 is computed (among others) by the algorithm:
e'1 = (<Person4, Article3, Journal2>, <*, *, *, "2008">)
For e'1 the algorithm emits the following (key, value) pair to mapper2:
key = (Q2, Person4), value = (<Person4, Article3, Journal2>, <*, *, *, "2008">)
Besides, based on the MBN list and the CB(Q), mapper1 also emits to mapper2 the following (key, value) pairs:
key = (Q1, Person4), value = (n2, Article3)
key = (Q3, Person4), value = (n3, Journal2)
For e1 the algorithm emits the following (key, value) pair to mapper2:
key = (Q3, Person4), value = (<Person4, Article1, *>, <*, Person1, *, *>)
Besides, based on the MBN list and the CB(Q), mapper1 also emits to mapper2 the following (key, value) pair:
key = (Q1, Person4), value = (n2, Article1)
2

5.6.3. Phase 2 of the algorithm

Phase 2 of the algorithm is similar to Phase 2 of the eval-STARS algorithm.
mapper2((Qi, e(CB(Q))), values)
// Qi: the ID of a subquery
// values: a set E of the parts (bnv, nbnv) of the embeddings of (Qi, e(CB(Q)))
//         and a set V of pairs (n_k, v), where v is a candidate value for bnv[k]
begin
   foreach embedding e = (bnv, nbnv) in E do
      foreach ground instance bnv' of bnv using the values in V do
         emit([bnv', (Qi, nbnv)]);
end.

Example 22. (Continued from Example 21). This example shows the application of mapper2. The mapper applied for the key (Q1, Person1) gets the value:
(<Person1, *, Journal1>, <Article1, *, Title1, *>)
reducer2(key, values)
// key: a tuple of values for the border nodes of Q
// values: pairs of the form (Qi, partial embeddings of non-border nodes of Qi)
begin
   foreach join obtained by using one embedding for each subquery do
      Emit the result produced by this join;
end.

Example 23. (Continued from Example 22). The reducer with key <Person2, Article2, Journal1> receives the list:
[(Q1, <Article1, *, Title1, *>), (Q2, <*, *, *, "2008">), (Q3, <*, Person3, *, *>)]
As this list contains an embedding for each subquery, we join them and obtain the following embedding of the query Q:
(<Person2, Article2, Journal1>, <Article1, Person3, Title1, "2008">)
This embedding corresponds to the answer:
(?P1, ?A, ?J, ?P2, ?T) = (Person2, Article2, Journal1, Person3, Title1)
Note that the remaining reducers do not return any answer (they don't receive values for at least one subquery). 2
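The last step of Example 23, reading the answer tuple off the total embedding, amounts to a simple lookup once the node-to-variable correspondence is fixed (the `var_of` mapping below is inferred from the example, not stated in this form in the text):

```python
# Read a SPARQL answer off a total embedding, given the node ordering
# of Q and the node-to-variable correspondence inferred from Example 23.
border = ["n1", "n2", "n3"]
non_border = ["n4", "n5", "n6", "n7"]
answer_nodes = ["n1", "n2", "n3", "n5", "n6"]   # ?P1, ?A, ?J, ?P2, ?T

embedding = (("Person2", "Article2", "Journal1"),
             ("Article1", "Person3", "Title1", "2008"))

# Bind every node of Q to its value in the total embedding.
binding = dict(zip(border + non_border, embedding[0] + embedding[1]))
answer = tuple(binding[n] for n in answer_nodes)
print(answer)
# ('Person2', 'Article2', 'Journal1', 'Person3', 'Title1')
```

Nodes n4 and n7 are bound (to Article1 and "2008") but do not appear in the answer tuple of the example.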
Figure 10: Query evaluation in terms of the size of dataset.
Figure 11: Query evaluation in terms of compute nodes size per algorithm.
The evaluation times are summarized in the tables that follow.
Figure 12: Comparison of query evaluation algorithms for a variety of queries.
Figure 14: Query Decomposition Algorithms Evaluation.
is non-redundant.

(Figure: the query Q, with nodes n1-n5 labelled ?P1, ?A, ?T, ?P2 and Journal1, and its subqueries Q1 (nodes n1, n2, n5; labels Journal1, ?P1, ?T, hasAuthor, ?A), Q2 (nodes n2, n3, n4; labels ?A, ?P2) and Q3 (nodes n1, n3; labels Journal1, ?P1, ?P2).)
8 Cores) with 16GB RAM, 60GB HD, Ubuntu 16.04 LTS, 64-bit Operating System. We used Apache Hadoop v3.1 with HDFS (1 NameNode, 1 Secondary NameNode, 10 DataNodes, each one 30GB) and YARN (1 ResourceManager, 10 NodeManagers). The 10 virtual machines were connected through external IP addresses.

To perform the experiments we used four different datasets (D1, D2, D3, D4) in N-Triples format from the Waterloo SPARQL Diversity Test Suite (WatDiv) [17] to evaluate the algorithms proposed in this paper. The number of triples of each dataset, as well as the scale factors used to generate the datasets, are illustrated in Table 1.

Dataset/Query   Scale Factor   Number of triples   Number of files
D1              25             2,731,510           7
D2              50             5,486,199           13
D3              100            10,979,566          25
D4              200            21,961,070          49

Table 1: Description of the Datasets
Table 2: Description of the queries evaluated over datasets of different sizes

        QEJPE-algorithm       eval-STARS            QE-with-Redundancy
        L     S     F         L     S     F         L     S     F
D2      1079  1106  1109      1086  931   893       796   469   792
D3      1140  1207  1606      1134  1135  1156      872   544   856
D4      1265  1358  2874      1205  1270  1288      1103  749   1102

Table 3: Query evaluation in datasets of different sizes (in seconds)

Table 4: Query evaluation in D4 dataset for different compute nodes size (in seconds)
Table 5: Description of WatDiv queries

Query   QEJPE (METIS)   QEJPE (Random)   eval-STARS (METIS)   eval-STARS (Random)   QE-with-Redundancy
L2      21:06           21:05            21:11                20:05                 18:23
L4      19:56           21:06            21:09                21:03                 12:43
S3      21:47           34:59            18:51                21:18                 12:37
S5      20:43           22:38            18:14                21:10                 12:29
F1      23:29           47:54            22:22                21:28                 18:22
F4      -               -                21:22                21:38                 18:23
C3      -               -                30:09                25:30                 17:48

Table 6: Execution time per query

Query Type            Data Partitioning   Linear   Star     Snowflake   Complex
QEJPE-algorithm       METIS               1231,0   1275,0   -           -
                      Random              1265,5   1728,5   -           -
eval-STARS            METIS               1270,0   1112,5   1312,0      1809,0
                      Random              1234,0   1274,0   1293,0      1530,0
QE-with-Redundancy    s-decomposition     933,0    753,0    1102,5      1068,0

Table 7: Average execution time per query type (seconds)
In the algorithms presented in this section we represent the MBN set as a list.
Notice that we can check if an embedding is total or partial by comparing it with the corresponding query prototype.
Notice that the joined embeddings are, by construction, compatible.
RDFLib documentation: https://rdflib.readthedocs.io/
Note that the scalability of each algorithm is limited by the capacity of the cluster resources (i.e., memory, disk space).
Applying mapper1 on (Q2, G1) results in emission (see Part 1) of the following (key, value) pair to reducer1:
key = (Q2, Article1), value = (n1, Person4) (embedding of t2)
Besides, the following (key, value) pairs are emitted directly to mapper2 (the mapper of Phase 2) (see Part 2):
key = Q2, value = (<Person4,Article3,Journal2>, <*,*,*,"2008">)
key = Q1, value = (n2, Article3)
key = Q3, value = (n3, Journal2)
Notice that the last two emissions are conducted by the MBN list which, as we have seen in Example 15, is MBN = [(n2, Q1), (n3, Q3)].
Applying mapper1 on (Q3, G1) results in emission (see Part 1) of the following (key, value) pairs to reducer1:
key = (Q3, Person4), value = (n2, Article1) (embedding of t4)
key = (Q3, Person4), value = (n2, Article3) (embedding of t4)
No (key, value) pairs are emitted to Phase 2.
Applying mapper1 on (Q1, G2) results in emission (see Part 1) of the following (key, value) pairs to reducer1:
No (key, value) pairs are emitted to Phase 2.
Applying mapper1 on (Q2, G2) results in emission (see Part 1) of the following (key, value) pairs to reducer1:
No (key, value) pairs are emitted to Phase 2.
Applying mapper1 on (Q3, G2) results in no emission of any (key, value) pair to reducer1 (see Part 1). However, the following (key, value) pairs are emitted (see Part 2) to mapper2:
key = Q3, value = (<Person4, Article1, *>, <*, Person1, *, *>)
key = Q3, value = (<Person2, Article2, *>, <*, Person3, *, *>)
key = Q1, value = (n2, Article1)
key = Q1, value = (n2, Article2)
Applying mapper1 on (Q1, G3) results in emission of the following (key, value) pair to reducer1:
No (key, value) pairs are emitted to Phase 2 (see Part 2).
Applying mapper1 on (Q2, G3) results in emission of the following (key, value) pair to reducer1:
No (key, value) pairs are emitted to Phase 2.
Applying mapper1 on (Q3, G3) results in no emission of any (key, value) pair.
The mapper that works for the subquery Q2 (i.e.
the key is Q2) receives a list of values containing the following embeddings of Q2 in G:
(<Person4,Article3,Journal2>, <*,*,*,"2008">)
(<Person2,Article2,Journal1>, <*,*,*,"2008">)
(<Person3,Article2,Journal1>, <*,*,*,"2008">)
Notice that Q2 has no missing border nodes. The mapper emits the following (key, value) pairs to the reducers of Phase 2:
key = (<Person4,Article3,Journal2>), value = (Q2, <*,*,*,"2008">)
key = (<Person2,Article2,Journal1>), value = (Q2, <*,*,*,"2008">)
key = (<Person3,Article2,Journal1>), value = (Q2, <*,*,*,"2008">)
The mapper that works for the subquery Q3 (i.e. the key is Q3) receives a list of values that contains the embeddings of Q3 in G:
The mapper applied for the key (Q2, Person2) gets the value:
The mapper applied for the key (Q2, Person3) gets the value:
The mapper applied for the key (Q3, Person1) gets the value: (n3, Journal1). As E = ∅, this mapper emits nothing to reducer2.
The mapper applied for the key (Q3, Person2) gets the value:
The mapper applied for the key (Q3, Person3) gets the value: (n3, Journal1). As E = ∅, this mapper emits nothing to reducer2.
The mapper applied for the key (Q3, Person4) gets the values:
Concerning the reducers of Phase 2, each reducer gets as input embeddings for each subquery that are compatible, since the key for the reducer is a tuple of values for all border nodes of the query Q. The embeddings (one for each subquery Q1, ..., Qn) are joined to construct the final answers of Q.

Query   Results   min-res              max-degree           max-degree-reshaping
                  subqueries   time    subqueries   time    subqueries   time
Q1      11        5            1784    3            1103    3            1103
Q2      33        5            1544    3            1129    3            1129
Q3      1         3            1144    3            1144    3            1144
Q4      1580      5            1422    3            1430    3            1430
Q5      5808      6            1436    3            1431    3            1429
Q6      12705     6            1428    3            1432    3            1425
Q7      438976    7            1132    3            1305    3            1331

The decompositions of max-degree and max-degree-with-reshaping perform better than the ones given by min-res. Queries Q4-Q6 perform similarly for all three algorithms.
This can be explained by the fact that queries Q1 and Q2 give decompositions with more subqueries under min-res than the decompositions given by the two other algorithms, and that their results are few.
References

[1] Amazon DynamoDB. https://aws.amazon.com/dynamodb/.
[2] Apache Accumulo. https://accumulo.apache.org/.
[3] Apache Cassandra. http://cassandra.apache.org/.
[4] Apache Flink. https://flink.apache.org/.
[5] Apache Hadoop. https://hadoop.apache.org/.
[6] Apache HBase. https://hbase.apache.org/.
[7] Apache Impala. https://impala.apache.org/.
[8] Apache Spark. https://spark.apache.org/.
[9] MongoDB, NoSQL Document Database. https://www.mongodb.com/.
[10] Daniel J. Abadi, Adam Marcus, Samuel R. Madden, and Kate Hollenbach. SW-Store: a vertically partitioned DBMS for semantic web data management. The VLDB Journal, 18(2):385-406, 2009.
[11] Ibrahim Abdelaziz, Razen Harbi, Zuhair Khayyat, and Panos Kalnis. A survey and experimental comparison of distributed SPARQL engines for very large RDF data. Proceedings of the VLDB Endowment, 10(13):2049-2060, 2017.
[12] F. N. Afrati, D. Fotakis, and J. D. Ullman. Enumerating subgraph instances using Map-Reduce. In 2013 IEEE 29th International Conference on Data Engineering (ICDE), pages 62-73, April 2013.
[13] F. N. Afrati and J. D. Ullman. Optimizing multiway joins in a Map-Reduce environment. IEEE Transactions on Knowledge and Data Engineering, 23(9):1282-1298, September 2011.
[14] Foto N. Afrati and Rada Chirkova. Answering Queries Using Views, Second Edition. Synthesis Lectures on Data Management. Morgan & Claypool Publishers, 2019.
[15] Foto N. Afrati and Jeffrey D. Ullman. Optimizing joins in a Map-Reduce environment. In Proceedings of the 13th International Conference on Extending Database Technology, EDBT '10, pages 99-110, New York, NY, USA, 2010. ACM.
[16] G. Agathangelos, G. Troullinou, H. Kondylakis, K. Stefanidis, and D. Plexousakis. RDF query answering using Apache Spark: Review and assessment. In 34th IEEE International Conference on Data Engineering, ICDE Workshops, pages 54-59. IEEE Computer Society, 2018.
[17] Güneş Aluç, Olaf Hartig, M. Tamer Özsu, and Khuzaima Daudjee. Diversified stress testing of RDF data management systems. In Peter Mika, Tania Tudorache, Abraham Bernstein, Chris Welty, Craig Knoblock, Denny Vrandečić, Paul Groth, Natasha Noy, Krzysztof Janowicz, and Carole Goble, editors, The Semantic Web - ISWC 2014, pages 197-212, Cham, 2014. Springer International Publishing.
[18] Andrés Aranda-Andújar, Francesca Bugiotti, Jesús Camacho-Rodríguez, Dario Colazzo, François Goasdoué, Zoi Kaoudi, and Ioana Manolescu. AMADA: web data repositories in the Amazon cloud. In 21st ACM International Conference on Information and Knowledge Management, CIKM'12, Maui, HI, USA, October 29 - November 02, 2012, pages 2749-2751, 2012.
[19] Oscar Ceballos, Carlos Alberto Ramírez Restrepo, María Constanza Pabón, Andres M. Castillo, and Oscar Corcho. SPARQL2Flink: Evaluation of SPARQL queries on Apache Flink. Applied Sciences, 11(15), 2021.
[20] Ashok K. Chandra and Philip M. Merlin. Optimal implementation of conjunctive queries in relational data bases. In Proceedings of the Ninth Annual ACM Symposium on Theory of Computing, pages 77-90, 1977.
[21] Tanvi Chawla, Girdhari Singh, Emmanuel S. Pilli, and M. C. Govil. Storage, partitioning, indexing and retrieval in big RDF frameworks: A survey. Computer Science Review, 38:100309, 2020.
[22] Artem Chebotko, Shiyong Lu, and Farshad Fotouhi. Semantics preserving SPARQL-to-SQL translation. Data & Knowledge Engineering, 68(10):973-1000, 2009.
[23] O. Curé, H. Naacke, M. Baazizi, and B. Amann. HAQWA: a hash-based and query workload aware distributed RDF store. In Proceedings of the ISWC 2015 Posters & Demonstrations Track, co-located with the 14th International Semantic Web Conference (ISWC-2015), Bethlehem, PA, USA, October 11, 2015, volume 1486 of CEUR Workshop Proceedings, 2015.
[24] Ali Davoudian, Liu Chen, and Mengchi Liu. A survey on NoSQL stores. ACM Computing Surveys, 51(2):40:1-40:43, 2018.
[25] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107-113, 2008.
[26] Jin-Hang Du, Hao-Fen Wang, Yuan Ni, and Yong Yu. HadoopRDF: A Scalable Semantic Data Analytical Engine, pages 633-641. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
[27] David C. Faye, Olivier Curé, and Guillaume Blin. A survey of RDF storage approaches. Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées, 15:11-35, 2012.
[28] M. Gergatsoulis, C. Nomikos, E. Kalogeros, and M. Damigos. An algorithm for querying linked data using Map-Reduce. In A. Hameurlain, J. W. Rahayu, and D. Taniar, editors, Data Management in Cloud, Grid and P2P Systems - 6th International Conference, Globe 2013, Prague, Czech Republic, August 28-29, 2013, Proceedings, volume 8059 of Lecture Notes in Computer Science, pages 51-62. Springer, 2013.
[29] François Goasdoué, Zoi Kaoudi, Ioana Manolescu, Jorge-Arnulfo Quiané-Ruiz, and Stamatis Zampetakis. CliqueSquare: Flat Plans for Massively Parallel RDF Queries. In International Conference on Data Engineering, Seoul, South Korea, 2015.
[30] G. Gombos, G. Rácz, and A. Kiss. Spar(k)ql: SPARQL evaluation method on Spark GraphX. In M. Younas, I. Awan, and J. E. Haddad, editors, 4th IEEE International Conference on Future Internet of Things and Cloud Workshops, FiCloud Workshops 2016, Vienna, Austria, August 22-24, 2016, pages 188-193. IEEE Computer Society, 2016.
[31] D. Graux, L. Jachiet, P. Genevès, and N. Layaïda. SPARQLGX: efficient distributed evaluation of SPARQL with Apache Spark. In P. T. Groth et al., editors, The Semantic Web - ISWC 2016 - 15th International Semantic Web Conference, Kobe, Japan, October 17-21, 2016, Proceedings, Part II, volume 9982 of Lecture Notes in Computer Science, pages 80-87. Springer, 2016.
[32] Claudio Gutierrez, Carlos A. Hurtado, Alberto O. Mendelzon, and Jorge Pérez. Foundations of semantic web databases. Journal of Computer and System Sciences, 77(3):520-541, 2011.
[33] Mahmudul Hassan and Srividya K. Bansal. Semantic data querying over NoSQL databases with Apache Spark. In 2018 IEEE International Conference on Information Reuse and Integration, IRI 2018, Salt Lake City, UT, USA, July 6-9, 2018, pages 364-371. IEEE, 2018.
[34] J. Huang, D. J. Abadi, and K. Ren. Scalable SPARQL querying of large RDF graphs. Proceedings of the VLDB Endowment, 4(11):1123-1134, 2011.
[35] Mohammad Husain, James McGlothlin, Mohammad M. Masud, Latifur Khan, and Bhavani M. Thuraisingham. Heuristics-based query processing for large RDF graphs using cloud computing. IEEE Transactions on Knowledge and Data Engineering, 23(9):1312-1327, 2011.
[36] Mohammad Farhan Husain, Latifur Khan, Murat Kantarcioglu, and Bhavani M. Thuraisingham. Data intensive query processing for large RDF graphs using cloud computing tools. In IEEE International Conference on Cloud Computing, CLOUD 2010, Miami, FL, USA, 5-10 July, 2010, pages 1-10. IEEE Computer Society, 2010.
[37] E. Kalogeros, M. Gergatsoulis, and M. Damigos. Redundancy in linked data partitioning for efficient query evaluation. In I. Awan, M. Younas, and M. Mecella, editors, 3rd International Conference on Future Internet of Things and Cloud, FiCloud 2015, Rome, Italy, August 24-26, 2015, pages 497-504. IEEE Computer Society, 2015.
[38] Zoi Kaoudi and Ioana Manolescu. RDF in the clouds: A survey. The VLDB Journal, 24(1):67-91, 2015.
[39] George Karypis and Vipin Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on Scientific Computing, 20(1):359-392, 1998.
[40] B. Kassaie. SPARQL over GraphX. CoRR, abs/1701.03091, 2017.
[41] HyeongSik Kim, Padmashree Ravindra, and Kemafor Anyanwu. From SPARQL to MapReduce: The journey using a nested triplegroup algebra. Proceedings of the VLDB Endowment, 4(12):1426-1429, 2011.
[42] G. Ladwig and A. Harth. CumulusRDF: linked data management on nested key-value stores. In The 7th International Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2011), volume 30, 2011.
[43] Kisung Lee and Ling Liu. Scaling queries over big RDF graphs with semantic hash partitioning. Proceedings of the VLDB Endowment, 6(14):1894-1905, September 2013.
[44] R. Mutharaju, S. Sakr, A. Sala, and P. Hitzler. D-SPARQ: distributed, scalable and efficient RDF query engine. In E. Blomqvist and T. Groza, editors, Proceedings of the ISWC 2013 Posters & Demonstrations Track, Sydney, Australia, October 23, 2013, volume 1035 of CEUR Workshop Proceedings, pages 261-264, 2013.
[45] Jaeseok Myung, Jongheum Yeon, and Sang-goo Lee. SPARQL basic graph pattern processing with iterative MapReduce. In Proceedings of the 2010 Workshop on Massive Data Analytics on the Cloud, MDAC '10, pages 6:1-6:6, New York, NY, USA, 2010. ACM.
SPARQL graph pattern processing with Apache Spark. H Naacke, B Amann, O Curé, Proceedings of the Fifth International Workshop on Graph Datamanagement Experiences & Systems, SIGMOD/PODS 2017. P. A. Boncz and Josep-Lluís Larriba-Peythe Fifth International Workshop on Graph Datamanagement Experiences & Systems, SIGMOD/PODS 2017Chicago, IL, USAACM1H. Naacke, B. Amann, and O. Curé. SPARQL graph pattern process- ing with Apache Spark. In P. A. Boncz and Josep-Lluís Larriba-Pey, editors, Proceedings of the Fifth International Workshop on Graph Data- management Experiences & Systems, SIGMOD/PODS 2017, Chicago, IL, USA, May 14 -19, 2017, pages 1:1-1:7. ACM, 2017.
RDF-3X: a RISC-style engine for RDF. T Neumann, G Weikum, Proceedings of the VLDB Endowment. the VLDB Endowment1T. Neumann and G. Weikum. RDF-3X: a RISC-style engine for RDF. Proceedings of the VLDB Endowment, 1(1):647-659, 2008.
Scalable join processing on very large RDF graphs. Thomas Neumann, Gerhard Weikum, Proceedings of the 2009 ACM SIGMOD International Conference on Management of data. the 2009 ACM SIGMOD International Conference on Management of dataThomas Neumann and Gerhard Weikum. Scalable join processing on very large RDF graphs. In Proceedings of the 2009 ACM SIGMOD International Conference on Management of data, pages 627-640, 2009.
The RDF-3X engine for scalable management of RDF data. Thomas Neumann, Gerhard Weikum, The VLDB Journal. 191Thomas Neumann and Gerhard Weikum. The RDF-3X engine for scalable management of RDF data. The VLDB Journal, 19(1):91-113, February 2010.
A Map-Reduce algorithm for querying linked data based on query decomposition into stars. C Nomikos, M Gergatsoulis, E Kalogeros, M Damigos, Proceedings of the Workshops of the EDBT/ICDT 2014 Joint Conference (EDBT/ICDT 2014). K. Selçuk Candan et al.the Workshops of the EDBT/ICDT 2014 Joint Conference (EDBT/ICDT 2014)Athens, Greece1133C. Nomikos, M. Gergatsoulis, E. Kalogeros, and M. Damigos. A Map- Reduce algorithm for querying linked data based on query decomposition into stars. In K. Selçuk Candan et al., editors, Proceedings of the Workshops of the EDBT/ICDT 2014 Joint Conference (EDBT/ICDT 2014), Athens, Greece, March 28, 2014, volume 1133 of CEUR Workshop Proceedings, pages 224-231, 2014.
Pig Latin: a not-so-foreign language for data processing. Christopher Olston, Benjamin Reed, Utkarsh Srivastava, Ravi Kumar, Andrew Tomkins, Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD '08. the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD '08New York, NY, USAACMChristopher Olston, Benjamin Reed, Utkarsh Srivastava, Ravi Kumar, and Andrew Tomkins. Pig Latin: a not-so-foreign language for data processing. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD '08, pages 1099-1110, New York, NY, USA, 2008. ACM.
A survey of RDF data management systems. M Tamerözsu, Frontiers of Computer Science. 103M TamerÖzsu. A survey of RDF data management systems. Frontiers of Computer Science, 10(3):418-432, 2016.
H2RDF+: High-performance distributed joins over large-scale RDF graphs. N Papailiou, I Konstantinou, D Tsoumakos, P Karras, N Koziris, 2013 IEEE International Conference on Big Data. N. Papailiou, I. Konstantinou, D. Tsoumakos, P. Karras, and N. Koziris. H2RDF+: High-performance distributed joins over large-scale RDF graphs. In 2013 IEEE International Conference on Big Data, pages 255- 263, Oct 2013.
H2RDF: Adaptive query processing on RDF data in the cloud. Nikolaos Papailiou, Ioannis Konstantinou, Dimitrios Tsoumakos, Nectarios Koziris, WWW '12 Companion: Proceedings of the 21st International Conference on World Wide Web, WWW '12 Companion. New York, NY, USAACMNikolaos Papailiou, Ioannis Konstantinou, Dimitrios Tsoumakos, and Nec- tarios Koziris. H2RDF: Adaptive query processing on RDF data in the cloud. In WWW '12 Companion: Proceedings of the 21st International Conference on World Wide Web, WWW '12 Companion, pages 397-400, New York, NY, USA, 2012. ACM.
Semantics and Complexity of SPARQL. Jorge Pérez, Marcelo Arenas, Claudio Gutierrez, Series Title: Lecture Notes in Computer Science. David et. al. HutchisonBerlin Heidelberg; Berlin, HeidelbergSpringer4273Jorge Pérez, Marcelo Arenas, and Claudio Gutierrez. Semantics and Com- plexity of SPARQL. In David et. al. Hutchison, editor, The Semantic Web - ISWC 2006, volume 4273, pages 30-43. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006. Series Title: Lecture Notes in Computer Science.
What are real SPARQL queries like?. François Picalausa, Stijn Vansummeren, Proceedings of the International Workshop on Semantic Web Information Management. the International Workshop on Semantic Web Information ManagementFrançois Picalausa and Stijn Vansummeren. What are real SPARQL queries like? In Proceedings of the International Workshop on Semantic Web Information Management, pages 1-6, 2011.
Rya: a scalable RDF triple store for the clouds. R Punnoose, A Crainiceanu, D Rapp, Cloud-I '121st International Workshop on Cloud Intelligence. Istanbul, TurkeyACMcolocated with VLDB 2012R. Punnoose, A. Crainiceanu, and D. Rapp. Rya: a scalable RDF triple store for the clouds. In 1st International Workshop on Cloud Intelligence (colocated with VLDB 2012), Cloud-I '12, Istanbul, Turkey, August 31, 2012. ACM, 2012.
Clause-iteration with MapReduce to scalably query datagraphs in the SHARD graph-store. Kurt Rohloff, Richard E Schantz, Proceedings of the Fourth International Workshop on Data-intensive Distributed Computing, DIDC '11. the Fourth International Workshop on Data-intensive Distributed Computing, DIDC '11New York, NY, USAACMKurt Rohloff and Richard E. Schantz. Clause-iteration with MapReduce to scalably query datagraphs in the SHARD graph-store. In Proceedings of the Fourth International Workshop on Data-intensive Distributed Computing, DIDC '11, pages 35-44, New York, NY, USA, 2011. ACM.
S2X: graph-parallel querying of RDF with GraphX. A Schätzle, M Przyjaciel-Zablocki, T Berberich, G Lausen, Biomedical Data Management and Graph Online Querying -VLDB 2015 Workshops, Big-O(Q) and DMAH. F. Wang et al., editorsWaikoloa, HI, USASpringer9579Revised Selected PapersA. Schätzle, M. Przyjaciel-Zablocki, T. Berberich, and G. Lausen. S2X: graph-parallel querying of RDF with GraphX. In F. Wang et al., edi- tors, Biomedical Data Management and Graph Online Querying -VLDB 2015 Workshops, Big-O(Q) and DMAH, Waikoloa, HI, USA, August 31 - September 4, 2015, Revised Selected Papers, volume 9579 of Lecture Notes in Computer Science, pages 155-168. Springer, 2016.
Cascading map-side joins over HBase for scalable join processing. Alexander Schätzle, Martin Przyjaciel-Zablocki, Christopher Dorner, Thomas Hornung, Georg Lausen, Proceedings of the Joint Workshop on Scalable and High-Performance Semantic Web Systems. Achille Fokoue, Thorsten Liebig, Eric L. Goodman, Jesse Weaver, Jacopo Urbani, and David Mizellthe Joint Workshop on Scalable and High-Performance Semantic Web SystemsBoston, USA943CEUR-WS.orgAlexander Schätzle, Martin Przyjaciel-Zablocki, Christopher Dorner, Thomas Hornung, and Georg Lausen. Cascading map-side joins over HBase for scalable join processing. In Achille Fokoue, Thorsten Liebig, Eric L. Goodman, Jesse Weaver, Jacopo Urbani, and David Mizell, editors, Pro- ceedings of the Joint Workshop on Scalable and High-Performance Seman- tic Web Systems, Boston, USA, November 11, 2012, volume 943 of CEUR Workshop Proceedings, pages 59-74. CEUR-WS.org, 2012.
PigSPARQL: Mapping SPARQL to Pig Latin. Alexander Schätzle, Martin Przyjaciel-Zablocki, Georg Lausen, Proceedings of the International Workshop on Semantic Web Information Management, SWIM '11. the International Workshop on Semantic Web Information Management, SWIM '11New York, NY, USAACM4Alexander Schätzle, Martin Przyjaciel-Zablocki, and Georg Lausen. PigSPARQL: Mapping SPARQL to Pig Latin. In Proceedings of the In- ternational Workshop on Semantic Web Information Management, SWIM '11, pages 4:1-4:8, New York, NY, USA, 2011. ACM.
Sempala: Interactive SPARQL query processing on Hadoop. Alexander Schätzle, Martin Przyjaciel-Zablocki, Antony Neu, Georg Lausen ; Tania Tudorache, Abraham Bernstein, Chris Welty, Craig A Knoblock, Denny Vrandecic, Paul Groth, Natasha F Noy, Krzysztof Janowicz, Carole A Goble, The Semantic Web -ISWC 2014 -13th International Semantic Web Conference. Riva del Garda, Italy, OcSpringer8796Proceedings, Part IAlexander Schätzle, Martin Przyjaciel-Zablocki, Antony Neu, and Georg Lausen. Sempala: Interactive SPARQL query processing on Hadoop. In Peter Mika, Tania Tudorache, Abraham Bernstein, Chris Welty, Craig A. Knoblock, Denny Vrandecic, Paul Groth, Natasha F. Noy, Krzysztof Janowicz, and Carole A. Goble, editors, The Semantic Web -ISWC 2014 -13th International Semantic Web Conference, Riva del Garda, Italy, Oc- tober 19-23, 2014. Proceedings, Part I, volume 8796 of Lecture Notes in Computer Science, pages 164-179. Springer, 2014.
S2RDF: RDF Querying with SPARQL on Spark. Alexander Schätzle, Martin Przyjaciel-Zablocki, Simon Skilevic, Georg Lausen, Proceedings of the VLDB Endowment. the VLDB Endowment9Alexander Schätzle, Martin Przyjaciel-Zablocki, Simon Skilevic, and Georg Lausen. S2RDF: RDF Querying with SPARQL on Spark. Proceedings of the VLDB Endowment, 9(10):804-815, 2016.
Andy Seaborne and Eric Prud'hommeaux. SPARQL Query Language for RDF. W3C Recommendation. Andy Seaborne and Eric Prud'hommeaux. SPARQL Query Language for RDF. W3C Recommendation, January 2008.
Ultrawrap: SPARQL execution on relational data. F Juan, Sequeda, P Daniel, Miranker, Journal of Web Semantics. 22Juan F Sequeda and Daniel P Miranker. Ultrawrap: SPARQL execution on relational data. Journal of Web Semantics, 22:19-39, 2013.
Bringing relational databases into the semantic web: A survey. Dimitrios-Emmanuel Spanos, Periklis Stavrou, Nikolas Mitrou, Semantic Web. 3Dimitrios-Emmanuel Spanos, Periklis Stavrou, and Nikolas Mitrou. Bring- ing relational databases into the semantic web: A survey. Semantic Web, 3(2):169-209, 2012.
Sparklify: A scalable software component for efficient evaluation of SPARQL queries over distributed RDF datasets. Claus Stadler, Gezim Sejdiu, Damien Graux, Jens Lehmann, The Semantic Web -ISWC 2019 -18th International Semantic Web Conference. Chiara Ghidini, Olaf Hartig, Maria Maleshkova, Vojtech Svátek, Isabel F. Cruz, Aidan Hogan, Jie Song, Maxime Lefrançois, and Fabien GandonAuckland, New ZealandSpringer11779Proceedings, Part IIClaus Stadler, Gezim Sejdiu, Damien Graux, and Jens Lehmann. Spark- lify: A scalable software component for efficient evaluation of SPARQL queries over distributed RDF datasets. In Chiara Ghidini, Olaf Hartig, Maria Maleshkova, Vojtech Svátek, Isabel F. Cruz, Aidan Hogan, Jie Song, Maxime Lefrançois, and Fabien Gandon, editors, The Semantic Web - ISWC 2019 -18th International Semantic Web Conference, Auckland, New Zealand, October 26-30, 2019, Proceedings, Part II, volume 11779 of Lec- ture Notes in Computer Science, pages 293-308. Springer, 2019.
Efficiently joining group patterns in sparql queries. Maria-Esther Vidal, Edna Ruckhaus, Tomas Lampo, Amadís Martínez, Javier Sierra, Axel Polleres, Extended Semantic Web Conference. SpringerMaria-Esther Vidal, Edna Ruckhaus, Tomas Lampo, Amadís Martínez, Javier Sierra, and Axel Polleres. Efficiently joining group patterns in sparql queries. In Extended Semantic Web Conference, pages 228-242. Springer, 2010.
Efficient subgraph matching on large RDF graphs using MapReduce. Xin Wang, Lele Chai, Qiang Xu, Yajun Yang, Jianxin Li, Junhu Wang, Yunpeng Chai, Data Science and Engineering. 41Xin Wang, Lele Chai, Qiang Xu, Yajun Yang, Jianxin Li, Junhu Wang, and Yunpeng Chai. Efficient subgraph matching on large RDF graphs using MapReduce. Data Science and Engineering, 4(1):24-43, 2019.
Hexastore: Sextuple indexing for semantic web data management. Cathrin Weiss, Panagiotis Karras, Abraham Bernstein, Proceedings of the VLDB Endowment. the VLDB Endowment1Cathrin Weiss, Panagiotis Karras, and Abraham Bernstein. Hexastore: Sextuple indexing for semantic web data management. Proceedings of the VLDB Endowment, 1(1):1008-1019, 2008.
Big data: From beginning to future. I Yaqoob, I A T Hashem, A Gani, S Mokhtar, E Ahmed, N B Anuar, A V Vasilakos, International Journal of Information Management. 366I. Yaqoob, I. A. T. Hashem, A. Gani, S. Mokhtar, E. Ahmed, N. B. Anuar, and A. V. Vasilakos. Big data: From beginning to future. International Journal of Information Management, 36(6):1231-1247, 2016.
Spark: Cluster computing with working sets. M Zaharia, M Chowdhury, M J Franklin, S Shenker, I Stoica, 2nd USENIX Workshop on Hot Topics in Cloud Computing, Hot-Cloud'10. E. M. Nahum and D. XuBoston, MA, USAM. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica. Spark: Cluster computing with working sets. In E. M. Nahum and D. Xu, editors, 2nd USENIX Workshop on Hot Topics in Cloud Computing, Hot- Cloud'10, Boston, MA, USA, June 22, 2010, 2010.
| [] |
DOI: 10.3847/1538-4357/acc1e6
arXiv: 2209.06247 (https://export.arxiv.org/pdf/2209.06247v2.pdf)
CLEAR: High-Ionization [Ne V] λ3426 Emission-line Galaxies at 1.4 < z < 2.3

Nikko J. Cleri, Guang Yang, Casey Papovich, Jonathan R. Trump, Bren E. Backhaus, Vicente Estrada-Carpenter, Steven L. Finkelstein, Mauro Giavalisco, Taylor A. Hutchison, Zhiyuan Ji, Intae Jung, Jasleen Matharu, Ivelina Momcheva, Grace M. Olivier, Raymond Simons, Benjamin Weiner

Affiliations: Department of Physics and Astronomy and the George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, TX, USA; Department of Physics, University of Connecticut, Storrs, CT, USA; Kapteyn Astronomical Institute, University of Groningen, and SRON Netherlands Institute for Space Research, Groningen, The Netherlands; Department of Astronomy & Physics, Saint Mary's University, Halifax, NS, Canada; Department of Astronomy, The University of Texas, Austin, TX, USA; Department of Astronomy, University of Massachusetts Amherst, Amherst, MA, USA; Astrophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, MD, USA; Department of Physics, The Catholic University of America, Washington, DC, USA; Center for Research and Exploration in Space Science and Technology, NASA/GSFC, Greenbelt, MD, USA; Cosmic Dawn Center, Niels Bohr Institute, University of Copenhagen, Denmark; Max-Planck-Institut für Astronomie, Heidelberg, Germany; Space Telescope Science Institute, Baltimore, MD, USA; MMT/Steward Observatory, University of Arizona, Tucson, AZ, USA
DRAFT VERSION APRIL 25, 2023. Typeset using LaTeX preprint style in AASTeX63.

ABSTRACT

We analyze a sample of 25 [Ne V] λ3426 emission-line galaxies at 1.4 < z < 2.3 using Hubble Space Telescope/Wide Field Camera 3 G102 and G141 grism observations from the CANDELS Lyman-α Emission at Reionization (CLEAR) survey. [Ne V] emission probes extremely energetic photoionization (97.11-126.21 eV) and is often attributed to energetic radiation from active galactic nuclei (AGN), shocks from supernovae, or an otherwise very hard ionizing spectrum from the stellar continuum. In this work, we use [Ne V] in conjunction with other rest-frame UV/optical emission lines ([O II] λλ3726,3729, [Ne III] λ3869, Hβ, [O III] λλ4959,5007, Hα+[N II] λλ6548,6583, [S II] λλ6716,6731), deep (2-7 Ms) X-ray observations (from Chandra), and mid-infrared imaging (from Spitzer) to study the origin of this emission and to place constraints on the nature of the ionizing engine. The majority of the [Ne V]-detected galaxies have properties consistent with ionization from AGN. However, for our [Ne V]-selected sample, the X-ray luminosities are consistent with local (z ≲ 0.1) X-ray-selected Seyferts, but the [Ne V] luminosities are more consistent with those from z ∼ 1 X-ray-selected QSOs. The excess [Ne V] emission requires either reduced hard X-rays or a ∼0.1 keV excess. We discuss possible origins of the apparent [Ne V] excess, which could be related to the "soft (X-ray) excess" observed in some QSOs and Seyferts, and/or be a consequence of a complex/anisotropic geometry for the narrow-line region, combined with absorption from a warm, relativistic wind ejected from the accretion disk. We also consider implications for future studies of extreme high-ionization systems in the epoch of reionization (z ≳ 6) with the James Webb Space Telescope.
INTRODUCTION
Studies from modern observatories like the Hubble Space Telescope (HST) have traced the cosmic star-formation density out to the reionization of the Universe. These rapidly star-forming galaxies exhibit prominent high-ionization nebular emission lines in their rest-frame ultraviolet (UV) and optical spectra, suggesting that these reionization-era galaxies are characterized by extreme radiation fields (e.g., Trump et al. 2023; Katz et al. 2023; Brinchmann 2022).
The underlying physics of these high-ionization systems in the epoch of reionization (EoR, z > 6) remains poorly understood, and much of our information about the EoR is extrapolated from local metal-poor dwarf galaxies (e.g., Olivier et al. 2022; Berg et al. 2019, 2021). One means to test the ionizing sources in galaxies across cosmic time is through ratios of strong optical and near-UV emission lines, which serve as useful diagnostics of conditions in the interstellar medium (ISM) (e.g., Kewley et al. 2019b). Much of our knowledge of the physics of higher-redshift star formation is derived from the bright Balmer lines of hydrogen (Hα and Hβ), along with lines of oxygen ([O II] λλ3726,3729 and [O III] λλ4959,5007), sulfur ([S II] λλ6717,6731), and nitrogen ([N II] λ6584). This suite of near-UV/optical emission lines used for spectral classifications of galaxies is well matched to observation by HST around the peak of cosmic star formation at z ∼ 2, where these lines are redshifted into the near-IR.
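As an illustration of how such line ratios are used as ISM diagnostics, the sketch below classifies a galaxy on the standard [N II]-BPT diagram built from two of the ratios named above, [N II] λ6584/Hα and [O III] λ5007/Hβ. The demarcation curves are not given in this text; they are the widely used Kauffmann et al. (2003) empirical and Kewley et al. (2001) theoretical lines, quoted here from the literature.

```python
import math


def bpt_class(n2_ha: float, o3_hb: float) -> str:
    """Classify a galaxy on the [N II]-BPT diagram.

    n2_ha : flux ratio [N II] lambda6584 / H-alpha
    o3_hb : flux ratio [O III] lambda5007 / H-beta
    """
    x = math.log10(n2_ha)
    y = math.log10(o3_hb)

    # Kauffmann et al. (2003) empirical star-forming boundary (valid for x < 0.05)
    kauffmann = 0.61 / (x - 0.05) + 1.3 if x < 0.05 else -math.inf
    # Kewley et al. (2001) theoretical "maximum starburst" line (valid for x < 0.47)
    kewley = 0.61 / (x - 0.47) + 1.19 if x < 0.47 else -math.inf

    if x < 0.05 and y < kauffmann:
        return "star-forming"
    if x < 0.47 and y < kewley:
        return "composite"  # between the two demarcation curves
    return "AGN"


# A strongly star-forming ratio pair vs. an AGN-like pair:
print(bpt_class(0.1, 1.0))   # -> star-forming
print(bpt_class(1.0, 10.0))  # -> AGN
```

Note that this two-ratio classification is exactly the kind of diagnostic that high-ionization lines such as [Ne V] are intended to supplement at higher redshift.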
Previous work using this suite of emission features in the optical and near-UV has shown that, for galaxies around cosmic noon (z ∼ 2), oxygen abundances are lower, ionization parameters are higher, and ionizing radiation fields are harder at fixed stellar mass than in z ∼ 0 galaxies (e.g., Erb et al. 2006; Nakajima & Ouchi 2014; Steidel et al. 2014; Tang et al. 2021).
Other bright near-UV/optical emission lines remain largely unexplored. The high-ionization lines of neon have received only limited study, mostly focused on the [Ne III] λ3869 (40.96-63.45 eV) line (e.g., Masters et al. 2014; Levesque & Richardson 2014; Zeimann et al. 2015; Backhaus et al. 2022a). In this work, we study an even higher-energy near-UV emission feature: quadruply ionized neon ([Ne V] λλ3346,3426). The energy required to produce [Ne V] photons (97.11-126.21 eV) is extremely high compared to other bright UV/optical emission lines; the minimum bound is nearly triple that of [O III] (35.12 eV) and nearly double that of ionized helium, He II (54.42 eV), which marks the boundary between "high ionization" and "very high ionization" in the four-zone ionization model of Berg et al. (2021).
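The energy comparison above can be verified with a few lines of arithmetic using only the values quoted in the text; the conversion constant hc ≈ 1239.84 eV nm is standard, and the resulting wavelengths show that the photons required to produce [Ne V] fall in the extreme UV, just below the soft X-ray band.

```python
# Minimum creation energies (eV) quoted in the text
E_MIN = {"[O III]": 35.12, "He II": 54.42, "[Ne V]": 97.11}
E_MAX_NEV = 126.21  # upper bound of the [Ne V] ionizing window (eV)

# "Nearly triple" [O III] and "nearly double" He II:
ratio_to_o3 = E_MIN["[Ne V]"] / E_MIN["[O III]"]   # ~2.77
ratio_to_he2 = E_MIN["[Ne V]"] / E_MIN["He II"]    # ~1.78

# Photon wavelengths spanning the [Ne V] ionizing window: lambda = hc / E
HC_EV_NM = 1239.84                              # h*c in eV nm
lam_lo_nm = HC_EV_NM / E_MAX_NEV                # ~9.8 nm
lam_hi_nm = HC_EV_NM / E_MIN["[Ne V]"]          # ~12.8 nm

print(f"{ratio_to_o3:.2f}x [O III], {ratio_to_he2:.2f}x He II")
print(f"[Ne V] ionizing photons: {lam_lo_nm:.1f}-{lam_hi_nm:.1f} nm")
```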
The production of such a high-ionization emission line requires an extreme photoionizing source. Studies attribute [Ne V] production to photoionization from active galactic nuclei (AGN), the hard ionizing spectra of extremely hot stars (e.g., Wolf-Rayet stars), or energetic shocks from supernovae (Backhaus et al. 2022a; Gilli et al. 2010; Izotov et al. 2012; Mignoli et al. 2013).
Studies of local star-forming galaxies have attempted to explain [Ne V] production through energetic supernova shocks. Izotov et al. (2012) find five oxygen-poor blue compact dwarf (BCD) galaxies with [Ne V] emission whose [Ne V]/He II flux ratios are reproducible by radiative shock models with shock velocities in the 300-500 km s−1 range and shock ionizing contributions ∼10% that of the stellar continuum ionization. However, this modeling cannot conclusively discount this ∼10% contribution, responsible for the [Ne V] emission, from being produced by an AGN. These studies have primarily focused on low-mass galaxies (BCDs in the case of Izotov et al. 2012). However, there are other examples: Leung et al. (2021) studied extended [Ne V] emission in the local ultra-luminous infrared galaxy MRK273 and showed that the [Ne V] is consistent with production from shocks for this object. Therefore, shocked gas is also a viable mechanism for [Ne V] emission in and around galaxies.
Emission from [Ne V] has been used to study conditions in the narrow-line region (NLR) of AGN. Gilli et al. (2010) and Mignoli et al. (2013) used [Ne V] luminosities in conjunction with hard-band (2-10 keV) X-ray luminosities to probe highly obscured/Compton-thick (CT) AGN. These analyses with [Ne V] luminosities are inspired by similar analyses using X-ray and [O III] luminosities (Maiolino et al. 1998; Yan et al. 2011; Lambrides et al. 2020; Heckman et al. 2005), with the added benefit of the extreme energies required to produce [Ne V]. Gilli et al. (2010) and Mignoli et al. (2013) conclude that galaxies with very low (<15) X-ray/[Ne V] luminosity ratios are effectively all CT AGN, though the relation of the absorption column densities (N_H) from X-ray spectral fits to the X-ray/[Ne V] ratio is highly dependent on model assumptions (Li et al. 2019). Therefore, while low X-ray-to-[Ne V] ratios indicate CT AGN in Seyferts and higher-redshift QSOs, such data have not been extended to more modest galaxies (including the galaxies that dominate the cosmic star-formation rate density (Madau & Dickinson 2014) and SMBH accretion (Hickox & Alexander 2018)).
In this work, we study the properties of galaxies with [Ne V] emission at redshifts 1.39 < z < 2.30, using a suite of near-UV/optical emission lines, including Hβ, Hα, and [S II], to study the physical characteristics of highly-ionizing radiation in galaxies around the peak of cosmic star formation at z ∼ 2. Because these lines trace a range of ionization states, they offer direct tracers of multiple phases in the ISM (Berg et al. 2021). By studying multiple transitions within the same (and multiple) elements, we can more clearly understand the chemical characteristics of a galaxy.
Understanding this population of high-ionization galaxies has important implications for studies of the EoR with JWST (e.g., Trump et al. 2023; Katz et al. 2023; Rhoads et al. 2022). The Near-IR Camera (NIRCam) and Near-IR Spectrograph (NIRSpec) on JWST have the wavelength coverage to detect these extreme [Ne V]-emitting galaxies at 0.8 ≲ z ≲ 14, and the Near-IR Imager and Slitless Spectrograph (NIRISS) will give slitless spectroscopy of [Ne V] at lower redshifts (3 < z < 7). These objects are sure to be critical targets in spectroscopic surveys with JWST in the near future.
The remainder of this work is as follows: Section 2 describes our parent sample from the CLEAR survey and our selection of [Ne V] sources. Section 3 compares UV/optical emission-line ratios as diagnostics of AGN activity. Section 4 explores the X-ray/[Ne V] ratio along with the X-ray and [Ne V] luminosity functions. Section 5 discusses the implications of our results. Section 6 summarizes the results of this work and discusses future studies of high-ionization galaxies with JWST and the Nancy Grace Roman Space Telescope.
Throughout this work, we assume a flat ΛCDM cosmology with H 0 = 70 km s −1 Mpc −1 and Ω M = 0.30 (Planck Collaboration et al. 2020).
DATA
Our data come from the CANDELS Lyman-α Emission at Reionization (CLEAR) survey (https://clear.physics.tamu.edu; a Cycle 23 HST program; Simons et al. 2023), which consists of deep (12-orbit depth) HST/WFC3 G102 slitless grism spectroscopy covering 0.8-1.15 µm within 12 fields split between the GOODS-North (GN) and GOODS-South (GS) extragalactic survey fields (Estrada-Carpenter et al. 2019; Simons et al. 2021, 2023). The CLEAR pointings overlap with the larger 3D-HST survey area (Momcheva et al. 2016), which provides slitless G141 grism spectra of 2-orbit depth with spectral coverage of 1.1-1.65 µm. These data will be described fully in the survey paper accompanying the data release and have been discussed in other works using these data (e.g., Estrada-Carpenter et al. 2019; Simons et al. 2021; Jung et al. 2022; Backhaus et al. 2022a; Cleri et al. 2022; Matharu et al. 2022; Papovich et al. 2022; Backhaus et al. 2022b).
G102 and G141 Spectroscopy, Redshifts and Line Fluxes
The grizli (grism redshift and line analysis) pipeline (https://github.com/gbrammer/grizli/) serves as the primary method of data reduction for the CLEAR dataset. In contrast to traditional methods of extracting one-dimensional (1D) spectra from slit observations, grizli directly fits the two-dimensional (2D) spectra with model spectra convolved with the galaxy image for multiple position angles of the grism observations. This process yields complete and uniform characterization of the suite of spectral line features of all objects observed in each of the G102 and G141 grisms. The flux calibrations of the G102 and G141 spectra are, in general, accurate to within a few (∼3) percent (Estrada-Carpenter et al. 2019; Pirzkal et al. 2016, 2017; Lee et al. 2014). In this work, we use the CLEAR v4.1 catalogs (Simons et al. 2021). The data products of these catalogs include emission-line fluxes, spectroscopic redshifts, and other derived quantities and their respective uncertainties for 6048 objects, from grizli run on the combination of the G102 and G141 grism data and broad-band photometry using the 3D-HST+ catalogs. Of these galaxies, 4707 have coverage with both G102 and G141; these constitute the initial catalog that we used to identify galaxies for our study here. The emission-line fluxes from the grizli reduction presented in this work are not corrected for attenuation by dust in the ISM.
The uncertainties of the emission lines account for the uncertainties of the continuum model as they are fit simultaneously. We note that the low spectral resolution may bias our sample to large equivalent widths, and the uncertainties in the continuum may lead to more uncertain equivalent widths than higher-resolution samples.
Photometry and Derived Quantities
We use stellar masses for objects in our sample from the 3D-HST catalog (Skelton et al. 2014), derived from the CANDELS photometry (Grogin et al. 2011; Koekemoer et al. 2011). The stellar masses are calculated by modeling the spectral energy distribution (SED) with FAST (Kriek et al. 2009), using a Bruzual & Charlot (2003) stellar population synthesis model library, a Chabrier (2003) IMF, solar metallicity, and assuming exponentially declining star formation histories. The stellar masses of our galaxies are generally robust to these assumptions because the peak of the stellar emission is well constrained by the high-quality CANDELS near-IR imaging. Stellar masses from the 3D-HST survey have a mass limit at z ∼ 2 of log(M*/M⊙) ∼ 8.5 for H < 25 (Skelton et al. 2014).
To place these galaxies in context, we also compare their SFRs to other galaxies at similar redshifts. For this purpose, we use UV continuum SFRs from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) / Survey for High-z Absorption Red and Dead Sources (SHARDS) catalog of Barro et al. (2019), which supplements the CANDELS multiwavelength data with SHARDS photometry (Pérez-González et al. 2013) in GOODS-N and GOODS-S. Attenuation-corrected UV SFRs are calculated using the Kennicutt (1998) calibration with a dust attenuation correction, as fully described in Barro et al. (2019).
Parent Dataset and Sample Selection
Our parent sample represents all CLEAR galaxies within the redshift range for detectable [Ne V] and [O III] in the G102 and G141 spectrum (1.39 < z < 2.40). Requiring the wavelength coverage of a strong line such as [O III] eliminates many potentially spurious objects from the prospective sample and secures reliable spectroscopic redshifts for each object. The wavelength limits of this selection are set by the coverage of the blue end of the G102 and red end of the G141 grisms (8000 Å and 16500 Å, respectively). The CLEAR spectral extractions are limited to galaxies with m F105W < 25.
We select a sample of [Ne V]-emitting galaxies from the CLEAR parent catalog using the following steps:
• Require a grism spectroscopic redshift 1.39 < z < 2.30, so that both [Ne V] and [O III] fall within the combined G102 and G141 coverage.

• Require significant detections of [Ne V] λ3426 (S/N > 3) and [O III] (S/N > 3).

• Visual inspection of direct images with 1D and 2D spectra.
This last step ensures that no objects with poor continuum modeling and/or bad contamination subtraction make it into the final selection (this is a known issue with slitless spectroscopy, and visual inspection is important, especially for studies of objects with faint(er) emission lines like ours here, e.g., Zeimann et al. 2014, 2015; Estrada-Carpenter et al. 2019; Backhaus et al. 2022a,b). Each object was inspected by at least three authors. This selection rejects ∼40% of objects which pass the first two criteria. After applying all of these selection steps, we have a sample of 25 [Ne V]-emitting objects in CLEAR within the allowable redshift range of the G102 and G141 grisms (1.39 < z < 2.30). In our sample, all galaxies have [O III] S/N > 5 (significantly greater than the minimum requirement of S/N > 3). This lends greater credence to the spectroscopic redshifts and line identifications. We also note that 9/25 objects have [Ne V] S/N > 5; the implications of the veracity of the [Ne V] detections are discussed in Section 5. Our sample comprises approximately 2.6% of all objects in the CLEAR catalog in this redshift range.
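As a concrete sketch, the two automated cuts above (the redshift window and the line S/N floors) can be written as a simple filter. The row format and column names (`z_grism`, `sn_nev`, `sn_oiii`) are illustrative assumptions, not the actual CLEAR catalog schema, and the final visual-inspection step is necessarily manual:

```python
# Hypothetical illustration of the automated [Ne V] selection cuts.
# Column names below are assumptions, not the real CLEAR catalog schema.

def select_nev_candidates(rows):
    """Keep rows in the redshift window where [Ne V] and [O III] both
    fall in the G102+G141 coverage, with S/N > 3 in both lines.
    The final selection step (visual inspection of direct images and
    1D/2D spectra) cannot be automated here."""
    selected = []
    for row in rows:
        in_window = 1.39 < row["z_grism"] < 2.30
        good_lines = row["sn_nev"] > 3.0 and row["sn_oiii"] > 3.0
        if in_window and good_lines:
            selected.append(row)
    return selected

catalog = [
    {"id": 1, "z_grism": 1.61, "sn_nev": 4.2, "sn_oiii": 9.0},   # passes both cuts
    {"id": 2, "z_grism": 0.90, "sn_nev": 6.0, "sn_oiii": 12.0},  # outside redshift window
    {"id": 3, "z_grism": 2.10, "sn_nev": 1.5, "sn_oiii": 8.0},   # [Ne V] too weak
]
candidates = select_nev_candidates(catalog)
```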
Our final sample has a redshift range of 1.40 < z < 2.29, with a median grism redshift of 1.61. The redshift distribution of the 25 [Ne V]-detected galaxies in our sample is shown in Figure 1. It is fairly evenly distributed in redshift, with a possible spike at z ∼ 1.6, which corresponds to the known overdensity of galaxies at this redshift in the GOODS-S field (Estrada-Carpenter et al. 2019). The properties and derived quantities (including line fluxes) for the galaxies in our sample are given in Tables 1 and 2. Figure 2 shows rest-frame one-dimensional (1D) spectra of five [Ne V]-emitting objects in the CLEAR sample, ordered by increasing redshift. The points and error bars shown in the 1D spectra are the medians in each bin of wavelength over all exposures, separated into G102 (blue) and G141 (red). We mark the regions around several lines of interest.

Figure 3 shows the relation between star formation rate derived from attenuation-corrected UV luminosity and stellar mass, from Barro et al. (2019). Our sample is again broadly consistent with CLEAR towards lower stellar mass. All but three of our [Ne V]-emitting galaxies lie within the 95% contours of the CLEAR parent population. Eight objects lie between the 85% and 95% contours, indicating that nearly half of the galaxies in our sample have elevated SFRs compared to the rest of CLEAR. One of these objects (GS 42758) has the highest attenuation-corrected UV SFR in the sample and a V-band attenuation of 1.1 magnitudes (Skelton et al. 2014), and is also an X-ray AGN (see Section 2.4). The rest of the sample is consistent with low dust content: we see a median V-band attenuation for the sample of 0.3 magnitudes.
Comparison with X-ray AGN Catalogs
The CLEAR fields include the deepest X-ray imaging from Chandra available anywhere on the sky (7 Ms in GOODS-S and 2 Ms in GOODS-N; see Luo et al. 2017 and Xue et al. 2016, respectively). We use the X-ray luminosity of galaxies to diagnose their AGN activity. We matched our sample with the X-ray catalogs for the Chandra Deep Field-North and -South (CDF-N and CDF-S) (Xue et al. 2016; Luo et al. 2017). Classifications from these catalogs include "AGN", "galaxy", or "star", where the AGN classification must satisfy at least one of the following four criteria from Xue et al. (2011) (that is, we combine them with a logical OR):
• L_0.5−7 keV ≥ 3 × 10^42 erg s^−1 (consistent with luminous AGN)
• Effective photon index Γ ≤ 1.0 (evidence for Obscured AGN)
• X-ray-to-optical flux ratio of log(f_X/f_R) > −1, where f_X = f_0.5−7 keV, f_0.5−2 keV, or f_2−7 keV (evidence for an AGN origin of the X-ray emission)

• Excess X-ray emission over the expectation from pure star formation, L_0.5−7 keV ≥ 3 × (8.9 × 10^17 L_R)

where f_X, L_X, f_R, and L_R are the X-ray and R-band fluxes and luminosities, respectively. Objects labeled as "galaxies" in the X-ray catalogs are those which are confirmed to be galaxies (e.g., not stars or objects with a redshift of 0) but do not meet any of these criteria. This selection is subject to the caveat that luminous starbursts may produce sufficient X-ray emission from X-ray binaries (XRBs) to be classified as AGN (Lehmer et al. 2010). Matching with our [Ne V] sample, we find 8 objects with X-ray detections classified as AGN. Throughout this work, we denote these X-ray-detected AGN in figures with a solid X marker where appropriate. This selection alone suggests that [Ne V] is a useful tracer of AGN activity: 32% (8/25) of the [Ne V] sample are classified as X-ray AGN, while only 6.5% of the rest of the CLEAR galaxies in this redshift range are classified as X-ray AGN.

Figure 3. The relation between attenuation-corrected UV star formation rate and stellar mass for galaxies at redshift 1.39 < z < 2.30. Our sample of [Ne V]-emitting galaxies is shown as diamonds color-coded by grism redshift, with the rest of the CLEAR galaxies with derived UV SFRs in this redshift range shown as gray contours. We show the 1σ uncertainties in the attenuation-corrected UV SFRs for the [Ne V] detections, most of which are smaller than the marker size. The lower-mass objects in our sample are broadly consistent with the rest of the CLEAR sample in SFR-mass space; however, our sample includes several objects with higher mass/SFR than expected compared to the parent population, with 11/25 objects outside of the second-to-last (85%) contour, 3 of which are outside of the last (95%) contour.
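As a sketch, the logical-OR combination of these four criteria can be written as below. The function name and argument conventions are ours, not from the X-ray catalogs; fluxes and luminosities are assumed to be in cgs units:

```python
import math

def is_xray_agn(L_x, gamma, f_x, f_r, L_r):
    """Logical OR of the four Xue et al. (2011) criteria summarized above.
    Inputs: 0.5-7 keV luminosity L_x (erg/s) and flux f_x, effective photon
    index gamma, R-band flux f_r and luminosity L_r. The function name and
    call signature are illustrative only."""
    luminous = L_x >= 3e42                              # luminous AGN
    obscured = gamma <= 1.0                             # flat spectrum -> obscured AGN
    xray_optical = math.log10(f_x / f_r) > -1.0         # X-ray/optical flux ratio
    excess_over_sf = L_x >= 3 * (8.9e17 * L_r)          # above pure star formation
    return luminous or obscured or xray_optical or excess_over_sf
```

For instance, a source at L_X = 5 × 10^42 erg s^−1 is flagged by the luminosity criterion alone, while a quiescent galaxy with low X-ray output and a steep photon index fails all four tests.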
Comparison with IR AGN Catalogs
In addition to the X-ray matching to select potential AGN, we also select objects which are identified as AGN by mid-IR photometry. We use photometry from the Infrared Array Camera (IRAC) on Spitzer (Fazio et al. 2004;Lacy et al. 2004;Stern et al. 2005). Our IR color selection criteria are outlined in Donley et al. (2012) and Coil et al. (2015), designed to limit contamination by star-forming galaxies to z < 3 while maintaining reliability and completeness. For the following selection, our notation is such that
x = log10(f_5.8µm / f_3.6µm),   y = log10(f_8.0µm / f_4.5µm)   (1)
The selection of IR-AGN from Donley et al. (2012) requires all of the following criteria:
• Objects are detected in all four IRAC channels (peak wavelengths 3.6 µm, 4.5 µm, 5.8 µm, 8.0 µm)
• x ≥ 0.08 and y ≥ 0.15
• y ≥ 1.21x − 0.27

• y ≤ 1.21x + 0.27

• f_8.0µm > f_5.8µm > f_4.5µm > f_3.6µm
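A minimal sketch of this IR-AGN wedge follows, assuming detections in all four IRAC channels; the function name and flux-unit conventions are ours:

```python
import math

def is_ir_agn(f36, f45, f58, f80):
    """Donley et al. (2012) IRAC color wedge, as summarized above.
    Inputs are fluxes in the 3.6, 4.5, 5.8, and 8.0 micron channels
    (any consistent unit); detection in all four channels is assumed."""
    x = math.log10(f58 / f36)
    y = math.log10(f80 / f45)
    in_wedge = (x >= 0.08 and y >= 0.15
                and 1.21 * x - 0.27 <= y <= 1.21 * x + 0.27)
    rising_sed = f80 > f58 > f45 > f36   # monotonically rising (red) power law
    return in_wedge and rising_sed
```

A red power-law SED (flux rising steadily from channel 1 to channel 4) lands inside the wedge, while a flat or declining stellar SED does not.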
This selection identifies 2 objects in our [Ne V]-detected CLEAR sample as IR-AGN. Both objects identified through this photometric selection are also identified as X-ray AGN in the selection given in Section 2.4. IR-AGN selection generally samples more luminous AGN than X-ray selection (Mendez et al. 2013). Consequently, it is unsurprising to find that the two IR AGN in our sample are also X-ray detected. This selection, similarly to the X-ray AGN selection, suggests that [Ne V] emission traces AGN activity: 10.5% of the [Ne V] sample are classified as IR AGN, while only 1.4% of the rest of the CLEAR galaxies at this redshift range are classified as IR-AGN.
SPECTRAL CLASSIFICATION OF STAR FORMATION AND AGN ACTIVITY
To characterize the source of ionizing radiation for each galaxy in our [Ne V] sample, we primarily use the [O III]/Hβ ratio combined with other diagnostics. When [O III]/Hβ ratios are compared to other galactic parameters (e.g. stellar mass and other line ratios), the relation can be used to diagnose the "activity" of the galaxy: "active" galaxies hosting AGN or "inactive" galaxies dominated by star formation.
The Mass-Excitation Diagram
We start by considering the "mass-excitation" (MEx) diagram, which combines the [O III]/Hβ ratio with the stellar mass (Juneau et al. 2011, 2014). The MEx diagram is the most inclusive of these AGN diagnostics; i.e., it is suitable even for galaxies for which we have only one line ratio, [O III]/Hβ. The dependence of the MEx diagnostic on stellar mass means that it inherits the biases and assumptions of SED modeling (Barro et al. 2019). Juneau et al. (2014) derived empirical demarcations between AGN and galaxies with star formation.
First, they set a relation between [O III]/Hβ and stellar mass that identifies galaxies with ionization from only AGN, defined to have y ≡ log([O III]/Hβ) greater than the value given by the relationship
y = 0.375/(x − 10.5) + 1.14, for x ≤ 9.9
y = 410.24 − 109.333x + 9.71731x² − 0.288244x³, otherwise   (2)
where x ≡ log(M * ). Second, they use a relation between line ratio and stellar mass, where galaxies below this line ratio contain ionization from only star-formation, where y for this relation is given by
y = 352.066 − 93.8249x + 8.32651x² − 0.246416x³   (3)
in the range 9.9 < log(M*) < 11.2. To summarize, galaxies which lie above the top curve (Eqn. 2) in the MEx diagram are classified as AGN; galaxies which lie below the bottom curve (Eqn. 3) are classified as star-forming. We further define galaxies that lie between these two curves as composite sources, with contributions from both AGN and star formation. However, we need to adjust the MEx diagram to account for redshift evolution. For example, Coil et al. (2015) studied a population of AGN at z ∼ 2.3 with rest-frame optical emission-line ratios. They concluded that they needed to apply a shift of ∆log(M*/M⊙) = 0.75 dex, which more accurately separates star-forming galaxies and AGN at this redshift. This provides a more pure selection of confirmed X-ray AGN via the MEx diagram than the local Juneau et al. (2014) line.
Here, we use the MEx relation and adopt a shift intermediate between that of Juneau et al. (2014) and Coil et al. (2015) as the CLEAR [Ne V] sample lies at a median redshift z ∼ 1.61, between that of the samples of these other studies. We construct a simple empirical model to encapsulate this shift in the MEx relation from redshift evolution. This expands on the 0.75 dex shift from Juneau et al. (2014) to Coil et al. (2015), which becomes:
x = log(M*) + 0.2(1 + z)   (4)
where x is defined above, and 0.2(1 + z) represents the shift in the x-axis from the Juneau et al. (2014) relations (Equations 2 and 3). We arrive at this shift of 0.2(1 + z) because it classifies all X-ray-confirmed AGN with stellar mass log(M*/M⊙) > 9 as AGN or as 1σ consistent with the AGN/SF dividing line. This shift keeps the same purity as the Coil et al. (2015) AGN selection, but (as we discuss below) is more consistent with the X-ray-detected [Ne V] sources in our sample, which would otherwise be labeled star-forming by the Coil et al. (2015) line. Figure 4 (left panel) shows the MEx diagnostic for the galaxies in our CLEAR [Ne V] sample (where we include all of our galaxies that have S/N > 1 in Hβ and [O III]). The diamonds show the 18 sources in our [Ne V] sample that satisfy this requirement. We denote galaxies in our CLEAR [Ne V] sample detected in X-rays with a "thin X" marker, and we denote galaxies that satisfy the IR-AGN definition with a "hollow X" marker. We also show on the MEx diagram those galaxies detected with S/N > 1 in CLEAR without [Ne V] detections, in the same redshift range as the [Ne V] sample, as small gray points.
Roughly two-thirds (12/18) of the [Ne V]-detected galaxies on the MEx diagram are classified as AGN. Given the relatively small sample in this work, as well as the biases of the MEx diagnostic and stellar mass derivations, we caution against relying on this diagnostic when others are available (see Section 5 for more discussion).
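The MEx classification of Equations (2)-(4) can be sketched as follows. This is a simplified implementation: below x = 9.9, where the lower curve (Eqn. 3) is undefined, we treat everything under the AGN curve as star-forming, which is an assumption on our part:

```python
def mex_class(logmass, log_o3hb, z):
    """Classify a galaxy on the redshift-shifted MEx diagram.
    Implements Eqns. (2)-(4): the Juneau et al. (2014) demarcations
    evaluated at x = log(M*) + 0.2(1 + z). `log_o3hb` is
    log10([O III]/Hbeta)."""
    x = logmass + 0.2 * (1.0 + z)                       # Eqn. (4)
    if x <= 9.9:
        # Low-mass branch of Eqn. (2); only one dividing curve here.
        y_agn = 0.375 / (x - 10.5) + 1.14
        return "AGN" if log_o3hb > y_agn else "star-forming"
    # High-mass branch: upper (AGN) and lower (SF) curves, Eqns. (2)-(3).
    y_agn = 410.24 - 109.333 * x + 9.71731 * x**2 - 0.288244 * x**3
    y_sf = 352.066 - 93.8249 * x + 8.32651 * x**2 - 0.246416 * x**3
    if log_o3hb > y_agn:
        return "AGN"
    if log_o3hb < y_sf:
        return "star-forming"
    return "composite"
```

As a consistency check, the two branches of Eqn. (2) meet continuously at x = 9.9 (both evaluate to y ≈ 0.52).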
The "OHNO" Diagram
We next explore other emission-line diagnostics designed to separate galaxies with ionization from AGN and star formation. Backhaus et al. (2022a) showed that a division in the OHNO line ratios ([O III]/Hβ versus [Ne III]/[O II]) separates X-ray-selected AGN from non-AGN (based on classifications from the deep X-ray data in the CDF-N and CDF-S fields).
The center panel of Figure 4 shows the OHNO diagram for the galaxies in our CLEAR sample. The redshift range which allows for all five of these lines in the HST G102 and G141 grism coverage is 1.39 < z < 2.30. Nine of our [Ne V]-detected objects are well detected in the four OHNO lines (using the OHNO AGN/star-formation separation from Backhaus et al. 2022a). Of these, 8 (out of 9 = 89%) of the galaxies have line ratios consistent with ionization by AGN. This includes all five (100%) of the X-ray-detected [Ne V] sources in our sample. There is one galaxy in our [Ne V]-emitter sample that falls below the AGN line in the OHNO diagram, but it is consistent with being an AGN within its 1σ uncertainties based on the classification line defined by Backhaus et al. (2022a). Therefore, for the [Ne V]-emitting galaxies in our sample that we can place on the OHNO diagram, all but 1 galaxy are 1σ consistent with ionization from AGN. The single object which lies greater than 1σ outside of the AGN region of the OHNO diagram has large horizontal error bars due to a low-S/N [O II] detection (we therefore cannot rule out ionization from an AGN in this object).
In the right panel of Figure 4, we consider the [S II]/Hα versus [O III]/Hβ diagram of Veilleux & Osterbrock (1987, VO87 hereafter). The VO87 diagram has been applied to many studies of galaxies (including AGN and star formation) at z ∼ 0 (Veilleux & Osterbrock 1987; Kauffmann et al. 2003; Kewley et al. 2001, 2019a; Trump et al. 2015). All but one of the [Ne V]-emitting galaxies that can be placed in this panel are categorized as X-ray AGN. The X-ray-undetected [Ne V]-emitter is consistent with the unVO87 division within its uncertainties. There is also one object in the CLEAR sample that is undetected in [Ne V] in this redshift range which is identified as an X-ray AGN, and it is classified as such by both the VO87 and unVO87 divisions.
As these emission-line ratio diagnostics suggest, the [Ne V] sources in CLEAR appear primarily consistent with ionization from AGN. This is clear (pun intended) from the MEx diagram and the OHNO and VO87 relations in Figure 4, where the majority of the [Ne V] sources fall in regions consistent with ionization from AGN. There are three objects which have contradictory classifications between the MEx and OHNO/VO87 diagnostics, but we favor the classifications from the emission-line diagnostics (OHNO and VO87) as they are not subject to the biases and uncertainties involved in the estimation of stellar masses from SED fitting. In total, all but 4 (21/25 = 84%) of the [Ne V]-detected objects in our sample are consistent with AGN classification from these three diagnostics, either by their detected line ratios or limiting behaviors. We discuss the implications of these analyses in Section 5.
USING X-RAY EMISSION TO CHARACTERIZE [Ne V]-EMITTING AGN
In this section, we investigate the properties of [Ne V]emitting galaxies in relation to their X-ray emission. We explore the use of observed [Ne V]λ3426 Å luminosities to probe AGN activity missed by other AGN selection methods, such as X-ray, IR, and emission-line diagnostics.
For the following analysis, we use luminosities of both X-ray detections and upper limits in X-rays for galaxies that are undetected ("X-ray nondetections"). We take X-ray fluxes from the Chandra Deep Field-North (CDF-N) and -South (CDF-S) catalogs (Xue et al. 2016 and Luo et al. 2017, respectively). We calculate the upper limits on the luminosity of the X-ray nondetections using the hard-band flux detection limits from Xue et al. (2016) and Luo et al. (2017) of 5.9 × 10^−17 erg s^−1 cm^−2 and 2.7 × 10^−17 erg s^−1 cm^−2, respectively, and the grism redshifts from the CLEAR catalog. We perform a K-correction assuming a typical effective photon index of 1.8 (Liu et al. 2017; Yang et al. 2016). We transform our 0.5-7 keV luminosities to 2-10 keV luminosities for comparison with other samples following Yang et al. (2016), where L_2−10 keV = 0.721 L_0.5−7 keV.
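A sketch of this computation follows, using the adopted flat ΛCDM cosmology (H0 = 70 km s−1 Mpc−1, ΩM = 0.30) with a simple numerical integral for the luminosity distance instead of a cosmology library. The example evaluates an upper limit at the CDF-S hard-band flux limit quoted above, for a source at the sample's median redshift; the K-correction uses the standard power-law form (1 + z)^(Γ−2):

```python
import math

C_KM_S = 2.99792458e5   # speed of light, km/s
MPC_CM = 3.0857e24      # cm per Mpc

def luminosity_distance_cm(z, h0=70.0, om=0.30, n=10000):
    """Flat LambdaCDM luminosity distance: (1+z) times the comoving
    distance, with the comoving integral done by the midpoint rule."""
    dz = z / n
    integral = sum(dz / math.sqrt(om * (1 + (i + 0.5) * dz)**3 + (1 - om))
                   for i in range(n))
    return (1 + z) * (C_KM_S / h0) * integral * MPC_CM

def xray_luminosity(flux_cgs, z, gamma=1.8):
    """Rest-frame luminosity from an observed 0.5-7 keV flux, with a
    power-law K-correction (1+z)^(gamma-2) for photon index gamma."""
    d_l = luminosity_distance_cm(z)
    return 4 * math.pi * d_l**2 * flux_cgs * (1 + z)**(gamma - 2)

# Example: upper limit at the CDF-S flux limit for a z = 1.61 source,
# then converted from 0.5-7 keV to 2-10 keV following Yang et al. (2016).
L_lim = xray_luminosity(2.7e-17, 1.61)
L_2_10 = 0.721 * L_lim
```

The resulting limit (a few × 10^41 erg s^−1) sits well below the 3 × 10^42 erg s^−1 luminous-AGN threshold, consistent with these nondetections being unclassifiable from X-rays alone.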
In Figure 5 we present the intrinsic X-ray luminosities for the 8 [Ne V]-emitting galaxies classified as X-ray AGN in the Xue et al. (2016) and Luo et al. (2017) catalogs, compared with the local [Ne V]-X-ray and [O III]-X-ray relations of Gilli et al. (2010) and Maiolino et al. (1998), respectively.
Our first result is that the majority of the CLEAR [Ne V]emitter galaxies have [Ne V] luminosities that exceed the local scaling relation. Figure 5 shows that only two of the eight [Ne V]-detected X-ray AGN are consistent (within 1σ) with the [Ne V]-X-ray relation of Gilli et al. (2010) (which was derived from the observed 2-10 keV luminosities from local Seyferts). However, five of the eight are consistent with local [O III]-X-ray relation (derived from local Seyferts, Maiolino et al. 1998). We do not find anything that differentiates the three galaxies that are outliers on both the [Ne V]-/X-ray and [O III]-/X-ray relations from the rest of the sample: these three galaxies show no special features in their properties nor spectra. We therefore conclude that AGN span a larger variation in [Ne V] and X-ray emission than suggested from local Seyfert samples.
Our sample is biased to high [Ne V] luminosities by selection, which may be in part responsible for this result. However, we note that there are several objects which are consistent with the local X-ray-to-[Ne V] and X-ray-to-[O III] relations (Gilli et al. 2010; Maiolino et al. 1998). Nonetheless, our emission-line-selected sample has preferentially higher [Ne V] luminosities, relative to its X-ray luminosities, than the local Seyfert relations suggest.
If this result was due to an insufficient absorption correction to the observed X-ray luminosities, we would expect all of the objects from our sample to lie below the local relations. Instead, the large scatter in the X-ray luminosities of objects in our sample indicates that this is not the case: an additional flat correction needed to bring the low X-ray luminosity objects to the local relations would skew the objects with higher X-ray luminosities above the local relations. While a flat correction to all luminosities is likely not a perfect prescription, the objects which are discrepant from the local relations are uniformly distributed across all luminosities.
Previous studies have used the X-ray/[Ne V] luminosity ratio to study AGN. Gilli et al. (2010) argued that the Xray/[Ne V] luminosity ratio (L X /L [Ne V] ) could be a useful indicator of CT AGN. They observed that all Seyferts in their sample with L X /L [Ne V] < 15 showed evidence of CT AGN. However, Gilli et al. (2010) assumed that AGN have a near constant intrinsic L X /L [Ne V] ratio, in which case the lower observed L X /L [Ne V] ratios imply obscuration of the X-ray emission. Figure 6 shows the distribution of the X-ray/[Ne V] luminosity ratio for our sample, with the X-ray detections in blue and upper limits for the non-detections in pink. We also show the median ratio for the local (z < 0.1) Seyferts from Gilli et al. (2010). The gray-shaded regions show the inter-68 and inter-90 percentiles for the Gilli et al. (2010) sample.
The X-ray/[Ne V] distribution for the CLEAR [Ne V]-emitter sample is systematically lower than that of the low-redshift Seyferts from Gilli et al. (2010). Only two of the objects in our CLEAR [Ne V]-emitter sample have L_X/L_[Ne V] ratios consistent (within the 90th percentile) with those of Gilli et al. (2010). The majority of the galaxies in our CLEAR [Ne V]-emitter sample (including four of the X-ray-detected galaxies and all 21 of the X-ray-nondetected galaxies) have X-ray/[Ne V] ratios below the canonical value of L_X/L_[Ne V] < 15 used to identify CT AGN (Gilli et al. 2010). However, none of the [Ne V]-emitter galaxies in our sample are consistent with being CT AGN given their absorption column densities (i.e., all have log N_H < 24; Li et al. 2019); our sample has a range of column densities 21.45 < log N_H < 23.95. We therefore conclude that the X-ray/[Ne V] ratio does not uniquely identify CT AGN.
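The ratio diagnostic discussed above reduces to a one-line check; the threshold of 15 follows Gilli et al. (2010), and the function name and return convention are ours:

```python
def lx_nev_ratio_flag(L_x, L_nev, threshold=15.0):
    """Return the X-ray/[Ne V] luminosity ratio and whether it falls
    below the Gilli et al. (2010) candidate Compton-thick threshold.
    Both luminosities must be in the same units (erg/s)."""
    ratio = L_x / L_nev
    return ratio, ratio < threshold
```

As the text notes, a low ratio flags candidate CT AGN among local Seyferts, but it does not uniquely identify CT AGN in our sample, where the measured column densities all fall below log N_H = 24.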
One reason the X-ray/[Ne V] ratio is unable to identify CT AGN may be that there are systematic differences between our CLEAR [Ne V]-emitter sample and other X-ray-selected samples of AGN. In particular, our sample of high-redshift galaxies spans different luminosities and redshifts. Figure 7 shows the distribution of [Ne V] luminosities and intrinsic X-ray luminosities as a function of redshift for the eight galaxies in our [Ne V]-emitter sample detected in X-rays. The figure compares these to samples of lower-redshift (z < 1.5) luminous QSOs and very low-redshift (z < 0.1) Seyferts (Gilli et al. 2010). For completeness, we include in the figure those galaxies in our CLEAR [Ne V] sample that are undetected in X-rays, using the same prescription as in Figure 6. Figure 7 shows that the CLEAR [Ne V]-emitter sample has [Ne V] luminosities consistent with QSOs at 0 ≲ z ≲ 1, but X-ray luminosities that are much lower and more consistent with the range of X-ray luminosities seen in local Seyferts. This is evidence that there exists greater variation between the X-ray engine in AGN (presumably the accretion disk) and the narrow-line region (which is responsible for the [Ne V] emission). We discuss this variation and the implications of these analyses in Section 5.
DISCUSSION
The Nature of [Ne V] Galaxies through Emission-Line Ratio Diagnostics
The emission-line ratio diagnostics in Figure 4 have several implications for the nature of [Ne V]-emitting galaxies. In each of the three diagnostics (MEx, OHNO, and VO87), the [Ne V]-detected objects are consistent with ionization from AGN: the MEx diagnostic classifies 12/18 (67%) of the [Ne V] detections as AGN, while the more reliable, yet less inclusive, OHNO diagram classifies 8/9 (89%) of the [Ne V] detections as AGN.
The presence of an AGN is expected given that the energy needed to produce [Ne V] is 97.11 eV; the ionizing spectra of stellar populations are not expected to produce copious radiation at this energy (Olivier et al. 2022). The presence of an AGN is supported by the fact that a large fraction (∼32%) of the [Ne V] galaxies are detected in X-rays and/or selected as AGN from their IR colors. This is especially true for the [Ne V] sources in the high-excitation (AGN) regions of the plots in Figure 4. Therefore, given the coincidence of AGN among the [Ne V]-selected galaxies, AGN appear to explain the origin of the [Ne V] emission in most of our sample here.

Figure 7. [Ne V] (top) and intrinsic X-ray (bottom) luminosities as a function of redshift for the 8 X-ray AGN in our CLEAR sample (purple diamonds) and 112 lower-redshift (z ≲ 1.5) QSOs from Gilli et al. (2010) (gray circles). We also show the median of the X-ray and [Ne V] luminosities of the Gilli et al. (2010) local (z < 0.1) Seyfert sample as a black line, with the gray shaded region showing the 1σ scatter. We show 1σ uncertainties, which may be smaller than the size of the markers. Our higher-redshift [Ne V] selection probes a parameter space not observed in the X-ray-selected samples, with X-ray luminosities comparable to local Seyferts and higher [Ne V] luminosities more typical of z ∼ 1 QSOs.
It is noteworthy, however, that there appears to be a population of [Ne V]-detected galaxies in CLEAR that have [O III]/Hβ ratios below the threshold of traditional AGN selection. This includes six sources in the MEx diagram of Figure 4 that have lower [O III]/Hβ (≲ 1) and lower stellar masses (log M∗/M⊙ < 10), and are undetected in the X-rays. While these objects have formally detected [Ne V] (>3σ), it is important to note that these six objects are among the weaker [Ne V] detections in our sample (3 < S/N < 5). Accounting for the [Ne V] emission in these galaxies requires some mechanism other than a bright AGN. This could include one or more of the following:
1. Heavily obscured AGN: It is possible these [Ne V] emitters contain deeply obscured AGN such that the X-ray emission is undetectable (even in the 2 or 7 Ms-depth data available for the CDF-N and -S fields). Furthermore, obscured AGN should be detected as IR AGN (Donley et al. 2012), where again, the CDF-N and -S fields have some of the deepest far-IR imaging on the sky (Barro et al. 2019; Guo et al. 2013, and references therein). The lack of any indication of AGN activity in either the X-rays or the IR emission of these galaxies disfavors this interpretation.
2. Weak AGN: The [Ne V] emission could stem from weak AGN, where again such objects would have X-ray emission below the detection limit for our sample. This is an intriguing possibility, especially given the lower stellar masses of these galaxies. It is possible they host (lower-mass) intermediate-mass black holes (IMBHs) with lower accretion rates that are still able to produce a strong narrow-line region (Greene et al. 2020). Additional study of these galaxies for other high-ionization emission lines will be able to confirm this possibility (e.g., with rest-frame optical/near-IR spectroscopy from JWST). A related potential explanation for the low observed [O III]/Hβ ratios in these objects is star formation coincident with the AGN phase boosting the Hβ fluxes, thus hiding the AGN activity in these typical diagnostics.
3. Shocks or other extreme mechanisms: [Ne V] emission has been detected in several low-mass, nearby galaxies, where those studies argue the emission is produced by energetic shocks from supernovae or extreme stellar populations in a lower-metallicity, high-density ISM (Thuan & Izotov 2005; Izotov et al. 2012, 2021; Olivier et al. 2022). Leung et al. (2021) also find that [Ne V] may be produced in higher-metallicity objects that have (in at least one case) indications of shocked gas from AGN-driven winds. It is plausible that some of the [Ne V] emission in the lower-mass galaxies in our sample stems from similarly produced shocks. For this to be the case, we would expect to see indications of high density, which could be traced by resolved [S II] or other density-sensitive lines (see Appendix A). Currently the HST/WFC3 grism data have insufficient resolution to study these lines at these redshifts, but this would be possible with future JWST spectroscopy at higher spectral resolution.
The Nature of [Ne V] Galaxies via their X-ray Emission
The study of [Ne V] in conjunction with observed-frame 2-10 keV luminosities offers insight into the relative amount of emission in the "hard UV"/"soft X-ray" regime (energies around ∼100 eV) in the spectra of these galaxies. The [Ne V] emission therefore provides information unavailable from studies of AGN using only X-rays or IR. As noted above (in Figure 7), our sample of [Ne V]-emitter galaxies in CLEAR has [Ne V] emission similar to z ∼ 1 QSOs but X-ray luminosities similar to local Seyferts. This means that the galaxies in our [Ne V]-emitter sample have a lower X-ray/[Ne V] luminosity ratio than seen in other samples. We discuss here our interpretation of the conditions for the lower X-ray/[Ne V] ratios. Specifically, this must be a result of either (1) reduced X-ray emission and/or (2) enhanced [Ne V] emission in these higher redshift objects.
The preferentially low X-ray/[Ne V] ratios of our sample suggest an excess of ∼0.1 keV photons compared to the emission at > 1 keV. Enhanced [Ne V] emission could be caused by several effects (or a combination of effects). Strictly speaking, it requires a higher density of ∼100 eV photons. This could result from a different geometry of the NLR and accretion disk (Trump et al. 2011), or conditions that conspire to enhance the emission of these "soft X-ray" photons compared to the ionizing spectrum of local objects. This could also be a result of excess [Ne V] from gas shocked by AGN winds (Leung et al. 2021).
If geometry is the culprit of the enhanced [Ne V] emission, then specific conditions seem to be required. For example, the enhanced [Ne V] could be explained by anisotropy in the X-ray emission (Yang et al. 2020) such that the NLR is illuminated by the accretion disk, but the sightline to the central engine is obscured. However, this obscuration would likely be seen in the X-ray absorbing column, where the absorption column densities, N_H, of our sample suggest that our objects are not CT (i.e., log N_H < 24) (Li et al. 2019). As such, we conclude that our [Ne V]-selected sample are not CT, in spite of their very low X-ray/[Ne V] ratios. This contrasts with findings presented in Gilli et al. (2010) and Mignoli et al. (2013), who argued that low X-ray/[Ne V] emission in type 1 Seyferts and QSOs should be indicative of CT AGN.⁴ It therefore seems unlikely that viewing angle combined with anisotropic emission can by itself explain the enhanced [Ne V] in our sample. However, we note that there is an important difference in the selection methods of our sample and the Gilli et al. (2010) and Mignoli et al. (2013) samples: these works are at lower redshift than this work and have shallower X-ray data (100-200 ks) than the Chandra Deep Fields (2-7 Ms). These selection effects will lead to higher X-ray/[Ne V] ratios in Gilli et al. (2010) and Mignoli et al. (2013), as these samples will be insensitive to lower X-ray luminosities; e.g., the sample in Mignoli et al. (2013) reaches a flux limit of 7.3 × 10⁻¹⁶ erg s⁻¹ cm⁻², where the CDF catalogs reach an order of magnitude fainter (2.7 × 10⁻¹⁷ erg s⁻¹ cm⁻² in CDF-S). These differences in sample selection may account for some of the X-ray to [Ne V] ratio discrepancies found in this work. It will be informative to study these potential selection biases in both samples with future studies of larger samples of [Ne V]-detected objects in deep X-ray surveys.
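The depth difference between the two selections can be made explicit with plain arithmetic on the flux limits quoted above:

```python
import math

# Flux limits quoted in the text (erg s^-1 cm^-2).
f_lim_mignoli = 7.3e-16  # Mignoli et al. (2013) X-ray selection
f_lim_cdfs = 2.7e-17     # Chandra Deep Field-South catalog limit

depth_ratio = f_lim_mignoli / f_lim_cdfs  # how much deeper the CDF-S data go
depth_dex = math.log10(depth_ratio)       # the same factor in dex
```

The CDF-S catalog reaches ∼27× (about 1.4 dex) fainter fluxes, which is why an X-ray/[Ne V] comparison across the two samples inherits a strong selection effect against low X-ray luminosities in the shallower surveys.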
It is still possible that the geometry of the accretion disk itself is able to produce the conditions for enhanced [Ne V] emission. This could result from an AGN with an excess of emission from the inner disk (to produce the soft X-ray photons) but less coronal emission. If this latter case applies to the galaxies in our sample of [Ne V] galaxies, then it predicts we should observe SEDs with exceptionally bright UV and soft X-rays (i.e., a prominent soft X-ray excess) (see, e.g., Done et al. 2012). We may test this in future work with rest-frame far UV spectroscopy of these galaxies to more precisely constrain their hard UV/soft X-ray spectra.
The geometry may also manifest itself in the form of absorption from a warm wind. AGN with a "soft excess" (of X-ray photons around 0.1 keV) have been observed in samples of Seyferts and QSOs (Walter & Fink 1993). The origin of this emission is unclear, as the shape of the spectral energy distribution is not consistent with models of purely optically thin nor thick accretion disks (Walter & Fink 1993; Gierliński & Done 2004). One explanation for the soft excess that is tied to the geometry is that the excess is an artifact of absorption by highly ionized atoms (e.g., O VI, O VII, and iron) in a warm, relativistic wind ejected from the accretion disk, which preferentially absorbs ∼1 keV photons (Gierliński & Done 2004). The soft excess could provide the number density of ∼0.1 keV photons to power the [Ne V] emission in our sample. If this is the case, we would expect to see a correlation between [Ne V] emission lines and broad absorption features in X-ray spectra, or with the spectral shape of the X-ray emission from ∼0.1-50 keV. Currently these observations are beyond the sensitivity of X-ray telescopes.
Regardless, our results add evidence that there is greater diversity and variation in the intrinsic X-ray/[Ne V] ratio, given the complexities of the relationship between the NLR and X-ray emission. We will be able to explore these models more deeply with larger samples from the Nancy Grace Roman Space Telescope and greater wavelength coverage from JWST. A full suite of UV/optical emission features in conjunction with mid-IR photometry will give a more complete picture of the physical mechanisms of these extreme high-ionization systems, and with coverage to much higher redshifts (6 < z < 11 with JWST/NIRCam and NIRSpec).
SUMMARY AND CONCLUSIONS
In this work, we used HST G102 and G141 grism observations to study a sample of 25 galaxies in the CLEAR survey displaying significant [Ne V] λ3426 Å emission at redshifts 1.40 < z < 2.29. We consider these objects of interest due to the extremely high energy (97.11 eV) required to create [Ne V] compared to other strong UV/optical emission lines. Our sample selection required S/N > 3 for the stronger [Ne V] line (3426 Å) and for [O III], and minimal contamination in the 1D and 2D spectra by visual inspection.
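The catalog-level part of this selection can be sketched as follows. The column names here are hypothetical placeholders, not the actual CLEAR catalog keys, and the manual visual-inspection step for contamination is not modeled:

```python
# Hypothetical catalog rows: the column names below are illustrative
# placeholders, NOT the actual CLEAR catalog keys, and the manual
# visual-inspection step for contamination is not modeled here.
def snr(flux, err):
    """Simple signal-to-noise ratio; returns 0 for non-positive errors."""
    return flux / err if err > 0 else 0.0

def passes_selection(row, threshold=3.0):
    """S/N > 3 in both [Ne V] 3426 and [O III], within grism redshift coverage."""
    return (snr(row["flux_nev_3426"], row["err_nev_3426"]) > threshold
            and snr(row["flux_oiii"], row["err_oiii"]) > threshold
            and 1.39 < row["z_grism"] < 2.30)

candidate = {"flux_nev_3426": 4.2e-17, "err_nev_3426": 1.1e-17,
             "flux_oiii": 9.0e-17, "err_oiii": 1.0e-17, "z_grism": 1.61}
```

A row like `candidate` (S/N ≈ 3.8 in [Ne V] 3426 and 9 in [O III] at z = 1.61) would pass; dropping the [Ne V] flux below 3σ would reject it.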
The primary findings of this work are as follows:
• Galaxies with [Ne V] detections are much more likely to be X-ray AGN than the general population of galaxies in the CLEAR survey. We cross-matched our sample of [Ne V]-emitting galaxies in CLEAR with the deep (2 and 7 Ms) catalogs from the Chandra X-ray Observatory from Luo et al. (2017) and Xue et al. (2016). We find that about one third (32%, 8/25) of the [Ne V]-detected objects in our sample are X-ray-detected AGN, compared to 6.5% of the galaxies in the mass- and redshift-matched CLEAR parent sample that are undetected in [Ne V].
• We use optical emission-line ratios (based primarily on [O III]/Hβ) to study the ionization of the [Ne V]-emitting galaxies. The three spectral classifications include the mass-excitation (MEx), "OHNO", and "unVO87" diagrams, which are shown in Figure 4.

• There are several [Ne V]-emitting galaxies which are not classified as AGN by X-ray or IR emission or by emission-line ratio diagnostics in Figure 4. These are mostly at lower stellar masses (log M∗/M⊙ < 10), suggesting that [Ne V] selection probes AGN at intermediate mass scales or that other highly energetic photoionization mechanisms or shocks are driving the line emission.
• We explore (and reject) the possibility that the [Ne V] emitters in our sample are produced by heavily obscured AGN by studying the X-ray/[Ne V] luminosity ratio. We find that the X-ray/[Ne V] emission for our X-ray detected [Ne V] emitters (and upper limits for galaxies undetected in X-rays) cannot be used to diagnose Compton-thick (CT) AGN for our objects. The hydrogen absorption column densities for our objects from Li et al. (2019) support that objects in our sample are not in the CT regime (log N_H > 24). The use of the X-ray/[Ne V] ratio to select CT AGN seems restricted to more luminous objects, such as X-ray selected QSOs and unobscured (type 1) Seyferts, which have much higher intrinsic X-ray luminosities than our [Ne V]-selected sample.
• We argue that the [Ne V] emission in our sample provides evidence for increased variation and diversity in the nature of the accretion disk and NLR of AGN at z > 1. Accounting for the enhanced [Ne V] requires an excess of "soft X-ray"/"hard UV" photons (at energies around ∼0.1 keV, the energy required to produce [Ne V]). This could be related to the "soft excess" seen in the spectra of other QSOs and AGN. It could also be related to changes in the geometry, or possibly to absorption by moderately ionized gas in a relativistic wind blown off from the accretion disk. These models can be tested by studying the spectral energy distribution of the X-ray emission, and/or by studying additional line ratios to better trace the ionizing spectrum, which should be possible with studies from, e.g., JWST.
Our results show that [Ne V] emission probes highly energetic photoionization (∼100 eV). We attribute [Ne V] production predominantly to AGN activity, and we use [Ne V] to probe AGN missed by other methods (X-ray and IR). Other potential creation mechanisms not explored in this work, which will be explored in future studies, include energetic shocks from supernovae and extreme ionizing stellar populations.
Our results motivate future observations of [Ne V] emission to measure the excitation of galaxies within a much larger redshift range, including the epoch of reionization (z ≳ 6). The James Webb Space Telescope (JWST) will reach a flux limit that is an order of magnitude fainter than our CLEAR data for similar exposure times, enabling detection of fainter [Ne V] line emission. JWST/MIRI will be particularly beneficial in the detection of 15-30 µm emission from the AGN in our sample to resolve the geometry of the X-ray anisotropy. JWST is outfitted with NIRSpec and NIRCam, which will give both slit and slitless spectroscopy covering strong UV high-ionization emission lines, like the [Ne V] doublet, at 0.8 < z < 14.4. JWST/NIRISS will also give slitless coverage of [Ne V] at slightly lower redshift ranges (3 < z < 7).
The Nancy Grace Roman Space Telescope will also be able to study the spectra of high-ionization systems in samples orders of magnitude larger than any previous work. With the Wide Field Instrument (WFI), Roman will give low-resolution (R ∼ 600) multi-object slitless grism spectroscopy with wavelength coverage 1-1.93 µm, similar to that of HST/WFC3 G102+G141 but with a field of view two orders of magnitude larger in area.
Lastly, we note that first-look JWST/NIRSpec spectra have already shown strong detections of UV and optical spectral features in this redshift range (Trump et al. 2023; Katz et al. 2023; Brinchmann 2022; Cleri et al. 2023). Early results show great promise that this new generation of spectroscopic data will give critical insight into the nature of galaxies in the early Universe, and may decisively answer questions about the key contributors to the epoch of reionization.
Software: grizli (Brammer et al. 2008), FAST (Kriek et al. 2009), EAZY (Brammer et al. 2008; Wuyts et al. 2011), Astropy (Astropy Collaboration et al. 2013), NumPy (Harris et al. 2020), Matplotlib (Hunter 2007), PyNeb (Luridiana et al. 2015), seaborn (Waskom 2021), pandas (Reback et al. 2022)
APPENDIX A. COMPARISONS WITH PHOTOIONIZATION MODELS
In this Appendix, we present photoionization models of the emissivities of several of the spectral features of importance in this work. We employ the PyNeb photoionization modeling code (Luridiana et al. 2015). PyNeb does not invoke a particular ionizing spectrum, instead modeling the emissivity of each species from an ionized gas regardless of the initial conditions of said gas.
Given the relatively low spectral resolution of the G102 and G141 grisms, several of the emission lines are unresolved. We therefore define the [Ne III]/[O II] (Ne3O2) and [O III]/[O II] (O32) ratios in terms of the coadded doublets (Equations A1 and A2). We note that the neon line emissivities do not significantly evolve with density but do evolve with temperature. The oxygen lines evolve with both temperature and density.
In Figure 9 we show ratios of emissivities of several of the emission lines from Figure 8. The [Ne III]/[O II] and O32 ratios, as defined by Equations A1 and A2, respectively, vary with both temperature and density.
B. SAMPLE CHARACTERISTICS, DERIVED QUANTITIES, AND EMISSION-LINE FLUXES
In this Appendix, we present the sample characteristics and derived quantities (Table 1) and the emission-line fluxes for relevant lines used in this work (Table 2).
Figure 1. Grism redshift distribution of our sample of 25 [Ne V] galaxies. The G141 grism wavelength range limits the detection of [Ne V] and [O III] to 1.39 < z < 2.30. Our sample has a redshift range of 1.40 < z < 2.29, with a median grism redshift of 1.61. The spike in sources at z ∼ 1.6 is consistent with an overdensity of sources in GOODS-S at this redshift.
Figure 2. Rest-frame one-dimensional spectra for five [Ne V]-emitting objects in the CLEAR sample, ordered by increasing redshift. The G102 (blue) and G141 (red) spectra show the median points with 1σ uncertainties from all exposures for each object. The dotted lines indicate emission features of interest: [Ne V] λλ3346,3426, [O II] λλ3726,3729, [Ne III] λ3869, Hγ λ4340, Hβ λ4861, and [O III] λλ4959,5007.
half of the sources in our [Ne V] sample lie in the AGN region of the MEx plot. If we compare to the MEx definition of Juneau et al. (2014) (J14), then 12/18 (66%) of the CLEAR [Ne V] sample show evidence of AGN ionization. This includes all of the sources detected in X-rays or identified as IR-AGN. Using the MEx definition of Coil et al. (2015) (C15), this fraction drops to 8/18 (44%) and misses two of the X-ray sources, but includes all the IR-AGN. When we apply the MEx selection using the redshift offset for our [Ne V] sample (assuming the median redshift of the sample, z = 1.61, with Eqn. 4, as indicated by the solid lines in the MEx panel of Figure 4), we would select 11/18 (61%) of the [Ne V] sources, including all the X-ray sources and IR-AGN. The results of this MEx analysis are broadly consistent with previous work on [Ne V] galaxies. Mignoli et al. (2013) find an AGN fraction of ∼80% for [Ne V]-emitting galaxies at z ∼ 0.8 consistent with AGN classification via MEx, although the analyses are not directly comparable, as Mignoli et al. (2013) use the Juneau et al. (2011) z ∼ 0.1 MEx division and do not perform the same redshift evolution of the MEx line as in this work.
Figure 4. Emission-line diagnostics of star-formation/AGN activity. Each panel requires S/N > 1 for all represented emission lines. Galaxies which lie above the respective dividing line are classified as AGN, and galaxies which lie below the line are classified as star-forming. X-ray AGN are shown with a solid X, and IR-AGN are shown with a hollow X. [Ne V]-detected objects are colored by their detected emission-line pairs: gray diamonds have detections in [O III]/Hβ only, dark purple diamonds have [O III]/Hβ and [Ne III]/[O II] (OHNO), and red diamonds have [O III]/Hβ and [S II]/[Hα + [N II]] (VO87). Redshift ranges for the coverage of all respective lines are shown in the top left of each panel. Left: mass-excitation (MEx) diagram: log([O III]/Hβ) vs. stellar mass. Galaxies in CLEAR that are undetected in [Ne V] in this redshift range ([Ne V] nondetections) are shown as small grey points. The blue and pink shaded regions show the local Juneau et al. (2014) star-forming and z ∼ 2.3 Coil et al. (2015) AGN regions, respectively. The z ∼ 1.6 redshift-evolved MEx line from Equation 4 is shown in black. Center: the OHNO diagram, log([O III]/Hβ) versus log([Ne III]/[O II]). We also show the limits of line ratios for objects undetected in various permutations of Hβ, [Ne III], or [O II]. Right: the VO87 diagram, log([O III]/Hβ) versus log([S II]/[Hα + [N II]]). The dashed line shows the Veilleux & Osterbrock (1987) line for z ∼ 0, and the dotted line shows the Backhaus et al. (2022a) "unVO87" dividing line for galaxies at z ∼ 1. The limited redshift range for the detection of all five of these lines leaves much smaller samples than the other diagnostics. Both of the galaxies with VO87 line ratios also have detected [Ne III]/[O II]. Both the points with well-detected line ratios and the limit behaviors of other [Ne V] detections suggest a broad preference for [Ne V] detections to be classified as AGN in all three of these diagnostics.
Based on these line diagnostic plots, the CLEAR [Ne V]-emitting galaxies are broadly consistent with ionization from AGN.
Figure 5. The relation between intrinsic X-ray luminosity and [Ne V] luminosity (left) and [O III] luminosity (right) for objects in CLEAR matching the Xue et al. (2016) and Luo et al. (2017) X-ray catalogs, in the redshift range of our [Ne V] sample. [Ne V]-detected objects are shown as purple diamonds. We also show the 1σ upper limits for the [Ne V] nondetections as left-facing black triangles. The black dashed lines and gray shaded regions in each panel show the median and 1σ relations for local Seyferts.
Figure 6. The distribution of the intrinsic X-ray/[Ne V] luminosity ratio for the 8 X-ray detected (blue) and upper limits for the 21 X-ray nondetected (pink) galaxies in our [Ne V]-detected sample. The dashed black line and gray shaded regions show the median, 1σ, and 90% ranges of the unobscured Seyferts in the Gilli et al. (2010) local sample. All but two objects in our sample lie below the 90% lower limits of the unobscured local Seyferts.
Figure 8. PyNeb models for several relevant emission-line emissivities as a function of temperature and density. The emissivities are all given on the same colormap scale. We see that the neon species shown here ([Ne V] and [Ne III]) evolve only with temperature, while the oxygen species evolve with both temperature and density in this parameter space. Given the spectral resolution of the HST/WFC3 grisms, we also show the coadded [O II] and [O III] emission grids.
Figure 9. PyNeb models for several relevant emission-line ratios as a function of temperature and density. The ratios of the individual features of the [Ne V] and [O III] doublets are constant in this temperature and density parameter space, and their constant values are given (2.73 and 2.98, respectively). The [Ne V]/[Ne III] ratio is an indicator of temperature, whereas the [O II]/[O II] and [Ne III]/[O II] ratios are primarily functions of density.
These works have shown that [Ne III] traces [O III] emission, and that the [Ne III]/[O II] ratio can be used as a spectral classifier of AGN and star formation in conjunction with [O III]/Hβ.
1.4 < z < 2.3. We combine the [Ne V] emission with information from several of the other bright rest-frame UV/optical emission-line features of [Ne III], [O III], [O II],
The most relevant of these spectral properties for our analysis are redshifts, line fluxes, and emission-line maps. The [Ne V] doublet at 3346 and 3426 Å is fit with a free ratio, i.e., grizli does not force a ratio of [Ne V] λ3426/[Ne V] λ3346 = 2.73 (the expected ratio under typical nebular conditions; see Appendix A).
[Ne V] λλ3346,3426, [O II] λλ3726,3729, [Ne III] λ3869, Hγ, Hβ, and [O III] λλ4959,5007.
One such diagnostic is the [O III]/Hβ and [Ne III]/[O II] (the "OHNO") diagram (Zeimann et al. 2015; Backhaus et al. 2022a). This diagnostic compares ratios of emission lines at similar wavelengths ([O II] λλ3726,3729 Å, [Ne III] λ3869 Å, Hβ λ4861 Å, and [O III] λλ4959,5007) where the production of [O III] and [Ne III] both require higher photon energies: the ionization energy of O 0 is 13.6 eV, that of O + is 35.1 eV, and that of Ne + is 41.0 eV. Galaxies with strong [O III]/Hβ and/or [Ne III]/[O II] require harder radiation fields, typically found in the emission-line regions of AGN.
In the OHNO panel, we also show the limiting cases for objects in our sample which do not have well-detected Hβ, [Ne III], or [O II] emission lines, in various different permutations of undetected lines (i.e., those not detected in Hβ, [Ne III], [O II], or some combination thereof). This analysis shows that even the [Ne V]-detected objects without all of the necessary lines preferentially lie in the AGN region.

3.3. The "unVO87" Diagram

Another diagnostic used to separate AGN and star-forming galaxies is the relation between [O III]/Hβ and [S II] combined with Hα
The original VO87 relation to divide AGN and star formation is given by

log([O III]/Hβ) = 0.48 / (log([S II]/Hα) + 0.10) + 1.3.    (5)

At the resolution of the HST/WFC3 grisms, the [S II] lines are blended with each other, and Hα is blended with [N II]. We therefore use the "unresolved" VO87 relation (henceforth "unVO87"), which has been tested at z ∼ 1 for galaxies where these lines are blended (unresolved; see Backhaus et al. 2022a). In this case, Backhaus et al. (2022a) define an empirically derived relation to separate AGN and star-forming galaxies as follows:

log([O III]/Hβ) = 0.48 / (log([S II]/[Hα + [N II]]) + 0.12) + 1.3,    (6)

where galaxies lying above the curve (higher [O III]/Hβ) are classified as AGN and those below the curve are classified as star-forming, for both the Veilleux & Osterbrock (1987) and Backhaus et al. (2022a) curves.

Figure 4 (right panel) shows the unVO87 diagram for objects in CLEAR in the (rather) narrow redshift range which allows for HST G102 and G141 grism coverage of [Ne V] along with all of [O III], Hβ, [S II], and Hα (1.39 < z < 1.45). We show both the original z ∼ 0 VO87 relation and the z ∼ 1 unresolved unVO87 AGN/SF dividing lines. Given the very limited redshift range to allow for all four lines needed for unVO87 and [Ne V], there are only two [Ne V]-emitting objects in this subsample. These are shown in red. One of the [Ne V] detections
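The dividing curves of Equations (5) and (6) are simple to evaluate; a minimal sketch (function names and the example ratios below are ours, not from the papers):

```python
# Dividing curves of Equations (5) and (6). A galaxy with
# log([O III]/H-beta) above the curve is classified as AGN; below it,
# as star-forming. The curves have vertical asymptotes at -0.10 / -0.12
# and are applied left of them, as in the published diagrams.
def vo87_curve(log_sii_ha):
    """Veilleux & Osterbrock (1987) z ~ 0 division, Eq. (5)."""
    return 0.48 / (log_sii_ha + 0.10) + 1.3

def unvo87_curve(log_sii_ha_nii):
    """Backhaus et al. (2022a) unresolved ("unVO87") division, Eq. (6).
    Argument is log([S II]/[H-alpha + [N II]]), the blended-line ratio."""
    return 0.48 / (log_sii_ha_nii + 0.12) + 1.3

def is_agn_unvo87(log_oiii_hb, log_sii_ha_nii):
    """True if the point lies above the unVO87 curve (AGN side)."""
    return log_oiii_hb > unvo87_curve(log_sii_ha_nii)

# Example: a blended ratio of -0.62 puts the unVO87 curve at 0.34, so
# log([O III]/H-beta) = 0.8 lands on the AGN side and 0.0 on the SF side.
```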
CDF catalogs, in relation to the [Ne V] and [O III] luminosities. In the left panel, we also show the 1σ upper limits of the [Ne V] luminosity for X-ray AGN in the CDF catalogs not detected in [Ne V]. We show the local Seyfert X-ray vs [Ne V] and [O III] luminosity relations from
They show that most of the [Ne V] emitters are consistent with ionization by AGN, with the most reliable of these (OHNO) classifying 89% of [Ne V] galaxies as AGN. This is particularly true for X-ray-detected [Ne V] sources, where all X-ray and [Ne V] sources are consistent with AGN. In this work, we also include an updated redshift dependence of the MEx diagnostic, which we quantify in Equation 4 as a shift in mass from the local Juneau et al. (2014) relation.
We define the [Ne III]/[O II] (Ne3O2) ratio in terms of the coadded [O II] doublet, and the [O III]/[O II] ratio (O32) in terms of the coadded [O III] and [O II] doublets:

[Ne III]/[O II] ≡ [Ne III] λ3869 / [O II] λλ3726, 3729    (A1)

O32 ≡ [O III] λλ4959, 5008 / [O II] λλ3726, 3729    (A2)

In Figure 8 we show emissivity maps as a function of temperature and density for several relevant emission features of neon and oxygen ([Ne V] 3426 and 3346, [Ne III] 3869, [O III] 5007 and 4959, and [O II] 3726 and 3729). We also show the coadded [O III] 5007+4959 and [O II] 3726+3729 doublets, computed from PyNeb. The [Ne V] 3426/[Ne V] 3346 and [O III] 5007/[O III] 4959 ratios are both constant with temperature and density, with values 2.73 and 2.98, respectively. We see that the [Ne V] 3426/[Ne III] 3869 ratio increases with temperature. The [Ne III]/[O II] ratio evolves primarily with density within this parameter space. We also show the [O II] 3726/[O II] 3729 ratio, which increases solely as a function of density in this parameter space.
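A minimal sketch of the ratio definitions in Equations (A1) and (A2); the function names are ours and the fluxes below are made up for illustration:

```python
# Line-ratio definitions from Equations (A1) and (A2); fluxes are
# illustrative made-up numbers in arbitrary but consistent units.
def ne3o2(f_neiii_3869, f_oii_3726, f_oii_3729):
    """[Ne III]/[O II] (Ne3O2), Eq. (A1): one line over the coadded [O II] doublet."""
    return f_neiii_3869 / (f_oii_3726 + f_oii_3729)

def o32(f_oiii_4959, f_oiii_5007, f_oii_3726, f_oii_3729):
    """O32, Eq. (A2): coadded [O III] doublet over the coadded [O II] doublet."""
    return (f_oiii_4959 + f_oiii_5007) / (f_oii_3726 + f_oii_3729)

# Under typical nebular conditions the doublet ratios themselves are fixed
# ([Ne V] 3426/3346 ~ 2.73, [O III] 5007/4959 ~ 2.98), so coadding the
# blended doublets at grism resolution loses little information.
```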
Table 1. Sample characteristics and derived quantities
For the HST G102 and G141 grism spectral resolution, the [O III] λλ4959, 5007 lines are blended, so we use the blended flux for these analyses.
⁴ It may be that CT AGN have low X-ray/[Ne V] ratios, as argued by Gilli et al. (2010) and Mignoli et al. (2013). However, the X-ray/[Ne V] ratios of the [Ne V] galaxies in our CLEAR sample imply that low X-ray/[Ne V] ratios would then be a necessary but not sufficient condition for CT AGN.
ACKNOWLEDGMENTS

The authors wish to thank our colleagues in the CLEAR collaboration for their work on this project, and their assistance and support. NJC also thanks Maeve Curliss for significant discussion on the data visualization in this work. NJC also thanks Justin Spilker (Justin with the PhD) and Justin Cole (Justin without the PhD) for insightful discussions throughout the course of this work. NJC also acknowledges the rejected acronym for AGN exhibiting strong UV/optical features with X-ray emission weaker than or comparable to local Seyferts (Strong UV/Optical emission-line, Normal X-ray (STONX) AGN).

This work is based on data obtained from the Hubble Space Telescope through program number GO-14227. Support for Program number GO-14227 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. NJC, JRT, and BEB acknowledge support from NSF grant CAREER-1945546 and NASA grants JWST-ERS-01345 and 18-2ADAP18-0177. NJC and CP also acknowledge support from NASA/HST AR 16609. This work acknowledges support from the NASA/ESA/CSA James Webb Space Telescope through the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-03127. Support for program No. JWST-ERS-01345 was provided through a grant from the STScI under NASA contract NAS5-03127.

Notes to Table 1:
i. Grism redshifts from CLEAR.
ii. From the 3D-HST catalog (Skelton et al. 2014). Masses have characteristic uncertainty of 0.3 dex.
iii. From the CANDELS/SHARDS catalog (Barro et al. 2019). Objects without uncertainties on photometry are nondetections in both filters.
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
Backhaus, B. E., Trump, J. R., Cleri, N. J., et al. 2022a, ApJ, 926, 161
Backhaus, B. E., Bridge, J. S., Trump, J. R., et al. 2022b, arXiv e-prints, arXiv:2207.11265
Barro, G., Pérez-González, P. G., Cava, A., et al. 2019, ApJS, 243, 22
Berg, D. A., Chisholm, J., Erb, D. K., et al. 2019, ApJL, 878, L3
—. 2021, ApJ, 922, 170
Brammer, G. B., van Dokkum, P. G., & Coppi, P. 2008, ApJ, 686, 1503
Brinchmann, J. 2022, arXiv e-prints, arXiv:2208.07467
Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000
Chabrier, G. 2003, PASP, 115, 763
Cleri, N. J., Trump, J. R., Backhaus, B. E., et al. 2022, ApJ, 929, 3
Cleri, N. J., Olivier, G. M., Hutchison, T. A., et al. 2023, arXiv e-prints, arXiv:2301.07745
Coil, A. L., Aird, J., Reddy, N., et al. 2015, ApJ, 801, 35
Done, C., Davis, S. W., Jin, C., Blaes, O., & Ward, M. 2012, MNRAS, 420, 1848
Donley, J. L., Koekemoer, A. M., Brusa, M., et al. 2012, ApJ, 748, 142
Erb, D. K., Shapley, A. E., Pettini, M., et al. 2006, ApJ, 644, 813
Estrada-Carpenter, V., Papovich, C., Momcheva, I., et al. 2019, ApJ, 870, 133
—. 2020, ApJ, 898, 171
Fazio, G. G., Hora, J. L., Allen, L. E., et al. 2004, ApJS, 154, 10
Gierliński, M., & Done, C. 2004, MNRAS, 349, L7
Gilli, R., Vignali, C., Mignoli, M., et al. 2010, A&A, 519, A92
Greene, J. E., Strader, J., & Ho, L. C. 2020, ARA&A, 58, 257
Grogin, N. A., Kocevski, D. D., Faber, S. M., et al. 2011, ApJS, 197, 35
Guo, Y., Ferguson, H. C., Giavalisco, M., et al. 2013, ApJS, 207, 24
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi:10.1038/s41586-020-2649-2
Heckman, T. M., Ptak, A., Hornschemeier, A., & Kauffmann, G. 2005, ApJ, 634, 161
Hickox, R. C., & Alexander, D. M. 2018, ARA&A, 56, 625
Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90
Izotov, Y. I., Thuan, T. X., & Guseva, N. G. 2021, MNRAS, 508, 2556
Izotov, Y. I., Thuan, T. X., & Privon, G. 2012, MNRAS, 427, 1229
Juneau, S., Dickinson, M., Alexander, D. M., & Salim, S. 2011, ApJ, 736, 104
Juneau, S., Bournaud, F., Charlot, S., et al. 2014, ApJ, 788, 88
Jung, I., Papovich, C., Finkelstein, S. L., et al. 2022, ApJ, 933, 87
Katz, H., Saxena, A., Cameron, A. J., et al. 2023, MNRAS, 518, 592
Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, MNRAS, 346, 1055
Kennicutt, R. C., Jr. 1998, ApJ, 498, 541
Kewley, L. J., Heisler, C. A., Dopita, M. A., & Lumsden, S. 2001, ApJS, 132, 37
Kewley, L. J., Nicholls, D. C., Sutherland, R., et al. 2019a, ApJ, 880, 16
Kewley, L. J., Nicholls, D. C., & Sutherland, R. S. 2019b, ARA&A, 57, 511
Koekemoer, A. M., Faber, S. M., Ferguson, H. C., et al. 2011, ApJS, 197, 36
Kriek, M., van Dokkum, P. G., Labbé, I., et al. 2009, ApJ, 700, 221
Lacy, M., Storrie-Lombardi, L. J., Sajina, A., et al. 2004, ApJS, 154, 166
. E L Lambrides, M Chiaberge, T Heckman, ApJ. 897160Lambrides, E. L., Chiaberge, M., Heckman, T., et al. 2020, ApJ, 897, 160
Flux Calibration Monitoring: WFC3/IR G102 and G141 Grisms. J C Lee, N Pirzkal, B Hilbert, Tech. repLee, J. C., Pirzkal, N., & Hilbert, B. 2014, Flux Calibration Monitoring: WFC3/IR G102 and G141 Grisms, Tech. rep.
. B D Lehmer, D M Alexander, F E Bauer, ApJ. 724559Lehmer, B. D., Alexander, D. M., Bauer, F. E., et al. 2010, ApJ, 724, 559
. G C K Leung, A L Coil, D S N Rupke, S Perrotta, ApJ. 91417Leung, G. C. K., Coil, A. L., Rupke, D. S. N., & Perrotta, S. 2021, ApJ, 914, 17
. E M Levesque, M L A Richardson, ApJ. 780100Levesque, E. M., & Richardson, M. L. A. 2014, ApJ, 780, 100
. J Li, Y Xue, M Sun, ApJ. 8775Li, J., Xue, Y., Sun, M., et al. 2019, ApJ, 877, 5
. T Liu, P Tozzi, J.-X Wang, ApJS. 2328Liu, T., Tozzi, P., Wang, J.-X., et al. 2017, ApJS, 232, 8
. B Luo, W N Brandt, Y Q Xue, The Astrophysical Journal Supplement Series. 2282Luo, B., Brandt, W. N., Xue, Y. Q., et al. 2017, The Astrophysical Journal Supplement Series, 228, 2
. V Luridiana, C Morisset, R A Shaw, A&A. 57342Luridiana, V., Morisset, C., & Shaw, R. A. 2015, A&A, 573, A42
. P Madau, M Dickinson, ARA&A. 52415Madau, P., & Dickinson, M. 2014, ARA&A, 52, 415
. R Maiolino, M Salvati, L Bassani, A&A. 338781Maiolino, R., Salvati, M., Bassani, L., et al. 1998, A&A, 338, 781
. D Masters, P Mccarthy, B Siana, ApJ. 785153Masters, D., McCarthy, P., Siana, B., et al. 2014, ApJ, 785, 153
. J Matharu, C Papovich, R C Simons, arXiv:2205.08543arXiv e-printsMatharu, J., Papovich, C., Simons, R. C., et al. 2022, arXiv e-prints, arXiv:2205.08543
. A J Mendez, A L Coil, J Aird, ApJ. 77040Mendez, A. J., Coil, A. L., Aird, J., et al. 2013, ApJ, 770, 40
. M Mignoli, C Vignali, R Gilli, A&A. 55629Mignoli, M., Vignali, C., Gilli, R., et al. 2013, A&A, 556, A29
. I G Momcheva, G B Brammer, P G Van Dokkum, ApJS. 22527Momcheva, I. G., Brammer, G. B., van Dokkum, P. G., et al. 2016, ApJS, 225, 27
. K Nakajima, M Ouchi, MNRAS. 442900Nakajima, K., & Ouchi, M. 2014, MNRAS, 442, 900
. G M Olivier, D A Berg, J Chisholm, ApJ. 93816Olivier, G. M., Berg, D. A., Chisholm, J., et al. 2022, ApJ, 938, 16
. C Papovich, R C Simons, V Estrada-Carpenter, arXiv:2205.05090arXiv e-printsPapovich, C., Simons, R. C., Estrada-Carpenter, V., et al. 2022, arXiv e-prints, arXiv:2205.05090
. P G Pérez-González, A Cava, G Barro, ApJ. 76246Pérez-González, P. G., Cava, A., Barro, G., et al. 2013, ApJ, 762, 46
Trace and Wavelength Calibrations of the WFC3 G102 and G141 IR Grisms. N Pirzkal, R Ryan, G Brammer, Instrument Science Report WFC3 2016-15, 25 pagesPirzkal, N., Ryan, R., & Brammer, G. 2016, Trace and Wavelength Calibrations of the WFC3 G102 and G141 IR Grisms, Instrument Science Report WFC3 2016-15, 25 pages, ,
. N Pirzkal, S Malhotra, R E Ryan, ApJ. 84684Pirzkal, N., Malhotra, S., Ryan, R. E., et al. 2017, ApJ, 846, 84
. N Aghanim, Planck CollaborationY Akrami, Planck CollaborationA&A. 6416Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6
. J Reback, W Mckinney, 10.5281/zenodo.3509134et al. 2022, pandas-dev/pandas: Pandas 1.4.2, Zenodo, vv1.4.2, ZenodoReback, J., jbrockmendel, McKinney, W., et al. 2022, pandas-dev/pandas: Pandas 1.4.2, Zenodo, vv1.4.2, Zenodo, doi:10.5281/zenodo.3509134
. J E Rhoads, I G B Wold, S Harish, arXiv:2207.13020arXiv e-printsRhoads, J. E., Wold, I. G. B., Harish, S., et al. 2022, arXiv e-prints, arXiv:2207.13020
. R C Simons, C Papovich, I Momcheva, ApJ. 923203Simons, R. C., Papovich, C., Momcheva, I., et al. 2021, ApJ, 923, 203
. R C Simons, C Papovich, I G Momcheva, arXiv:2303.09570arXiv e-printsSimons, R. C., Papovich, C., Momcheva, I. G., et al. 2023, arXiv e-prints, arXiv:2303.09570
. R E Skelton, K E Whitaker, I G Momcheva, ApJS. 21424Skelton, R. E., Whitaker, K. E., Momcheva, I. G., et al. 2014, ApJS, 214, 24
. C C Steidel, G C Rudie, A L Strom, ApJ. 795165Steidel, C. C., Rudie, G. C., Strom, A. L., et al. 2014, ApJ, 795, 165
. D Stern, P Eisenhardt, V Gorjian, ApJ. 631163Stern, D., Eisenhardt, P., Gorjian, V., et al. 2005, ApJ, 631, 163
. M Tang, D P Stark, J Chevallard, MNRAS. 5013238Tang, M., Stark, D. P., Chevallard, J., et al. 2021, MNRAS, 501, 3238
. T X Thuan, Y I Izotov, ApJS. 161240Thuan, T. X., & Izotov, Y. I. 2005, ApJS, 161, 240
. J R Trump, C D Impey, B C Kelly, ApJ. 73360Trump, J. R., Impey, C. D., Kelly, B. C., et al. 2011, ApJ, 733, 60
. J R Trump, M Sun, G R Zeimann, ApJ. 81126Trump, J. R., Sun, M., Zeimann, G. R., et al. 2015, ApJ, 811, 26
. J R Trump, P Haro, R C Simons, ApJ. 94535Trump, J. R., Arrabal Haro, P., Simons, R. C., et al. 2023, ApJ, 945, 35
. S Veilleux, D E Osterbrock, ApJS. 63295Veilleux, S., & Osterbrock, D. E. 1987, ApJS, 63, 295
. R Walter, H H Fink, A&A. 274105Walter, R., & Fink, H. H. 1993, A&A, 274, 105
. M Waskom, The Journal of Open Source Software. 63021Waskom, M. 2021, The Journal of Open Source Software, 6, 3021
. S Wuyts, Förster, N M Schreiber, D Lutz, ApJ. 738106Wuyts, S., Förster Schreiber, N. M., Lutz, D., et al. 2011, ApJ, 738, 106
. Y Q Xue, B Luo, W N Brandt, ApJS. 22410ApJSXue, Y. Q., Luo, B., Brandt, W. N., et al. 2016, ApJS, 224, 15 -. 2011, ApJS, 195, 10
. R Yan, L C Ho, J A Newman, ApJ. 72838Yan, R., Ho, L. C., Newman, J. A., et al. 2011, ApJ, 728, 38
. G Yang, W N Brandt, B Luo, ApJ. 831145Yang, G., Brandt, W. N., Luo, B., et al. 2016, ApJ, 831, 145
. G Yang, M Boquien, V Buat, MNRAS. 491740Yang, G., Boquien, M., Buat, V., et al. 2020, MNRAS, 491, 740
. G R Zeimann, R Ciardullo, H Gebhardt, ApJ. 79029ApJZeimann, G. R., Ciardullo, R., Gebhardt, H., et al. 2014, ApJ, 790, 113 -. 2015, ApJ, 798, 29
| [
"https://github.com/gbrammer/grizli/"
] |
[
"Quantum Cohomology at Higher Genus: Topological Recursion Relations and Virasoro Conditions",
"Quantum Cohomology at Higher Genus: Topological Recursion Relations and Virasoro Conditions"
] | [
"Tohru Eguchi \nDepartment of Physics\nFaculty of Science\nYukawa Institute for Theoretical Physics\nUniversity of Tokyo\n113TokyoJapan\n",
"Chuan-Sheng Xiong \nKyoto University\n606KyotoJapan\n"
] | [
"Department of Physics\nFaculty of Science\nYukawa Institute for Theoretical Physics\nUniversity of Tokyo\n113TokyoJapan",
"Kyoto University\n606KyotoJapan"
] | [] | We construct topological recursion relations (TRR's) at higher genera g ≥ 2 for general 2-dimensional topological field theories coupled to gravity. These TRR's when combined with Virasoro conditions enable one to determine the number of higher genus holomorphic curves in any Fano varieties. In the case of CP 2 we reproduce the known results at genus g = 2. | 10.4310/atmp.1998.v2.n1.a9 | [
"https://export.arxiv.org/pdf/hep-th/9801010v2.pdf"
] | 14,206,465 | hep-th/9801010 | c4832e3caf56fecfeb2a003d8f4c293858cf43ba |
Quantum Cohomology at Higher Genus: Topological Recursion Relations and Virasoro Conditions
4 Mar 1998 January, 1998
Tohru Eguchi
Department of Physics
Faculty of Science
Yukawa Institute for Theoretical Physics
University of Tokyo
113TokyoJapan
Chuan-Sheng Xiong
Kyoto University
606KyotoJapan
We construct topological recursion relations (TRR's) at higher genera g ≥ 2 for general 2-dimensional topological field theories coupled to gravity. These TRR's when combined with Virasoro conditions enable one to determine the number of higher genus holomorphic curves in any Fano varieties. In the case of CP 2 we reproduce the known results at genus g = 2.
Introduction
Recently we have proposed that the partition functions of topological string theories compactified on an arbitrary Kähler manifold are highest-weight states of a Virasoro algebra [1,2,3]. We have shown that the Virasoro conditions, together with topological recursion relations, reproduce the known instanton numbers (numbers of holomorphic curves) at genus 0 and 1 of various Fano varieties. Topological recursion relations (TRR's) exist at genus g = 0 and g = 1 [4] and convert correlation functions of gravitational descendants into those of primary fields, which may be evaluated directly by the methods of algebraic geometry. In the case of higher genera g ≥ 2, however, no TRR has so far been available, and it was not possible to compare the predictions of the Virasoro conditions with the known geometrical data on higher genus curves.
Recently Getzler has announced the existence of a TRR at genus g = 2 [5] which involves descendants of degree n, n − 1 and n − 2. His analysis is based on the study of a linear relation among homology cycles on the moduli space of holomorphic maps. A precise form of his formula, however, is not yet available.
In this paper we propose a somewhat different form of TRR at higher genera, starting from a simple assumption on the dependence of higher-genus free energies on genus=0 primary correlation functions. Our recursion relation at genus g involves gravitational descendants of degree $n, n-1, \cdots, n-3g+1$. In the case of genus 2, descendants of degree $n, n-1, \cdots, n-5$ appear, and thus our relation is somewhat weaker than Getzler's. Nevertheless, these TRR's can be used together with the Virasoro conditions to completely determine the number of holomorphic curves of arbitrary degree and genus. In the case of CP^2 at genus g = 2 we explicitly verify that our TRR and Virasoro conditions reproduce the known results of [6,7,8].
Topological Recursion Relation at Higher Genus
Our basic assumption is that the genus-g free energy of topological string theory is a function depending only on the primary multi-point functions of genus g = 0. Let us consider the case of a theory with primary fields $\{O_\alpha\}$ $(\alpha = 0, 1, \cdots, N)$ coupled to the perturbation parameters $\{t^\alpha\}$. Gravitational descendants and their couplings are denoted as $\{\sigma_n(O_\alpha)\}$ and $\{t^\alpha_n\}$ $(n = 0, 1, 2, \cdots)$, respectively. We define genus=0 correlation functions as $u_{\alpha_1\alpha_2\cdots\alpha_j} \equiv \langle P\, O_{\alpha_1} O_{\alpha_2} \cdots O_{\alpha_j}\rangle_0$, where $P$ denotes the puncture operator. Our assumption is that the genus-g free energy $F_g$ is a function only of the genus=0 correlation functions $u_{\alpha_1}, u_{\alpha_1\alpha_2}, \cdots, u_{\alpha_1\alpha_2\cdots\alpha_{3g-1}}$:
$$F_g(t) = F_g\big(u_{\alpha_1}(t),\, u_{\alpha_1\alpha_2}(t),\, \cdots,\, u_{\alpha_1\alpha_2\cdots\alpha_{3g-1}}(t)\big), \qquad g \ge 1. \tag{1}$$
Here $t$ stands for all the couplings $\{t^\alpha_n\}$ of the large phase space. Note that the dependence of the free energy $F_g(t)$ on the parameters $t$ occurs only through the functions $u_{\alpha_1\alpha_2\cdots\alpha_j}(t)$ $(j = 1, 2, \cdots, 3g-1)$.
Equation (1) is known to hold in the 2-dimensional pure gravity theory. For instance, the genus 1, 2 and 3 free energies are given by [9,10]
$$F_1 = \frac{1}{24}\log u', \qquad F_2 = \frac{(u'')^3}{360\, u'^4} - \frac{7\, u''\, u^{(3)}}{1920\, u'^3} + \frac{u^{(4)}}{1152\, u'^2}, \tag{2}$$
$$F_3 = -\frac{5 (u'')^6}{648\, u'^8} + \frac{59 (u'')^4 u^{(3)}}{3024\, u'^7} - \frac{83 (u'')^2 (u^{(3)})^2}{7168\, u'^6} + \frac{59 (u^{(3)})^3}{64512\, u'^5} - \frac{83 (u'')^3 u^{(4)}}{15120\, u'^6} + \frac{1273\, u''\, u^{(3)}\, u^{(4)}}{322560\, u'^5} - \frac{103 (u^{(4)})^2}{483840\, u'^4} + \frac{353 (u'')^2 u^{(5)}}{322560\, u'^5} - \frac{53\, u^{(3)}\, u^{(5)}}{161280\, u'^4} - \frac{7\, u''\, u^{(6)}}{46080\, u'^4} + \frac{u^{(7)}}{82944\, u'^3} \tag{3}$$
where $u = \langle PP\rangle_0$ and $'$ denotes the $t_0$-derivative. Eq. (1) is also known to hold in some cases of 2-dimensional gravity coupled to minimal matter at lower genera [11]. It is also valid in the case of the CP^1 model [12]. Eq. (1) means that higher genus amplitudes are expressed in terms of the genus=0 data and suggests a possible reinterpretation of the world-sheet topological theory as a field theory on the target space [10,11]. We now assume that (1) is a universal feature of 2-dimensional topological field theories coupled to gravity. It is then easy to derive our TRR's. Let us first consider for simplicity the case of genus=1. The genus-1 free energy depends on $\{u_\alpha\}$ and $\{u_{\alpha\beta}\}$. We then have
$$\frac{\partial F_1}{\partial t^\alpha_n} = \frac{\partial u_\mu}{\partial t^\alpha_n}\,\frac{\partial F_1}{\partial u_\mu} + \frac{\partial u_{\mu\nu}}{\partial t^\alpha_n}\,\frac{\partial F_1}{\partial u_{\mu\nu}} = \langle\sigma_n(O_\alpha)O_\mu P\rangle_0\, \frac{\partial F_1}{\partial u_\mu} + \frac{\partial \langle\sigma_n(O_\alpha)O_\nu P\rangle_0}{\partial t^\mu}\, \frac{\partial F_1}{\partial u_{\mu\nu}}. \tag{4}$$
At $n = 0$ eq. (4) becomes
$$\langle O_\alpha\rangle_1 = \langle O_\alpha O_\mu P\rangle_0\, \frac{\partial F_1}{\partial u_\mu} + \langle O_\alpha O_\mu O_\nu P\rangle_0\, \frac{\partial F_1}{\partial u_{\mu\nu}}. \tag{5}$$
We use the genus=0 TRR
$$\langle\sigma_n(O_\alpha)\, A\, B\rangle_0 = \langle\sigma_{n-1}(O_\alpha)\, O_\gamma\rangle_0\, \langle O^\gamma A\, B\rangle_0 \tag{6}$$
to rewrite (4) as
$$\langle\sigma_n(O_\alpha)\rangle_1 = \langle\sigma_{n-1}(O_\alpha)O_\beta\rangle_0\, \langle O^\beta O_\mu P\rangle_0\, \frac{\partial F_1}{\partial u_\mu} + \Big[\langle\sigma_{n-1}(O_\alpha)O_\beta\rangle_0\, \langle O^\beta O_\mu O_\nu P\rangle_0 + \langle\sigma_{n-1}(O_\alpha)O_\mu O_\beta\rangle_0\, \langle O^\beta O_\nu P\rangle_0\Big]\, \frac{\partial F_1}{\partial u_{\mu\nu}}. \tag{7}$$
Putting $n = 1$ in (7) gives
$$\langle\sigma_1(O_\alpha)\rangle_1 = \langle O_\alpha O_\beta\rangle_0\, \langle O^\beta O_\mu P\rangle_0\, \frac{\partial F_1}{\partial u_\mu} + \Big[\langle O_\alpha O_\beta\rangle_0\, \langle O^\beta O_\mu O_\nu P\rangle_0 + \langle O_\alpha O_\mu O_\beta\rangle_0\, \langle O^\beta O_\nu P\rangle_0\Big]\, \frac{\partial F_1}{\partial u_{\mu\nu}} = \langle O_\alpha O_\beta\rangle_0\, \langle O^\beta\rangle_1 + \langle O_\alpha O_\mu O_\beta\rangle_0\, \langle O^\beta O_\nu P\rangle_0\, \frac{\partial F_1}{\partial u_{\mu\nu}} \tag{8}$$
where (5) has been used. By making use of (5) and (8), eq. (7) is then re-expressed as
$$\langle\sigma_n(O_\alpha)\rangle_1 = \langle\sigma_{n-1}(O_\alpha)O_\beta\rangle_0\, \langle O^\beta\rangle_1 + \langle\sigma_{n-2}(O_\alpha)O_\gamma\rangle_0\, \Big[\langle\sigma_1(O^\gamma)\rangle_1 - \langle O^\gamma O_\beta\rangle_0\, \langle O^\beta\rangle_1\Big]. \tag{9}$$
Eq.(9) is our TRR at genus=1. It appears somewhat different from the standard TRR [4]
$$\langle\sigma_n(O_\alpha)\rangle_1 = \frac{1}{24}\, \langle\sigma_{n-1}(O_\alpha)\, O_\beta\, O^\beta\rangle_0 + \langle\sigma_{n-1}(O_\alpha)O_\beta\rangle_0\, \langle O^\beta\rangle_1. \tag{10}$$
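The 1/24 in (10) originates from a log-determinant term in the genus-1 free energy (its structure is recalled in (11) below). As an illustrative numerical sanity check, not part of the paper's argument, the matrix-calculus identity behind derivatives of such terms, $\partial \log\det M / \partial M_{ij} = (M^{-1})_{ji}$, can be verified by finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 3.0 * np.eye(2)   # generic well-conditioned 2x2 matrix
eps = 1e-6

# Central finite differences of log det M with respect to each entry M_ij.
grad = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        Mp, Mm = M.copy(), M.copy()
        Mp[i, j] += eps
        Mm[i, j] -= eps
        grad[i, j] = (np.log(np.linalg.det(Mp)) - np.log(np.linalg.det(Mm))) / (2 * eps)

# d(log det M)/dM_ij equals the (j, i) entry of M^{-1}.
assert np.allclose(grad, np.linalg.inv(M).T, atol=1e-5)
```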
However, when one uses the structure of the genus=1 free energy
$$F_1 = \frac{1}{24}\, \log\det(u_{\alpha\beta}) + f_1(u_\alpha) \tag{11}$$
one may easily check (9) and (10) are equivalent. By repeating the same procedure as above we can derive the TRR at genus=2
$$\langle\sigma_{n+5}(O_\alpha)\rangle_2 = \langle\sigma_{n+4}(O_\alpha)O_\beta\rangle_0\, A^\beta_0 + \langle\sigma_{n+3}(O_\alpha)O_\beta\rangle_0\, A^\beta_1 + \langle\sigma_{n+2}(O_\alpha)O_\beta\rangle_0\, A^\beta_2 + \langle\sigma_{n+1}(O_\alpha)O_\beta\rangle_0\, A^\beta_3 + \langle\sigma_n(O_\alpha)O_\beta\rangle_0\, A^\beta_4 \tag{12}$$
where
$$A^\beta_0 \equiv \langle O^\beta\rangle_2, \tag{13}$$
$$A^\beta_1 \equiv \langle\sigma_1(O^\beta)\rangle_2 - \langle O^\beta O_\gamma\rangle_0\, A^\gamma_0, \tag{14}$$
$$A^\beta_2 \equiv \langle\sigma_2(O^\beta)\rangle_2 - \langle\sigma_1(O^\beta)O_\gamma\rangle_0\, A^\gamma_0 - \langle O^\beta O_\gamma\rangle_0\, A^\gamma_1, \tag{15}$$
$$A^\beta_3 \equiv \langle\sigma_3(O^\beta)\rangle_2 - \langle\sigma_2(O^\beta)O_\gamma\rangle_0\, A^\gamma_0 - \langle\sigma_1(O^\beta)O_\gamma\rangle_0\, A^\gamma_1 - \langle O^\beta O_\gamma\rangle_0\, A^\gamma_2, \tag{16}$$
$$A^\beta_4 \equiv \langle\sigma_4(O^\beta)\rangle_2 - \langle\sigma_3(O^\beta)O_\gamma\rangle_0\, A^\gamma_0 - \langle\sigma_2(O^\beta)O_\gamma\rangle_0\, A^\gamma_1 - \langle\sigma_1(O^\beta)O_\gamma\rangle_0\, A^\gamma_2 - \langle O^\beta O_\gamma\rangle_0\, A^\gamma_3. \tag{17}$$
Thus all the descendants {σ n (O α ), n ≥ 5} may be eliminated in favor of {σ i (O α ), i = 1, 2, 3, 4} at genus g = 2. An explicit form of the above TRR is presented in the Appendix. Similarly, TRR's at arbitrary genus (g ≥ 1) are given by
$$\langle\sigma_{n+3g-1}(O_\alpha)\rangle_g = \sum_{j=0}^{3g-2} \langle\sigma_{n+3g-2-j}(O_\alpha)\, O_\beta\rangle_0\, A^\beta_j, \tag{18}$$
$$A^\beta_0 \equiv \langle O^\beta\rangle_g, \tag{19}$$
$$A^\beta_j = \langle\sigma_j(O^\beta)\rangle_g - \sum_{k=0}^{j-1} \langle\sigma_k(O^\beta)O_\gamma\rangle_0\, A^\gamma_{j-k-1}. \tag{20}$$
Thus the descendant degrees are reduced to $n \le 3g-2$ at genus g. These TRR's are not quite as efficient as the standard TRR's at genus 0 and 1, which reduce the descendant degrees all the way to zero. As we shall see, however, they are powerful enough to determine instanton numbers of arbitrary genera when combined with the Virasoro conditions.
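As a quick consistency check on the pure-gravity expression (2) for $F_2$ quoted earlier: under $t_0 \to \lambda t_0$ each derivative $u^{(k)}$ picks up a factor $\lambda^k$, and every one of the three terms then scales by exactly $\lambda^2$, i.e. each term carries net derivative degree $2g-2 = 2$ (the same count gives degree 4 term by term in (3)). A few lines of plain Python confirm this for the expression as reconstructed here:

```python
def F2(u1, u2, u3, u4):
    """Eq. (2): genus-2 free energy of pure gravity; uk stands for u^(k)."""
    return u2**3 / (360 * u1**4) - 7 * u2 * u3 / (1920 * u1**3) + u4 / (1152 * u1**2)

lam = 1.7
jet = (0.9, -0.4, 1.3, 0.2)   # arbitrary nonzero values for u', u'', u^(3), u^(4)
# Under t0 -> lam*t0 each derivative scales as u^(k) -> lam^k u^(k).
scaled = tuple(lam**(k + 1) * v for k, v in enumerate(jet))
assert abs(F2(*scaled) - lam**2 * F2(*jet)) < 1e-12
```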
Virasoro Conditions and Higher Genus Curves in CP 2
We now turn to the application of our TRR. To fix our discussion, let us consider the case of CP^2 and determine the number of its genus=2 curves.
Let us first recall the Virasoro conditions for CP 2 [1]
$$L_n Z = 0, \qquad n \ge -1 \tag{21}$$
where
$$L_{-1} = \sum_{\alpha=0}^{2}\sum_{m=1}^{\infty} t^\alpha_m\, \partial_{m-1,\alpha} + \frac{1}{2\lambda^2}\sum_{\alpha=0}^{2} t^\alpha t_\alpha, \tag{22}$$
$$L_0 = \sum_{\alpha=0}^{2}\sum_{m=0}^{\infty} (b_\alpha + m)\, t^\alpha_m\, \partial_{m,\alpha} + 3\sum_{\alpha=0}^{1}\sum_{m=0}^{\infty} t^\alpha_m\, \partial_{m-1,\alpha+1} + \frac{1}{2\lambda^2}\sum_{\alpha=0}^{1} 3\, t^\alpha t_{\alpha+1} - \frac{5}{16}, \tag{23}$$
$$L_n = \sum_{m=0}^{\infty}\sum_{\alpha=0}^{2}\sum_{j=0}^{2-\alpha} 3^j\, C^{(j)}_\alpha(m,n)\, t^\alpha_m\, \partial_{m+n-j,\alpha+j} + \frac{\lambda^2}{2}\sum_{\alpha=0}^{2}\sum_{j=0}^{2-\alpha}\sum_{m=0}^{n-j-1} 3^j\, D^{(j)}_\alpha(m,n)\, \partial^\alpha_m\, \partial_{n-m-j-1,\alpha+j} + \frac{1}{2\lambda^2}\sum_{\alpha=0}^{1-n} 3^{n+1}\, t^\alpha t_{\alpha+n+1}, \qquad n \ge 1. \tag{24}$$
Here $b_0 = -1/2$, $b_1 = 1/2$, $b_2 = 3/2$, $\bar b_\alpha = 1 - b_\alpha$, and the coefficient functions are defined by
$$C^{(j)}_\alpha(m,n) \equiv \frac{\Gamma(b_\alpha+m+n+1)}{\Gamma(b_\alpha+m)} \sum_{m\le \ell_1<\ell_2<\cdots<\ell_j\le m+n}\ \prod_{i=1}^{j}\frac{1}{b_\alpha+\ell_i}, \tag{25}$$
$$D^{(j)}_\alpha(m,n) \equiv \frac{\Gamma(b_\alpha+m+1)\,\Gamma(\bar b_\alpha+n-m)}{\Gamma(b_\alpha)\,\Gamma(\bar b_\alpha)} \sum_{-m-1\le \ell_1<\ell_2<\cdots<\ell_j\le n-m-1}\ \prod_{i=1}^{j}\frac{1}{b_\alpha+\ell_i}. \tag{26}$$
Conventional notations are $t^0 \equiv t^P$, $t^1 \equiv t^Q$ and $t^2 \equiv t^R$. The parameter $\lambda$ denotes the string coupling constant, and the free energy has the genus expansion
$$F = \log Z, \qquad F = \sum_{g=0}^{\infty} \lambda^{2g-2}\, F_g. \tag{27}$$
In the small phase space ($t^\alpha_n = 0$ for $n > 0$, except $t^P_1 = -1$) the genus-g free energy has the structure
$$F_g = F^{cl}_g + \sum_{d=1}^{\infty} \frac{N^g_d}{(3d+g-1)!}\, (t^R)^{3d+g-1}\, e^{d\, t^Q} \tag{28}$$
where $N^g_d$ denotes the number of degree-d, genus-g irreducible curves passing through $3d+g-1$ points in CP^2. The classical part of the free energy $F^{cl}_g$ is non-vanishing only at g = 0 and 1: $F^{cl}_0 = t^P(t^Q)^2/2 + (t^P)^2 t^R/2$, $F^{cl}_1 = -t^Q/8$. In order to determine the instanton expansion (28) for the g = 2 free energy, we first have to determine the values of the "initial" descendants $\langle\sigma_i(O_\alpha)\rangle_2$ $(i = 1, 2, 3, 4)$. We consider the Virasoro conditions (21) for $n = 1, 2, \cdots, 12$ in the small phase space and substitute into these equations the known values of the correlation functions at genus 0, 1. We also use the TRR (12) at g = 2 to rewrite one-point functions of higher descendants in terms of those of the initial descendants. The Virasoro conditions then provide 12 linear relations for the 12 unknowns $\langle\sigma_i(O_\alpha)\rangle_2$ $(\alpha = 0, 1, 2,\ i = 1, 2, 3, 4)$, and one can determine them order by order in the instanton expansion.
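Schematically, the step just described is a 12×12 linear solve at each order of the instanton expansion. The sketch below uses a random nonsingular matrix as a placeholder (the true entries are built from the Virasoro operators and the genus-0/1 data, which are not reproduced here); it only illustrates the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder 12x12 system: rows stand in for the conditions L_1,...,L_12,
# columns for the unknowns <sigma_i(O_alpha)>_2, i=1..4, alpha=0,1,2.
A = rng.normal(size=(12, 12)) + 12.0 * np.eye(12)  # diagonal shift keeps it nonsingular
b = rng.normal(size=12)                            # placeholder inhomogeneous terms

unknowns = np.linalg.solve(A, b)
assert np.allclose(A @ unknowns, b)
```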
If one further considers the next Virasoro condition $L_n$ with $n = 13$ and substitutes into it the determined values of the initial descendants, it then gives a prediction for the g = 2 free energy. One finds, up to degree 10,
$$F_2(t^Q, t^R) = \frac{27}{13!}\,(t^R)^{13}\, e^{4t^Q} + \cdots + \frac{1301798459308709880}{28!}\,(t^R)^{28}\, e^{9t^Q} + \frac{6383405726993645784000}{31!}\,(t^R)^{31}\, e^{10t^Q}. \tag{29}$$
First 3 terms in the RHS of (29) agree with those of Caporaso and Harris [8] (Caporaso-Harris gives the number of curves including reducible components; a convenient method for subtracting the reducible part is described in [13]). The rest are our predictions. One can check that the higher Virasoro conditions $L_n$, $n > 13$, are simultaneously satisfied by (29).
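The powers of $t^R$ in the expansions above follow directly from (28): a degree-d, genus-g term carries $(t^R)^{3d+g-1}$. A trivial check of the bookkeeping for the genus-2 terms of (29), and for the genus-3 terms quoted later in the Note Added:

```python
def tR_exponent(d, g):
    """Exponent of t^R in the degree-d, genus-g term of eq. (28)."""
    return 3 * d + g - 1

# genus 2: degrees 9 and 10 appear with (t^R)^28 and (t^R)^31
assert tR_exponent(9, 2) == 28 and tR_exponent(10, 2) == 31
# genus 3: degrees 4..10 appear with powers 14, 17, ..., 32
assert [tR_exponent(d, 3) for d in range(4, 11)] == [14, 17, 20, 23, 26, 29, 32]
```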
Discussions
Precise agreement of our predictions for the genus-2 free energies of CP^2 with the known geometrical data gives a strong support for the validity of our Virasoro conditions and TRR's. It now seems that by making use of these equations one may, in principle, be able to compute the number of curves of any genus and degree in arbitrary Fano varieties. For instance, the genus-2 free energy of CP^3 up to degree 3 is computed as
$$\begin{aligned}
F_2(t^Q, t^R, t^S) = {}& -\frac{1}{288} + \frac{1}{360}\frac{(t^S)^2}{2!}\, e^{t^Q} + \frac{1}{360}\frac{(t^R)^2 (t^S)}{2!\cdot 1!}\, e^{t^Q} + \frac{1}{180}\frac{(t^R)^4}{4!}\, e^{t^Q} \\
&+ \frac{7}{240}\frac{(t^R)^2 (t^S)^3}{2!\cdot 3!}\, e^{2t^Q} + \frac{7}{60}\frac{(t^R)^4 (t^S)^2}{4!\cdot 2!}\, e^{2t^Q} + \frac{21}{40}\frac{(t^R)^6 (t^S)}{6!\cdot 1!}\, e^{2t^Q} + \frac{161}{60}\frac{(t^R)^8}{8!}\, e^{2t^Q} \\
&+ \frac{1}{12}\frac{(t^S)^6}{6!}\, e^{3t^Q} + \frac{5}{12}\frac{(t^R)^2 (t^S)^5}{2!\cdot 5!}\, e^{3t^Q} + \frac{5}{2}\frac{(t^R)^4 (t^S)^4}{4!\cdot 4!}\, e^{3t^Q} + \frac{46}{3}\frac{(t^R)^6 (t^S)^3}{6!\cdot 3!}\, e^{3t^Q} \\
&+ \frac{307}{3}\frac{(t^R)^8 (t^S)^2}{8!\cdot 2!}\, e^{3t^Q} + 747\, \frac{(t^R)^{10} (t^S)}{10!\cdot 1!}\, e^{3t^Q} + 5930\, \frac{(t^R)^{12}}{12!}\, e^{3t^Q}. \tag{30}
\end{aligned}$$
Here the variables $t^Q$, $t^R$, $t^S$ are those dual to the Kähler class $\omega$ and to $\omega^2$, $\omega^3$ of CP^3, respectively. The number of genus=3 curves in CP^2 is currently under study.

Eq. (1) for the genus-g free energy is a deep result of the 2-dimensional gravity theory. One may imagine having a set of space-time fields $\{\phi_\alpha\}$ whose propagator is given by $(u^{-1})_{\alpha\beta}$ and whose j-point vertex is $u_{\alpha_1\alpha_2\cdots\alpha_j}$. It is known that the genus-g free energy $F_g$ equals the sum of the Feynman amplitudes of g-loop diagrams made of these propagators and j-point vertices $(j = 3, 4, \cdots, 3g-1)$ [10,11]. The number of different vertices increases as the genus is increased, and hence the system has the characteristic of a non-polynomial closed string field theory. It is quite curious that such a space-time interpretation exists behind our TRR.
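A further consistency check on (30) is the dimension count for CP^3: since $\int c_1(CP^3) = 4d$ in degree d and an insertion dual to $\omega^k$ cuts $k-1$ dimensions beyond its marked point, every instanton term must satisfy $a_R + 2 a_S = 4d$, where $a_R$, $a_S$ are the powers of $t^R$ and $t^S$. (This is standard Gromov-Witten bookkeeping, stated here as our reading of (30), not a formula from the paper.) All fourteen terms pass:

```python
# (degree d, power of t^R, power of t^S) read off from the instanton terms of (30)
terms = [
    (1, 0, 2), (1, 2, 1), (1, 4, 0),
    (2, 2, 3), (2, 4, 2), (2, 6, 1), (2, 8, 0),
    (3, 0, 6), (3, 2, 5), (3, 4, 4), (3, 6, 3), (3, 8, 2), (3, 10, 1), (3, 12, 0),
]
for d, aR, aS in terms:
    # t^R (dual to omega^2) cuts one dimension, t^S (dual to omega^3) cuts two
    assert aR + 2 * aS == 4 * d
```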
Virasoro conditions $L_n Z = 0$ $(n \ge -1)$ are not independent in the large phase space, since they form the algebra $[L_n, L_m] = (n-m)L_{n+m}$. In the small phase space, however, they become all independent and are used to determine unknown correlation functions. For the computation of genus-2 curves in CP^2, a large number (thirteen) of Virasoro constraints were imposed. At the moment the logical relationship between the TRR's and the infinite set of Virasoro conditions is not completely clear. In the case of 2-dimensional pure gravity, the Virasoro conditions alone completely determined the amplitude; hence they imply the TRR's. On the other hand, in the case of 2-dimensional gravity coupled to minimal matter, additional W-algebra conditions were necessary to completely determine the amplitudes; thus the TRR's are independent of the Virasoro conditions. The present case of topological string theories appears similar to that of 2-dimensional gravity coupled to matter. It is an important issue whether there exist analogues of the W conditions in our present problem.
After completing this manuscript we received a new paper by Getzler [14] where his TRR at genus=2 is presented. It is easy to check that his equation (6) is consistent with our genus=2 TRR.
Note Added
We have obtained the genus=3 free energy of CP^2 up to degree 10:
$$\begin{aligned}
F_3(t^Q, t^R) = {}& \frac{1}{14!}\,(t^R)^{14}\, e^{4t^Q} + \frac{7915}{17!}\,(t^R)^{17}\, e^{5t^Q} + \frac{34435125}{20!}\,(t^R)^{20}\, e^{6t^Q} \\
&+ \frac{153796445095}{23!}\,(t^R)^{23}\, e^{7t^Q} + \frac{800457740515775}{26!}\,(t^R)^{26}\, e^{8t^Q} \\
&+ \frac{5039930694167991360}{29!}\,(t^R)^{29}\, e^{9t^Q} + \frac{38747510483053595091600}{32!}\,(t^R)^{32}\, e^{10t^Q}. \tag{31}
\end{aligned}$$
First 3 terms of (31) agree with [8].

Acknowledgment

We would like to thank E. Getzler and S.K. Yang for discussions. T.E. also would like to thank I. Ciocan-Fontanine, C. Faber, S. Katz, Z. Qin and Y. Ruan for their interest in this work.

Appendix: Genus 2 Topological Recursion Relation

$$\begin{aligned}
\langle\sigma_{n+5}(O_\alpha)\rangle_2 = {}& \langle\sigma_n(O_\alpha)O_\beta\rangle_0\, \langle\sigma_4(O^\beta)\rangle_2 \\
&+ \Big[\langle\sigma_{n+1}(O_\alpha)O_\beta\rangle_0 - \langle\sigma_n(O_\alpha)O_\gamma\rangle_0 \langle O^\gamma O_\beta\rangle_0\Big]\, \langle\sigma_3(O^\beta)\rangle_2 \\
&+ \Big[\langle\sigma_{n+2}(O_\alpha)O_\beta\rangle_0 - \langle\sigma_{n+1}(O_\alpha)O_\gamma\rangle_0 \langle O^\gamma O_\beta\rangle_0 + \langle\sigma_n(O_\alpha)O_\mu\rangle_0 \big(\langle O^\mu O_\rho\rangle_0 \langle O^\rho O_\beta\rangle_0 - \langle\sigma_1(O^\mu)O_\beta\rangle_0\big)\Big]\, \langle\sigma_2(O^\beta)\rangle_2 \\
&+ \Big[\langle\sigma_{n+3}(O_\alpha)O_\beta\rangle_0 - \langle\sigma_{n+2}(O_\alpha)O_\gamma\rangle_0 \langle O^\gamma O_\beta\rangle_0 + \langle\sigma_{n+1}(O_\alpha)O_\mu\rangle_0 \big(\langle O^\mu O_\rho\rangle_0 \langle O^\rho O_\beta\rangle_0 - \langle\sigma_1(O^\mu)O_\beta\rangle_0\big) \\
&\qquad + \langle\sigma_n(O_\alpha)O_\mu\rangle_0 \big(-\langle\sigma_2(O^\mu)O_\beta\rangle_0 + \langle\sigma_1(O^\mu)O_\rho\rangle_0 \langle O^\rho O_\beta\rangle_0 + \langle O^\mu O_\rho\rangle_0 \langle\sigma_1(O^\rho)O_\beta\rangle_0 - \langle O^\mu O_\rho\rangle_0 \langle O^\rho O_\sigma\rangle_0 \langle O^\sigma O_\beta\rangle_0\big)\Big]\, \langle\sigma_1(O^\beta)\rangle_2 \\
&+ \Big[\langle\sigma_{n+4}(O_\alpha)O_\beta\rangle_0 - \langle\sigma_{n+3}(O_\alpha)O_\gamma\rangle_0 \langle O^\gamma O_\beta\rangle_0 + \langle\sigma_{n+2}(O_\alpha)O_\mu\rangle_0 \big(\langle O^\mu O_\rho\rangle_0 \langle O^\rho O_\beta\rangle_0 - \langle\sigma_1(O^\mu)O_\beta\rangle_0\big) \\
&\qquad + \langle\sigma_{n+1}(O_\alpha)O_\mu\rangle_0 \big(-\langle\sigma_2(O^\mu)O_\beta\rangle_0 + \langle\sigma_1(O^\mu)O_\rho\rangle_0 \langle O^\rho O_\beta\rangle_0 + \langle O^\mu O_\rho\rangle_0 \langle\sigma_1(O^\rho)O_\beta\rangle_0 - \langle O^\mu O_\rho\rangle_0 \langle O^\rho O_\sigma\rangle_0 \langle O^\sigma O_\beta\rangle_0\big) \\
&\qquad + \langle\sigma_n(O_\alpha)O_\mu\rangle_0 \big(-\langle\sigma_3(O^\mu)O_\beta\rangle_0 + \langle\sigma_2(O^\mu)O_\rho\rangle_0 \langle O^\rho O_\beta\rangle_0 + \langle\sigma_1(O^\mu)O_\rho\rangle_0 \langle\sigma_1(O^\rho)O_\beta\rangle_0 - \langle\sigma_1(O^\mu)O_\gamma\rangle_0 \langle O^\gamma O_\rho\rangle_0 \langle O^\rho O_\beta\rangle_0 \\
&\qquad\quad + \langle O^\mu O_\rho\rangle_0 \langle\sigma_2(O^\rho)O_\beta\rangle_0 - \langle O^\mu O_\gamma\rangle_0 \langle\sigma_1(O^\gamma)O_\rho\rangle_0 \langle O^\rho O_\beta\rangle_0 - \langle O^\mu O_\gamma\rangle_0 \langle O^\gamma O_\rho\rangle_0 \langle\sigma_1(O^\rho)O_\beta\rangle_0 \\
&\qquad\quad + \langle O^\mu O_\nu\rangle_0 \langle O^\nu O_\rho\rangle_0 \langle O^\rho O_\gamma\rangle_0 \langle O^\gamma O_\beta\rangle_0\big)\Big]\, \langle O^\beta\rangle_2
\end{aligned}$$
[1] T. Eguchi, K. Hori and C.S. Xiong, Phys. Lett. B402 (1997) 71.
[2] T. Eguchi, M. Jinzenji and C.S. Xiong, hep-th/9710, to appear in Nucl. Phys.
[3] S. Katz, unpublished, March 1997.
[4] E. Witten, Nucl. Phys. B340 (1990) 281.
[5] E. Getzler, talk given at Taniguchi Symposium "Integrable Systems and Algebraic Geometry", Kyoto, July 1997.
[6] I. Vainsencher, Enumeration of n-fold Tangent Hyperplanes to a Surface, alg-geom/9312012.
[7] P. Di Francesco and C. Itzykson, Quantum Intersection Rings, in R. Dijkgraaf, C. Faber and G. van der Geer, eds., The Moduli Space of Curves (Birkhäuser, Boston-Basel-Berlin, 1995).
[8] L. Caporaso and J. Harris, Counting Plane Curves of Any Genus, alg-geom/9608025.
[9] R. Dijkgraaf and E. Witten, Nucl. Phys. B342 (1990) 486.
[10] C. Itzykson and J.-B. Zuber, Int. J. Mod. Phys. A7 (1992) 5661.
[11] T. Eguchi, Y. Yamada and S.-K. Yang, Rev. Math. Phys. 7 (1995) 279.
[12] T. Eguchi and S.-K. Yang, Mod. Phys. Lett. A9 (1994) 2893; T. Eguchi, K. Hori and S.-K. Yang, Int. J. Mod. Phys. A10 (1995) 4203.
[13] E. Getzler, Intersection Theory on M̄_{1,4} and Elliptic Gromov-Witten Invariants, alg-geom/9612004.
[14] E. Getzler, Topological Recursion Relations in Genus 2, math.AG/9801003.
| [] |
[
"NUMERICAL ANALYSIS FOR A SYSTEM COUPLING CURVE EVOLUTION ATTACHED ORTHOGONALLY TO A FIXED BOUNDARY, TO A REACTION-DIFFUSION EQUATION ON THE CURVE",
"NUMERICAL ANALYSIS FOR A SYSTEM COUPLING CURVE EVOLUTION ATTACHED ORTHOGONALLY TO A FIXED BOUNDARY, TO A REACTION-DIFFUSION EQUATION ON THE CURVE"
] | [
"Vanessa Styles ",
"James Van Yperen "
] | [] | [] | We consider a semi-discrete finite element approximation for a system consisting of the evolution of a planar curve evolving by forced curve shortening flow inside a given bounded domain Ω ⊂ R 2 , such that the curve meets the boundary ∂Ω orthogonally, and the forcing is a function of the solution of a reaction-diffusion equation that holds on the evolving curve. We prove optimal error bounds for the resulting approximation and present numerical experiments.Key words. surface PDE, forced curve shortening flow, prescribed boundary contact, parametric finite elements, error analysis AMS subject classifications. 65M60, 65M15, 35K55, 53C44, 74N20 ψ = x t · τ , v = x t · ν, (1.8) and we assume that ∂Ω is given by a smooth function F such that ∂Ω = { p ∈ R 2 : F ( p) = 0} * | 10.1002/num.22861 | [
"https://arxiv.org/pdf/2003.06910v1.pdf"
] | 212,726,176 | 2003.06910 | b177846b2bc8417f0c55bf840e3757f21867a541 |
NUMERICAL ANALYSIS FOR A SYSTEM COUPLING CURVE EVOLUTION ATTACHED ORTHOGONALLY TO A FIXED BOUNDARY, TO A REACTION-DIFFUSION EQUATION ON THE CURVE
Vanessa Styles
James Van Yperen
NUMERICAL ANALYSIS FOR A SYSTEM COUPLING CURVE EVOLUTION ATTACHED ORTHOGONALLY TO A FIXED BOUNDARY, TO A REACTION-DIFFUSION EQUATION ON THE CURVE
We consider a semi-discrete finite element approximation for a system consisting of the evolution of a planar curve evolving by forced curve shortening flow inside a given bounded domain Ω ⊂ R^2, such that the curve meets the boundary ∂Ω orthogonally, and the forcing is a function of the solution of a reaction-diffusion equation that holds on the evolving curve. We prove optimal error bounds for the resulting approximation and present numerical experiments. Key words. surface PDE, forced curve shortening flow, prescribed boundary contact, parametric finite elements, error analysis. AMS subject classifications. 65M60, 65M15, 35K55, 53C44, 74N20.
Introduction.
We consider a family of planar curves Γ(t) evolving by forced curve shortening flow inside a given bounded domain Ω ⊂ R 2 , such that the curve meets the boundary ∂Ω orthogonally, and the forcing is a function of the solution of a reaction-diffusion equation that holds on Γ(t). We combine the parametrisation presented in [7], for the setting in which Γ(t) is a closed curve, with the parametrisation presented in [5], for the setting in which Γ(t) meets the boundary ∂Ω orthogonally, to yield the following system of partial differential equations: find x : [0, 1] × [0, T ] → R 2 and w : [0, 1] × [0, T ] → R such that
$$\alpha\, \vec x_t + (1-\alpha)(\vec x_t \cdot \vec\nu)\,\vec\nu - \frac{\vec x_{\rho\rho}}{|\vec x_\rho|^2} = f(w)\,\vec\nu \quad \text{in } (0,1)\times(0,T), \tag{1.1}$$
$$(|\vec x_\rho|\, w)_t - \Big(\frac{w_\rho}{|\vec x_\rho|}\Big)_{\!\rho} - (\psi\, w)_\rho = |\vec x_\rho|\, g(v, w) \quad \text{in } (0,1)\times(0,T), \tag{1.2}$$
$$F(\vec x(0,t)) = F(\vec x(1,t)) = 0, \quad t \in [0,T], \tag{1.3}$$
$$\vec x_\rho(0,t)\cdot\nabla^\perp F(\vec x(0,t)) = \vec x_\rho(1,t)\cdot\nabla^\perp F(\vec x(1,t)) = 0, \quad t \in [0,T], \tag{1.4}$$
$$w(0,t) = w(1,t) = w_b, \quad t \in [0,T], \tag{1.5}$$
$$\vec x(\rho,0) = \vec x_0(\rho), \quad w(\rho,0) = w_0(\rho), \quad \rho \in (0,1). \tag{1.6}$$
Here $\alpha \in (0,1]$, $\vec x(\cdot,t)$ denotes the parametrisation of Γ(t), with $\vec x_0$ parametrising the initial curve Γ(0), and $\vec\tau$ and $\vec\nu$ respectively denote unit tangent and normal vectors of Γ(t), such that
$$\vec\tau = \frac{\vec x_\rho}{|\vec x_\rho|}, \qquad \vec\nu = \vec\tau^{\,\perp}, \tag{1.7}$$
where, for some $\vec p \in \mathbb R^2$, we fix $(\vec p_0, \vec p_1)^\perp = (-\vec p_1, \vec p_0)$. Further, $\psi$ and $v$ respectively denote the tangential and normal velocities of Γ(t),
$$\psi = \vec x_t \cdot \vec\tau, \qquad v = \vec x_t \cdot \vec\nu, \tag{1.8}$$
and we assume that ∂Ω is given by a smooth function F such that $\partial\Omega = \{\vec p \in \mathbb R^2 : F(\vec p) = 0\}$, which in addition we assume satisfies
$$|\nabla F(\vec p)| = 1, \quad \vec p \in \partial\Omega. \tag{1.9}$$
For a closed curve Γ(t) in R^2, the formulation of curve shortening flow, in the form of (1.1) with f(w) = 0, was presented and analysed in [7], where the DeTurck trick is used to couple the motion of the curve to the harmonic map heat flow, with the parameter α ∈ (0, 1] being such that 1/α corresponds to the diffusion coefficient in the harmonic map heat flow. Setting α ∈ (0, 1] introduces a tangential component in the velocity which, at the numerical level, gives rise to a good distribution of the mesh points along the curve. Setting α = 1 one recovers the formulation introduced and analysed in [3], while formally setting α = 0 yields the approach introduced in [2]. The associated closed curve formulation of (1.1)-(1.6) was studied in [1], in which the authors proved optimal error bounds for a fully discrete finite element approximation of the coupled system, while in [10] an alternative formulation, again for closed curves in R^2, was presented together with optimal error bounds for a semi-discrete finite element approximation of the coupled system. Setting α = 1 and f(w) = 0 in (1.1) and coupling the resulting equation to (1.4), (1.3) gives rise to the model presented and analysed in [5], in which optimal order error bounds for a semi-discrete finite element approximation of curve shortening flow with a prescribed normal contact to a fixed boundary are presented.
The coupled system (1.1)-(1.6), and the associated closed curve formulation studied in [1], can both be used to model diffusion induced grain boundary motion, [8]. This phenomenon can be observed if a polycrystalline film of metal is placed in a vapour containing another metal: atoms from the vapour diffuse into the film along the grain boundaries that separate the crystals in the film, this gives rise to variations of elastic energy in the film that cause the grain boundaries to move. Physically Ω(t) represents the polycrystalline film, Γ(t) represents a grain boundary and w represents the concentration of atoms from the vapour. The closed curve formulation arises from the physical set-up in which the polycrystalline film is assumed to be very thin in the x 3 −direction, and the resulting two-dimensional problem is obtained by assuming independence in the x 3 −direction. While in the set-up we consider here, the film is assumed to be infinitely long in the x 2 −direction such that the resulting two-dimensional problem is obtained by assuming independence in the x 2 −direction, and the grain boundary is assumed to span the width (x 3 -direction) of the film such that it meets the upper and lower surfaces of the film orthogonally. A more in-depth derivation of the physical set-up can be found in [8] and [9].
2. Weak formulation and finite element approximation.
2.1. Notation for function spaces. We set I = (0, 1) and adopt the standard notation for Sobolev spaces W^{l,p}(I), where l ∈ N_0 and p ∈ [1, ∞], denoting the norm of a function f on I by ‖f‖_{l,p} and the associated seminorm by |f|_{l,p}. For the special case p = 2 we denote W^{l,2}(I) by H^l(I), with the associated norm and seminorm denoted by ‖f‖_l and |f|_l respectively. When the function is vector-valued, the function spaces are naturally extended to [W^{l,p}(I)]^n and [H^l(I)]^n with appropriately defined norms and seminorms; for simplicity we leave the notation for the norms unchanged. We also use the time-dependent spaces W^{l,p}(0, T; X), where X is a Banach space, with the standard associated norm and seminorm denoted by ‖f‖_{W^{l,p}(0,T;X)} and |f|_{W^{l,p}(0,T;X)} respectively. We denote the L^2(I)-inner product by (f, g).
2.2. Weak formulation.
Multiplying (1.1) by ξ | x ρ | 2 , where ξ ∈ [H 1 (I)] 2 is a test function, integrating over I and using integration by parts gives
( |x_ρ|^2 (α x_t + (1 − α)(x_t · ν) ν), ξ ) + ( x_ρ, ξ_ρ ) = [ x_ρ · ξ ]_0^1 + ( |x_ρ|^2 f(w) ν, ξ ). (2.1)
Using (1.4) and (1.9) we have
x_ρ · ξ = [ ∇F(x)(x_ρ · ∇F(x)) + ∇^⊥F(x)(x_ρ · ∇^⊥F(x)) ] · [ ∇F(x)(ξ · ∇F(x)) + ∇^⊥F(x)(ξ · ∇^⊥F(x)) ]
 = ∇F(x)(x_ρ · ∇F(x)) · [ ∇F(x)(ξ · ∇F(x)) + ∇^⊥F(x)(ξ · ∇^⊥F(x)) ]
 = (∇F(x) · ∇F(x))(x_ρ · ∇F(x))(ξ · ∇F(x)) + (∇F(x) · ∇^⊥F(x))(x_ρ · ∇F(x))(ξ · ∇^⊥F(x))
 = (x_ρ · ∇F(x))(ξ · ∇F(x))
which combined with (2.1) yields the following weak formulation of (1.1), (1.4)
: for all ξ ∈ [H^1(I)]^2

( |x_ρ|^2 [α x_t + (1 − α)(x_t · ν) ν], ξ ) + ( x_ρ, ξ_ρ ) = [ (x_ρ · ∇F(x))(ξ · ∇F(x)) ]_0^1 + ( |x_ρ|^2 f(w) ν, ξ ). (2.2)
Multiplying (1.2) by a test function η ∈ H 1 0 (I), integrating over I and using integration by parts we have
( (|x_ρ| w)_t, η ) + ( w_ρ / |x_ρ|, η_ρ ) + ( ψ w, η_ρ ) = ( |x_ρ| g(v, w), η ) ∀ η ∈ H^1_0(I). (2.3)
We assume that there is a unique solution ( x, w) of (2.2), (2.3) on the time interval [0, T ] that satisfies the boundary condition (1.3) and the initial data (1.6). Furthermore we assume that this unique solution, and the data, satisfy
x ∈ W 1,∞ (0, T ; [H 2 (I)] 2 ) ∩ W 2,∞ (0, T ; [L 2 (I)] 2 ); (2.4) w ∈ C([0, T ]; H 2 (I)) ∩ W 1,∞ (0, T ; H 1 (I)); (2.5) f ∈ C 1,1 (R); (2.6) g ∈ C 1,1 (R 2 ); (2.7) F ∈ C 2,1 (R 2 ); (2.8) m ≤ | x ρ | ≤ M in [0, 1] × [0, T ], for some m, M ∈ R >0 . (2.9)
We note that from (2.4) and (2.5), for any t ∈ [0, T ], we have
x(·, t) 2 + x t (·, t) 1 + τ (·, t) 1 + ψ(·, t) 1 + w(·, t) 2 + w t (·, t) 1 ≤ C. (2.10) We also note that, due to (1.3), for any t ∈ [0, T ], we have
0 = (d/dt) F(x(ρ, t)) = x_t(ρ, t) · ∇F(x(ρ, t)) for ρ ∈ {0, 1}. (2.11)

2.3. Finite Element approximation. We partition the interval [0, 1] such that [0, 1] = ∪_{j=1}^J σ_j, where σ_j = (ρ_{j−1}, ρ_j). We set h := max_{j=1,...,J} h_j, where h_j = ρ_j − ρ_{j−1}, and we assume that, for some C > 0,

h ≤ C h_j, j = 1, . . . , J. (2.12)
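The quasi-uniformity condition (2.12) can be checked directly for a given partition; a minimal sketch, in which the graded node distribution is a hypothetical example rather than a mesh used in the paper:

```python
import numpy as np

# Direct check of the quasi-uniformity assumption (2.12): h <= C h_j for all j.
# The graded node distribution below is a hypothetical example, not from the paper.
J = 16
nodes = np.linspace(0.0, 1.0, J + 1) ** 1.5   # 0 = rho_0 < rho_1 < ... < rho_J = 1
hj = np.diff(nodes)                           # local mesh sizes h_j = rho_j - rho_{j-1}
h = hj.max()                                  # global mesh size h
C = h / hj.min()                              # smallest constant for which (2.12) holds

assert np.all(h <= C * hj + 1e-15)
```

For a fixed grading exponent the constant C stays bounded as J grows, which is the point of (2.12): the mesh may be non-uniform, but no element may be asymptotically smaller than the rest.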
We define the finite element spaces
V h := {χ h ∈ C([0, 1]) : χ h |σ j is affine, j = 1, . . . , J} ⊂ H 1 (I), (2.13) V h 0 := {χ h ∈ V h : χ h (0) = χ h (1) = 0} (2.14)
and we define the basis functions of V h to be φ i (ρ j ) = δ j i . We set I h : C([0, 1]) → V h to be the standard Lagrange interpolation operator defined as (I h η)(ρ j ) = η(ρ j ), for j = 0, . . . , J, whereby we denote I h j := I h |σ j to be the local interpolation operator. Considering p ∈ (1, ∞], k ∈ {0, 1} and l ∈ {1, 2}, the following standard interpolation results hold for j = 1, . . . , J
h_j^{1/p} |η^h|_{0,∞,σ_j} + h_j |η^h|_{1,p,σ_j} ≤ C |η^h|_{0,p,σ_j} ∀ η^h ∈ V^h, (2.15)
|(I − I^h_j)η|_{k,p,σ_j} ≤ C h_j^{l−k} |η|_{l,p,σ_j} ∀ η ∈ W^{l,p}(σ_j), (2.16)
|(I − I^h_j)η|_{l−1,∞,σ_j} ≤ C h_j^{1/2} |η|_{l,σ_j} ∀ η ∈ H^l(σ_j), (2.17)
where |η|_{l,p,σ_j} is the seminorm of W^{l,p}(σ_j). We define the discrete inner product (·, ·)_h and its induced norm ‖·‖_h by

(η_1, η_2)_h := Σ_{j=1}^J ∫_{σ_j} I^h_j(η_1 η_2) dρ, ‖η‖^2_h := (η, η)_h. (2.18)
Standard interpolation theory states that, for all η h , χ h ∈ V h , and j = 1, . . . , J, the following results hold
∫_{σ_j} |η^h|^2 dρ ≤ ∫_{σ_j} I^h_j |η^h|^2 dρ ≤ 3 ∫_{σ_j} |η^h|^2 dρ, (2.19a)
| ∫_{σ_j} (I − I^h_j)(η^h χ^h) dρ | ≤ C h_j^2 |η^h|_{1,σ_j} |χ^h|_{1,σ_j} ≤ C h_j |η^h|_{1,σ_j} |χ^h|_{0,σ_j}. (2.19b)
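For piecewise-linear functions the lumped product (2.18) can be evaluated element-wise by the trapezoidal rule, and the norm equivalence (2.19a) can then be verified numerically; a sketch, in which the random mesh and nodal data are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
J = 20
nodes = np.sort(np.concatenate(([0.0, 1.0], rng.random(J - 1))))
hj = np.diff(nodes)
eta = rng.standard_normal(J + 1)   # nodal values of a piecewise-linear eta_h in V^h

a, b = eta[:-1], eta[1:]           # values at the two endpoints of each sigma_j
exact = np.sum(hj * (a * a + a * b + b * b) / 3.0)   # integral of |eta_h|^2 (exact)
lumped = np.sum(hj * (a * a + b * b) / 2.0)          # (eta_h, eta_h)_h from (2.18)

# Norm equivalence (2.19a), summed over the sub-intervals:
assert exact <= lumped <= 3.0 * exact
```

The two inequalities reduce, on each sub-interval, to (a − b)^2 ≥ 0 and (a + b)^2 ≥ 0 respectively, which is why the constants 1 and 3 in (2.19a) are sharp for piecewise linears.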
We assign to an element x h ∈ [V h ] 2 a piecewise constant discrete unit tangent and normal, denoted respectively by τ h and ν h , and on σ j we approximate the tangential velocity and normal velocity respectively by ψ h and v h , where
τ^h = x^h_ρ / |x^h_ρ|, ν^h = (τ^h)^⊥, ψ^h = x^h_t · τ^h, v^h = x^h_t · ν^h on σ_j, j = 1, . . . , J. (2.20)
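The discrete geometric quantities in (2.20) are cheap to compute from the nodal values of the polygonal curve; a minimal sketch (the function name discrete_frame is ours, and ψ^h, v^h would follow by dotting nodal velocities with the returned vectors):

```python
import numpy as np

def discrete_frame(x):
    """Piecewise-constant discrete unit tangent and normal of (2.20).

    x: (J+1, 2) array of nodal positions of the polygonal curve x_h."""
    dx = np.diff(x, axis=0)                               # proportional to x_rho on each sigma_j
    tau = dx / np.linalg.norm(dx, axis=1, keepdims=True)  # unit tangent tau_h
    nu = np.column_stack((-tau[:, 1], tau[:, 0]))         # nu_h = (tau_h)^perp, with (p0, p1)^perp = (-p1, p0)
    return tau, nu

# On each sub-interval: |tau_h| = 1 and tau_h . nu_h = 0.
x = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.5]])
tau, nu = discrete_frame(x)
assert np.allclose(np.linalg.norm(tau, axis=1), 1.0)
assert np.allclose((tau * nu).sum(axis=1), 0.0)
```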
Employing the above notation we introduce the following, continuous in time, finite element approximation of (2.2), (2.3): find
x h : [0, 1] × [0, T ] → R 2 and w h : [0, 1] × [0, T ] → R such that x h (·, t) ∈ [V h ] 2 and w h (·, t) − w b ∈ V h 0 , for t ∈ [0, T ], and | x h ρ | 2 α x h t + (1 − α) x h t · ν h ν h , ξ h h + x h ρ , ξ h ρ = | x h ρ | 2 f (w h ) ν h , ξ h h + x h ρ · ∇F ( x h ) ξ h · ∇F ( x h ) 1 0 ∀ ξ h ∈ [V h ] 2 , (2.21) | x h ρ | w h t , η h h + w h ρ | x h ρ | , η h ρ + ψ h w h , η h ρ h = | x h ρ | g(v h , w h ), η h h ∀ η h ∈ V h 0 , (2.22) where x h ρ (0, t) = ( x h ρ ) |σ 1 and x h ρ (1, t) = ( x h ρ ) |σ J
, and x h , w h satisfy the boundary and initial data
F ( x h (0, t)) = F ( x h (1, t)) = 0 t ∈ [0, T ], (2.23) w h (ρ, 0) = I h w 0 (ρ), x h (ρ, 0) = I h x 0 (ρ) ρ ∈ I. (2.24)
Using (1.3) and (2.24) we have F ( x h (ρ, 0)) = F (I h x 0 (ρ)) = F ( x 0 (ρ)) = 0, for ρ ∈ {0, 1}, and hence the conditions (2.23) are equivalent to
the discrete analogues (2.25) and (2.26) below. Let us formulate the main theorem, which will be proved in Section 3.
Theorem 2.1. Let x h (·, 0) = I h x 0 (·) ∈ [V h ] 2 and w h (·, 0) = I h w 0 (·) ∈ V h . There exists h > 0 such that for all h ∈ (0, h ] the semi-discrete problem (2.21)-(2.23) has a unique solution ( x h , w h ) ∈ [V h ] 2 × V h on [0, T ] and the following error bounds hold sup s∈[0,T ] | x(·, s) − x h (·, s)| 2 1 + |w(·, s) − w h (·, s)| 2 0 + T 0 | x t (·, s) − x h t (·, s)| 2 0 + |w(·, s) − w h (·, s)| 2 1 ds ≤ Ch 2 ,
for some C > 0 independent of h.
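The experimental orders of convergence (eoc) reported in the tables of Section 4 are computed from errors at consecutive refinement levels via eoc = log(E_coarse/E_fine) / log(h_coarse/h_fine); a sketch:

```python
import math

def eoc(e_coarse, e_fine, h_coarse, h_fine):
    """Experimental order of convergence between two refinement levels."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# An error behaving like C h^p yields eoc ~ p; here p = 2:
assert abs(eoc(4.0, 1.0, 0.2, 0.1) - 2.0) < 1e-12
```

Note that Theorem 2.1 bounds squared error quantities by C h^2, i.e. the errors themselves are O(h); whether a table column should show eoc near 1, 2 or higher therefore depends on whether the listed quantity is an error or its square.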
3. Error Analysis. For the proof of Theorem 2.1, and hence throughout this section, we choose h , γ ∈ R >0 so that
(3.1) e γ T (h ) 1 2 ≤ min 1 2C 1 , β and γ ≥ max 1, 32C 2 m 2 α ,
where C 1 , C 2 ∈ R >0 and β ∈ (0, 1], are independent of h and will be chosen a posteriori. Standard ODE theory implies that there exists a unique solution (
x h , w h ) of (2.21)-(2.24) on some time interval [0, T h ] (T h > 0).
For simplicity of notation we define
E := I h x − x h and Z := I h w − w h such that x − x h = (I − I h ) x + E and w − w h = (I − I h )w + Z.
For the proof of Theorem 2.1 we adapt the arguments presented in [4] and define for some
C 1 ∈ R >0 T h := sup t ∈ [0, T ] : ( x h , w h ) solves (2.21)-(2.23), m 2 ≤ | x h ρ | ≤ 2M in [0, 1] × [0, t], w h C([0,t];L ∞ (I)) ≤ 2C w w C([0,T ];H 1 (I)) , and sup s∈[0,t] e −γs | E(·, s)| 2 1 + |Z(·, s)| 2 0 + t 0 e −γs | E t (·, s)| 2 0 + |Z(·, s)| 2 1 ds < 2C 1 h 2 .
We then prove the result of Theorem 2.1 on [0, T h ], for C independent of T h , thus enabling us to show that T h = T and hence proving the theorem. By the definition of T h we have the following bounds
m 2 ≤ | x h ρ | ≤ 2M in [0, 1] × [0, T h ) (3.2) w h C([0,T h );L ∞ (I)) ≤ 2C w w C([0,T ];H 1 (I)) (3.3) sup s∈[0,T h ] e −γs | E(·, s)| 2 1 + |Z(·, s)| 2 0 + T h 0 e −γs | E t (·, s)| 2 0 + |Z(·, s)| 2 1 ds < 2C 1 h 2 . (3.4)
The main part of the proof of Theorem 2.1 is split into the following two lemmas:
Lemma 3.1. There exists C 2 ∈ R >0 , such that for all t ∈ [0, T h ), we have 1 4 e −γt | E| 2 1 + m 2 α 16 t 0 e −γs | E t | 2 0 ds ≤ C 2 t 0 e −γs h 2 + | E| 2 1 + |Z| 2 0 + h −1 | E| 4 0,∞ ds. (3.5) Lemma 3.2. There exists h > 0 and C 3 ∈ R >0 , such that for all h ∈ (0, h ] and t ∈ [0, T h ), we have m 4 e −γt |Z| 2 0 + 1 4M t 0 e −γs |Z| 2 1 ds ≤ C 3 t 0 e −γs h 2 + | E t | 2 0 + | E| 2 1 + |Z| 2 0 ds. (3.6)
Before proving Lemmas 3.1 and 3.2 and subsequently Theorem 2.1 we note the following useful bounds for t ∈ [0, T h ).
Using (2.16) and (2.10), we have
| x − x h | 1 ≤ |(I − I h ) x| 1 + | E| 1 ≤ C h | x| 2 + | E| 1 ≤ C h + | E| 1 , (3.7)
as well as
| x t − x h t | 0 ≤ |(I − I h ) x t | 0 + | E t | 0 ≤ C h | x t | 1 + | E t | 0 ≤ C h + | E t | 0 . (3.8)
If we use (2.19a), (2.10), (2.15) and (2.12), we get
| x h t | s ≤ |I h x t | s + | E t | s ≤ C 1 + h −s | E t | 0 for s = 0, 1. (3.9)
Using (2.16) and (2.10), we have
|w − w h | 0 ≤ |(I − I h )w| 0 + |Z| 0 ≤ C h |w| 1 + |Z| 0 ≤ C [h + |Z| 0 ] , (3.10)
and from (2.19a), (2.10), (2.15) and (2.12), we have
|w h | 1 ≤ |I h w| 1 + C h −1 |Z| 0 ≤ C h −1 [h + |Z| 0 ] . (3.11)
Using (2.9), (3.2) and (3.7), we have
1 | x ρ | − 1 | x h ρ | 0 ≤ | x ρ | − | x h ρ | | x ρ | | x h ρ | 0 ≤ 2 m 2 | x − x h | 1 ≤ C h + | E| 1 . (3.12)
In the same way, with (1.7), (2.20), (2.9) and (3.7), we have
|τ − τ^h|_0 ≤ | x^h_ρ (|x^h_ρ| − |x_ρ|) / (|x_ρ| |x^h_ρ|) |_0 + | (x_ρ − x^h_ρ) / |x_ρ| |_0 ≤ (2/m) |x − x^h|_1 ≤ C [h + |E|_1],

which yields

|τ − τ^h|_0 + |ν − ν^h|_0 ≤ C [h + |E|_1]. (3.13)

Moreover,

|ψ − ψ^h|_0 ≤ |x_t · (τ − τ^h)|_0 + |τ^h · (x_t − x^h_t)|_0 ≤ C |x_t|_{0,∞} [h + |E|_1] + C [h + |E_t|_0] ≤ C [h + |E_t|_0 + |E|_1]
and thus we obtain
|ψ − ψ h | 0 + |v − v h | 0 ≤ C h + | E t | 0 + | E| 1 . (3.14)
With (2.10), (2.15), (2.12) and (3.14), we gain
|v h | 1 ≤ |v| 1 + C h −1 |v − v h | 0 ≤ C h −1 h + | E t | 0 + | E| 1 . (3.15)
Proof of Lemma 3.1: In this proof we combine techniques used in [1] and [5]. Taking ξ = E t in (2.2) and ξ h = E t in (2.21), subtracting the resulting equations and noting
x ρ − (I h x) ρ , ξ h ρ = 0, (3.16) we obtain | x h ρ | 2 α E t + (1 − α)( E t · ν h ) ν h , E t h + E ρ , E ρ,t = | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , E t h − | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , E t + | x ρ | 2 f (w) ν, E t − | x h ρ | 2 f (w h ) ν h , E t h + x ρ · ∇F ( x) E t · ∇F ( x) − x h ρ · ∇F ( x h ) E t · ∇F ( x h ) 1 0 =: 3 i=1 I i . (3.17)
Using (3.2) and (2.19a), we note that the left-hand side of (3.17) is bounded below
| x h ρ | 2 α E t + (1 − α)( E t · ν h ) ν h , E t h + E ρ , E ρ,t ≥ m 2 4 α E t 2 h + (1 − α) E t · ν h 2 h + 1 2 d dt | E| 2 1 ≥ m 2 α 4 | E t | 2 0 + 1 2 d dt | E| 2 1 . (3.18)
We now proceed to bound I 1 , I 2 and I 3 in (3.17), beginning with I 1 .
I 1 = | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , E t h − | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , E t = | x h ρ | 2 − | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , E t + (1 − α) | x h ρ | 2 ( x t · ( ν h − ν)) ν + ( x t · ν h ) ν h − ν , E t + | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , E t h − | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , E t + | x h ρ | 2 α (I h − I) x t + (1 − α)((I h − I) x t · ν h ) ν h , E t =: I 1,1 + I 1,2 . (3.19)
Using (2.9), (3.2), (3.7), (3.13), Sobolev embeddings and (2.10), we see that (2.16) and (2.10), we get
I 1,1 = | x h ρ | 2 − | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , E t + (1 − α) | x h ρ | 2 ( x t · ( ν h − ν)) ν + ( x t · ν h ) ν h − ν , E t ≤ | x t | 0,∞ | x ρ | + | x h ρ | 0,∞ | x − x h | 1 + 8M 2 (1 − α) | ν − ν h | 0 | E t | 0 ≤ C h + | E| 1 | E t | 0 . (3.20) From (2.19a,b), (3.2),I 1,2 = | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , E t h − | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , E t + | x h ρ | 2 α (I h − I) x t + (1 − α)((I h − I) x t · ν h ) ν h , E t ≤ C h J j=1 I h j x t 1,σj | x h ρ | 2 α E t + (1 − α)( E t · ν h ) ν h 0,σj + C |(I − I h ) x t | 0 | E t | 0 ≤ C h | x t | 1 | E t | 0 ≤ C h | E t | 0 . (3.21) Combining (3.19)-(3.21) we have |I 1 | ≤ m 2 α 24 | E t | 2 0 + C h 2 + | E| 2 1 . (3.22) I 2 = | x ρ | 2 f (w) ν, E t − | x h ρ | 2 f (w h ) ν h , E t h = | x ρ | 2 − | x h ρ | 2 f (w) ν + | x h ρ | 2 f (w) ν − ν h , E t + | x h ρ | 2 f (w) − f (w h ) ν h , E t + | x h ρ | 2 (I − I h )f (w h ) ν h , E t + | x h ρ | 2 (I h − I)f (w h ) ν h , E t h + | x h ρ | 2 I h (f (w h )) ν h , E t − | x h ρ | 2 I h (f (w h )) ν h , E t h =: I 2,1 + I 2,2 . (3.23)
Using (2.9), (3.2), (3.7), (3.13), (2.6) and (3.10), we see that
I 2,1 = | x ρ | 2 − | x h ρ | 2 f (w) ν + | x h ρ | 2 f (w) ν − ν h , E t + | x h ρ | 2 f (w) − f (w h ) ν h , E t ≤ |f | 0,∞ | x ρ | + | x h ρ | 0,∞ | x ρ − x h ρ | 0 + 4M 2 | ν − ν h | 0 | E t | 0 + C |w − w h | 0 | E t | 0 ≤ C h + |Z| 0 + | E| 1 | E t | 0 .I 2,2 = | x h ρ | 2 (I − I h )f (w h ) ν h , E t + | x h ρ | 2 (I h − I)f (w h ) ν h , E t h + | x h ρ | 2 I h (f (w h )) ν h , E t − | x h ρ | 2 I h (f (w h )) ν h , E t h ≤ C |(I − I h )f (w h )| 0 | E t | 0 + C h J j=1 I h (f (w h )) 1,σj | x h ρ | 2 E t · ν h 0,σj ≤ C h |f (w h )| 1 | E t | 0 ≤ C h |f (w h )| 0,∞ |w h | 1 | E t | 0 ≤ C [h + |Z| 0 ] | E t | 0 .|I 2 | ≤ m 2 α 24 | E t | 2 0 + C h 2 + |Z| 2 0 + | E| 2 1 . (3.26)
We now bound I 3 , to this end we set
b(ρ, t) := x ρ · ∇F ( x), b h (ρ, t) := x h ρ · ∇F ( x h )
and note that
x(ρ, t) = I h ( x(ρ, t)) for ρ ∈ {0, 1}, t ∈ [0, T ]. (3.27)
Using (2.11), (2.25), and (3.27) we see that
I 3 = b(ρ, t) E t · ∇F ( x) − b h (ρ, t) E t · ∇F ( x h ) 1 0 = b(ρ, t) E t · [∇F ( x) − ∇F ( x h )] 1 0 + (b(ρ, t) − b h (ρ, t)) x t · [∇F ( x h ) − ∇F ( x)]|b t (ρ, t)| ≤ | x ρ,t (ρ, t)| + M |D 2 F ( x(ρ, t)) x t (ρ, t)| ≤ C, (3.29b)
as well as
|∇F ( x(ρ, t)) − ∇F ( x h (ρ, t))| ≤ L ∇F | x(ρ, t) − x h (ρ, t)| ≤ C | E(·, t)| 0,∞ . (3.29c) A Taylor's expansion yields ∇F ( x) − ∇F ( x h ) =D 2 F ( x)( x − x h ) + 1 0 (D 2 F (s x + (1 − s) x h ) − D 2 F ( x))( x − x h ) ds (3.30)
which together with (3.29a,b), (2.8), (3.27), (2.15), and (2.12), gives
I 3,1 = b(ρ, t) E t · [∇F ( x) − ∇F ( x h )] 1 0 = b(ρ, t) E T t D 2 F ( x) E + b(ρ, t) 1 0 E T t [D 2 F (s x + (1 − s) x h ) − D 2 F ( x)] E ds 1 0 = 1 2 d dt b(ρ, t) E T D 2 F ( x) E − 1 2 b t (ρ, t) E T D 2 F ( x) E − 1 2 b(ρ, t) E T d dt (D 2 F ( x)) E + b(ρ, t) 1 0 E T t [D 2 F (s x + (1 − s) x h ) − D 2 F ( x)] E ds 1 0 ≤ 1 2 d dt b(ρ, t) E T D 2 F ( x) E 1 0 + C | E| 2 0,∞ 1 + | E t | 0,∞ ≤ 1 2 d dt b(ρ, t) E T D 2 F ( x) E 1 0 + C | E| 2 0,∞ 1 + h − 1 2 | E t | 0 . (3.31) Denoting x(0, t) := x 0 (t), taking ξ = (1 − ρ)∇F ( x 0 ) in (2.2) and noting (1.9), we have | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , (1 − ρ)∇F ( x 0 ) − ( x ρ , ∇F ( x 0 )) = | x ρ | 2 f (w) ν, (1 − ρ)∇F ( x 0 ) + (1 − ρ) x ρ · ∇F ( x) ∇F ( x 0 ) · ∇F ( x) 1 0 = | x ρ | 2 f (w) ν, (1 − ρ)∇F ( x 0 ) − b(0, t), and hence b(0, t) = ( x ρ , ∇F ( x 0 )) + | x ρ | 2 f (w) ν, (1 − ρ)∇F ( x 0 ) − | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , (1 − ρ)∇F ( x 0 ) .b h (0, t) = x h ρ , ∇F ( x h 0 ) + | x h ρ | 2 f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) h − | x h ρ | 2 α x h t + (1 − α)( x h t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) h . (3.33)
Hence, subtracting (3.33) from (3.32) yields
b(0, t) − b h (0, t) = ( x ρ , ∇F ( x 0 )) − x h ρ , ∇F ( x h 0 ) + | x ρ | 2 f (w) ν, (1 − ρ)∇F ( x 0 ) − | x h ρ | 2 f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) h + | x h ρ | 2 α x h t + (1 − α)( x h t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) h − | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , (1 − ρ)∇F ( x 0 ) =: 3 i=1 B i .B 1 = ( x ρ , ∇F ( x 0 )) − x h ρ , ∇F ( x h 0 ) = x ρ , ∇F ( x 0 ) − ∇F ( x h 0 ) + E ρ , ∇F ( x h 0 ) ≤ C | E| 0,∞ + | E| 1 . (3.35)
We now bound B 2 .
B 2 = | x ρ | 2 f (w) ν, (1 − ρ)∇F ( x 0 ) − | x h ρ | 2 f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) h = | x ρ | 2 f (w) ν, (1 − ρ) ∇F ( x 0 ) − ∇F ( x h 0 ) + | x h ρ | 2 f (w) − f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) + | x ρ | 2 − | x h ρ | 2 f (w) ν + | x h ρ | 2 f (w) ν − ν h , (1 − ρ)∇F ( x h 0 ) + | x h ρ | 2 (I − I h )f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) + | x h ρ | 2 (I h − I)f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) h + | x h ρ | 2 I h (f (w h )) ν h , (1 − ρ)∇F ( x h 0 ) − | x h ρ | 2 I h (f (w h )) ν h , (1 − ρ)∇F ( x h 0 ) h =: 5 i=1 B 2,i . (3.36)
Using (2.9), (3.29c) and (2.6), we have
B 2,1 = | x ρ | 2 f (w) ν, (1 − ρ) ∇F ( x 0 ) − ∇F ( x h 0 ) ≤ C |f (w)| 0 |1 − ρ| 0 | E| 0,∞ ≤ C | E| 0,∞ . (3.37)
Noting (2.26), and using similar arguments to those used in proving (3.24) and (3.25), we have
B 2,2 = | x h ρ | 2 f (w) − f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) ≤ C |f (w) − f (w h )| 0 |1 − ρ| 0 ≤ C [h + |Z| 0 ] , (3.38) B 2,3 = | x ρ | 2 − | x h ρ | 2 f (w) ν + | x h ρ | 2 f (w) ν − ν h , (1 − ρ)∇F ( x h 0 ) ≤ C | x ρ − x h ρ | 0 + | ν − ν h | 0 |1 − ρ| 0 ≤ C h + | E| 1 , (3.39) B 2,4 = | x h ρ | 2 (I − I h )f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) + | x h ρ | 2 (I h − I)f (w h ) ν h , (1 − ρ)∇F ( x h 0 ) h ≤ C |(I − I h )f (w h )| 0 |1 − ρ| 0 ≤ C [h + |Z| 0 ] , (3.40) B 2,5 = | x h ρ | 2 I h (f (w h )) ν h , (1 − ρ)∇F ( x h 0 ) − | x h ρ | 2 I h (f (w h )) ν h , (1 − ρ)∇F ( x h 0 ) h ≤ C h J j=1 I h j (f (w h )) 1,σj | x h ρ | 2 (1 − ρ) ∇F ( x h 0 ) · ν h 0,σj ≤ C h |f (w h )| 1 |1 − ρ| 0 ≤ C [h + |Z| 0 ] . (3.41)
We now bound B 3 in a similar way.
B 3 = | x h ρ | 2 α x h t + (1 − α)( x h t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) h − | x ρ | 2 α x t + (1 − α)( x t · ν) ν , (1 − ρ)∇F ( x 0 ) = | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , (1 − ρ) ∇F ( x h 0 ) − ∇F ( x 0 ) + | x h ρ | 2 α x h t − I h x t + (1 − α)(( x h t − I h x t ) · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) h + | x h ρ | 2 − | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , (1 − ρ)∇F ( x h 0 ) + (1 − α) | x h ρ | 2 ( x t · ( ν h − ν)) ν + ( x t · ν h ) ν h − ν , (1 − ρ)∇F ( x h 0 ) + | x h ρ | 2 α (I h − I) x t + (1 − α)((I h − I) x t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) + | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) h − | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) =: 5 i=1 B 3,i . (3.42)
Using (2.9), (3.29c) and (2.10), we have
B 3,1 = | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , (1 − ρ) ∇F ( x h 0 ) − ∇F ( x 0 ) ≤ C | x t | 0 |1 − ρ| 0 | E| 0,∞ ≤ C | E| 0,∞ .(B 3,2 = | x h ρ | 2 α x h t − I h x t + (1 − α)(( x h t − I h x t ) · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) h ≤ C E t h 1 − ρ h ≤ C | E t | 0 . (3.44)
Noting (2.26) and using similar arguments to those used in proving (3.20) and (3.21) we have
B 3,3 = | x h ρ | 2 − | x ρ | 2 [α x t + (1 − α)( x t · ν) ν] , (1 − ρ)∇F ( x h 0 ) + (1 − α) | x h ρ | 2 ( x t · ( ν h − ν)) ν + ( x t · ν h ) ν h − ν , (1 − ρ)∇F ( x h 0 ) ≤ C | x ρ − x h ρ | 0 + | ν − ν h | 0 |1 − ρ| 0 ≤ C h + | E| 1 , (3.45) B 3,4 = | x h ρ | 2 α (I h − I) x t + (1 − α)((I h − I) x t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) ≤ C |(I − I h ) x t | 0 |1 − ρ| 0 ≤ C h, (3.46) B 3,5 = | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) h − | x h ρ | 2 α I h x t + (1 − α)(I h x t · ν h ) ν h , (1 − ρ)∇F ( x h 0 ) ≤ C h J j=1 I h j ( x t ) 1,σj | x h ρ | 2 (1 − ρ) ∇F ( x h 0 ) + (∇F ( x h 0 ) · ν h ) ν h 0,σj ≤ C h | x t | 1 |1 − ρ| 0 ≤ C h.|b(0, t) − b h (0, t)| ≤ C h + | E| 0,∞ + | E t | 0 + |Z| 0 + | E| 1 .
We remark that the above bound does not depend on ρ and so also holds for ρ = 1 and hence we have
|b(0, t) − b h (0, t)| + |b(1, t) − b h (1, t)| ≤ C h + | E| 0,∞ + | E t | 0 + |Z| 0 + | E| 1 .I 3,2 = (b(ρ, t) − b h (ρ, t))( x t · (∇F ( x) − ∇F ( x h ))) 1 0 ≤ C| E| 0,∞ h + | E| 0,∞ + | E t | 0 + |Z| 0 + | E| 1 . (3.49)
Hence, combining (3.28) with (3.31) and (3.49), we have
I 3 ≤ 1 2 d dt ( x ρ · ∇F ( x)) E T D 2 F ( x) E 1 0 + m 2 α 24 | E t | 2 0 + C h 2 + | E| 2 0,∞ + |Z| 2 0 + | E| 2 1 + h −1 | E| 4 0,∞ .d dt | E| 2 1 + m 2 α 8 | E t | 2 0 ≤ 1 2 d dt ( x ρ · ∇F ( x)) E T D 2 F ( x) E 1 0 + C h 2 + | E| 2 0,∞ + |Z| 2 0 + | E| 2 1 + h −1 | E| 4 0,∞ . (3.51)
Multiplying (3.51) by e −γs , for γ ≥ 1, and integrating with respect to s ∈ (0, t) with t ≤ T h , and noting | E(·, 0)| = 0, we have
1 2 e −γt | E| 2 1 + γ 2 t 0 e −γs | E| 2 1 ds + m 2 α 8 t 0 e −γs | E t | 2 0 ds ≤ 1 2 e −γt ( x ρ · ∇F ( x)) E T D 2 F ( x) E 1 0 + γ 2 t 0 e −γs ( x ρ · ∇F ( x)) E T D 2 F ( x) E 1 0 ds + C t 0 e −γs h 2 + | E| 2 0,∞ + |Z| 2 0 + | E| 2 1 + h −1 | E| 4 0,∞ ds =: I 4 + C t 0 e −γs h 2 + | E| 2 0,∞ + |Z| 2 0 + | E| 2 1 + h −1 | E| 4 0,∞ ds. (3.52)
Using (2.8), Sobolev embeddings, (2.10) and the inequality |η| 2 0,∞ ≤ C|η| 0 η 1 ≤ ε |η| 2 1 + C(ε)|η| 2 0 , for η ∈ H 1 (I), (3.53) we see that
I 4 = 1 2 e −γt ( x ρ · ∇F ( x)) E T D 2 F ( x) E 1 0 + γ 2 t 0 e −γs ( x ρ · ∇F ( x)) E T D 2 F ( x) E 1 0 ds ≤ e −γt | x| 1,∞ |D 2 F ( x)| 0,∞ | E| 2 0,∞ + γ t 0 e −γs | x| 1,∞ |D 2 F ( x)| 0,∞ | E| 2 0,∞ ds ≤ 1 4 e −γt | E| 2 1 + Ce −γt | E| 2 0 + t 0 e −γs γ 4 | E| 2 1 + Cγ| E| 2 0 ds. (3.54)
Substituting (3.54) into (3.52) and using (3.53), gives
1 4 e −γt | E| 2 1 + γ 4 t 0 e −γs | E| 2 1 ds + m 2 α 8 t 0 e −γs | E t | 2 0 ds ≤ Ce −γt | E| 2 0 + Cγ 2 t 0 e −γs | E| 2 0 ds + C t 0 e −γs h 2 + |Z| 2 0 + | E| 2 1 + h −1 | E| 4 0,∞ ds. (3.55) Since | E(·, 0)| = 0, we have e −γt | E(·, t)| 2 0 = t 0 d ds e −γs | E| 2 0 ds ≤ −γ t 0 e −γs | E| 2 0 ds + 2 t 0 e −γs | E| 0 | E t | 0 ds ≤ − γ 2 t 0 e −γs | E| 2 0 ds + 2 γ t 0 e −γs | E t | 2 0 ds,
and hence there exists C 2 ∈ R >0 such that first two terms on the right hand side of (3.55) can be bounded as follows
Ce −γt | E| 2 0 + Cγ 2 t 0 e −γs | E| 2 0 ds ≤ 2C 2 γ t 0 e −γs | E t | 2 0 ds. (3.56)
Combining (3.55) and (3.56), with γ chosen large enough such that γ ≥ max{1, 32C2 m 2 α }, yields the desired result.
Proof of Lemma 3.2: In the proof of this lemma we follow the techniques used in [1]. We first note that from (3.4) and (3.1), for h ∈ (0, h ] and t ∈ [0, T h ), we have
| E(·, t)| 2 1 + |Z(·, t)| 2 0 ≤ 2C 1 h 2 e γT ≤ 2C 1 (h )(| x ρ |w) t , Z − | x h ρ | w h t , Z h + 1 | x h ρ | Z ρ , Z ρ + (ψ w, Z ρ ) − ψ h w h , Z h ρ h = w ρ 1 | x h ρ | − 1 | x ρ | , Z ρ + (| x ρ | g(v, w), Z) − | x h ρ | g(v h , w h ), Z h . Since | x h ρ | I h w t , Z h − | x h ρ | w h t , Z h = 1 2 d dt | x h ρ | Z, Z h + 1 2 | x h ρ | t Z, Z h , we have 1 2 d dt | x h ρ | Z, Z h + 1 | x h ρ | Z ρ , Z ρ = − 1 2 | x h ρ | t Z, Z h + | x h ρ | I h w t , Z h − (| x ρ | w) t , Z + w ρ 1 | x h ρ | − 1 | x ρ | , Z ρ + ψ h w h , Z ρ h − (ψ w, Z ρ ) + (| x ρ | g(v, w), Z) − | x h ρ | g(v h , w h ), Z h =: 5 i=1 T i . (3.58)
Using (3.2), the left hand side of (3.58) is bounded below by
1 2 d dt | x h ρ | Z, Z h + 1 | x h ρ | Z ρ , Z ρ ≥ 1 2 d dt | x h ρ | Z, Z h + 1 2M |Z| 2 1 . (3.59)
Now we bound T i , i = 1, . . . , 5, by starting with T 1 . Noting that | x h ρ | t = x h ρ,t · τ h we have
T 1 = − 1 2 | x h ρ | t Z, Z h = 1 2 ( x h ρ,t · ( τ − τ h )) Z, Z − 1 2 ( x h ρ,t · τ ) Z, Z + 1 2 ( x h ρ,t · τ h ) Z, Z − ( x h ρ,t · τ h ) Z, Z h =: 3 i=1 T 1,i . (3.60)
Using (3.9), (3.13), (3.57) and (3.53) gives
T 1,1 = 1 2 ( x h ρ,t · ( τ − τ h )) Z, Z ≤ 1 2 |Z| 2 0,∞ | x h t | 1 | τ − τ h | 0 ≤ C|Z| 2 0,∞ 1 + h −1 | E t | 0 h + | E| 1 ≤ Ch 3 4 |Z| 2 0,∞ + Ch 1 2 Z 1 | E t | 0 . (3.61) 1 2 + h − 1 2 | E| 1 1 + h −1 | E t | 0 h + | E| 1 |Z| 0,∞ ≤ C h + | E t | 0 |Z| 0,∞ .T 2,4 = w | x h ρ | ( τ − τ h ) · τ t , Z ≤ C |w| 0,∞ | τ t | 0 | τ − τ h | 0 |Z| 0,∞ ≤ C h + | E| 1 |Z| 0,∞ . (3.69)
Setting P h := I − τ h ⊗ τ h , where ⊗ represents the outer product, and noting that | x h ρ | τ h t = P h x h ρ,t , we have
T 2,5 = − w | x h ρ | τ · τ h t , Z = − w P h x h ρ,t · τ , Z = w P h E ρ,t · τ , Z − w P h I h x ρ,t · τ , Z =: T 2,5,1 + T 2,5,2 . (3.70)
Since P h is constant on each sub-interval σ j and Z ∈ V h 0 , using integration by parts over the sub-intervals yields
T 2,5,1 = w P h E ρ,t · τ , Z = J j=1 σj w P h E ρ,t · τ Z dρ = J j=1 w P h E t · τ Z ρj ρj−1 − J j=1 σj P h E t · (w τ Z) ρ dρ = − J−1 j=1 (P h |σ j+1 − P h |σ j ) E t (ρ j , t) · (w(ρ j , t) τ (ρ j , t) Z(ρ j , t)) − P h E t · (w τ ) ρ , Z − w P h E t · τ , Z ρ . (3.71)
To bound the first term in (3.71) we first note that
P h |σ j+1 − P h |σ j = τ h |σ j+1 ⊗ ( τ h |σ j − τ h |σ j+1 ) + ( τ h |σ j − τ h |σ j+1 ) ⊗ τ h |σ j , and τ h |σ j+1 − τ h |σ j = 1 |( x h ρ ) |σ j+1 | ( x h ρ ) |σ j+1 − ( x h ρ ) |σ j + τ h |σ j |( x h ρ ) |σ j+1 | |( x h ρ ) |σ j | − |( x h ρ ) |σ j+1 | .
For any χ ∈ R 2 , we set ξ h = φ j χ, j = 1, · · · , J − 1, in (2.21) to obtain
( x h ρ ) |σ j+1 − ( x h ρ ) |σ j · χ = − h j+1 |( x h ρ ) |σ j+1 | 2 ( ν h |σ j+1 · χ) + h j |( x h ρ ) |σ j | 2 ( ν h |σ j · χ) f (w h (ρ j , t)) + α h j+1 |( x h ρ ) |σ j+1 | 2 + h j |( x h ρ ) |σ j | 2 ( x h t (ρ j , t) · χ) + (1 − α) h j+1 |( x h ρ ) |σ j+1 | 2 ( x h t (ρ j , t) · ν h |σ j+1 ) ( ν h |σ j+1 · χ) + (1 − α) h j |( x h ρ ) |σ j | 2 ( x h t (ρ j , t) · ν h |σ j ) ( ν h |σ j · χ)
Combining the three equations above and using (3.2), (2.6), (3.3) and (2.10), we have
|P h |σ j+1 − P h |σ j | ≤ C |( x h ρ ) |σ j+1 − ( x h ρ ) |σ j | ≤ C h |f (w h )| 0,∞ + | x h t (ρ j , t)| ≤ C h 1 + | E t (ρ j , t)| . (3.72)
Hence, using (3.72), (2.12), (2.5), (2.15), and (3.57), we have
J−1 j=1 (P h |σ j+1 − P h |σ j ) E t (ρ j , t) · (w(ρ j , t) τ (ρ j , t) Z(ρ j , t)) ≤ C h J−1 j=1 1 + | E t (ρ j , t)| | E t (ρ j , t)| |w(ρ j , t)| |Z(ρ j , t)| ≤ C | E t | 0 + | E t | 2 0 |Z| 0,∞ ≤ C | E t | 0 |Z| 0,∞ + C h − 1 2 | E t | 2 0 |Z| 0 ≤ C | E t | 0 |Z| 0,∞ + C | E t | 2 0 . (3.73)
From (3.73), (3.2), (2.10) and Sobolev embeddings, we have
T 2,5,1 = − J−1 j=1 (P h | σj+1 − P h | σj ) E t (ρ j , t) · (w(ρ j , t) τ (ρ j , t) Z(ρ j , t)) − P h E t · (w τ ) ρ , Z − w P h E t · τ , Z ρ ≤ C | E t | 0 |Z| 0,∞ + C | E t | 2 0 + |P h | 0,∞ |w τ | 1 | E t | 0 |Z| 0,∞ + |P h | 0,∞ |w| 0,∞ | E t | 0 |Z| 1 ≤ C | E t | 0 |Z| 0,∞ + C | E t | 2 0 + C | E t | 0 |Z| 1 .
(3.74)
Since P h is symmetric and P h τ = τ − τ h + 1 2 | τ − τ h | 2 τ h , using Sobolev embeddings, (2.19a), (2.10), (3.13) and the fact that
| τ − τ h | ≤ | τ | + | τ h | ≤ 2, we have T 2,5,2 = − w P h I h x ρ,t · τ , Z = − w P h τ · I h x ρ,t , Z = w ( τ h − τ ) · I h x ρ,t , Z − 1 2 w | τ − τ h | 2 ( τ h · I h x ρ,t ), Z ≤ 2 |w| 0,∞ | τ − τ h | 0 |I h x t | 1 |Z| 0,∞ ≤ C h + | E| 1 |Z| 0,∞ . (3.75)
Thus, combining (3.65)-(3.75), we have
T 2,5 ≤ C h + | E t | 0 + | E| 1 [|Z| 0,∞ + |Z| 0 + |Z| 1 ] + C | E t | 2 0 . (3.76)
Using (3.2), (2.16), Sobolev embeddings, (2.10) and (3.9), we see that
T 2,6 = | x h ρ |(I h − I)w t , Z + | x h ρ | t (I h − I)w, Z ≤ C |(I − I h )w t | 0 |Z| 0 + |(I − I h )w| 0,∞ |( x h ρ,t · τ h )| 0 |Z| 0 ≤ C h |w t | 1 |Z| 0 + C h |w| 2 | x h t | 1 |Z| 0 ≤ C h + | E t | 0 |Z| 0 . (3.77)
From (2.19a,b), Sobolev embeddings, (2.10), (3.2) and (3.9), we obtain
T 2,7 = | x h ρ | I h w t , Z h − | x h ρ | I h w t , Z ≤ C h J j=1 |Z| 1,σj | x h ρ | I h w t 0,σj ≤ C h |I h w| 0,∞ | x h t | 1 + |I h w t | 0 |Z| 1 ≤ C h + | E t | 0 |Z| 1 .|T 2 | ≤ 1 8M |Z| 2 1 + C h 2 + |Z| 2 0,∞ + | E t | 2 0 + |Z| 2 0 + | E| 2 1 .
and hence, using (3.53), we have
|T 2 | ≤ 1 16M |Z| 2 1 + C h 2 + | E t | 2 0 + |Z| 2 0 + | E| 2 1 . (3.79)
From Sobolev embeddings, (2.10) and (3.12), we gain
T 3 = w ρ 1 | x h ρ | − 1 | x ρ | , Z ρ ≤ |w| 1,∞ 1 | x ρ | − 1 | x h ρ | 0 |Z| 1 ≤ C h + | E| 1 |Z| 1 ≤ 1 16M |Z| 2 1 + C h 2 + | E| 2 1 . (3.80)
We now bound T 4 .
T_4 = (ψ^h w^h, Z_ρ)_h − (ψ w, Z_ρ)
 = ((ψ^h − I^h ψ) w^h + I^h ψ (w^h − I^h w), Z_ρ)_h + (I^h ψ I^h w, Z_ρ)_h − (I^h ψ I^h w, Z_ρ) + ((I^h − I)ψ I^h w, Z_ρ) + (ψ (I^h − I)w, Z_ρ) =: T_{4,1} + T_{4,2}. (3.81)

T_{4,1} = ((ψ^h − I^h ψ) w^h + I^h ψ (w^h − I^h w), Z_ρ)_h ≤ max{ |w^h|_{0,∞}, |I^h ψ|_{0,∞} } [ ‖(I − I^h)ψ‖_h + ‖ψ − ψ^h‖_h + ‖Z‖_h ] ‖Z_ρ‖_h ≤ C [ h + |E_t|_0 + |Z|_0 + |E|_1 ] |Z|_1, (3.82)

Using (3.53), (3.56) and (3.4), for t ∈ [0, T^h), we have

e^{−γt} |E(·, t)|^2_{0,∞} ≤ e^{−γt} |E(·, t)|^2_1 + C e^{−γt} |E(·, t)|^2_0 ≤ C C_1 h^2, and hence, for t ∈ [0, T^h), we have

C h^{−1} ∫_0^t e^{−γs} |E|^4_{0,∞} ds ≤ C h^{−1} e^{γT} ∫_0^t ( e^{−γs} |E|^2_{0,∞} )^2 ds ≤ C (C_1)^2 T e^{γT} h^3
which, together with (3.92), and on noting (3.1), yields
sup s∈[0,T h ] e −γs | E| 2 1 + |Z| 2 0 + T h 0 e −γs | E t | 2 0 + |Z| 2 1 ds ≤ C 1 h 2 + C(C 1 ) 2 T e γT h 3 ≤ C 1 h 2 + CC 1 T (h ) 1 2 h 2 ≤ C 1 h 2 + 1 2 C 1 h 2 ≤ 3 2 C 1 h 2 , (3.93)
provided h is chosen small enough. We now follow the argument in [4] to show that T h = T . If it were not the case that T h = T we would have T h < T , and using (2.9), (2.17), (2.15), (2.12), (2.10), (
Numerical simulations of diffusion induced grain boundary motion.
We conclude the numerical results with two simulations of diffusion induced grain boundary motion (DIGM).
Example 2
The set-up we consider here is similar to that considered in [6]. Indeed the evolution law for the parametric system derived in [6] can be obtained from (1.1) by setting α = 1 and F ( p) = | p 0 | − 1, for some p ∈ R 2 , and considering a slightly different formulation of the reaction-diffusion equation (1.2). Setting T = 2.5 and
x 0 (ρ) = (2ρ − 1, 0) T , w 0 (ρ) = 0, f (w) = w 2 , g(v, w) = vw, ρ ∈ [0, 1] with the boundary data F ( p) = | p 0 | − 1, w b = 1, yields the results displayed in Figure 4.1 in which a travelling wave solution is reached, the left hand plot shows the evolution of the interface at t = 0, 0.5, 1, 1.5, 2, 2.5, while the right hand plot shows the evolution of the solute, plotted against ρ, at the same times. These results are consistent with Figure 5-8 in [6].
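For Example 2 one can verify directly that the initial data are compatible with the boundary conditions: the endpoints of x_0 lie on {F = 0} (cf. (1.3)), and |∇F| = 1 there (cf. (1.9)). A quick check, in which the finite-difference gradient grad_F is an illustrative helper and not part of the scheme:

```python
import numpy as np

F = lambda p: abs(p[0]) - 1.0                      # boundary function of Example 2
x0 = lambda rho: np.array([2.0 * rho - 1.0, 0.0])  # initial curve x_0(rho)

# Endpoints of the initial curve lie on {F = 0}, i.e. condition (1.3) holds at t = 0:
assert abs(F(x0(0.0))) < 1e-14 and abs(F(x0(1.0))) < 1e-14

def grad_F(p, eps=1e-6):
    """Central finite-difference gradient of F (illustrative helper)."""
    e0, e1 = np.array([eps, 0.0]), np.array([0.0, eps])
    return np.array([(F(p + e0) - F(p - e0)) / (2 * eps),
                     (F(p + e1) - F(p - e1)) / (2 * eps)])

# |grad F| = 1 at the contact points, consistent with (1.9):
for p in (x0(0.0), x0(1.0)):
    assert abs(np.linalg.norm(grad_F(p)) - 1.0) < 1e-8
```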
Example 3
In this example we use the same data as Example 2 with the exception that we replace the simple straightsided geometry of Ω that arose from setting F ( p) = | p 0 | − 1, with the more complex geometry Ω := { p ∈ R 2 : 0.05 cos(20 p 1 ) + 0.95 > p 0 , −0.05 cos(12 p 1 ) − 0.5 < p 0 } for which we note that (1.9) does not hold. The results are presented in Figure 4.2, with the left hand plot displaying the evolution of the interface at t = 0, 1.5, 3, 4.5, 6, 7.5, together with the geometry Ω (black line) while the right hand plot shows the evolution of the solute, plotted against ρ, at the same times, with T = 7.5. From this figure we see that the complex nature of the domain destroys the travelling wave solution that was present in Example 2.
noting (3.27), for ρ ∈ {0, 1} and t ∈ [0, T ] we have |b(ρ, t)| ≤ M, (3.29a)
denoting x h (0, t) := x h 0 (t), taking ξ h = (1 − ρ)∇F ( x h 0 ) in(2.21) and noting (2.26), we have
B 1 , using (2.10), (3.29c) and (2.26), and noting (3.16), we have
3.48) with Sobolev embeddings, (2.10) and (3.29c), noting (3.27), we have
= Z in (2.3), subtracting the resulting equation from (2.22) with η h = Z, and noting (3.16), gives
embeddings, (2.19a), (2.10), (2.16) and (3.14), we have
Fig. 4.1: DIGM simulation in a domain with straight boundaries.

Fig. 4.2: DIGM simulation in a complex geometry.
while using Sobolev embeddings together with (2.19a,b), (2.10) and (2.16) gives

T_{4,2} = (I^h ψ I^h w, Z_ρ)_h − (I^h ψ I^h w, Z_ρ) + ((I^h − I)ψ I^h w, Z_ρ) + (ψ (I^h − I)w, Z_ρ)
 ≤ C h Σ_{j=1}^J |I^h_j(ψ) I^h_j(w)|_{1,σ_j} |Z_ρ|_{0,σ_j} + max{ |I^h w|_{0,∞}, |ψ|_{0,∞} } [ |(I − I^h)ψ|_0 + |(I − I^h)w|_0 ] |Z|_1
 ≤ C h [ |I^h ψ I^h w|_1 + |ψ|_1 |w|_1 ] |Z|_1 ≤ C h |Z|_1. (3.83)

Multiplying (3.90) by e^{−γs}, for γ ≥ 1, integrating with respect to s ∈ (0, t), with t ≤ T^h, and noting |Z(·, 0)| = 0, we have

(1/2) e^{−γt} (|x^h_ρ| Z, Z)_h + (γ/2) ∫_0^t e^{−γs} (|x^h_ρ| Z, Z)_h ds + (1/(4M)) ∫_0^t e^{−γs} |Z|^2_1 ds ≤ C_3 ∫_0^t e^{−γs} [ h^2 + |E_t|^2_0 + |Z|^2_0 + |E|^2_1 ] ds. (3.91)

From (3.2) and (2.19a), we have

(1/2) e^{−γt} (|x^h_ρ| Z, Z)_h + (γ/2) ∫_0^t e^{−γs} (|x^h_ρ| Z, Z)_h ds ≥ (m/4) e^{−γt} |Z|^2_0 + (γ m/4) ∫_0^t e^{−γs} |Z|^2_0 ds,

which together with (3.91) yields the desired result.

Proof of Theorem 2.1: Multiplying (3.6) by ω, where ω ∈ R_{>0} is chosen such that C_3 ω ≤ m^2 α/32, and adding the resulting inequality to (3.5), for t ∈ [0, T^h), we have

(1/4) e^{−γt} |E|^2_1 + (m ω/4) e^{−γt} |Z|^2_0 + (m^2 α/32) ∫_0^t e^{−γs} |E_t|^2_0 ds + (ω/(4M)) ∫_0^t e^{−γs} |Z|^2_1 ds ≤ C(1 + ω) ∫_0^t e^{−γs} [ h^2 + |E|^2_1 + |Z|^2_0 + h^{−1} |E|^4_{0,∞} ] ds.

An application of Gronwall's lemma then gives

sup_{s∈[0,T^h]} e^{−γs} [ (1/4) |E|^2_1 + (m ω/4) |Z|^2_0 ] + (m^2 α/32) ∫_0^{T^h} e^{−γs} |E_t|^2_0 ds + (ω/(4M)) ∫_0^{T^h} e^{−γs} |Z|^2_1 ds ≤ C_{ω,γ} ∫_0^{T^h} e^{−γs} [ h^2 + h^{−1} |E|^4_{0,∞} ] ds,

where C_{ω,γ} depends on ω, γ and T, but not on T^h. Dividing by C = min{ 1/4, m ω/4, m^2 α/32, ω/(4M) }, we obtain

sup_{s∈[0,T^h]} e^{−γs} [ |E|^2_1 + |Z|^2_0 ] + ∫_0^{T^h} e^{−γs} [ |E_t|^2_0 + |Z|^2_1 ] ds ≤ C_1 h^2 + C h^{−1} ∫_0^{T^h} e^{−γs} |E|^4_{0,∞} ds. (3.92)
Using (3.93) and (3.1), for ρ ∈ [0, 1], we would have …

Table 4.1: α = 1, ∆t = h².

  J     N       E1×e−3     eoc1   E2×e−3     eoc2   E3×e−5       eoc3   E4×e−5       eoc4
  10    80      44.54      -      147.0      -      1.123        -      5.522        -
  20    320     5.587      3.55   13.34      3.46   0.06858      4.03   0.3491       3.98
  40    1280    0.3812     3.88   0.9244     3.85   0.004296     4.00   0.02186      4.00
  80    5120    0.02436    3.97   0.05933    3.96   0.0002686    4.00   0.001367     4.00
  160   20480   0.00153    3.99   0.003733   3.99   0.00001679   4.00   0.00008549   4.00

Table 4.2: α = 0.1, ∆t = h².

  J     N       E1×e−4       eoc1   E2×e−5      eoc2   E3×e−5       eoc3   E4×e−4       eoc4
  10    80      2.904        -      8.342       -      2.415        -      1.189        -
  20    320     0.1855       3.97   0.6048      3.79   0.1519       4.00   0.07460      3.99
  40    1280    0.01166      3.99   0.03941     3.94   0.009504     4.00   0.004667     4.00
  80    5120    0.0007296    4.00   0.002490    3.98   0.0005942    4.00   0.0002918    4.00
  160   20480   0.00004562   4.00   0.0001560   4.00   0.00003714   4.00   0.00001824   4.00

Table 4.3: α = 1, ∆t = 0.4h.

  J     N      E1×e−3    eoc1   E2×e−3    eoc2   E3×e−6     eoc3   E4×e−7    eoc4
  40    80     7.651     -      2.111     -      1.608      -      9.976     -
  80    160    2.325     1.72   0.6703    1.66   0.3149     2.35   1.976     2.34
  160   320    0.6454    1.85   0.1909    1.81   0.06636    2.25   0.4782    2.05
  320   640    0.1704    1.92   0.05110   1.90   0.01494    2.15   0.1211    1.98
  640   1280   0.04379   1.96   0.01323   1.95   0.003523   2.08   0.03073   1.98

Table 4.4: α = 0.1, ∆t = 0.4h.

  J     N      E1×e−3    eoc1   E2×e−3    eoc2   E3×e−6     eoc3   E4×e−7    eoc4
  40    80     6.874     -      1.931     -      1.591      -      10.41     -
  80    160    2.205     1.64   0.6407    1.59   0.3148     2.34   1.856     2.49
  160   320    0.6285    1.81   0.1866    1.78   0.06678    2.24   0.4542    2.03
  320   640    0.1681    1.90   0.05053   1.88   0.01509    2.15   0.1182    1.94
  640   1280   0.04351   1.95   0.01316   1.94   0.003563   2.08   0.03052   1.95
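The experimental orders of convergence (eoc) in the tables are consistent with comparing errors on successive meshes with h halved, i.e. eoc = log(E_J / E_{2J}) / log 2. A minimal check (the function name is ours; the sample values are taken from Table 4.4):

```python
import math

def eoc(err_coarse: float, err_fine: float, ratio: float = 2.0) -> float:
    """Experimental order of convergence between two successive meshes
    whose mesh sizes differ by the given refinement ratio."""
    return math.log(err_coarse / err_fine) / math.log(ratio)

# E1 errors for J = 320 and J = 640 in Table 4.4 (alpha = 0.1, dt = 0.4h)
print(round(eoc(0.1681, 0.04351), 2))  # -> 1.95
```

The same formula reproduces the listed eoc values of the Δt = 0.4h tables to two decimals.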
x^h_t(0, t) · ∇F(x^h(0, t)) = x^h_t(1, t) · ∇F(x^h(1, t)) = 0,  t ∈ [0, T],   (2.25)

which is the discrete analogue of (2.11). Similarly we present the discrete analogue of (1.9), namely

|∇F(x^h(0, t))| = |∇F(x^h(1, t))| = 1,  t ∈ [0, T].   (2.26)
Acknowledgements. JVY gratefully acknowledges the support of the EPSRC grant 1805391. VS would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Geometry, compatibility and structure preservation in computational differential equations when work on this paper was undertaken. This work was supported by EPSRC grant number EP/R014604/1.

Using integration by parts with (2.10) and (3.9), noting Z ∈ V^h_0, yields (3.62), while (2.19b) and (3.9) yield (3.63). Thus, noting that (2.10) and (3.3) imply that |Z|_{0,∞} ≤ C, combining (3.60) with (3.61)-(3.63), and using (3.53), we have …, and hence …. Using Sobolev embeddings, (2.10) and (3.7), we see that …. Applying integration by parts, (2.10), Sobolev embeddings, (3.8), (3.7) and noting that Z ∈ V^h_0, we have that …. Combining (3.81)-(3.83), we have …. By using the continuity of g, we bound T_5 in the following way: …. Thus, combining (3.85)-(3.88) yields …. Multiplying (3.90) by e^{−γs}, for γ ≥ 1, integrating with respect to s ∈ (0, t), with t ≤ T_h, and noting |Z(·, 0)| = 0, we have (3.91).
Asynchronous Cell-Free Massive MIMO With Rate-Splitting

Jiakang Zheng, Jiayi Zhang, Julian Cheng, Victor C. M. Leung, Derrick Wing Kwan Ng, and Bo Ai

6 Dec 2022 · arXiv:2212.02811 · DOI: 10.1109/JSAC.2023.3240709

J. Zheng, J. Zhang, and B. Ai are with Beijing Jiaotong University, China; J. Cheng is with the University of British Columbia, Canada; V. C. M. Leung is with Shenzhen University, China, and also with the University of British Columbia, Canada; D. W. K. Ng is with the University of New South Wales, Australia.

Abstract—In practical cell-free (CF) massive multiple-input multiple-output (MIMO) networks with distributed and low-cost access points, the asynchronous arrival of signals at the user equipments increases multiuser interference that degrades the system performance. Meanwhile, rate-splitting (RS), exploiting the transmission of both common and private messages, has been demonstrated to offer considerable spectral efficiency (SE) improvements and robustness against channel state information (CSI) imperfection. The signal performance of a CF massive MIMO system is first analyzed for asynchronous reception capturing the joint effects of propagation delays and oscillator phases of transceivers. Taking into account the imperfect CSI caused by asynchronous phases and pilot contamination, we derive novel and closed-form downlink SE expressions for characterizing the performance of both the RS-assisted and conventional non-RS-based systems adopting coherent and non-coherent data transmission schemes, respectively. Moreover, we formulate the design of robust precoding for the common messages as an optimization problem that maximizes the minimum individual SE of the common message. To address the non-convexity of the design problem, a bisection method is proposed to solve the problem optimally. Simulation results show that asynchronous reception indeed destroys both the orthogonality of the pilots and the coherent data transmission, resulting in poor system performance. Besides, thanks to the uniform coverage properties of CF massive MIMO systems, RS with a simple low-complexity precoding for the common message, obtained by the equal ratio sum of the private precoding, is able to achieve substantial downlink sum SE gains, while the application of robust precoding to the common message is shown to be useful in some extreme cases, e.g., serious oscillator mismatch and unknown delay phase.

Index Terms—Cell-free massive MIMO, asynchronous reception, rate-splitting, spectral efficiency, precoding.
I. INTRODUCTION
Cell-free (CF) massive multiple-input multiple-output (MIMO) has been envisioned as a disruptive technology to provide uniform spectral efficiency (SE) improvement and ubiquitous connectivity for the sixth-generation (6G) wireless communication networks [1]. The key idea of CF massive MIMO systems is that through spatial multiplexing on the same time-frequency resources, a large number of geographically distributed access points (APs) connected to a central processing unit (CPU) are deployed to serve the user equipments (UEs) coherently [2], [3].
Thanks to the law of large numbers, two phenomena characterizing the propagation environment of CF massive MIMO systems are channel hardening and favorable propagation, which facilitate simple precoding design and interference management [4], [5]. Additionally, the user-centric paradigm prevents the UE from perceiving the cell boundary, greatly simplifying its actual implementation [6]. In addition, the rich macro-diversity gain of CF massive MIMO systems brought by the joint transmission and reception has drawn much academic research interest in this topic [7]. For instance, it was shown that CF massive MIMO systems can outperform small-cell systems in terms of 95%-likely per-user SE due to the joint interference cancellation capability of the former [8]. In addition, compared with traditional cellular networks, CF massive MIMO systems-based joint signal processing can effectively alleviate the influence of non-ideal practical factors, such as channel aging, hardware impairments, etc. [9]- [11], thanks to the large number of spatial degrees of freedom. Moreover, it was revealed in [12] that the joint power control in CF massive MIMO systems is able to enhance the wireless power transfer efficiency compared with its counterpart in conventional systems. Besides, it was proved that the error probabilities of both the centralized and distributed joint activity detection in CF massive MIMO systems approach zero when the number of APs is sufficiently large [5]. Furthermore, numerous joint optimization algorithms were proposed for CF massive MIMO systems to enhance the practicality of the system, including achieving the energy-efficient load balancing [13], reducing the complexity of channel estimation and decoding [14], and establishing a scalable framework by AP scheduling [3]. Therefore, joint coherent processing is an important and fundamental topic for the practical implementation of CF massive MIMO systems.
Despite the fruitful research in the literature, most current works on CF massive MIMO assume the availability of perfect synchronization to ensure efficient joint coherent processing. Yet, this assumption is impractical for communication networks adopting a distributed architecture [15]. One reason is that the geographically distributed APs cause inevitable signal arrival time differences from the UEs [16]. In particular, the incurred delay phases impair the received signals creating a challenge for distributed massive MIMO systems to achieve coherent transmission.
Another reason is that the transmitter and receiver hardware are not perfect and identical such that the imperfection may cause different random phase shifts on the channels [17], [18]. Besides, these oscillator phases introduce a multiplicative factor to the channel and this factor drifts gradually over time at each channel use following the Wiener process [19]. Unfortunately, the impairments hinder the acquisition of accurate channel state information (CSI) and lead to severe multi-user interference. These jeopardize the achievable signal-to-noise ratio (SNR) and remain a major obstacle for the practical deployment of distributed massive MIMO systems [20]. In fact, accurate synchronization (e.g., phase, frequency) has always been an important and interesting research subject in the evolution of MIMO systems. For example, a profound study on the optimal estimator-detector receivers for the joint detection and synchronization was initiated for conventional digital communication [21]. More importantly, the results indicated that effective coherent multiuser joint processing is only possible when a sufficient level of relative timing and phase synchronization accuracy is achieved at the APs [22]. As such, three effective approaches, from centralized to distributed implementations, were proposed to achieve phase synchronization in coherent MIMO radar systems [23]. Besides, a high-accuracy frequency synchronization technique was proposed for the uplink of multiuser orthogonal-frequency-division multiplexingbased massive MIMO systems [24]. However, the computational complexity of these algorithms increase significantly with the communication network size. In addition, although inserting a time-frequency interval to a resource block is an effective method to resolve the synchronization issues, it requires significant system resources for signaling overhead that may in turn reduce the overall system rate performance [25]. 
Moreover, recent works analyzed the performance of CF massive MIMO systems to reveal the impacts of asynchronous reception [16], [20].
Yet, effective multi-user interference management under the imperfect CSI incurred in asynchronous CF massive MIMO systems has not been reported in the literature.
Recently, rate-splitting (RS) has been developed to harness multiuser interference, particularly in the presence of imperfect CSI [26], showing its rich potential in addressing the negative impacts caused by asynchronous signal receptions in CF massive MIMO systems. Indeed, the RS strategy divides the message for each user into two parts: a common message to be decoded by all the UEs and a private message to be decoded by the corresponding UE only [27]. Then, all the common parts are combined into a super common message and superposition coding is used to transmit these message streams simultaneously. At the UE, the common message is decoded first, with all the private messages treated as noise. Then, the desired private message is decoded after the common message has been removed via successive interference cancellation (SIC) [28]. The ability of RS to generalize two extreme existing approaches, i.e., treating interference as noise and decoding interference, is what makes this strategy attractive for practical implementation [29]. For instance, the RS-based beamforming scheme performs strictly better than that based on non-orthogonal multiple access (NOMA) in both partially and fully loaded systems [30]. Besides, compared to conventional linear precoding approaches, the RS-assisted non-orthogonal unicast and multicast transmission techniques are more spectrally and energy efficient in a wide range of user deployment and network load scenarios [31]. In addition to the system performance gains, relaxed CSI quality requirements and enhanced achievable rate regions are two other benefits of RS [32]. Moreover, it was proved that the RS strategy is a robust and handy alternative to conventional methods for mitigating the detrimental effects of hardware impairments [33], mobility [34], and limited feedback [29] in massive MIMO systems.
In fact, the advantages of RS in massive MIMO can be well retained in CF massive MIMO systems, e.g., in mitigating pilot contamination [35]. To the best of the authors' knowledge, the research on RS for addressing the issues in CF massive MIMO systems is still incomplete. Therefore, thorough analysis and insights are required to unleash the potential of RS in enhancing the performance of the CF massive MIMO system. Motivated by the aforementioned observations, we first analyze the downlink performance of CF massive MIMO systems with asynchronous reception, including both the delay and oscillator phases. To alleviate the multiuser interference caused by imperfect CSI, we implement RS based on the transmission of common and private messages. Finally, we propose a robust precoding for the common message to further reduce the impact of asynchronous reception on CF massive MIMO systems. The specific contributions of this work are listed as follows:
• Taking into account the imperfect CSI caused by asynchronous phase and pilot contamination, we first derive novel and closed-form downlink SE expressions for coherent and non-coherent data transmission schemes, respectively. Our results show that the existence of asynchronous reception destroys the orthogonality of pilots and the coherent transmission of data, leading to an unsatisfactory SE performance of CF massive MIMO systems.
• To compensate for the system performance loss, we apply the RS strategy to CF massive MIMO systems by splitting the messages into common and private parts. We derive the closed-form sum SE expression for the RS-assisted CF massive MIMO system with asynchronous reception adopting an arbitrary linear precoding of the common messages. We also develop an efficient method to determine the optimal power allocated to the two messages. It is discovered that RS can considerably enhance the performance of CF massive MIMO systems. However, RS is negatively impacted by the asynchronous phases, particularly the oscillator phase of the UE and the delay phase.
• We then propose a design of robust precoding for the common messages to enhance the performance of RS-assisted CF massive MIMO systems, using the bisection method to solve the optimization problem that maximizes the minimum individual SE of the common message.
The designed robust RS-based precoding can alleviate the negative impacts of asynchronous reception to a certain extent, especially when the asynchronous issues are moderate. It is highlighted that the uniform coverage characteristics of CF massive MIMO systems allow the RS with a simple low-complexity common precoding to achieve significant performance advantages.
Note that the conference version of this paper [36] investigated the downlink performance of CF massive MIMO systems with asynchronous reception. The rest of the paper is organized as follows. In Section II, we describe the system model for capturing the joint effects caused by asynchronous reception and the basic principle of RS. Next, Section III presents the achievable downlink SE of the CF massive MIMO system with asynchronous reception for both coherent and non-coherent transmission schemes. Then, Section IV derives the sum SE expression for the RS-assisted CF massive MIMO system and the robust precoding for the common messages to reduce the effect of asynchronous reception. We provide numerical results and discussions in Section V. Finally, Section VI concludes this paper with a brief summary.
Notation: We use boldface lowercase letters x and boldface uppercase letters X to represent column vectors and matrices, respectively. The n × n identity matrix is I n . Superscripts x * ,
x^T, and x^H are used to denote conjugate, transpose, and conjugate transpose, respectively. The absolute value, the Euclidean norm, the trace operator, and the definition are denoted by |·|, ‖·‖, tr(·), and ≜, respectively. Finally, x ∼ CN(0, σ²) represents a circularly symmetric complex Gaussian random variable x with variance σ².
II. SYSTEM MODEL
We consider a CF massive MIMO system comprising L APs and K UEs as illustrated in Fig. 1.
Besides, a single antenna and N antennas are deployed for each UE and AP, respectively. It is assumed that all the L APs simultaneously serve all the K UEs via the same time-frequency resources. Moreover, we adopt the time-division duplex protocol with a standard coherence block model consisting of τ c time instants (channel uses) with the uplink training phase occupying τ p time instants and the downlink transmission phase assuming τ c − τ p time instants. Besides, the frequency-flat channel between AP l ∈ {1, . . . , L} and UE k ∈ {1, . . . , K} at each coherence block is modeled as Rayleigh fading [8]
h_kl ∼ CN(0, R_kl),   (1)
where R_kl ∈ C^{N×N} represents the spatial correlation matrix and β_kl ≜ tr(R_kl)/N denotes the large-scale fading coefficient.
A. Asynchronous Reception
The asynchronous reception of the wireless transceiver arises mainly from two factors: signal propagation delay differences and hardware oscillator errors. Specifically, these introduce multiplicative random phases to the channel, which are detailed as follows 1 .
1 Note that we mainly study the impact of asynchronous reception in terms of phase-asynchronization. However, performing coherent processing also requires synchronizations in time and frequency which has been widely studied in the literature [37].
1) Delay Phase:
Due to the different physical positions of the APs in the CF architecture, the distances between each AP and a certain UE are different, resulting in asynchronous signal arrival at the UEs. This asynchronous reception introduces a constant phase shift as [16]

θ_kl = e^{−j2π∆t_kl/T_s},   (2)
where ∆t_kl = ∆d_kll′/c is the timing offset of the signal intended for the kth UE and transmitted by the lth AP. Besides, ∆d_kll′, c, and T_s are the propagation distance difference, the speed of light, and the symbol duration, respectively. Without loss of generality, we assume that the first arrived signal to UE k is from AP l′ and its timing offset is ∆t_kl′ = 0.
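As a small numerical sketch of the constant delay phase θ_kl above (the constants `C`, `TS` and the 150 m excess distance are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Each AP-UE link gets a constant phase rotation exp(-j*2*pi*dt/Ts),
# where dt is the excess propagation delay relative to the first
# arriving AP. All numbers below are illustrative.
C = 3e8    # speed of light [m/s]
TS = 1e-6  # symbol duration [s] (assumed)

def delay_phase(excess_distance_m: float) -> complex:
    dt = excess_distance_m / C            # timing offset Delta t_{kl}
    return np.exp(-2j * np.pi * dt / TS)  # theta_{kl}

theta = delay_phase(150.0)  # 150 m excess path -> 0.5 us offset
print(abs(theta))           # unit modulus: the delay only rotates the phase
```

Note that a delay of half a symbol duration rotates the received signal by π, which is why the delay phases can destroy coherent combining across APs.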
2) Oscillator Phase: Each AP and UE is assumed to be equipped with its own free-running oscillator [18], and the phase of the transmitted symbol in each channel use is altered by the phase noise. Then, the overall oscillator phase between AP l and UE k at each time instant can be defined by ϑ_kl[n] ≜ exp(j(ϕ_k[n] + φ_l[n])) with the discrete-time Wiener phase model [19]

ϕ_k[n] = ϕ_k[n−1] + δ^ue_k[n],   (3)

φ_l[n] = φ_l[n−1] + δ^ap_l[n],   (4)
where ϕ_k[n] and φ_l[n] are the oscillator phases of UE k and AP l at the nth time instant, respectively. Besides, the additive terms δ^ue_k[n] ∼ CN(0, σ²_k) and δ^ap_l[n] ∼ CN(0, σ²_l) are the phase increments of UE k and AP l at the nth time instant, respectively. Note that σ²_i = 4π²f²_c c_i T_s, i = k, l, denote the phase increment variances [19], where f_c is the carrier frequency and c_i is a constant dependent on the oscillator.

Remark 1. Note that we focus on the scenario in which each AP and UE has its own oscillator such that their oscillator phase processes are considered as mutually independent. For ease of analysis, we assume independent and identically distributed oscillator phase statistics across different APs and UEs, i.e., σ²_k = σ²_ue and σ²_l = σ²_ap, ∀k, l.
Considering the effect of both the delay and oscillator phases, the channel between the kth UE and the lth AP at the nth time instant is expressed as
g_kl[n] = θ_kl h_kl[n] = θ_kl ϑ_kl[n] h_kl,   n = 1, . . . , τ_c,   (5)
where h_kl[n] ∈ C^{N×1} is the channel combined with the oscillator phase, and it is random at each time instant. Besides, the parameter θ_kl is mainly determined by the positions of the APs and UEs, and thus can be considered constant over multiple coherence blocks.
B. Channel Estimation
We assume τ_p mutually orthogonal time-multiplexed pilot sequences are employed [9], where the different sequences are temporally orthogonal since the pilot sequence t corresponds to transmitting a pilot signal only at time instant t [19]. Equivalently, pilot sequence t is a τ_p-dimensional vector that is all zeros except for a one in the tth channel use. Besides, a large-scale network with K > τ_p is studied such that different UEs may use the same time instant for channel estimation. Moreover, the index of the time instant allocated to UE k is denoted by t_k ∈ {1, . . . , τ_p}, and the set of UEs that exploit the same time instant for pilot transmission as UE k is defined by P_k = {i : t_i = t_k} ⊂ {1, . . . , K}.
Considering the effect of asynchronous reception, the received signal at AP l from UE k at time instant t k is given by
z_l[t_k] = Σ_{i∈P_k} √p_i g_il[t_i] + w_l[t_k],   (6)
where p_i denotes the pilot transmit power of UE i and w_l[t_k] ∼ CN(0, σ² I_N) represents the receiver noise at AP l with the noise power σ². This received signal is exploited to estimate (or predict) the channel realization at any time instant in the block. Yet, the accuracy of the channel estimate is degraded as the temporal gap between the considered channel realization and the pilot transmission grows. Without loss of generality, we consider the estimates of the channels at time instant τ_p + 1. In addition, λ ≜ τ_p + 1 is defined to simplify the notation and (6) can be rewritten as
z_l[t_k] = √p_k θ_kl Θ*_kl[λ − t_k] h_kl[λ] + Σ_{i∈P_k\{k}} √p_i θ_il h_il[t_i] + w_l[t_k],   (7)
where Θ_kl[λ − t_k] denotes the phase increment from time instant t_k to time instant λ, defined as

Θ_kl[λ − t_k] = ϑ_kl[λ] ϑ*_kl[t_k] = exp( j Σ_{s=t_k+1}^{λ} (δ^ue_k[s] + δ^ap_l[s]) ).   (8)
By exploiting the characteristic function of a Gaussian random variable, the mean of (8) is

E{Θ_kl[λ − t_k]} = e^{−((λ−t_k)/2)(σ²_ap + σ²_ue)}.   (9)
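A Monte-Carlo sanity check of the mean in (9) can be sketched as follows, treating the accumulated phase increments as real Gaussian so that the characteristic-function identity applies; the variance values and the gap length are illustrative, not from the paper:

```python
import numpy as np

# Theta_{kl}[lam - t_k] = exp(j * sum of Gaussian increments) should
# have mean exp(-(lam - t_k)/2 * (sigma_ap^2 + sigma_ue^2)) as in (9).
rng = np.random.default_rng(0)
sigma2_ue, sigma2_ap = 0.02, 0.01  # phase-increment variances (assumed)
gap = 5                            # lam - t_k (number of time instants)

increments = rng.normal(0.0, np.sqrt(sigma2_ue + sigma2_ap), size=(200_000, gap))
theta = np.exp(1j * increments.sum(axis=1))

empirical = theta.mean().real
theoretical = np.exp(-gap * (sigma2_ue + sigma2_ap) / 2)  # mean in (9)
print(empirical, theoretical)  # the two agree to a few decimals
```

The shrinking mean magnitude is exactly the factor that attenuates the usable part of the pilot observation in the MMSE estimator below.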
Then, applying the standard minimum mean square error (MMSE) estimation [2], the MMSE estimate ĥ_kl[λ] of the channel coefficient h_kl[λ] can be computed by each AP l as

ĥ_kl[λ] = E{h_kl[λ] z^H_l[t_k]} ( E{z_l[t_k] z^H_l[t_k]} )^{−1} z_l[t_k]
        (a)= √p_k θ*_kl E{Θ_kl[λ − t_k]} E{h_kl[λ] h^H_kl[λ]} ( E{z_l[t_k] z^H_l[t_k]} )^{−1} z_l[t_k]
        (b)= √p_k e^{−((λ−t_k)/2)(σ²_ap+σ²_ue)} θ*_kl R_kl ( Σ_{i∈P_k} p_i R_il + σ² I_N )^{−1} z_l[t_k],   (10)

where (a) follows from the independence between phases and channels, and from the fact that the channels of different UEs are independent of each other, while (b) follows from computing the variances of the channel (1) and the received signal (7), and from using the fact in (9). In addition, the distributions of the estimate ĥ_kl[λ] and the estimation error h̃_kl[λ] = h_kl[λ] − ĥ_kl[λ] are CN(0, Q_kl) and CN(0, R_kl − Q_kl), respectively, where

Q_kl = p_k e^{−(λ−t_k)(σ²_ap+σ²_ue)} R_kl Ψ_kl R_kl,   (11)
with

Ψ_kl = ( Σ_{i∈P_k} p_i R_il + σ² I_N )^{−1}.   (12)
Moreover, to simplify the notation, we define

Q̄_kil = √(p_k p_i) e^{−(λ−t_k)(σ²_ap+σ²_ue)} R_il Ψ_kl R_kl,   (13)
that represents the covariance matrix of the estimate for UE k and UE i at AP l. Note that each UE is assumed to be aware of the channel statistics and the signal detection is performed with the required channel distribution information available.
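The estimate in (10) can be sketched numerically for a single pilot-sharing UE (P_k = {k}) with N = 2 antennas; the correlation matrix, powers, delay phase, and random draws below are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np

# Sketch of the MMSE estimate (10) for one AP-UE pair, N = 2.
rng = np.random.default_rng(1)
N, p_k, sigma2 = 2, 1.0, 0.1
R = np.array([[1.0, 0.3], [0.3, 1.0]])  # R_kl (assumed)
gap, s2_ap, s2_ue = 1, 0.02, 0.02
theta = np.exp(-2j * np.pi * 0.1)       # delay phase theta_kl (assumed)

# Draw h_kl[lam] ~ CN(0, R) and the received pilot signal z_l[t_k] as in (7).
h = rng.multivariate_normal(np.zeros(N), R / 2, size=2).T @ np.array([1, 1j])
Theta = np.exp(1j * rng.normal(0, np.sqrt(gap * (s2_ap + s2_ue))))
z = np.sqrt(p_k) * theta * np.conj(Theta) * h + \
    np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# (10): hhat = sqrt(p_k) e^{-gap(s_ap^2+s_ue^2)/2} theta^* R Psi z
Psi = np.linalg.inv(p_k * R + sigma2 * np.eye(N))
hhat = np.sqrt(p_k) * np.exp(-gap * (s2_ap + s2_ue) / 2) * np.conj(theta) * R @ Psi @ z
print(hhat.shape)  # one N-dimensional channel estimate
```

Note how the oscillator statistics enter only through the deterministic shrinkage factor, while the delay phase is removed by multiplying with θ*_kl.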
Remark 2. To compare the estimation quality under different degrees of asynchronous reception, we utilize the normalized mean square error (NMSE) given as

NMSE_kl = tr(R_kl − Q_kl) / tr(R_kl),  ∀k, l,   (14)

which is an appropriate metric to measure the relative estimation error. Note that ĥ_kl[λ] and h̃_kl[λ] are independent due to the properties of MMSE estimation. Therefore, the values of NMSE are between 0 and 1, which denote perfect and extremely impaired estimation, respectively. Following similar steps as in [38, Section III], we derive the NMSE of the least-squares (LS) estimator for comparison as

NMSE^ls_kl = e^{(λ−t_k)(σ²_ap+σ²_ue)} tr(Ψ^{−1}_kl) / (p_k tr(R_kl)) − 1,  ∀k, l.   (15)

Note that since the estimate and the estimation error of LS are correlated, the value of NMSE can be higher than one.
As illustrated in Fig. 2, we compare the NMSE of the MMSE and LS channel estimators for different degrees of oscillator phase noise at SNR = 30 dB for all the links. It is clear that the MMSE estimation quality degrades as the oscillator phase variance increases. The reason is that the random oscillator phase destroys the orthogonality of the received pilot signals. We also find that the NMSE is sensitive to the oscillator phase when the number of pilots is sufficient. In some extreme asynchronous cases, exploiting non-orthogonal pilots can achieve a more accurate estimation. This result can be explained by (11) and (14): pilot contamination makes the NMSE worse but also causes it to change slowly. It is also found that the estimation accuracy of LS is generally lower than that of MMSE, and the LS estimator performs very poorly when the oscillator phase variances are large.
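The gap between the two estimators can be checked directly from (14) and (15) in the scalar case N = 1 with a single pilot-sharing UE; the parameter values below are illustrative, not those of Fig. 2:

```python
import numpy as np

# Scalar (N = 1) comparison of the MMSE NMSE (14) and the LS NMSE (15).
p_k, R_kl, sigma2 = 1.0, 2.0, 1.0  # power, correlation (scalar), noise
gap, s2_ap, s2_ue = 1, 0.05, 0.05  # lam - t_k and phase-increment variances
phase_var = gap * (s2_ap + s2_ue)

Psi_inv = p_k * R_kl + sigma2                                  # tr(Psi_kl^{-1}) for N = 1
Q = p_k * np.exp(-phase_var) * R_kl * (1.0 / Psi_inv) * R_kl   # (11)
nmse_mmse = (R_kl - Q) / R_kl                                  # (14)
nmse_ls = np.exp(phase_var) * Psi_inv / (p_k * R_kl) - 1       # (15)
print(nmse_mmse, nmse_ls)
```

With these numbers the MMSE NMSE stays well below the LS NMSE, matching the trend described above; making phase_var large drives the LS NMSE past one while the MMSE NMSE saturates at one.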
C. Rate-Splitting Strategy
Since imperfect CSI is inevitable in the considered CF system due to asynchronous issues, we propose the use of RS for the downlink to compensate for the possible performance degradation.

Fig. 3: An RS-assisted CF massive MIMO system [26].
The rationale behind the RS strategy for multi-user downlink is to first split the message for each UE into a private message and a common message, then combine all the common sub-messages into a super common message, and finally all the messages are simultaneously transmitted by means of superposition coding [26]. The principle of RS is shown in Fig. 3, specifically, the message W k ∈ C intended for UE k at the transmitter is split into a private part W p,k ∈ C and a common part W c,k ∈ C. The private parts W p,1 , . . . , W p,K are independently encoded into the private streams s 1 , . . . , s K , respectively, and the common parts of all the users, W c,1 , . . . , W c,K , are combined into a common message W c , which is encoded into a common stream s c using a public codebook. Then, these K + 1 streams are transmitted after linear precoding. At the receivers, each UE firstly decodes the common stream by treating all the private streams as noise. Then, SIC is adopted to remove the decoded common stream from the received signal under the assumption of error-free decoding 3 [29]. Each user then decodes its private stream by treating other private streams as noise.
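The decoding chain above (decode the common stream treating all private streams as noise, remove it via SIC, then decode the own private stream) can be sketched for a toy scalar channel; the function, effective gains, and powers are our illustrative assumptions, not quantities from the paper:

```python
import numpy as np

# Toy scalar illustration of RS decoding at UE k.
def rs_rates(p_c, p_k, p_others, g_c, g_k, g_others, noise=1.0):
    """Rates (bit/s/Hz) of the common and private streams at one UE.

    p_c / g_c: power and effective gain of the common stream;
    p_k / g_k: own private stream; p_others / g_others: other privates.
    """
    other_intf = sum(p * g for p, g in zip(p_others, g_others))
    sinr_c = p_c * g_c / (p_k * g_k + other_intf + noise)  # pre-SIC
    sinr_p = p_k * g_k / (other_intf + noise)              # after SIC of s_c
    return np.log2(1 + sinr_c), np.log2(1 + sinr_p)

rc, rp = rs_rates(p_c=4.0, p_k=1.0, p_others=[1.0], g_c=2.0, g_k=1.0, g_others=[0.5])
print(rc, rp)
```

Setting p_c = 0 recovers the conventional treat-interference-as-noise operation, since the common stream then carries no power.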
Utilizing the common precoding vector v c,l ∈ C N ×1 and the private precoding vector v kl ∈ C N ×1 in asynchronous CF massive MIMO systems with RS, the transmitted signal from AP l at the nth time instant is given by
x l [n] = √ p dc η l v c,l s c [n] + √ p dp µ l K i=1 v il s i [n],(16)
where s c ∈ C is the common message and s i ∈ C is the private message of UE i. Besides, p dc and p dp are the powers allocated to the common and private message, respectively. Moreover, η l and µ l are the power normalization parameters for the common and private precoding, respectively, and are respectively denoted by
η l 1 E v c,l 2 ,(17)µ l 1 K i=1 E v il 2 .(18)
We assume that the maximum downlink transmission power of each AP is the same and is denoted by p d [2]. Besides, the power splitting factor ρ (0 ρ 1) is defined to adjust the fraction of the power allocated to the transmission of the common messages at each AP 4 . Hence, we have p dc = ρp d and p dp = (1 − ρ) p d . In Section IV, we introduce a binary search method to determine the optimal power splitting factor. When ρ = 0, the RS-assisted CF massive MIMO system degenerates into the conventional CF massive MIMO.
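As a sketch of the power split and the normalizations in (17)-(18), the snippet below uses made-up dimensions and Monte-Carlo averaging in place of the expectations; all numerical values are illustrative assumptions, not the paper's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's setup).
N, K = 4, 8            # antennas per AP, number of UEs
p_d = 1.0              # per-AP downlink power budget
rho = 0.3              # power splitting factor, 0 <= rho <= 1

# Power split between common and private messages, as defined in the text.
p_dc = rho * p_d
p_dp = (1.0 - rho) * p_d

# Monte-Carlo estimates of the normalization parameters (17)-(18) for
# generic random (unnormalized) common and private precoding vectors.
v_c = (rng.standard_normal((10000, N)) + 1j * rng.standard_normal((10000, N))) / np.sqrt(2)
v_p = (rng.standard_normal((10000, K, N)) + 1j * rng.standard_normal((10000, K, N))) / np.sqrt(2)

eta_l = 1.0 / np.mean(np.sum(np.abs(v_c) ** 2, axis=-1))        # eta_l, (17)
mu_l = 1.0 / np.mean(np.sum(np.abs(v_p) ** 2, axis=(-2, -1)))   # mu_l,  (18)

# After scaling by eta_l and mu_l, the per-AP transmit power
# meets the budget p_d in expectation:
avg_power = p_dc * eta_l * np.mean(np.sum(np.abs(v_c) ** 2, axis=-1)) \
          + p_dp * mu_l * np.mean(np.sum(np.abs(v_p) ** 2, axis=(-2, -1)))
```

The scaling makes the expected common plus private power add up to exactly p_dc + p_dp = p_d, regardless of the raw precoder statistics.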
Remark 3. The RS strategy is promising in multi-user transmission with imperfect CSI [33].
In contrast to the conventional strategy that treats any multi-user interference originating from imperfect CSI as noise, the RS strategy has the ability both to treat the interference as noise and to partially decode it through the presence of a common message. This ability to decode part of the interference is the key to boosting the sum SE performance.
III. DOWNLINK ASYNCHRONOUS TRANSMISSION
In this part, we first focus on the effect of asynchronous reception on CF massive MIMO systems. Therefore, we assume the power splitting factor ρ = 0 for a conventional approach 5 , which makes p dc = 0 and p dp = p d . Then, considering both the coherent and non-coherent downlink transmission, we adopt both the delay phase used (DU) and delay phase forgotten (DF) maximum-ratio (MR) private precoding to derive the closed-form SE performance expressions.
A. Coherent Downlink Transmission
We assume that the CF massive MIMO applies the coherent joint transmission for the downlink transmission, which means that each AP sends the same data symbol to each UE as the other APs [39]. When the power splitting factor ρ = 0 (without RS), the transmitted signal in (16) reduces to
x l [n] = √ p dp µ l K i=1 v il s i [n].(19)
Then, the received signal of the kth UE at the nth time instant is
y p,co k [n] = √ p dp L l=1 g H kl [n] √ µ l v kl s k [n] + √ p dp K i =k L l=1 g H kl [n] √ µ l v il s i [n] + w k [n] ,(20)
where w k [n] ∼ CN (0, σ 2 d ) denotes the receiver noise at UE k. For minimizing the MSE, MSE kl = E{ |s k [n] − y p,co k [n] (v kl )| 2 | ĥ il [λ] }, we derive the local DU-MMSE private precoding vector as
v kl = θ kl p dp ( K i=1 ( p dp ĥ il [λ] ĥ H il [λ] + C il ) + σ 2 I N ) −1 ĥ kl [λ] , (21)
where C il = R il − Q il is the covariance matrix of the estimation error. Moreover, the low-complexity MR private precoding is exploited to obtain analytical results, which does not affect the validity of our conclusions. In the following, we introduce a theorem to study the system performance.
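A minimal numpy sketch of the local DU-MMSE private precoder in (21); the channel estimates, error covariances, and delay-phase terms below are random placeholders, not the paper's channel model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes and statistics; h_hat[i] plays the role of hat{h}_il[lambda].
N, K = 4, 3
p_dp, sigma2 = 1.0, 0.1
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, K))     # known delay-phase terms theta_il

h_hat = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
C = np.stack([0.05 * np.eye(N) for _ in range(K)])    # error covariances C_il = R_il - Q_il

def du_mmse_precoder(k):
    """Local DU-MMSE private precoder v_kl of (21), sketched with numpy."""
    # Regularized Gram matrix: sum_i (p_dp * h_hat_i h_hat_i^H + C_i) + sigma^2 I_N
    A = sigma2 * np.eye(N, dtype=complex)
    for i in range(K):
        A += p_dp * np.outer(h_hat[i], h_hat[i].conj()) + C[i]
    # v_kl = theta_kl * p_dp * A^{-1} h_hat_k
    return theta[k] * p_dp * np.linalg.solve(A, h_hat[k])

v_00 = du_mmse_precoder(0)
```

The matrix inverse in (21) is realized with a linear solve, which is the numerically preferred way to apply an inverse to a single vector.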
SE p,co,du k = (1/τ c ) Σ τc n=λ log 2 ( 1 + SINR p,co,du k [n] ) , (22)
with SINR p,co,du
k [n] = η ap n η ue n p dp L l=1 √ µ l tr (Q kl ) 2 p dp Ξ p,co,du k [n] − η ap n η ue n p dp L l=1 √ µ l tr (Q kl ) 2 + σ 2 d ,(23)
where Ξ p,co,du
k [n] = K i=1 L l=1 µ l tr (Q il R kl ) + (1 − η ap n ) K i∈P k L l=1 µ l tr Q kil 2 + η ap n K i∈P k L l=1 √ µ l tr Q kil 2 .(24)
Note that we define η ap n ≜ e −(n−λ)σ 2 ap and η ue n ≜ e −(n−λ)σ 2 ue .
Proof: See Appendix A.
Corollary 1. Keeping only the oscillator phase parameters, the approximate SINR expression of (23) as the number of antennas tends to infinity (LN → ∞) can be derived
as 1/ (1/ (Lη ap n η ue n ) + (L − 1) / (Lη ue n ) + a),
where a is a constant. It is clear that the SINR decreases as σ 2 ap and σ 2 ue increase, and σ 2 ue has a larger effect on the SINR compared with σ 2 ap . Moreover, increasing the number of APs can only significantly reduce the influence of the oscillator phase at the APs.
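The attenuation factors η ap n = e^{−(n−λ)σ²_ap} and η ue n = e^{−(n−λ)σ²_ue} and the large-antenna SINR trend of Corollary 1 can be checked numerically; the constant a and all parameter values below are illustrative assumptions.

```python
import numpy as np

def asymptotic_sinr(L, n, lam, sigma2_ap, sigma2_ue, a=0.1):
    """Large-antenna SINR trend of Corollary 1:
    1 / (1/(L*eta_ap*eta_ue) + (L-1)/(L*eta_ue) + a),
    with eta_ap = exp(-(n-lam)*sigma2_ap), eta_ue = exp(-(n-lam)*sigma2_ue).
    The constant a is a placeholder assumption."""
    eta_ap = np.exp(-(n - lam) * sigma2_ap)
    eta_ue = np.exp(-(n - lam) * sigma2_ue)
    return 1.0 / (1.0 / (L * eta_ap * eta_ue) + (L - 1) / (L * eta_ue) + a)

# Swapping the same variance between AP and UE shows the UE oscillator
# phase hurts more, as stated in Corollary 1:
s_ap = asymptotic_sinr(L=40, n=50, lam=10, sigma2_ap=0.01, sigma2_ue=0.001)
s_ue = asymptotic_sinr(L=40, n=50, lam=10, sigma2_ap=0.001, sigma2_ue=0.01)
```

Here s_ue < s_ap because η_ue multiplies the dominant (L − 1)/L term, while the AP-side factor is damped by the 1/L averaging across APs.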
From (23), we can find that the delay phase has no impact on the SINR expression when the DU-MR private precoding is adopted. To study and characterize the influence of the delay phase on the system, we investigate the SE performance using DF-MR private precoding with v kl =ĥ kl [λ]. Following the similar steps as in Theorem 1, we obtain SINR p,co,df
k [n] = η ap n η ue n p dp L l=1 θ * kl √ µ l tr (Q kl ) 2 p dp Ξ p,co,df k [n] − η ap n η ue n p dp L l=1 θ * kl √ µ l tr (Q kl ) 2 + σ 2 d ,(25)
where Ξ p,co,df
k [n] = K i=1 L l=1 µ l tr (Q il R kl ) + (1 − η ap n ) K i∈P k L l=1 µ l tr Q kil 2 + η ap n K i∈P k L l=1 θ * il √ µ l tr Q kil 2 .(26)
Comparing (23) and (25), it is clear that the propagation delay caused by different physical positions of the APs introduces random phases to the desired signal, which will reduce the SE performance of systems.
B. Non-Coherent Downlink Transmission
For alleviating the phase-synchronization requirements imposed on the APs by coherent transmission, we apply non-coherent joint transmission in the downlink CF massive MIMO, in which each AP sends a different data symbol to each UE than the other APs do [39]. Therefore, the transmitted signal from AP l at time instant n is expressed as
x l [n] = √ p dp µ l K i=1 v il s il [n],(27)
where s il [n] ∈ C is the symbol sent to UE i which is different for all the APs. Then, the received signal of the kth UE at the nth time instant is
y p,nc k [n] = √ p dp L l=1 g H kl [n] √ µ l v kl s kl [n] + √ p dp K i =k L l=1 g H kl [n] √ µ l v il s il [n] + w k [n] .(28)
Note that the kth UE needs to employ SIC after receiving the signals from all the L APs in order to detect the signals sent by the different APs [39]. Specifically, the UE first detects the signal received from the first AP while treating the remaining signals as interference. By analogy, when detecting the signal transmitted by the lth AP, the UE considers the signals sent from the (l + 1)th AP to the Lth AP as interference, thereby detecting the signal s kl [n]. Keep in mind that the SE is not affected by the APs' relative decoding order. However, a specific decoding order must be chosen before the individual signals can be encoded [9], [39].
with SINR p,nc,du k [n] = η ap n η ue n p dp L l=1 µ l |tr (Q kl )| 2 p dp Ξ p,nc,du
k [n] − η ap n η ue n p dp L l=1 µ l |tr (Q kl )| 2 + σ 2 d ,(30)
where Ξ p,nc,du
k [n] = K i=1 L l=1 µ l tr (Q il R kl ) + K i∈P k L l=1 µ l tr Q kil 2 .(31)
Proof: Follow the SIC operation for non-coherent transmission in [9, Appendix C] and similar steps to those used for proving Theorem 1.
Remark 4. Note that if the DF-MR private precoding is used, we can also derive SINR p,nc,df k [n] with the same SINR expression as (30). Therefore, we conclude that non-coherent transmission can effectively overcome the influence of asynchronous reception, and even eliminate the influence of the delay phase, at the expense of a lower SE. In future CF networks, a trade-off between high performance and strong robustness may be achieved by a hybrid scheme that includes both coherent and non-coherent transmission schemes.
C. Power Control for Private Messages
Moreover, various power control methods can be adopted in our systems to further improve the system performance. To this end, we rewrite (19) as
x l [n] = √ p dp K i=1 √ µ il v il s i [n],(32)
where µ il 0 is the power control coefficients chosen to satisfy the downlink power constraint as E |x l [n]| 2 p dp . With the help of the statistical channel cooperation power control in our previous work [9], we derive
µ kl =β α k K i=1 tr (Q il )β α i , k = 1, . . . K, l = 1, . . . L,(33)
where the channel inversion rate α = −1 is chosen to enhance the SE performance of poor UEs, and β k denotes the global statistical channel given as
β k = L l=1 β kl L .(34)
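A small sketch of the statistical channel cooperation power control (33)-(34) with α = −1; the large-scale fading values and tr(Q il) below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
L, K = 5, 4
beta = rng.uniform(0.1, 1.0, size=(K, L))   # large-scale fading beta_kl (placeholder)
trQ = rng.uniform(0.05, 0.5, size=(K, L))   # tr(Q_il) per UE/AP (placeholder)

beta_bar = beta.mean(axis=1)                # global statistical channel (34)
alpha = -1.0                                # channel inversion rate

# Statistical channel cooperation power control (33):
# mu_kl = beta_bar_k^alpha / sum_i tr(Q_il) * beta_bar_i^alpha
mu = np.empty((K, L))
for l in range(L):
    denom = np.sum(trQ[:, l] * beta_bar ** alpha)
    mu[:, l] = beta_bar ** alpha / denom
```

By construction sum_k mu_kl * tr(Q_kl) = 1 at every AP, so the downlink power constraint is met with equality in expectation; α = −1 inverts the global statistical channel, boosting UEs with poor conditions.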
Note that max-min and max-sum SE power control methods can also be applied to the case, which has been investigated in [38], [40]. However, the sum SE gain of power control schemes on the high-density CF massive MIMO system is limited, because the global statistical channel difference of each UE is significantly reduced with the increase of the number of APs [40].
IV. RATE-SPLITTING ASSISTED CF MASSIVE MIMO
In this section, we focus on the performance of the RS-assisted CF massive MIMO systems with asynchronous reception. By treating the private messages as noise, we first derive the closed-form SE performance expressions for the common messages. After decoding the common messages, the remaining private messages are decoded by following the same steps as in Section III. Besides, we propose a binary search-based method to find the optimal power splitting factor to maximize the downlink sum SE performance. Moreover, we design the robust precoding for the common message to mitigate the impact caused by asynchronization.
A. Coherent Downlink Transmission
Adopting the coherent joint transmission in RS-assisted CF massive MIMO systems, the transmitted signal can be divided into the common and the private parts as (16). Then, the received signal by UE k at the nth time instant is expressed as
y c,co k [n] = L l=1 g H kl [n] x l [n] + w k [n] = √ p dc L l=1 √ η l g H kl [n] v c,l s c [n] + √ p dp K i=1 L l=1 √ µ l g H kl [n] v il s i [n] + w k [n] ,(35)
where w k [n] is the receiver noise. Keeping in mind that the channels of the UEs in the high antenna density regime tend to be asymptotically orthogonal [4], [29], we assume that v c,l can be written as a linear combination of the private precoding vectors, v c,l = K i=1 a il v il . It is worth noting that one simple low-complexity common precoding is given by a il = 1, ∀i, l, which is applied in our analysis as a benchmark scheme. The resulting SINR of the common message is
SINR c,co,du k [n] = p dc η ap n η ue n | L l=1 √ η l K i∈P k a il tr(Q kil )| 2 / ( p dc Γ c,co,du k [n] + p dp Ξ p,co,du k [n] + σ 2 d ) , (37)
where Γ c,co,du
k [n] = (1 − η ap n ) L l=1 η l K i∈P k K j∈P i a * il a jl tr Q kil tr Q kjl + L l=1 η l K i=1 K j∈P i a * il a jl tr Q ijl R kl + η ap n (1 − η ue n ) L l=1 √ η l K i∈P k a il tr Q kil 2 .(38)
Proof: See Appendix B.
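The benchmark common precoder v c,l = Σ i a il v il with a il = 1 simply sums the private precoders, as the following sketch (with random placeholder private precoders) shows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical private precoders v_il for one AP (placeholders); the common
# precoder is the weighted sum v_c,l = sum_i a_il v_il.
N, K = 4, 3
v_private = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

a = np.ones(K)                                   # benchmark choice a_il = 1
v_common = (a[:, None] * v_private).sum(axis=0)  # v_c,l = sum_i a_il v_il
```

With a il = 1 this reduces to a plain sum over the UEs' private precoders, which is why it is a low-complexity benchmark; the robust design later optimizes the weights a il instead.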
Corollary 2. Applying the same operation as in Theorem 1 to (37), its approximate SINR expression with LN → ∞ is obtained as 1/ (1/ (Lη ap n η ue n ) + (L − 1) / (Lη ue n ) + b), where b is a constant. It is obvious that the decoding of the common messages is weakened by the oscillator phase, especially at the UE side, which leads to a negative impact on the SE performance of RS.
Furthermore, we investigate the SE performance of the common message by adopting the DF-MR private precoding v kl =ĥ kl [λ] and the simple common precoding. Following the similar steps as in Theorem 3, we obtain SINR c,co,df
k [n] = p dc η ap n η ue n L l=1 √ η l K i∈P k a il θ * il tr Q kil 2 p dc Γ c,co,df k [n] + p dp Ξ p,co,df k [n] + σ 2 d ,(39)
where Γ c,co,df
k [n] = (1 − η ap n ) L l=1 η l K i∈P k K j∈P i a * il a jl θ il θ * jl tr Q kil tr Q kjl + L l=1 η l K i=1 K j∈P i a * il a jl θ il θ * jl tr Q ijl R kl + η ap n (1 − η ue n ) L l=1 √ η l K i∈P k a il θ * il tr Q kil 2 .(40)
From (39), we find that the delay phase effect on the common messages is related to both the APs and the UEs, specifically to the different physical positions of the APs and to the pilots shared among partial UEs.
B. Non-Coherent Downlink Transmission
When applying the non-coherent joint transmission in RS-assisted CF massive MIMO systems, each AP can send different common and private data symbols than the other APs. Therefore, the transmit signal from AP l at time instant n is expressed as
x l [n] = √ p dc η l v c,l s c,l [n] + √ p dp µ l K i=1 v il s il [n].(41)
Compared with (16), the APs no longer cooperate with each other in (41), which greatly reduces the synchronization requirements. Then, the received signal by UE k at the nth time instant is given by
y c,nc k [n] = L l=1 g H kl [n] x l [n] + w k [n] = √ p dc L l=1 √ η l g H kl [n] v c,l s c,l [n] + √ p dp K i=1 L l=1 √ µ l g H kl [n] v il s il [n] + w k [n] . (42)
where Γ c,nc,du
k [n] = L l=1 η l K i=1 K j∈P i a * il a jl tr Q ijl R kl + (1 − η ap n η ue n ) L l=1 η l K i∈P k a il tr Q kil 2 .(45)
Proof: Follow similar steps to those used for proving Theorems 2 and 3, but treat the private messages as interference.
Moreover, we study the SE performance of the common message by using the DF-MR private precoding v kl =ĥ kl [λ] and the simple common precoding v c,l = K i=1 v il . Following similar steps as in Theorem 4, the SINR of the common message is derived as
SINR c,nc,df k [n] = p dc η ap n η ue n L l=1 η l K i∈P k a il θ * il tr Q kil 2 p dc Γ c,nc,df k [n] + p dp Ξ p,nc,df k [n] + σ 2 d ,(46)
where Γ c,nc,df
k [n] = L l=1 η l K i=1 K j∈P i a * il a jl θ il θ * jl tr Q ijl R kl +(1−η ap n η ue n ) L l=1 η l K i∈P k a il θ * il tr Q kil 2 . (47)
Remark 5. It is worth noting that the common messages are decoded by all the UEs, which is different from the private messages, each of which is only decoded by its corresponding UE. Therefore, when adopting the non-coherent transmission and the delay phase is unknown, the SE of the private message is not affected by the delay phase, as stated in Remark 4. However, the SE of the common message is still affected by the delay phase, as shown in (47).
C. Power Splitting Factor to Maximize the Downlink Sum SE
After the common messages have been removed from the received signal using SIC, the private messages can be decoded from the remaining signal as shown in Section III. Finally, we obtain the downlink sum SE as
SSE = SE c + K k=1 SE p k .(48)
Remark 6. It is clear that as the power splitting factor ρ increases, the SE of the common messages increases while the SE of the private messages decreases. Therefore, the downlink sum SE is
Algorithm 1 The Proposed Binary Search for Optimal Power Allocation
Initialization: Choose the initial values ρ min = 0 and ρ max = 1; choose a tolerance ε > 0 and an increment 0 < ∆ρ ≪ ε;
Output: the power splitting factor ρ;
1: SSE max = SSE (ρ * ) = max {[SSE (ρ min ) , SSE (ρ max )]} and ρ = ρ * ; 2: while ρ max − ρ min > ε do 3:
Set ρ next = (ρ max + ρ min ) /2; 4: SSE next = SSE (ρ next ) and SSE ∆ = SSE (ρ next + ∆ρ);
5:
If SSE ∆ > SSE next , then set ρ min = ρ next , else set ρ max = ρ next ;
6: If SSE next > SSE max , then set SSE max = SSE next and ρ = ρ next ;
7: end while
a unimodal function of ρ, and there is an optimal ρ that maximizes it 6 . The optimal ρ can be found via a simple binary search method, as in Algorithm 1.
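Algorithm 1 can be sketched as follows; the sum-SE function passed in is a toy unimodal surrogate (an assumption for illustration), not the closed-form expression (48).

```python
def binary_search_rho(sse, eps=1e-3, d_rho=1e-4):
    """Binary search of Algorithm 1 for the power splitting factor that
    maximizes sse(rho), assuming sse is unimodal on [0, 1].
    `sse` is any callable standing in for the sum SE (48)."""
    rho_min, rho_max = 0.0, 1.0
    # Step 1: initialize with the better endpoint.
    best_rho = max((rho_min, rho_max), key=sse)
    best_val = sse(best_rho)
    while rho_max - rho_min > eps:
        rho = 0.5 * (rho_min + rho_max)             # step 3
        val, val_d = sse(rho), sse(rho + d_rho)     # step 4
        if val_d > val:      # step 5: still increasing -> optimum on the right
            rho_min = rho
        else:
            rho_max = rho
        if val > best_val:   # step 6: track the best value seen
            best_val, best_rho = val, rho
    return best_rho

# Toy unimodal surrogate with maximum at rho = 0.4 (an assumption):
rho_star = binary_search_rho(lambda r: -(r - 0.4) ** 2)
```

The finite difference sse(ρ + ∆ρ) − sse(ρ) plays the role of a local slope test, which is why the method needs ∆ρ ≪ ε and a unimodal objective.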
D. Robust Precoding Design for Common Message
In the downlink, given realizations of the large-scale fading, we design the precoding weights for the common messages 7 , a il , i = 1, . . . , K, l = 1, . . . , L, that maximize the minimum downlink common SE over all the UEs, under the power constraint. At the optimum, all the UEs achieve the same SE, so we have the following max-min optimization problem: 6 The optimal power splitting factor maximizing the sum SE varies with the simulation parameters, e.g., the SNR and the numbers of APs and UEs [26]. Therefore, it is important to adjust the power splitting factor to unleash the potential of the RS technology. 7 Here, we focus on the precoding of the common message, since the precoding technology for the private message in CF massive MIMO is already well established, including MR, zero-forcing, and MMSE [2], [8].
max {a kl } min k=1,...,K SE c k [n] s.t. E v c,l 2 1, ∀l, a kl 0, ∀k, l,(49)
where SE c k [n] is given by (37). After some straight-forward transformation of (37), the problem in (49) is equivalent to 8
max {a} min k=1,...,K p dc η ap n η ue n a H b k 2 p dc (1−η ap n ) a H H k a+p dc a H M k a+p dc η ap n (1−η ue n ) |a H b k | 2 +p dp Ξ co,du k [n]+σ 2 d s.t. Θ 1 2 l a l 1, ∀l, a kl , 0, ∀k, l,(50)
with
a = [a T 1 , . . . , a T L ] T ∈ C KL ,(51)b k = [b k11 , . . . , b kK1 , . . . , b k1L , . . . , b kKL ] T ∈ C KL ,(52)H k = diag (H k1 , . . . , H kL ) ∈ C KL×KL ,(53)M k = diag (M k1 , . . . , M kL ) ∈ C KL×KL ,(54)Θ l = b 11l · · · b 1Kl . . . . . . . . . b K1l · · · b KKl ∈ C K×K ,(55)
where
a l = [a 1l , . . . , a K1 ] T ∈ C K ,(56)b kil = tr Q kil , i ∈ P k 0, i / ∈ P k ,(57)H kl = h k11 · · · h k1K . . . . . . . . . h kK1 · · · h kKK with h kij = tr Q kil tr Q kjl , i, j ∈ P k 0, i / ∈ P k /j / ∈ P k ,(58)M kl = m k11 · · · m k1K . . . . . . . . . m kK1 · · · m kKK with m kij = tr Q ijl R kl , j ∈ P i 0, j / ∈ P i .(59)
With the help of [2, Proposition 1], it can be shown that the problem in (50) is quasi-concave.
Therefore, the problem in (50) can be efficiently solved by using the bisection method, which 8 Compared with [41], although the same motivation is adopted to design the precoding for the common messages, our design takes the residual interference into account in the common message precoder, whereas the former did not.
Algorithm 2 Bisection Algorithm for Solving (50) Initialization: Initialize t min and t max , which define a range of relevant values of the objective function in (50). Choose a tolerance ε > 0.
Output: The precoding weight matrix a;
1: while t max − t min > ε do 2:
Set t = (t max + t min ) /2. Solve the following convex feasibility program:
p dc η ap n η ue n a H b k √ t u n,k , ∀k, Θ 1 2 l a l 1, ∀l,
a kl 0, ∀k, l.
(60) 3: Besides, u n,k is defined as
p dc (1−η ap n )H 1 2 k a T , √ p dc M 1 2 k a T , p dc η ap n (1−η ue n )a H b k , p dp Ξ co,du k [n]+σ 2 d T 4:
If problem (60) is feasible, then set t min = t, else set t max = t.
5: end while
solves a sequence of convex feasibility problems in each step. The detailed scheme is presented in Algorithm 2. The outer while-loop in Algorithm 2 performs a bisection search for the optimal SINR value, which halves the search space for the max-min SINR value in every iteration.
Therefore, the overall algorithm converges rapidly.
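The outer bisection of Algorithm 2 can be sketched as follows, with the convex feasibility program (60) replaced by a stand-in oracle; in practice the oracle would be a call to a convex solver.

```python
def bisection_max_min(feasible, t_min=0.0, t_max=100.0, eps=1e-6):
    """Outer bisection of Algorithm 2: halve the search range for the
    max-min SINR target t, given a feasibility oracle for problem (60).
    `feasible(t)` stands in for solving the convex feasibility program."""
    while t_max - t_min > eps:
        t = 0.5 * (t_min + t_max)
        if feasible(t):      # (60) feasible -> a larger target may work
            t_min = t
        else:                # infeasible -> lower the target
            t_max = t
    return t_min

# Stand-in oracle: targets up to 7.5 are achievable (purely illustrative).
t_star = bisection_max_min(lambda t: t <= 7.5)
```

Each iteration halves the interval [t_min, t_max], so about log2((t_max − t_min)/ε) oracle calls suffice, which is why the overall algorithm converges rapidly.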
V. NUMERICAL RESULTS AND DISCUSSION
We adopt the three-slope propagation model in a simulation setup in which L APs and K UEs are uniformly and independently distributed within a square of size 100 m × 100 m 9 [2].
It is assumed that the carrier frequency is f c = 2 GHz, the bandwidth is B = 20 MHz, and the noise power is σ 2 = −96 dBm. In addition, the pilot and data transmission power are p = 20 dBm and p d = 23 dBm, respectively [2]. Besides, the length of a coherence block is T = 2 ms consisting of τ c = 200 channel uses [8]. Moreover, the symbol duration is T s = 10 µs and the oscillator coefficients for all the APs and the UEs are c i = 1 × 10 −18 , i = k, l [19]. 9 Deploying high-density APs in a small area facilitates the establishment of channel hardening and favorable propagation to ensure the accuracy of our results [4]. Taking into account the oscillator and delay phases respectively, Fig. 4 shows the CDF of the downlink sum SE for the considered CF massive MIMO under coherent transmission with DU-MMSE and DU-MR precoding. It is clear that the DU-MMSE precoding outperforms the DU-MR precoding, due to its excellent interference cancellation capability. Moreover, the oscillator phase has a detrimental effect on the SE performance, as the coherent data transmission gain of CF massive MIMO systems is destroyed by the asynchronous reception. Besides, since the delay phase is perfectly known and is exploited by the DU-MR precoding, coherent transmission is preserved and the SE performance is unaffected, as accurately predicted by (23). We also find that applying RS can increase the downlink sum SE by 2.2 bit/s/Hz under DU-MR precoding, because part of the interference is broadcast as a common message that is decoded and cancelled by all the UEs.
Moreover, our simulation results have also confirmed the accuracy of our derived closed-form SE expressions.
Considering different capacity-bound techniques, including the UatF bound [38], the estimation bound [4], and the achievable rate (UEs have perfect CSI [42]), Fig. 5 shows the downlink per-user SE. The high AP density (4000 APs/km 2 ) 10 enhances channel hardening such that the UatF rate becomes a tight bound, while the varying oscillator phase limits the use of a long coherent block length. Moreover, we also find that the statistical channel cooperation power control can achieve a good SE performance gain for UEs with poor channel conditions. As shown in Fig. 6, the sum SE of coherent transmission drops rapidly as the oscillator phase variance increases. The reason is that coherent transmission requires highly accurate synchronization.
For example, in the case with DF-MR, the existence of the delay phase degrades the efficiency of coherent transmission. Note that the delay phase caused by the offset of heterogeneous propagation distances is often several hundred times the wavelength (λ), which affects the system much more severely than the oscillator phase. Therefore, for the precoding design of the distributed architecture, it is necessary to obtain an accurate delay phase by advanced methods such as positioning technology [15], [23]. In addition, we also find that when the 10 The CF massive MIMO system allows the high-density deployment of APs, e.g., via radio strips, which is highly practical for deployment at hot spots such as railway stations, museums and factories [43].
oscillator phase variance varies from −50 dB to −20 dB, the sum SE gain of RS in the case with DU-MR precoding decreases from 2.2 bit/s/Hz to 0.5 bit/s/Hz. Even worse, for the case with DF-MR precoding, the sum SE gain of RS is less than 0.1 bit/s/Hz. This is because accurate synchronization is the key to realizing the gains of RS. For the non-coherent case in Fig. 6 (b), where there is no cooperation among APs, the delay phase has no impact on the CF massive MIMO system with non-coherent transmission when the RS technology is not adopted. In fact, as a certain accuracy in the delay synchronization is needed for RS, the sum SE gain of RS with DU-MR precoding outperforms that with DF-MR precoding under non-coherent transmission.
Besides, by comparing Fig. 6 (a) and Fig. 6 (b), we also find that the sum SE performance of the non-coherent transmission case is better than that of the coherent transmission when the delay phase is unknown. In particular, the random phase shift not only hinders effective cooperation among the APs, but also increases the multiuser interference.
Taking the user-centric concept into consideration and achieving it through dynamic cooperation clustering [40], [42], [44], Fig. 7 presents the downlink sum SE of CF massive MIMO systems against the oscillator phase variance under coherent transmission. In order to verify whether the dynamic cooperation clustering has an impact on the asynchronous reception error, we do not consider pilot contamination by setting τ p = K to ensure a relatively fair comparison.
It can be found that dynamic cooperation clustering does have a certain positive effect on CF massive MIMO systems when the delay phase is unknown, but it is not obvious. This is due to the fact that a small cluster size helps to reduce the differences in signal arrival time introduced by the geographically distributed APs. However, when compared with the wavelength, the offset of the heterogeneous propagation distances within the same cluster is still considerably large. Therefore, dynamic cooperation clustering cannot fundamentally eliminate the asynchronous reception error, but it can be adopted to assist existing synchronization schemes to achieve low-complexity and high-speed calibration in CF massive MIMO systems [22].
The downlink sum SE for the CF massive MIMO system with the DU-MR precoding and coherent transmission is shown in Fig. 8, as a decreasing function of the oscillator phase variances at the AP and UE. We notice that the oscillator phase of the UE has a more significant negative impact on the SE performance than that of the APs. For instance, for the case LN = 80, increasing σ 2 ap from −50 dB to −20 dB results in a 6.4 bit/s/Hz sum SE loss, whereas the same increment in σ 2 ue causes an 11.3 bit/s/Hz sum SE loss. It is worth noting that the negative influence becomes more pronounced as the number of antennas increases. Furthermore, the downlink sum SE saturates as the transmit power grows. The reason is that although the power of the desired signal increases, the power of inter-user interference due to imperfect CSI also increases. Besides, it can be observed that RS can enhance the system performance in such cases, but with diminishing returns in the high transmit power regime. In fact, the residual interference caused by the beamforming gain uncertainty term of the UatF bound always leads to saturation of the downlink sum SE, even after the application of RS. From Fig. 9, it is observed that the achievable rate with perfect CSI and without RS also saturates due to the strong inter-user interference caused by asynchronous issues.
Remarkably, RS shows its robustness in this case, since the downlink sum SE does not saturate thanks to the introduction of the common message. Since the power splitting factors of all the APs are assumed to be the same, the sum achievable rate of the RS-assisted CF massive MIMO system grows slowly with the transmit power once the RS starts to work. Fig. 10 shows the downlink sum SE with the proposed robust common precoding. In addition, the case without RS and the case with simple common precoding are considered for comparison. We first assume that the APs are randomly deployed in the square simulation area. It is found that the downlink sum SE of RS with the simple common precoding is increased by 1.5 bit/s/Hz compared to that without RS. Yet, the downlink sum SE of RS with the robust common precoding is only increased by a further 0.2 bit/s/Hz compared to that with the simple common precoding. The reason is that the SE of the common messages in RS is limited by the channel condition of the worst UE, since the common messages must be successfully decoded by all the UEs. Fortunately, the uniform coverage property of CF massive MIMO systems makes RS suitable for deployment with simple common precoding to obtain large performance gains. Besides, we consider an extreme scenario with non-uniform coverage where the APs are deployed as a radio strip on one side of the square simulation area. It is clear that the sum SE gain of the simple common precoding is then less than half of the sum SE gain of the robust common precoding.
Actually, the oscillator phases of different UEs are different, which also increases the performance differences among UEs. From Fig. 10, it can be observed that when the oscillator of each UE is different, the sum SE gain of the simple common precoding is only 0.1 bit/s/Hz, whereas the sum SE gain of the robust common precoding can reach 1 bit/s/Hz. Therefore, it is necessary to adopt the robust common precoding in the case of serious oscillator mismatch.
The downlink sum SE of asynchronous CF massive MIMO with our proposed robust common precoding under coherent transmission and DF-MR private precoding is illustrated in Fig. 11.
Due to the strict delay synchronization requirements imposed by the RS, the sum SE gain of the simple precoding for the common messages is not more than 0.1 bit/s/Hz. However, the RS with the robust common precoding can effectively overcome the influence of delay asynchronization and obtain a 1.6 bit/s/Hz sum SE gain. Therefore, when the RS is deployed in CF massive MIMO systems with an unknown delay phase, robust precoding for the common messages is necessary.
VI. CONCLUSION
We investigated the performance of CF massive MIMO systems with asynchronous reception, including both the delay and oscillator phases. Besides, the RS strategy relying on the transmission of common and private messages was adopted to reduce the multiuser interference caused by imperfect CSI. Moreover, a robust precoding for the common message was designed to mitigate the effect of asynchronous reception on CF massive MIMO systems. Specifically, taking into account coherent and non-coherent transmission, we first derived novel closed-form SE expressions for RS-assisted CF massive MIMO systems with channel estimation errors caused by phase-asynchronization and pilot contamination. It was shown that asynchronous reception destroys the pilot orthogonality and coherent data transmission, resulting in poor system performance. In particular, obtaining an accurate delay phase is important for CF massive MIMO systems to realize coherent transmission. Moreover, it is interesting that the oscillator phase of the UEs has a more dominant effect on the SE performance than that of the APs, while increasing the number of antennas can only significantly reduce the influence of the oscillator phase at the APs.
Furthermore, it was proved that RS significantly improves the performance of CF massive MIMO systems, but it is seriously affected by the asynchronous phases, especially the delay phase and the oscillator phase of the UEs. Also, we designed a robust precoding to maximize the minimum individual SE of the common message. It was found that the proposed robust precoding for the common messages performs well in some extreme scenarios, e.g., serious oscillator mismatch and unknown delay phase. In our future work, we will investigate how the RS realizes scalable applications in user/cell-centric CF networks [44]. Besides, the impacts of imperfect SIC in RS systems should also be considered [45].
APPENDIX A PROOF OF THEOREM 1
We can derive the use-and-then-forget (UatF) capacity bound with SINR p k [n] is given by [9] p dp
L l=1 E g H kl [n] √ µ l v kl DS p kl [n] 2 / p dp K i=1 E L l=1 g H kl [n] √ µ l v il 2 INT p i [n]
−p dp 11 . It is assumed the DU-MR precoding v kl = θ klĥkl [λ] is used. Then, the normalization parameter regarding the precoding in (18) can be written as µ l = 1/ K i=1 tr (Q il ). Substituting (5) into DS p kl [n], and using the definition and property of Θ kl in (8) and (9)
+ L l=1 L m =l E g H kl [n] √ µ l θ ilĥil [λ] * g H km [n] √ µ m θ imĥim [λ] Υ 2 .(62)
Besides, using (5), (8) and the property that |exp (jx)| = 1 for any real number x and j being the imaginary unit, we derive
Υ 1 = E h H kl [λ] Θ * kl [n − λ] θ * kl √ µ l θ ilĥil [λ] 2 = µ l E h H kl [λ]ĥ il [λ] 2 .
(63) 11 Note that DS k is a part of INT k ; DS k is subtracted from INT k to obtain the beamforming gain uncertainty as part of the residual interference.
Following the similar steps of [9, Eq. (62)], we can write (63) as Υ 1 = µ l tr (Q il R kl ) + µ l tr Q kil 2 , i ∈ P k 0, i / ∈ P.
By using (5) and (8), we obtain
Υ 2 = θ kl θ * il θ * km θ im √ µ l µ m E {Θ kl [n − λ] Θ * km [n − λ]} Υ 3 × E h H kl [λ]ĥ il [λ] Υ 4 * E h H km [λ]ĥ im [λ] .(65)
By utilizing the definition and property of Θ kl in (8) and (9), we derive
Υ 3 = E e
Based on the properties of MMSE estimation withĥ kl [λ] andh kl [λ] are independent, we have
Υ 4 = E ĥ H kl [λ]ĥ il [λ] = tr E ĥ il [λ]ĥ H kl [λ] .(67)
Substituting (10) into (67), we then obtain
Υ 4 = θ kl θ * il tr Q kil , i ∈ P k 0, i / ∈ P k .(68)
Finally, with the help of (66) and (68) and plugging (64)
0, i / ∈ P k ,(69)
and this completes the proof.
APPENDIX B PROOF OF THEOREM 3
Using the UatF bound [9], we can derive the SINR c k [n] of the common messages as
p dc L l=1 E g H kl [n] √ η l v c,l DS c kl [n] 2 / p dc E L l=1 g H kl [n] √ η l v c,l 2 INT c k [n] −p dc L l=1 DS c kl [n] 2 +p dp K i=1 INT p i [n]+σ 2 d ,
where DS c kl [n] is the desired common signal. Besides, INT c k [n] is subtracted by DS c kl [n] to obtain the residual interference caused by beamforming gain uncertainty [9]. Note that INT p i [n] is the interference from the private signal, which is given by (69). It is assumed that the DU-MR private precoding v kl = θ klĥkl [λ] and linear common precoding v c,l = K i=1 a il v il is used. Substituting (5) . (72) Substituting (5) and v c,l = K i=1 a il θ ilĥil [λ] into Υ 5 , we have
Υ 5 = K i=1 a il a il E h H kl [λ]ĥ il [λ] 2 Υ 7 + K i=1 K j =i a il a jl θ * il θ jl E h H kl [λ]ĥ il [λ] * h H kl [λ]ĥ jl [λ] Υ 8 .
With the help of (66) and (70), we obtain Υ 6 = e −(n−λ)σ 2 ap K i∈P k a il tr Q kil K j∈P k a jm tr Q kjm .
Besides, Υ 7 can be derived by (64). Following the similar steps, we can obtain Υ 8 as Υ 8 = θ il θ * jl tr R klQijl + θ il θ * jl tr Q kil tr Q kjl , i ∈ P k 0, i / ∈ P k , j ∈ P i .
and the result follows immediately.
Fig. 1: Asynchronous reception in a CF massive MIMO system.

Fig. 2: The NMSE of MMSE and LS channel estimation with different oscillator phase variances at SNR = 30 dB (L = 100, N = 2, σ²_ap = σ²_ue = σ²).

Theorem 1: With the help of the channel estimate (10) and the received signal (20), using the DU-MR private precoding v_kl = θ_kl ĥ_kl[λ], the downlink achievable rate of UE k

Theorem 2: Based on the received signal (28) and using the DU-MR private precoding v_kl = θ_kl ĥ_kl[λ], the downlink achievable rate of UE k is lower bounded as SE^p,

Theorem 3: For the DU-MR private precoding v_kl = θ_kl ĥ_kl[λ] and the simple common precoding v_c,l = Σ_{i=1}^{K} v_il, a lower bound on the achievable rate of the common message of UE k is SE^c,

Theorem 4: For the DU-MR precoding v_kl = θ_kl ĥ_kl[λ] and the simple common precoding, the achievable rate of the common message of UE k is lower bounded by SE^{c,nc,du}_k = (1/τ_c) Σ_{n=λ}^{τ_c} log₂(1 + SINR^{c,nc,du}_k[n]),

Fig. 4: CDF of the downlink sum SE for CF massive MIMO systems under coherent transmission (L = 40, K = 8, N = 2, τ_p = 4). (a) DU-MMSE; (b) DU-MR.

Fig. 5: CDF of the downlink per-user SE for synchronous CF massive MIMO systems with DU-MR combining under coherent transmission (L = 40, K = 8, N = 2, τ_p = 4).

Fig. 6: Downlink sum SE for CF massive MIMO systems with different oscillator phase variances (L = 40, K = 8, N = 2, τ_p = 4, σ²_ap = σ²_ue = σ²).

Fig. 7: Downlink sum SE for CF massive MIMO systems with different oscillator phase variances under coherent transmission (L = 40, K = 8, N = 2, τ_p = K, σ²_ap = σ²_ue = σ²).

Fig. 6 compares the downlink sum SE of the CF massive MIMO system against the oscillator phase variance under the coherent and non-coherent transmissions. It is clear that the performance of the coherent transmission and DU-MR drops rapidly with the oscillator phase variance.

Fig. 8: Downlink sum SE for CF massive MIMO systems against different oscillator phase variances at AP and UE (K = 8, τ_p = 4).

Fig. 9: Downlink sum SE for CF massive MIMO systems against different transmit power per AP (L = 20, K = 4, N = 2, τ_p = 4).

can be obtained by Remark 1. Moreover, it is shown that varying the number of antennas from LN = 80 to LN = 400 introduces a sum SE gain of 1.4 bit/s/Hz for the case σ²_ap = −50 dB, σ²_ue = −20 dB, and leads to an 8.1 bit/s/Hz sum SE gain for the case σ²_ap = −20 dB, σ²_ue = −50 dB. The reason is that more antennas promise higher antenna array gains, offering more degrees of freedom to compensate for the negative impact caused by the oscillator phase at the AP. Besides, since the common messages are decoded by all the UEs, the performance gain of the RS is also more sensitive to the oscillator phase of the UEs than to that of the APs. For example, the sum SE gain of RS for the case σ²_ap = −20 dB, σ²_ue = −50 dB is 2.2 bit/s/Hz, but the sum SE gain for the case σ²_ap = −50 dB, σ²_ue = −20 dB is only 0.8 bit/s/Hz. Taking into account the impacts of perfect and imperfect CSI, Fig. 9 shows the downlink sum SE for CF massive MIMO systems against the data transmit power under coherent transmission and DU-MR private precoding. It is clear that the downlink sum SE of conventional CF massive MIMO systems (UatF bound without RS) gradually saturates as the data transmit power increases.

Fig. 10: Downlink sum SE for asynchronous CF massive MIMO systems with three signal processing schemes under different AP settings (L = 40, K = 8, N = 2, τ_p = 4).

Fig. 11: Downlink sum SE for asynchronous CF massive MIMO systems with square setting for APs under coherent transmission (L = 40, K = 8, N = 2, τ_p = 4).

Fig. 10 presents the downlink sum SE of asynchronous CF massive MIMO with our proposed robust precoding for the common message under the coherent transmission and DU-MR private precoding.
where DS^p_kl[n] denotes the desired signal and INT^p_i[n] is the interference from other UEs. Substituting (5), we can derive

DS^p_kl[n] = E{ h^H_kl[λ] Θ*_kl[n−λ] θ*_kl √μ_l θ_kl ĥ_kl[λ] } = E{Θ*_kl[n−λ]} √μ_l E{ h^H_kl[λ] ĥ_kl[λ] } = e^{−((n−λ)/2)(σ²_ap+σ²_ue)} √μ_l tr(Q_kl).  (61)

Moreover, with the help of [9, Eq. (69)], we have

INT^p_i[n] = Σ_{l=1}^{L} E{ | g^H_kl[n] √μ_l θ_il ĥ_il[λ] |² } ≜ Σ_{l=1}^{L} Υ₁.  (62)
Substituting (5) into DS^c_kl[n] and with the help of (8), (9), and (68), we have

DS^c_kl[n] = Σ_{i=1}^{K} a_il θ*_kl θ_il E{ h^H_kl[λ] ĥ_il[λ] } E{Θ*_kl[n]} = e^{−((n−λ)/2)(σ²_ap+σ²_ue)} Σ_{i∈P_k} a_il tr(Q_kil).  (70)

Moreover, we can also obtain the normalization parameter of the precoding in (17) as

η_l = 1 / E{ | Σ_{k=1}^{K} a_kl θ_kl ĥ_kl[λ] |² } = 1 / ( Σ_{k=1}^{K} Σ_{i∈P_k} a*_kl a_il tr(Q_kil) ).  (71)

For deriving INT^c_k[n], we first expand it into

INT^c_k[n] = Σ_{l=1}^{L} η_l E{ | g^H_kl[n] v_c,l |² } (≜ Υ₅) + Σ_{l=1}^{L} Σ_{m≠l}^{L} √(η_l η_m) E{ (g^H_kl[n] v_c,l)* (g^H_km[n] v_c,m) } (≜ Υ₆).  (72)
We assume that the delay phase can be perfectly known via positioning or other technologies. However, for the design of downlink precoding in the sequel, we will consider both cases, with and without (used/forgotten) the delay phase, to quantify its impact [20].
It is tolerable to assume error-free decoding to obtain preliminary analytical results because the considered one-layer RS scheme requires each UE to execute SIC only once [27].
Due to the small channel differences in CF massive MIMO networks, we assume that the power splitting factors are identical at all APs to arrive at a preliminary conclusion. In future work, we will design the optimal power splitting factor for each AP to further improve the system performance.

Note that the obtained results for ρ = 0 will serve as a building block for the study of RS-assisted CF massive MIMO systems in Section IV.
[1] X. Chen, D. W. K. Ng, W. Yu, E. G. Larsson, N. Al-Dhahir, and R. Schober, "Massive access for 5G and beyond," IEEE J. Sel. Areas Commun., vol. 39, no. 3, pp. 615-637, Mar. 2021.
[2] H. Q. Ngo, A. Ashikhmin, Y. Hong, E. G. Larsson, and T. L. Marzetta, "Cell-free massive MIMO versus small cells," IEEE Trans. Wireless Commun., vol. 16, no. 3, pp. 1834-1850, Mar. 2017.
[3] M. Guenach, A. A. Gorji, and A. Bourdoux, "Joint power control and access point scheduling in fronthaul-constrained uplink cell-free massive MIMO systems," IEEE Trans. Commun., vol. 69, no. 4, pp. 2709-2722, Apr. 2021.
[4] Z. Chen and E. Björnson, "Channel hardening and favorable propagation in cell-free massive MIMO with stochastic geometry," IEEE Trans. Commun., vol. 66, no. 11, pp. 5205-5219, Nov. 2018.
[5] M. Guo and M. C. Gursoy, "Joint activity detection and channel estimation in cell-free massive MIMO networks with massive connectivity," IEEE Trans. Commun., vol. 70, no. 1, pp. 317-331, Jan. 2022.
[6] E. Björnson and L. Sanguinetti, "Scalable cell-free massive MIMO systems," IEEE Trans. Commun., vol. 68, no. 7, pp. 4247-4261, Jul. 2020.
[7] J. Zhang, E. Björnson, M. Matthaiou, D. W. K. Ng, H. Yang, and D. J. Love, "Prospective multiple antenna technologies for beyond 5G," IEEE J. Sel. Areas Commun., vol. 38, no. 8, pp. 1637-1660, Aug. 2020.
[8] E. Björnson and L. Sanguinetti, "Making cell-free massive MIMO competitive with MMSE processing and centralized implementation," IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 77-90, Jan. 2020.
[9] J. Zheng, J. Zhang, E. Björnson, and B. Ai, "Impact of channel aging on cell-free massive MIMO over spatially correlated channels," IEEE Trans. Wireless Commun., vol. 20, no. 10, pp. 6451-6466, Oct. 2021.
[10] J. Zheng, J. Zhang, and B. Ai, "UAV communications with WPT-aided cell-free massive MIMO systems," IEEE J. Sel. Areas Commun., vol. 39, no. 10, pp. 3114-3128, Oct. 2021.
[11] E. Shi, J. Zhang, S. Chen, J. Zheng, Y. Zhang, D. W. Kwan Ng, and B. Ai, "Wireless energy transfer in RIS-aided cell-free massive MIMO systems: Opportunities and challenges," IEEE Commun. Mag., vol. 60, no. 3, pp. 26-32, Mar. 2022.
[12] Ö. T. Demir and E. Björnson, "Joint power control and LSFD for wireless-powered cell-free massive MIMO," IEEE Trans. Wireless Commun., vol. 20, no. 3, pp. 1756-1769, Mar. 2021.
[13] T. Van Chien, E. Björnson, and E. G. Larsson, "Joint power allocation and load balancing optimization for energy-efficient cell-free massive MIMO networks," IEEE Trans. Wireless Commun., vol. 19, no. 10, pp. 6798-6812, Oct. 2020.
[14] F. Guo, H. Lu, and Z. Gu, "Joint power and user grouping optimization in cell-free massive MIMO systems," IEEE Trans. Wireless Commun., vol. 21, no. 2, pp. 991-1006, Feb. 2022.
[15] H. A. Ammar, R. Adve, S. Shahbazpanahi, G. Boudreau, and K. V. Srinivas, "User-centric cell-free massive MIMO networks: A survey of opportunities, challenges and solutions," IEEE Commun. Surveys Tuts., vol. 24, no. 1, pp. 611-652, Dec. 2022.
[16] H. Yan and I.-T. Lu, "Asynchronous reception effects on distributed massive MIMO-OFDM system," IEEE Trans. Commun., vol. 67, no. 7, pp. 4782-4794, Jul. 2019.
[17] A. Pitarokoilis, S. K. Mohammed, and E. G. Larsson, "Uplink performance of time-reversal MRC in massive MIMO systems subject to phase noise," IEEE Trans. Wireless Commun., vol. 14, no. 2, pp. 711-723, Feb. 2015.
[18] Y. Fang, L. Qiu, X. Liang, and C. Ren, "Cell-free massive MIMO systems with oscillator phase noise: Performance analysis and power control," IEEE Trans. Veh. Technol., vol. 70, no. 10, pp. 10048-10064, Oct. 2021.
[19] E. Björnson, M. Matthaiou, and M. Debbah, "Massive MIMO with non-ideal arbitrary arrays: Hardware scaling laws and circuit-aware design," IEEE Trans. Wireless Commun., vol. 14, no. 8, pp. 4353-4368, Aug. 2015.
[20] J. Li, M. Liu, P. Zhu, D. Wang, and X. You, "Impacts of asynchronous reception on cell-free distributed massive MIMO systems," IEEE Trans. Veh. Technol., vol. 70, no. 10, pp. 11106-11110, Oct. 2021.
[21] H. Meyr, M. Moeneclaey, and S. A. Fechtel, Digital Communication Receivers: Synchronization, Channel Estimation, and Signal Processing. Wiley, 1998, vol. 444.
[22] R. Rogalin, O. Y. Bursalioglu, H. Papadopoulos, G. Caire, A. F. Molisch, A. Michaloliakos, V. Balan, and K. Psounis, "Scalable synchronization and reciprocity calibration for distributed multiuser MIMO," IEEE Trans. Wireless Commun., vol. 13, no. 4, pp. 1815-1831, Apr. 2014.
[23] Y. Yang and R. S. Blum, "Phase synchronization for coherent MIMO radar: Algorithms and their analysis," IEEE Trans. Signal Process., vol. 59, no. 11, pp. 5538-5557, Nov. 2011.
[24] P. Sabeti, A. Farhang, N. Marchetti, and L. Doyle, "Frequency synchronization for OFDM-based massive MIMO systems," IEEE Trans. Signal Process., vol. 67, no. 11, pp. 2973-2986, Jun. 2019.
[25] L. You, X. Gao, G. Y. Li, X.-G. Xia, and N. Ma, "BDMA for millimeter-wave/terahertz massive MIMO transmission with per-beam synchronization," IEEE J. Sel. Areas Commun., vol. 35, no. 7, pp. 1550-1563, Jul. 2017.
[26] B. Clerckx, H. Joudeh, C. Hao, M. Dai, and B. Rassouli, "Rate splitting for MIMO wireless networks: A promising PHY-layer strategy for LTE evolution," IEEE Commun. Mag., vol. 54, no. 5, pp. 98-105, May 2016.
[27] Y. Mao, B. Clerckx, and V. O. Li, "Rate-splitting multiple access for downlink communication systems: Bridging, generalizing, and outperforming SDMA and NOMA," EURASIP J. Wireless Commun. Netw., vol. 2018, no. 1, pp. 1-54, Jan. 2018.
[28] H. Joudeh and B. Clerckx, "Robust transmission in downlink multiuser MISO systems: A rate-splitting approach," IEEE Trans. Signal Process., vol. 64, no. 23, pp. 6227-6242, Dec. 2016.
[29] M. Dai, B. Clerckx, D. Gesbert, and G. Caire, "A rate splitting strategy for massive MIMO with imperfect CSIT," IEEE Trans. Wireless Commun., vol. 15, no. 7, pp. 4611-4624, Jul. 2016.
[30] H. Joudeh and B. Clerckx, "Rate-splitting for max-min fair multigroup multicast beamforming in overloaded systems," IEEE Trans. Wireless Commun., vol. 16, no. 11, pp. 7276-7289, Nov. 2017.
[31] Y. Mao, B. Clerckx, and V. O. K. Li, "Rate-splitting for multi-antenna non-orthogonal unicast and multicast transmission: Spectral and energy efficiency analysis," IEEE Trans. Commun., vol. 67, no. 12, pp. 8754-8770, Dec. 2019.
[32] H. Joudeh and B. Clerckx, "Sum-rate maximization for linearly precoded downlink multiuser MISO systems with partial CSIT: A rate-splitting approach," IEEE Trans. Commun., vol. 64, no. 11, pp. 4847-4861, Nov. 2016.
[33] A. Papazafeiropoulos, B. Clerckx, and T. Ratnarajah, "Rate-splitting to mitigate residual transceiver hardware impairments in massive MIMO systems," IEEE Trans. Veh. Technol., vol. 66, no. 9, pp. 8196-8211, Sep. 2017.
[34] O. Dizdar, Y. Mao, and B. Clerckx, "Rate-splitting multiple access to mitigate the curse of mobility in (massive) MIMO networks," IEEE Trans. Commun., vol. 69, no. 10, pp. 6765-6780, Oct. 2021.
[35] A. Mishra, Y. Mao, C. K. Thomas, L. Sanguinetti, and B. Clerckx, "Mitigating intra-cell pilot contamination in massive MIMO: A rate splitting approach," IEEE Trans. Wireless Commun., to appear, 2022.
[36] J. Zheng, Z. Zhao, J. Zhang, J. Cheng, and V. C. M. Leung, "Performance analysis of cell-free massive MIMO systems with asynchronous reception," in Proc. IEEE GLOBECOM Workshops, to appear, 2022.
[37] B. Etzlinger, H. Wymeersch et al., "Synchronization and localization in wireless networks," Foundations and Trends in Signal Processing, vol. 12, no. 1, pp. 1-106, 2018.
[38] E. Björnson, J. Hoydis, and L. Sanguinetti, "Massive MIMO networks: Spectral, energy, and hardware efficiency," Foundations and Trends in Signal Processing, vol. 11, no. 3-4, pp. 154-655, 2017.
[39] Ö. Özdogan, E. Björnson, and J. Zhang, "Performance of cell-free massive MIMO with Rician fading and phase shifts," IEEE Trans. Wireless Commun., vol. 18, no. 11, pp. 5299-5315, Nov. 2019.
[40] J. Zheng, J. Zhang, E. Björnson, Z. Li, and B. Ai, "Cell-free massive MIMO-OFDM for high-speed train communications," IEEE J. Sel. Areas Commun., vol. 40, no. 10, pp. 2823-2839, Oct. 2022.
[41] A. Mishra, Y. Mao, L. Sanguinetti, and B. Clerckx, "Rate-splitting assisted massive machine-type communications in cell-free massive MIMO," IEEE Commun. Lett., vol. 26, no. 6, pp. 1358-1362, Jun. 2022.
[42] Ö. T. Demir, E. Björnson, L. Sanguinetti et al., "Foundations of user-centric cell-free massive MIMO," Foundations and Trends in Signal Processing, vol. 14, no. 3-4, pp. 162-472, Jan. 2021.
[43] Z. H. Shaik, E. Björnson, and E. G. Larsson, "MMSE-optimal sequential processing for cell-free massive MIMO with radio stripes," IEEE Trans. Commun., vol. 69, no. 11, pp. 7775-7789, Nov. 2021.
[44] G. Interdonato, P. Frenger, and E. G. Larsson, "Scalability aspects of cell-free massive MIMO," in Proc. IEEE ICC, May 2019, pp. 1-6.
[45] A. S. De Sena, P. H. Nardelli, D. B. Da Costa, P. Popovski, C. B. Papadias, and M. Debbah, "Dual-polarized massive MIMO-RSMA networks: Tackling imperfect SIC," IEEE Trans. Wireless Commun., to appear, 2022.
SIT at MixMT 2022: Fluent Translation Built on Giant Pre-trained Models

Abdul Rafae Khan, Hrishikesh Kanade, Girish Amar Budhrani, Preet Jhanglani, Jia Xu
Stevens Institute of Technology

Proceedings of the Seventh Conference on Machine Translation (WMT), December 7-8, 2022. arXiv:2210.11670.

Abstract: This paper describes the Stevens Institute of Technology's submission for the WMT 2022 Shared Task: Code-mixed Machine Translation (MixMT). The task consisted of two subtasks, subtask 1 Hindi/English to Hinglish and subtask 2 Hinglish to English translation. Our findings lie in the improvements made through the use of large pre-trained multilingual NMT models and in-domain datasets, as well as back-translation and ensemble techniques. The translation output is automatically evaluated against the reference translations using ROUGE-L and WER. Our system achieves the 1st position on subtask 2 according to ROUGE-L, WER, and human evaluation, 1st position on subtask 1 according to WER and human evaluation, and 3rd position on subtask 1 with respect to the ROUGE-L metric.
Introduction
Code-mixing (or code-switching) is the phenomenon when another language like Hindi is interleaved with English words in the same sentence. This code-mixed language is mostly used in social media text and is colloquially referred to as Hinglish. Despite Hindi being the fourth most widely spoken language in the world (Lewis, 2009), research in Hinglish translation has been a relatively unexplored task.
A major challenge in creating a translation system for code-mixed text is the limited amount of parallel data (Ranathunga et al., 2021). Typical methods use standard back-translation techniques (Sennrich et al., 2015a) for generating synthetic parallel data for training. Massive multilingual neural machine translation (NMT) models have recently been shown to improve the translation performances for low-resource and even zero-shot settings. We propose using such large multilin-gual NMT models for our code-mixed translation tasks.
Previous work has only used smaller multilingual architectures (Gautam et al., 2021). We use pre-trained multilingual models trained on up to 200 languages. We finetune these models for the Hindi to Hinglish and Hinglish to English tasks. One major challenge when using these massive models is the GPU memory constraint. Another issue is the ratio of English and Hinglish words interleaved in each translation output. We use multiple state-of-the-art GPUs with model parallelization to overcome the memory issue. To control the amount of English in the outputs, we tune the model parameters, including the learning rate, dropout, and the number of epochs, to get the optimal translations.
Along with these pre-trained multilingual NMT models, we also use additional indomain data, back-translation to generate additional parallel data, and using multi-run ensemble to improve the final performance. All these methods gave us an improvement of +24.4 BLEU for Hindi to Hinglish translation (subtask 1) and +28.1 BLEU points for Hinglish to English translation (subtask 2) compared to using only the organizer provided data and the baseline experiment.
In this paper, we discuss our submission for the WMT 2022 MixMT shared task. We participate in both the subtasks and our submission system includes the following:
• Tune very large pre-trained multilingual NMT models and finetune on in-domain datasets;
• Back-translation to create synthetic data for in-domain monolingual data;
• Multi-run ensemble to combine models trained on various datasets;
• Tune model parameters to enhance model performance.
Related Work
Multilingual Neural Machine Translation (MNMT)

Word and subword-level tokenizations are widely used in natural language processing, including NMT/MNMT. Morishita et al. (2018) propose incorporating hierarchical subword features to improve neural machine translation. Massively multilingual NMT models are proposed by Aharoni et al. (2019) and Arivazhagan et al. (2019). They are trained on a large number of language pairs and show a strong and positive impact on low-resource languages. However, these models tend to have representation bottlenecks (Dabre et al., 2020) due to the large vocabulary size and the large diversity of training languages. Two MNMT systems (Tan et al., 2019; Xiong et al., 2021) are proposed to solve this problem by modifying the model architectures, adding special constraints on training, or designing more complicated preprocessing methods. Xiong et al. (2021) adopt a contrastive learning scheme in many-to-many MNMT. Tan et al. (2019) propose a distillation-based approach to boost the accuracy of MNMT systems. However, these word/subword-based models still need complex preprocessing steps such as data augmentation or special model architecture design.
Code-mixed NMT
Background
Task Description
The WMT 2022 CodeMix MT task consists of two subtasks. Subtask 1 is to use Hindi or English as input and automatically translate it into Hinglish. Subtask 2 is to input a Hinglish text and translate it into English. Participation in both subtasks was compulsory for the competition. We use Hindi only as the source for subtask 1.
Neural Machine Translation
The Neural Machine Translation (NMT) task uses a neural network-based model to translate a sequence of tokens from one human language to another. More formally, given a sequence of tokens in source language x = {x 1 , x 2 , · · · , x n }, the model outputs another sequence of tokens in target language y = {y 1 , y 2 , · · · , y m }. The input sequence x is encoded into the latent representation by a neural network-based encoder module, and these representations are decoded by the neural network-based decoder module. We train transformer-based encoder-decoder models (Vaswani et al., 2017) to translate the data. These models use a self-attention mechanism in their architectures to boost performance.
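The paragraph above can be summarized by the standard autoregressive factorization that such encoder-decoder models optimize: the decoder predicts each target token conditioned on all previously generated tokens and the encoded source,

```latex
p(\mathbf{y} \mid \mathbf{x}) \;=\; \prod_{t=1}^{m} p\left(y_t \mid y_1, \ldots, y_{t-1}, \mathbf{x}\right),
```

and training maximizes the log-likelihood of the reference translations under this factorization.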
Multilingual NMT (MNMT)
Initial NMT systems were only capable of handling two languages. However, lately there has been a focus on NMT models which can handle input from more than two languages (Dong et al., 2015; Firat et al., 2016; Johnson et al., 2017). Such models, commonly called Multilingual NMT (MNMT) models, have shown improvement in low-resource or zero-shot Neural Machine Translation settings. Instead of translating a sequence of tokens in source language x to another sequence in target language y, the MNMT system uses multiple source and target languages. There are two main approaches: (1) use a separate encoder and decoder for each of the source and target languages (Gu et al., 2018), and (2) use a single encoder/decoder which shares the parameters across the different languages (Johnson et al., 2017).
The issue with the first approach is that it requires a much larger memory due to multiple encoders and decoders (Vázquez et al., 2018). The second approach is much more memory efficient due to parameter sharing (Arivazhagan et al., 2019).
Training a model using the second approach can be done by adding a language tag to the source and target sequence. Specifically, when the decoding starts, an initial target language tag is given as input, which forces the model to output in that specific language.
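As a minimal sketch of the tag-based approach described above: a language tag is prepended to each sequence so that a single shared model can serve many translation directions. The `__xx__` tag format below is illustrative only; toolkits such as Fairseq and the mBART/M2M-100 checkpoints each define their own tag conventions.

```python
def add_language_tags(src_tokens, tgt_tokens, src_lang, tgt_lang):
    """Prepend language tags to source and target token sequences.

    The "__xx__" format is a hypothetical convention for illustration;
    real multilingual models use their own special tokens.
    """
    tagged_src = [f"__{src_lang}__"] + src_tokens
    tagged_tgt = [f"__{tgt_lang}__"] + tgt_tokens
    return tagged_src, tagged_tgt

# At decoding time, feeding the target tag (here "__en__") as the first
# decoder input forces the shared decoder to generate in that language.
src, tgt = add_language_tags(["yah", "achha", "hai"],
                             ["this", "is", "good"], "hi", "en")
```

In our setup the same mechanism underlies using Hindi as the source tag and English as the target tag for both subtasks.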
Methods
For the initial set of experiments, we use the baseline transformer model (Vaswani et al., 2017). For all the other experiments, we use pre-trained multilingual NMT models and finetune them on the specific datasets. We can divide these into three groups based on the number of parameters: (1) smaller models, mBART-50 (Tang et al., 2020) and M2M-100; (2) mid-sized models, NLLB-200 and mT5-XL; and (3) the largest model, mT5-XXL. Parameter counts are listed in Table 1.
For both subtasks, we use Hindi as the source language tag and English as the target language tag.
Pre-trained Models
To train the transformer, mBART-50, and M2M-100 models, we use the Fairseq toolkit (Ott et al., 2019), and the larger NLLB-200, mT5-XL, and mT5-XXL models use the Huggingface toolkit (Wolf et al., 2019). Table 1 lists the parameter count for each pre-trained multilingual model.
Table 1: Number of parameters of each pre-trained multilingual model.

Model      Params
mBART-50   611M
M2M-100    1.2B
NLLB-200   3.3B
mT5-XL     3.7B
mT5-XXL    13B
Data Augmentation
We use three different ways to add additional in-domain data for training our models.
Additional in-domain data: We use additional in-domain parallel data and add it to the training data for accuracy improvement. Since our focus is on Hindi for subtask 1 and Hinglish for subtask 2, we looked for data from additional domains with Hindi or Hinglish as the source. We use Kaggle Hi-En (Chokhra, 2020) and the MUSE Hi-En dictionary (Lample et al., 2017) for subtask 1. For subtask 2, we use Kaggle Hg-En data (Tom, 2022), CMU movie reviews data (Zhou et al., 2018), and the CALCS'21 Hg-En dataset (Solorio et al., 2021). We also use selected WMT'14 News Hi-En sentences (Bojar et al., 2014) and the MTNT Fr-En and Ja-En data (Michel and Neubig, 2018). Table 2 lists all these datasets.
Back-translation
A common technique to increase the data size for low-resource languages is to take in-domain monolingual data and generate synthetic translations with a reverse translation system (Sennrich et al., 2015a). We use Google Translate for back-translation: we translate samples from the English side of the Tatoeba Spanish-English dataset (Tatoeba, 2022) and the Sentiment140 dataset (Go et al., 2009) into Hinglish and use the synthetic translations as additional bilingual data.
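The back-translation loop can be sketched as below. `reverse_translate` is a stand-in for the actual reverse system (the paper uses Google Translate); here it is a toy lookup table so the example is runnable.

```python
# Back-translation sketch: monolingual English sentences are run through a
# reverse (En->Hinglish) system to create synthetic parallel training pairs
# of the form (synthetic Hinglish source, real English target).

TOY_REVERSE_MT = {
    "how are you": "aap kaise ho",
    "watch this video": "yeh video dekho",
}

def reverse_translate(en_sentence):
    # Stand-in for an external MT service; unknown sentences pass through.
    return TOY_REVERSE_MT.get(en_sentence, en_sentence)

def back_translate(monolingual_en):
    """Return synthetic (source, target) pairs from monolingual target-side data."""
    return [(reverse_translate(en), en) for en in monolingual_en]

pairs = back_translate(["how are you", "watch this video"])
# [('aap kaise ho', 'how are you'), ('yeh video dekho', 'watch this video')]
```

The key property is that the target side is real, fluent text, so the model learns to produce good Hinglish even though the source side is synthetic.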
Ensemble
We use a multi-run ensemble (Koehn, 2020) to combine multiple models' best checkpoints and boost the final performance. We average the probability distributions over the vocabulary from all the models to obtain a final distribution and use that to predict the target sequence.
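One decoding step of this averaging can be sketched as follows; the vocabulary and probability values are toy numbers for illustration, not from any of the trained models.

```python
# Multi-run ensemble sketch: at each decoding step, average the per-model
# probability distributions over the vocabulary and pick the argmax token.

VOCAB = ["the", "a", "cat", "dog"]

def ensemble_step(model_distributions):
    """model_distributions: list of per-model probability lists over VOCAB."""
    n = len(model_distributions)
    avg = [sum(dist[i] for dist in model_distributions) / n
           for i in range(len(VOCAB))]
    best = max(range(len(VOCAB)), key=lambda i: avg[i])
    return VOCAB[best], avg

token, avg = ensemble_step([
    [0.5, 0.1, 0.3, 0.1],   # model 1
    [0.2, 0.1, 0.6, 0.1],   # model 2
])
# avg ~ [0.35, 0.1, 0.45, 0.1] -> token "cat"
```

Note that the ensemble can flip the prediction: model 1 alone would have chosen "the", but the averaged distribution favors "cat".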
Datasets
The competition provided one dataset for each of the subtasks, HinGE Hi-Hg (Srivastava and Singh, 2021) for subtask 1 and PHINC Hg-En (Srivastava and Singh, 2020) for subtask 2. The competition also provided the validation data. In addition to these, we also use additional in-domain and out-of-domain datasets.
Due to the large overlap between English and Hinglish vocabulary, we use Hindi-English (Hi-En) and Hindi-Hinglish (Hi-Hg) datasets for subtask 1. For subtask 2, we use various Hinglish-English datasets. The datasets provided by the competition, the additional in-domain datasets, and the additional out-of-domain datasets used for both subtasks are listed in Table 2. Since HinGE En-Hg has multiple Hinglish translations for a single English sentence, we duplicated the English side to increase the size of the data. For the WMT'14 Hi-En dataset, we selected the 15K sentences closest to the source-side validation data, measured by cosine similarity.
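The similarity-based selection of WMT'14 sentences can be sketched as below. Bag-of-words count vectors stand in for whatever sentence representation is actually used; the idea is the same: score each candidate by its maximum cosine similarity to any validation sentence and keep the top-k.

```python
# Sketch of cosine-similarity data selection against validation data.
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse count vectors (Counters).
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_closest(candidates, validation, k):
    """Keep the k candidate sentences closest to the validation set."""
    val_vecs = [Counter(s.split()) for s in validation]
    scored = [(max(cosine(Counter(c.split()), v) for v in val_vecs), c)
              for c in candidates]
    scored.sort(reverse=True)
    return [c for _, c in scored[:k]]

chosen = select_closest(
    ["the cat sat", "completely unrelated words", "a cat sat down"],
    ["the cat sat down"], k=2)
# keeps the two cat sentences, drops the unrelated one
```

In practice one would use subword or embedding-based representations rather than raw word counts, but the selection logic is unchanged.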
To preprocess the data, we tokenize using the Moses tokenizer (Koehn et al., 2007) or the model-specific tokenizer provided by Huggingface. Additionally, to split words into subword tokens, we apply either byte pair encoding (BPE) (Sennrich et al., 2015b) for the baseline transformer model or SentencePiece (Kudo and Richardson, 2018) for all the other models, including mBART-50, M2M-100, NLLB-200, mT5-XL, and mT5-XXL.
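The effect of subword splitting can be illustrated with a toy greedy longest-match segmenter. The vocabulary below is invented for the example; real BPE/SentencePiece vocabularies are learned from data, and real BPE applies learned merge operations rather than longest-match lookup.

```python
# Toy greedy longest-match subword segmentation, illustrating how a learned
# subword vocabulary splits words into known pieces. "@@" marks a piece that
# is continued by the next piece (the convention used by subword-nmt BPE).

SUBWORD_VOCAB = {"trans@@", "lat@@", "ion", "hind@@", "i", "video"}

def segment(word, vocab):
    """Split `word` left-to-right into the longest vocabulary pieces."""
    pieces, start = [], 0
    while start < len(word):
        for end in range(len(word), start, -1):
            piece = word[start:end]
            if end < len(word):
                piece += "@@"   # non-final pieces carry the continuation marker
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
        else:
            pieces.append(word[start])  # unknown character fallback
            start += 1
    return pieces

print(segment("translation", SUBWORD_VOCAB))  # ['trans@@', 'lat@@', 'ion']
```

Rare words like "translation" decompose into frequent pieces, which keeps the vocabulary small while avoiding out-of-vocabulary tokens.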
Experiments
This section describes the experimental details, including the toolkits, the parameter settings for the model training and decoding, and the results.
Tools & Hardware
For the models mentioned in Section 4.2, we train the smaller models on 32GB NVIDIA Tesla V100 GPUs; the medium and larger models require multiple 80GB NVIDIA A100 GPUs. We use a total of 4 V100 GPUs and 16 A100 GPUs. Due to GPU memory usage (see Section 1), we parallelized the training of the medium and larger models using the DeepSpeed package (Rasley et al., 2020).
Training Details
As an NMT baseline, we use the baseline transformer model (Vaswani et al., 2017) provided by the Fairseq toolkit. Compared to the Transformer (base) model of Vaswani et al. (2017), it has half the number of attention heads and half the feed-forward network dimension; the rest of the architecture is the same. We train this model from scratch, adding the additional datasets and finally tuning on the validation data.
We use the Fairseq toolkit for training the baseline transformer from scratch and for finetuning the mBART-50 and M2M-100 models. For finetuning NLLB-200, mT5-XL, and mT5-XXL models, we use the Huggingface toolkit. For the pre-trained multilingual models, we use the Hindi language encoder and English language decoder for finetuning and decoding.
As shown in Table 4, we finetune the models on the listed datasets for each subtask. We initially finetune these models on the ID 4 dataset in Table 4, and then further finetune them on the validation datasets provided by the organizers.
Hyper-parameter settings
We train the Transformer model from scratch and finetune all the multilingual pre-trained models. We train the Transformer, mBART-50, and M2M-100 models for 10 epochs on the ID 4 datasets and 5 epochs on the validation dataset. We finetune the larger models listed in Table 3 for a maximum of 3 epochs before tuning on the validation data for 7 epochs for subtask 1 and 4 epochs for subtask 2, respectively. We set the Adam betas to 0.9 and 0.98 for all the models and tuned the learning rate between 1e-5 and 9e-5. We opt for higher learning rates in the initial epochs and lower learning rates in the remaining epochs. Finetuning with a high learning rate for fewer epochs is particularly helpful, as larger models take much more time per epoch even with the larger GPU memory. We also experimented with dropout rates between 0.1 and 0.15 and obtained the best performance with the dropout set to 0.1. The batch size is limited to smaller values due to memory constraints: we set it to 10 or 20 for the larger models and 40 or 50 for the medium-sized and smaller models.
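The two-phase learning-rate strategy described above can be sketched as a simple schedule. The specific values and the 30% split below are only illustrative choices within the 1e-5 to 9e-5 range the text mentions, not the tuned settings.

```python
# Sketch of the staged finetuning schedule: a higher learning rate for the
# first epochs, then a lower one for the remaining epochs.

def learning_rate(epoch, total_epochs, high=9e-5, low=1e-5, high_phase=0.3):
    """Return the learning rate for a given 0-indexed epoch."""
    return high if epoch < int(total_epochs * high_phase) else low

schedule = [learning_rate(e, 10) for e in range(10)]
# first 3 epochs at 9e-5, remaining 7 at 1e-5
```

This mirrors the trade-off in the text: the expensive early epochs on the large in-domain data move fast, while the later epochs refine the model more gently.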
Decoding parameters For the decoding step in both tasks, we set English as the target language tag for all the models. We tune the beam size; the optimal beam size is 17 for both subtasks on the validation set. Only for the medium and larger models (NLLB-200, mT5-XL, and mT5-XXL) do we limit the maximum sentence length to 128. Finally, we detokenize the translation output as a postprocessing step (Koehn et al., 2007).
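The role of the beam-size parameter tuned above can be seen in a minimal beam search. `step_probs` below is a toy next-token model over a three-token vocabulary, and the beam here is tiny (the paper uses beam size 17); everything else is the standard algorithm.

```python
# Minimal beam-search sketch: keep the `beam_size` highest-scoring partial
# hypotheses at each step, scored by cumulative log-probability.
import math

VOCAB = ["a", "b", "</s>"]

def step_probs(prefix):
    # Toy distribution: prefer "a" early, then end the sentence.
    return [0.6, 0.3, 0.1] if len(prefix) < 2 else [0.1, 0.1, 0.8]

def beam_search(beam_size, max_len=4):
    beams = [([], 0.0)]  # (token list, log-probability)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == "</s>":
                candidates.append((prefix, score))  # finished, carry forward
                continue
            for tok, p in zip(VOCAB, step_probs(prefix)):
                candidates.append((prefix + [tok], score + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams[0][0]

best = beam_search(beam_size=2)
# best == ["a", "a", "</s>"]
```

A larger beam explores more alternatives per step at a linear cost in decoding time, which is why the beam size is tuned on the validation set rather than simply maximized.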
Additional Experiments
We also performed additional experiments that are helpful but were not included in the final submission due to limited time: the MTNT datasets and the ensemble methods. First, we use the MTNT datasets as additional bilingual in-domain data with different source languages. We also apply the multi-run ensemble method to combine models trained on multiple datasets (Koehn and Knowles, 2017). For both tasks, we train M2M-100 models on the MTNT Fr-En and MTNT Ja-En data before tuning them on the baseline datasets, respectively. Additionally, we first finetune on the WMT'14 News Hi-En data and then on the baseline data. We then ensemble these two models with the original base model.
Results
We evaluate the models with respect to the BLEU score using sacreBLEU. Table 5 shows the results of the experiments for both tasks and all the models. In general, we get improvements with larger multilingual models and with validation finetuning. Table 4 shows the results of training the transformer model from scratch with additional in-domain datasets. We get a maximum improvement of 9.3 BLEU for subtask 1 and 4.0 for subtask 2 using the additional datasets. Tuning on the validation data gives an additional boost of +1.1 and +0.2 BLEU for subtasks 1 and 2, respectively. Table 5 shows the results of using pre-trained multilingual models on the ID 4 datasets. We get a maximum improvement of 25.6 and 32.6 BLEU for subtasks 1 and 2, which is +14.0 and +23.9 BLEU points higher than the best transformer results in Table 4. Table 6 shows the results of a multi-run ensemble of three models: (1) the baseline M2M-100 model in Table 5, (2) the M2M-100 model first trained on MTNT data and then on the baseline data, and (3) the M2M-100 model trained on MTNT data, then on WMT data, and finally on the baseline data. We see a slight decrease of -0.3 BLEU for subtask 1 compared to the baseline; however, for subtask 2 the performance improves by +0.8 BLEU points.
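For intuition about what the reported scores measure, a heavily simplified sentence-level BLEU can be written in a few lines. The real scores in this paper come from sacreBLEU (corpus-level, 4-gram, with its own tokenization); the toy version below only goes up to bigrams and exists purely to illustrate the computation.

```python
# Simplified sentence-level BLEU: geometric mean of modified n-gram
# precisions (up to bigrams) times a brevity penalty for short hypotheses.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(hypothesis, reference, max_n=2):
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum(min(c, r[g]) for g, c in h.items())  # clipped counts
        total = max(sum(h.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)      # avoid log(0)
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    score = bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
    return 100 * score

perfect = simple_bleu("blood boiled after watching this video",
                      "blood boiled after watching this video")
# identical hypothesis and reference score 100
```

The clipping of n-gram counts prevents a hypothesis from being rewarded for repeating a reference word, and the brevity penalty prevents gaming the precision with very short outputs.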
Analysis
We analyze the translation outputs of the NLLB, mT5-XL, and mT5-XXL models. For subtask 1, the issues we observed were that sentences were translated entirely to English and did not contain any Hinglish words, or that some words were translated partially to Hinglish while a portion of the words remained in Hindi. For subtask 2, the names of animal species were not translated correctly, and idioms lost their meaning in translation. Examples of these issues are shown in Tables 7
Conclusion
This paper describes our submitted translation systems for the WMT 2022 MixMT shared task. We finetune five different multilingual NMT models, mBART-50, M2M-100, NLLB-200, mT5-XL, and mT5-XXL, for both subtasks. We finetune on in-domain datasets, including the validation data, and significantly improve translation quality from 1.2 to 25.6 BLEU for subtask 1 and from 4.5 to 32.6 BLEU for subtask 2. Additionally, we apply data-augmentation techniques including back-translation, tuning on in-domain data, and checkpoint ensembling. Our system obtained 1st position in subtask 2 for both the ROUGE-L and WER metrics, 1st position in subtask 1 for WER, and 3rd position in subtask 1 for ROUGE-L.
Table 1: Parameter count for each pre-trained multilingual model.
Table 2: Datasets provided by the organizers and additional in-domain and out-of-domain datasets used for subtasks 1 and 2. VR is the number of running words and V is the vocabulary size.
Table 3: Per-epoch training time for each of the models. The training time is for the ID 4 datasets in Table 4.
& 8.

ID  Datasets              Hi-Hg
1   HinGE                 1.2
2   [1]+Kaggle            6.4
3   [2]+WMT'14 News       10.3
4   [3]+Facebook MUSE     10.5
5   [4]+val tune          11.6

ID  Datasets              Hg-En
1   PHINC                 4.5
2   [1]+HinGE             5.1
3   [2]+CALCS'21          5.2
4   [3]+Back-translation  8.5
5   [4]+val tune          8.7
Table 4: Adding in-domain datasets. Baseline: Transformer (Vaswani et al., 2017). Evaluation criterion: BLEU [%]. Training from scratch without pre-trained models. '+val tune' is further finetuning on validation data. All the results are evaluated on the competition's test data.

Pretrained Multilingual Model   subtask 1             subtask 2
                                baseline  +val tune   baseline  +val tune
mBART-50                        16.9      -           18.3      -
M2M-100                         18.9      -           23.8      -
NLLB-200                        11.5      -           23.8      30.3
mT5-XL                          18.8      25.6        24.0      31.7
mT5-XXL                         18.5      24.0        24.9      32.6
Table 5: Initialization with pre-trained models. BLEU scores (%) for subtasks 1 and 2. The 'baseline' experiment finetunes the pre-trained model on the ID 4 datasets in Table 4. '+val tune' is further finetuning on validation data. All the results are evaluated on the competition's test data. Bold results are the final submission.

Task       Models          BLEU
subtask 1  Base            18.9
           Base+MTNT+WMT   18.6
subtask 2  Base            23.8
           Base+MTNT+WMT   24.6
Table 6: Checkpoint ensemble results for both subtasks, trained on the M2M-100 model and evaluated on the competition's test data. Base is the baseline M2M-100 experiment. MTNT is first training on MTNT data and then tuning on the baseline. WMT tunes on MTNT, then WMT, and finally on the baseline data.

Src      दे श क राष्टर् ीय िक्रके ट टीम ...
NLLB     The national cricket team in the country...
mT5-XL   desh ki national cricket team...
mT5-XXL  country ki national cricket team...
Ref      desh ki national cricket team...

Src      यह प्रमा णत हो चु का है जो एक चमत्कार है ।
NLLB     It has been proven which is a miracle.
mT5-XL   yah pramanit ho chuka hai jo ek miracle hai.
mT5-XXL  yah pramanit ho chuka hai jo ek चमtkaar hai.
Ref      yah pramanit ho chuka hai jo miracle hai.
Table 7: Examples of errors for subtask 1.
Src      lol...gayi bhains paani mein...
NLLB     lol... went bhains in water...
mT5-XL   lol... animals went in water...
mT5-XXL  Lol... Goat got in the water...
Ref      lol.. buffalo went in the water...

Src      ye video dekh kar to khoon khaul gya
NLLB     After seeing this video, blood came out.
mT5-XL   seeing this video, my blood bleed.
mT5-XXL  Blood boiled after watching this video.
Ref      By watching this video, blood boiled.
Table 8: Examples of errors for subtask 2.
Acknowledgments

We appreciate the National Science Foundation (NSF) Award No. 1747728 and the NSF CRAFT Award, Grant No. 22001, for funding this research. We are also thankful for the support of the Google Cloud Research Program. We especially thank Xuting Tang, Yu Yu, and Mengjiao Zhang for helping edit the paper.
References

Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Thamar Solorio, Mona Diab, and Julia Hirschberg. 2018. Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching.

Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. arXiv preprint arXiv:1903.00089.

Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019.

Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Manish Shrivastava, and Dipti Misra Sharma. 2017. Joining hands: Exploiting monolingual treebanks for parsing of code-mixing data. arXiv preprint arXiv:1703.10772.

Ondřej Bojar, Vojtěch Diatka, Pavel Rychlý, Pavel Straňák, Vít Suchomel, Aleš Tamchyna, and Daniel Zeman. 2014. HindEnCorp - Hindi-English and Hindi-only corpus for machine translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3550-3555.

Parth Chokhra. 2020. Hindi to Hinglish corpus. https://www.kaggle.com/datasets/parthplc/hindi-to-hinglish.

Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672.

Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A survey of multilingual neural machine translation. ACM Computing Surveys (CSUR), 53(5):1-38.

Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723-1732.

Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond English-centric multilingual machine translation. Journal of Machine Learning Research, 22(107):1-48.

Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. arXiv preprint arXiv:1601.01073.

Saurabh Garg, Tanmay Parekh, and Preethi Jyothi. 2018. Code-switched language models using dual RNNs and same-source pretraining. arXiv preprint arXiv:1809.01962.

Devansh Gautam, Prashant Kodali, Kshitij Gupta, Anmol Goel, Manish Shrivastava, and Ponnurangam Kumaraguru. 2021. CoMeT: Towards code-mixed translation using parallel monolingual sentences. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 47-55.

Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N project report, Stanford, 1(12):2009.

Vikrant Goyal and Dipti Misra Sharma. 2019. LTRC-MT simple & effective Hindi-English neural machine translation systems at WAT 2019. In Proceedings of the 6th Workshop on Asian Translation, pages 137-140.

Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. arXiv preprint arXiv:1802.05368.

Ganesh Jawahar, El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, and Laks V.S. Lakshmanan. 2021. Exploring text-to-text transformers for English to Hinglish machine translation with synthetic code-mixing. arXiv preprint arXiv:2105.08807.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.

Philipp Koehn. 2020. Neural Machine Translation. Cambridge University Press.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Companion Volume: Proceedings of the Demo and Poster Sessions, pages 177-180.

Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28-39. Association for Computational Linguistics.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.

Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.

Sahinur Rahman Laskar, Abinash Dutta, Partha Pakray, and Sivaji Bandyopadhyay. 2019. Neural machine translation: English to Hindi. In 2019 IEEE Conference on Information and Communication Technology, pages 1-6. IEEE.

M. Paul Lewis, editor. 2009. Ethnologue: Languages of the World, Sixteenth edition. SIL International, Dallas, Texas, USA.

Paul Michel and Graham Neubig. 2018. MTNT: A testbed for machine translation of noisy text. arXiv preprint arXiv:1809.00388.

Makoto Morishita, Jun Suzuki, and Masaaki Nagata. 2018. Improving neural machine translation by incorporating hierarchical subword features. In Proceedings of the 27th International Conference on Computational Linguistics, pages 618-629.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038.

Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018. Language modeling for code-mixing: The role of linguistic theory based synthetic data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1543-1553.

Surangika Ranathunga, En-Shiun Annie Lee, Marjana Prifti Skenduli, Ravi Shekhar, Mehreen Alam, and Rishemjit Kaur. 2021. Neural machine translation for low-resource languages: A survey. arXiv preprint arXiv:2106.15115.

Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505-3506.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.

Thamar Solorio, Shuguang Chen, Alan W. Black, Mona Diab, Sunayana Sitaram, Victor Soto, Emre Yilmaz, and Anirudh Srinivasan. 2021. Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching.

Vivek Srivastava and Mayank Singh. 2020. PHINC: A parallel Hinglish social media code-mixed corpus for machine translation. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 41-49, Online. Association for Computational Linguistics.

Vivek Srivastava and Mayank Singh. 2021. HinGE: A dataset for generation and evaluation of code-mixed Hinglish text. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 200-208, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with language clustering. arXiv preprint arXiv:1908.09324.

Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401.

Tatoeba. 2022. Spanish English bilingual dataset. https://www.manythings.org/anki/.

Louis Tom. 2022. Codemixed. https://www.kaggle.com/datasets/louistom/codemixed.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Raúl Vázquez, Alessandro Raganato, Jörg Tiedemann, and Mathias Creutz. 2018. Multilingual NMT with a language-independent attention bridge. arXiv preprint arXiv:1811.00498.

Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2019. Code-switched language models using neural based synthetic data from parallel sentences. arXiv preprint arXiv:1909.08582.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.

Hao Xiong, Junchi Yan, and Li Pan. 2021. Contrastive multi-view multiplex network embedding with applications to robust network alignment. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1913-1923.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.

Kangyan Zhou, Shrimai Prabhumoye, and Alan W. Black. 2018. A dataset for document grounded conversations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 708-713, Brussels, Belgium. Association for Computational Linguistics.
| [] |
[
"Localization Distillation for Object Detection",
"Localization Distillation for Object Detection"
] | [
"Journal Of L A T E X Class ",
"Files "
] | [] | [] | Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the prediction logits due to its inefficiency in distilling the localization information. In this paper, we investigate whether logit mimicking always lags behind feature imitation. Towards this goal, we first present a novel localization distillation (LD) method which can efficiently transfer the localization knowledge from the teacher to the student. Second, we introduce the concept of valuable localization region that can aid to selectively distill the classification and localization knowledge for a certain region. Combining these two new components, for the first time, we show that logit mimicking can outperform feature imitation and the absence of localization distillation is a critical reason for why logit mimicking under-performs for years. The thorough studies exhibit the great potential of logit mimicking that can significantly alleviate the localization ambiguity, learn robust feature representation, and ease the training difficulty in the early stage. We also provide the theoretical connection between the proposed LD and the classification KD, that they share the equivalent optimization effect. Our distillation scheme is simple as well as effective and can be easily applied to both dense horizontal object detectors and rotated object detectors. Extensive experiments on the MS COCO, PASCAL VOC, and DOTA benchmarks demonstrate that our method can achieve considerable AP improvement without any sacrifice on the inference speed. Our source code and pretrained models are publicly available at https://github.com/HikariTJU/LD. | 10.1109/tpami.2023.3248583 | [
"https://export.arxiv.org/pdf/2204.05957v2.pdf"
] | 232,035,721 | 2204.05957 | 027feda5a85e0265a051a1c8afbeac3cea15b13c |
Localization Distillation for Object Detection
Index Terms—Object detection, localization distillation, knowledge distillation, rotated object detection.
INTRODUCTION
As a model compression technology, knowledge distillation (KD) [1], [2] has been an efficient technique for learning compact models to mitigate the computational burden. It has been widely validated to be useful for boosting the performance of small-sized student networks by transferring the generalized knowledge captured by large-sized teacher networks [1], [2], [3], [4], [5], [6]. Speaking of KD in object detection, there are mainly three popular KD pipelines, as shown in Fig. 1. Logit mimicking [1], also known as classification KD, is originally designed for image classification, where the KD process operates on the logits of the teacher-student pair. Feature imitation, motivated by the pioneering work FitNet [2], aims to enforce the consistency of the feature representations between the teacher-student pair. The last one, namely pseudo bounding box regression, uses the predicted bounding boxes from the teacher as an additional supervision to the bounding box prediction branch of the student.
[Fig. 1 caption: ① Logit Mimicking [1]. ② Feature Imitation: recent popular methods distill intermediate features based on various distillation regions, which usually need adaptive layers to align the size of the student's feature map. ③ Pseudo BBox Regression: treating teachers' predicted bounding boxes as additional regression targets [7], [8].]

Among these methods, the original logit mimicking technique [1] for classification is often inefficient, as it only transfers the classification knowledge and neglects the importance of distilling localization knowledge. Therefore, existing KD methods for object detection mostly focus on feature imitation and demonstrate that distilling the feature representations is more advantageous than distilling the
logits [9], [10], [11]. We summarize three crucial reasons for this phenomenon. First of all, the effectiveness of logit mimicking partially relies on the number of classes, which may vary across application scenarios [9]. Second, logit mimicking can only be applied to the classification head, so it cannot distill the localization information. Third, in the framework of multi-task learning, feature imitation can transfer the hybrid knowledge of classification and localization, which can benefit both downstream tasks. In this work, we examine the aforementioned common belief in object detection KD and ask: does feature imitation always stay ahead of logit mimicking? For this purpose, we first present a simple yet effective localization distillation (LD) method, inspired by an interesting observation that the bounding box distributions generated by the teacher [12], [13] can serve as a strong supervision to the student detector. The bounding box distribution [12], [13] is originally designed to model the real distributions of bounding boxes, an efficient way to resolve the localization ambiguity shown in Fig. 2. With the discretized probability distribution representation, the localizer can reflect the localization ambiguity through the flatness and sharpness of the distribution, a property the conventional Dirac delta representation of bounding boxes [14], [15], [16], [17] does not have. This allows our LD to transfer richer localization knowledge from the teacher to the student than pseudo bounding box regression does (right part in Fig. 1).
Combining the proposed LD and the classification KD yields a unified KD method based on a pure logit mimicking framework for both the classification branch and the localization branch. As logit mimicking enables us to separately distill the classification knowledge and the localization one, we found that these two sub-tasks favor different distillation regions. Motivated by this, we introduce the concept of valuable localization region (VLR) and propose to conduct distillation in a selective region distillation manner. We will show the advantage of using VLR in our distillation framework in the experiment section.
Furthermore, we comprehensively discuss the technical details of LD and elaborate on the behavior of logit mimicking and feature imitation. Intriguingly, we observe, for the first time, that logit mimicking can outperform feature imitation, which indicates that the absence of localization distillation is actually the key reason why logit mimicking has under-performed in object detection for years. Another observation is that the reason why logit mimicking works is not the consistency of the feature representations between the teacher-student pair. On the contrary, the student learns feature representations that are significantly different from the teacher's in terms of the l_n distance and linear correlation. We also observe that if the student is trained with feature imitation, it tends to produce a sharp AP score landscape in the feature subspace, which aggravates the training difficulty in the early training stage.
The above observations reflect the great potential of logit mimicking over feature imitation: 1) it can separately transfer different types of knowledge, 2) it learns more robust feature representations, and 3) it eases the training difficulty. Our method is simple and can be easily plugged into both horizontal and rotated object detectors to improve their performance without introducing any inference overhead. Extensive experiments on MS COCO show that, without bells and whistles, we can lift the AP score of the strong baseline GFocal [12] with a ResNet-50-FPN backbone from 40.1 to 42.1, and AP75 from 43.1 to 45.6. Our best model, using a ResNeXt-101-32x4d-DCN backbone, achieves a single-scale test of 50.5 AP, which surpasses all existing detectors under the same backbone, neck, and test settings. PyTorch [18] and Jittor [19] versions of the source code and pretrained models are publicly available at https://github.com/HikariTJU/LD.
The main contributions of this paper are four-fold:
1) We present a novel localization distillation method that greatly improves the distillation efficiency of logit mimicking in object detection.
2) We provide exploratory experiments and analysis of the behavior of logit mimicking and feature imitation. To the best of our knowledge, this is the first work revealing the great potential of logit mimicking over feature imitation.
3) We present a selective region distillation scheme based on the newly introduced valuable localization region to better distill the student detector.
4) We extend our LD to a rotated version so that it can be applied to arbitrary-oriented object detection.
This paper is a substantial extension of its previous conference version [20]. In particular, (a) we provide a theoretical connection between the proposed LD and the classification KD, showing that they share equivalent optimization effects; (b) we conduct more detailed and insightful analysis of logit mimicking and feature imitation, including the different characteristics of the learned feature representations and logits, and the training difficulty of feature imitation; (c) we extend the original LD to a more generic version, namely rotated LD, which can distill arbitrary-oriented object detectors.
RELATED WORK
Knowledge Distillation
Knowledge distillation [1], [21], [22], [23], [24], [25], as a hot research topic, has been deeply studied recently. The fundamental idea is to use a well-performing large-sized teacher network to transfer the captured knowledge to a small-sized student network. Logit mimicking, a.k.a. classification KD, was first introduced by Hinton et al. [1], where the logit outputs of the student classifier are supervised by those of the teacher classifier. Later, FitNet [2] extended the teacher-student learning framework by mimicking the intermediate-level hints from the hidden layers of the teacher model. Knowledge distillation was first applied to object detection in [7], where hint learning, classification KD, and pseudo bounding box regression were simultaneously used for multi-class object detection. However, an object detector requires not only precise classification ability but also strong localization ability. The absence of localization knowledge distillation limits the performance of the conventional KD method.
To tackle the above issue, many feature imitation methods have been developed, most of which focus on where to distill and on loss function weighting. Among these, Li et al. [26] proposed to mimic the features within the region proposals for Faster R-CNN. Wang et al. [9] imitated the fine-grained features at close anchor box locations. Recently, Dai et al. [27] introduced the General Instance Selection Module to mimic deep features within the discriminative patches between teacher-student pairs. DeFeat [28] leverages different loss weights when conducting feature imitation on the object regions and the background region. There are also various feature imitation methods from the perspective of weighted imitation loss, including Gaussian-mask-weighted [8], feature-richness-weighted [29], and prediction-guided [30] imitation losses. Unlike the aforementioned methods, our work introduces localization distillation and demonstrates that logit mimicking can outperform feature imitation for KD in object detection.
Object Localization
Object localization is a fundamental issue in object detection [31], [32], [33], [34], [35], [36], [37], [38], [39], [40]. Bounding box regression is the most popular way so far for localization in object detection [14], [15], [16], [41], [42], where the Dirac delta distribution representation has been used for years. R-CNN series [16], [43], [44], [45] adopt multiple regression stages to refine the detection results, while YOLO series [14], [46], [47], [48], SSD series [15], [49], [50], and FCOS series [12], [17] adopt one-stage regression. In [51], [52], [53], [54], IoU-based loss functions are proposed to improve the localization quality of bounding boxes. Recently, bounding box representation has evolved from Dirac delta distribution [14], [15], [16] to Gaussian distribution [55], [56], and further to probability distribution [12], [13]. The probability distribution of bounding boxes is more comprehensive for describing the uncertainty of bounding boxes, and has been validated to be the most advanced bounding box representation so far.
Localization Quality Estimation
As the name suggests, Localization Quality Estimation (LQE) predicts a score that measures the localization quality of the bounding boxes predicted by the detector. LQE is usually used to cooperate with the classification task during training [57], i.e., enhancing the consistency between classification and localization. It can also be applied in joint decision-making during post-processing [14], [17], [58], which considers both the classification score and LQE when performing NMS. Early research can be dated to YOLOv1 [14], where the predicted object confidence is used to penalize the classification score. Then, box/mask IoU [58], [59] and box/polar centerness [17], [60] are proposed to model the uncertainty of detections in object detection and instance segmentation, respectively. For bounding box representation, Softer-NMS [55] and Gaussian YOLOv3 [56] predict variances for each edge of the bounding boxes. LQE is a preliminary approach to model localization ambiguity.
Arbitrary-Oriented Object Detection
Driven by the success of object detection, rotated object detection has become a hot topic in computer vision recently [61]. The mainstream rotated object detectors, such as RRPN [62], generate rotated proposals based on Faster R-CNN [16], while Rotated-RetinaNet [63] directly predicts an additional rotated angle based on RetinaNet. To address the boundary discontinuity and square-like problems, SCRDet [37] and RSDet [64] propose IoU-smooth L1 loss and modulated loss respectively for attaining smoother boundary loss, and CSL [65] proposes to use angle classification instead of angle regression.
Different from horizontal bounding box regression, which can easily leverage IoU-based losses (e.g., GIoU [52], DIoU [53], and CIoU [54]) to enhance localization ability, the Skew IoU loss for rotated bounding box regression is quite difficult to implement due to the complexity of the backward propagation in existing deep learning libraries [18], [19], [66]. PIoU loss [67] approximates the Skew IoU by accumulating the pixels of the intersection and union of two rotated bounding boxes. GWD [38] and KLD [39] model rotated bounding boxes via a 2D Gaussian distribution representation and propose to use the Gaussian Wasserstein distance and KL divergence, respectively, to simulate the Skew IoU loss. More recently, based on the 2D Gaussian distribution representation of rotated bounding boxes, Yang et al. [68] proposed the KFIoU loss, which exploits the Kalman filter formulation to mimic the Skew IoU at the trend level. To sum up, rotated regression-based detectors still dominate this task owing to their simplicity and strong performance.
APPROACH
To begin with, we revisit the knowledge distillation background, including logit mimicking and feature imitation. Next, we describe our simple yet effective localization distillation (LD) and explain how to apply LD to rotated object detection. Then, we analyze the properties of the proposed LD loss, especially its theoretical connection to the classification KD. In addition, we introduce the concept of the valuable localization region for better distilling the localization knowledge in our framework. Finally, we describe the selective region distillation based on the newly introduced valuable localization region and give the optimization objective.
Preliminaries
In the KD pipeline of object detection, the input image is fed into two object detectors, i.e., the student detector and the frozen teacher detector. The distillation process forces the outputs of the student to mimic those of the teacher. There are two mainstream paradigms of KD methods in object detection.
[Fig. 3 caption: Only the localization branch is visualized here. S(·, τ) is the generalized SoftMax function with temperature τ. For a given detector, we first switch the bounding box representation to a probability distribution. Then, we determine where to distill via region weighting on the main distillation region and the valuable localization region. Finally, we calculate the LD loss between the two probability distributions predicted by the teacher and the student.]

Logit mimicking. Logit mimicking (LM) was first developed for image classification [1], in which the student model can be improved by mimicking the soft output of the teacher classifier. Let z_S, z_T ∈ R^{W×H×C} be the logits predicted by the student and the teacher, respectively. W and H represent the output size of the logit maps. C denotes the number of
classes. These logits are then transformed into probability distributions p τ and q τ by using the generalized SoftMax function. We can train the network by minimizing the loss:
$$\mathcal{L} = \mathcal{L}_{CE} + \lambda \mathcal{L}_{KD} \qquad (1)$$
$$\;\;\; = H(p, g) + \lambda H(p^{\tau}, q^{\tau}), \qquad (2)$$
where p is the predicted probability vector, g ∈ {0, 1}^n is the one-hot ground-truth label, H is the cross-entropy loss, and λ balances the two loss terms. For object detection, the distillation can be carried out on some pre-defined distillation region R.
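As a concrete illustration, the classification KD term of Eq. (2) can be sketched in a few lines of plain Python. This is a toy, list-based sketch (real implementations operate on batched tensors); under the paper's H(·,·) convention, the softened teacher distribution serves as the soft target for the softened student prediction:

```python
import math

def generalized_softmax(logits, tau=1.0):
    # Soften the logits by the temperature tau before normalizing.
    exps = [math.exp(z / tau) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classification_kd(student_logits, teacher_logits, tau=10.0):
    # Cross-entropy with the softened teacher distribution as the soft
    # target: -sum_i teacher_i * log(student_i).
    target = generalized_softmax(teacher_logits, tau)
    pred = generalized_softmax(student_logits, tau)
    return -sum(t * math.log(s) for t, s in zip(target, pred))
```

A temperature τ > 1 flattens both distributions, so the "dark knowledge" in the teacher's non-maximal logits survives the normalization.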
Feature imitation.
Recently, it has been found that feature imitation (FI), which aims to transfer knowledge by imitating the deep features between teacher-student pairs, works better than the classification KD [2], [9]. Mathematically, the feature imitation procedure can be formulated as:
$$\mathcal{L}_{FI} = \frac{1}{|R|} \sum_{r \in R} \left\| M_S(r) - M_T(r) \right\|_2^2, \qquad (3)$$
where R is the imitation region and |·| is the cardinality of the region. Note that an adaptive layer is needed to transform the student's feature map M_S to the same size as the teacher's M_T, so that M_S, M_T ∈ R^{W×H×D}.
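A minimal sketch of the imitation loss in Eq. (3), assuming the student features have already been mapped to the teacher's dimensionality by an adaptive layer (features are kept as plain per-location lists here for readability):

```python
def feature_imitation_loss(feat_s, feat_t, region):
    # Squared l2 distance between student and teacher feature vectors,
    # averaged over the locations r in the imitation region R.
    total = 0.0
    for r in region:
        total += sum((s - t) ** 2 for s, t in zip(feat_s[r], feat_t[r]))
    return total / len(region)
```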
Bounding box representation.
For a given bounding box B, the conventional representations take two forms, i.e., {δ_x, δ_y, δ_w, δ_h} (encoding the coordinate mappings of the central point, the width, and the height from the anchor box to the ground-truth box) [14], [15], [16] and {t, b, l, r} (the distances from the sampled point to the top, bottom, left, and right edges) [17]. Both forms actually follow the Dirac delta distribution, which only focuses on the ground-truth locations but cannot model the ambiguity of bounding boxes, as shown in Fig. 2. This is also clearly demonstrated in some previous works [12], [55].
Localization Distillation
In this subsection, we present localization distillation (LD), a new way to enhance the distillation efficiency for object detection. Our LD builds on the probability distribution representation of bounding boxes (anchor-free [12] and anchor-based [13]), which is originally designed for generic object detection and carries abundant localization information. The working principle of our LD can be seen in Fig. 3. The procedure is the same for both anchor-based and anchor-free detectors.
Given an object detector, we follow [12], [13] to convert the bounding box representation from a quaternary representation to a probability distribution. Let e ∈ B be one of the regression variables of the bounding box, whose regression range is [e_min, e_max]. The bounding box distribution quantizes the continuous regression range into a uniform discretized variable e = [e_0, e_1, ..., e_n] ∈ R^{n+1} with n subintervals, where e_0 = e_min and e_n = e_max. The localization head predicts n + 1 logits z = {z_0, z_1, ..., z_n}, corresponding to the endpoints of the subintervals {e_0, e_1, ..., e_n}. Each edge of the given bounding box can then be represented as a probability distribution by using the SoftMax function. For the number of subintervals n, we follow the settings of GFocal [12]; a recommended choice is n = 8 ∼ 16. Different from [12], [13], we transform z_S and z_T into the probability distributions p^τ and q^τ using the generalized SoftMax function S(·, τ). Note that when τ = 1, it is equivalent to the original SoftMax function. When τ → 0, it tends to a Dirac delta distribution; when τ → ∞, it tends to a uniform distribution. Empirically, τ > 1 is set to soften the distribution, making the bounding box distribution carry more information. The localization distillation for measuring the similarity between the two probability vectors p^τ, q^τ ∈ R^{n+1} for one edge e of the bounding box is attained by:
$$\mathcal{L}^{e}_{LD} = H(p^{\tau}, q^{\tau}) \qquad (4)$$
$$\;\;\; = H\big(S(z_S, \tau), S(z_T, \tau)\big). \qquad (5)$$

Then, LD for all four edges of a bounding box B can be formulated as:

$$\mathcal{L}_{LD}(B_S, B_T) = \sum_{e \in B} \mathcal{L}^{e}_{LD}, \qquad (6)$$
where B_S and B_T are the predicted bounding boxes of the student and the teacher, respectively.
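A toy sketch of Eqs. (4)-(6) in plain Python (lists instead of tensors; real implementations are batched). Each edge carries n + 1 logits, and the per-edge loss is a classification-KD-style cross-entropy with the softened teacher distribution as the soft target:

```python
import math

def softened(logits, tau):
    # Generalized SoftMax S(., tau) over the n+1 bin logits of one edge.
    exps = [math.exp(z / tau) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ld_edge(z_student, z_teacher, tau=10.0):
    # Eqs. (4)-(5): cross-entropy between softened teacher (target) and
    # softened student (prediction) distributions of a single edge.
    target = softened(z_teacher, tau)
    pred = softened(z_student, tau)
    return -sum(t * math.log(s) for t, s in zip(target, pred))

def ld_box(box_student, box_teacher, tau=10.0):
    # Eq. (6): sum the per-edge losses over the four edges of one box.
    # box_*: four lists of n+1 logits, one list per edge.
    return sum(ld_edge(zs, zt, tau) for zs, zt in zip(box_student, box_teacher))
```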
Rotated LD
Our LD can also be flexibly used to distill rotated bounding box detectors. Parametric regression is the most popular approach in classical dense regression-based rotated object detection [37], [38], [39], [69]. B = {δ_x, δ_y, δ_w, δ_h, δ_θ} is commonly used to represent a rotated bounding box, where δ_θ denotes the encoded rotation angle. To conduct rotated localization distillation, we first generate the lower and upper bounds of the regression range [e_min, e_max], where e ∈ B.
Note that the rotated angle prediction δ_θ usually has a different regression range from δ_x, δ_y, δ_w, δ_h. Thus, different lower and upper bounds of the regression ranges are set for them. In practice, [e_min, e_max] ⊂ [−5, 5] is an acceptable choice. Then, we convert the rotated bounding box to rotated bounding box distributions, as described in Sec. 3.2. Finally, the LD loss is calculated according to Eq. (6) for the rotated bounding box distributions.
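The discretization step can be sketched as follows; the per-variable regression ranges below (including the narrower one for the angle term) are illustrative placeholders, not the paper's exact values:

```python
def make_bin_endpoints(e_min, e_max, n=16):
    # Quantize [e_min, e_max] into n uniform subintervals; the localization
    # head predicts one logit per endpoint, i.e., n + 1 logits in total.
    step = (e_max - e_min) / n
    return [e_min + i * step for i in range(n + 1)]

# One set of endpoints per regression variable of a rotated box.
# The "dtheta" range is a hypothetical, narrower range for the angle term.
ranges = {"dx": (-5.0, 5.0), "dy": (-5.0, 5.0), "dw": (-5.0, 5.0),
          "dh": (-5.0, 5.0), "dtheta": (-2.0, 2.0)}
endpoints = {k: make_bin_endpoints(lo, hi) for k, (lo, hi) in ranges.items()}
```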
Property of LD
We can see that our LD holds the formulation of the standard logit mimicking. A question one may ask is: does LD also inherit the properties of the classification KD, especially for the optimization process? Different from the classification task, where a unique integer is treated as the ground-truth label, the ground-truth label of the localization task is a floating-point number e*, whose value, for instance, lies in an interval [e_i, e_{i+1}]. In the following, we show an important property of LD, demonstrating that it inherits the optimization effects held by the classification KD. Proposition 1. Let s be the student's predicted probability vector, and let u_1, u_2 be two constants with u_1 + u_2 = 1. Then, we have: 1) if p, q are two classification probability vectors, the LD effect on the linear combination l = u_1 p + u_2 q is equal to the linear combination of the KD effects on p, q; 2) if l is a localization probability vector, the LD effect on l is equal to two KD effects on its decomposition p and q.
The above two cases share the same expression:

$$\partial LD^{l}_{i} = u_1\, \partial KD^{p}_{i} + u_2\, \partial KD^{q}_{i}, \qquad (7)$$

where ∂KD^p_i denotes the derivative of the KD loss between the two probability vectors s, p w.r.t. a given logit z_i, and ∂LD^l_i likewise for the LD loss.
The proof can be found in the Appendix (A.1). Proposition 1 provides the theoretical connection between LD and the classification KD. It shows that the optimization effect of LD on a floating-point localization problem is functionally equivalent to two KD effects on integer position classification problems. Therefore, as a direct corollary of [70], LD holds the gradient rescaling of the distribution focal loss (DFL) [12] w.r.t. the relative prediction confidence at two adjacent positions. For details, we refer to the Appendix (A.2).
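For intuition, the decomposition in case 2) of Proposition 1 can be made explicit. Following the DFL-style target construction [12], a ground-truth edge value $e^* \in [e_i, e_{i+1}]$ corresponds to a localization label that is a convex combination of the two adjacent one-hot labels $p$ (at position $i$) and $q$ (at position $i+1$):

```latex
l = u_1\,p + u_2\,q, \qquad
u_1 = \frac{e_{i+1} - e^*}{e_{i+1} - e_i}, \quad
u_2 = \frac{e^* - e_i}{e_{i+1} - e_i}, \quad
u_1 + u_2 = 1,
```

so distilling toward $l$ is, by Eq. (7), equivalent to two weighted classification-KD effects at the neighboring bin positions.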
Valuable Localization Region
Previous works mostly force the deep features of the student to mimic those of the teacher by minimizing the l_2 loss. However, a straightforward question arises: should we use the whole imitation region without discrimination to distill the hybrid knowledge? According to our observation, the answer is no. In this subsection, we describe the valuable localization region (VLR) to further improve the distillation efficiency, which we believe will be a promising way to train better student detectors.

Algorithm 1 (Valuable Localization Region):
1: X = {x_ij}_{I×J} with x_ij = DIoU(B^a_i, B^gt_j).
2: α_vl = γ·α_pos.
3: Select locations with V = {α_vl ≤ X ≤ α_pos}.
4: return V
Specifically, the distillation region is divided into two parts: the main distillation region and the valuable localization region. The main distillation region is intuitively determined by label assignment, i.e., the positive locations of the detection head. The valuable localization region can be obtained by Algorithm 1. First, we calculate the DIoU [53] matrix X between all the anchor boxes B^a and the ground-truth boxes B^gt. Then, we set the lower bound of DIoU to be α_vl = γα_pos, where α_pos is the positive IoU threshold of label assignment. The VLR can be defined as V = {α_vl ≤ X ≤ α_pos}.
Our method has only one hyperparameter γ, with 0 ≤ γ ≤ 1, which controls the range of the VLRs. When γ = 0, all the locations whose DIoUs between the preset anchor boxes and the GT boxes satisfy 0 ≤ x_ij ≤ α_pos are determined as VLRs. When γ → 1, the VLR gradually shrinks to empty. Here we use DIoU [53] since it gives higher priority to the locations close to the center of the object.
Similar to label assignment, our method assigns attributes to each location across the multi-level FPN. In this way, some of the locations outside the GT boxes are also considered, so we can actually view the VLR as an outward extension of the main distillation region. Note that for anchor-free detectors, like FCOS, we can use the preset anchors on the feature maps without changing the regression form, so that the localization learning remains anchor-free. For anchor-based detectors, which usually set multiple anchors per location, we unfold the anchor boxes to calculate the DIoU matrix and then assign their attributes.
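Algorithm 1 amounts to a simple thresholding of the DIoU matrix. A sketch assuming the DIoU values have been computed beforehand (computing DIoU itself is omitted):

```python
def valuable_localization_region(diou, alpha_pos, gamma=0.25):
    # diou: I x J matrix of DIoU values between I anchor locations and
    # J ground-truth boxes. A location joins the VLR when its DIoU with
    # some GT box lies in [alpha_vl, alpha_pos], alpha_vl = gamma * alpha_pos.
    alpha_vl = gamma * alpha_pos
    vlr = set()
    for i, row in enumerate(diou):
        if any(alpha_vl <= x <= alpha_pos for x in row):
            vlr.add(i)
    return vlr
```

With gamma = 0, every location whose DIoU is non-negative but below the positive threshold is kept; as gamma approaches 1, the region shrinks toward empty, matching the behavior described above.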
Selective Region Distillation
Given the above descriptions, the total loss of logit mimicking for training the student S can be represented as:
$$\begin{aligned}
\mathcal{L} = {} & \lambda_0 \mathcal{L}_{cls}(C_S, C_{gt}) + \lambda_1 \mathcal{L}_{reg}(B_S, B_{gt}) + \lambda_2 \mathcal{L}_{DFL}(B_S, B_{gt}) \\
& + \lambda_3\, I_{Main}\, \mathcal{L}_{LD}(B_S, B_T) + \lambda_4\, I_{VL}\, \mathcal{L}_{LD}(B_S, B_T) \\
& + \lambda_5\, I_{Main}\, \mathcal{L}_{KD}(C_S, C_T) + \lambda_6\, I_{VL}\, \mathcal{L}_{KD}(C_S, C_T),
\end{aligned} \qquad (8)$$
where the first three terms are exactly the same as the classification and bounding box regression losses of any regression-based detector, i.e., L_cls is the classification loss, L_reg is the bounding box regression loss, and L_DFL is the distribution focal loss [12]. I_Main and I_VL are the distillation masks for the main distillation region and the valuable localization region, respectively. L_KD is the KD loss [1], C_S and C_T denote the classification head output logits of the student and the teacher, respectively, and C_gt is the ground-truth class label. All the distillation losses are weighted by the same weight factors according to their types. More precisely, the weight factor of the LD loss follows that of the bbox regression term, and the weight factor of the KD loss follows that of the classification term. It is also worth mentioning that the DFL loss term can be disabled, since the LD loss provides sufficient guidance. In addition, we can enable or disable the four types of distillation losses so as to distill the student in different regions selectively.

[Table 1(a) caption: Temperature τ in LD. The generalized SoftMax function with large τ brings considerable gains. We set τ = 10 by default. The teacher is ResNet-101 and the student is ResNet-50.]
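Per location, the objective of Eq. (8) can be sketched as below, where the scalar sub-losses are assumed to be computed elsewhere and in_main / in_vlr encode the indicator masks I_Main and I_VL for that location:

```python
def per_location_loss(l_cls, l_reg, l_dfl, l_ld, l_kd,
                      lambdas, in_main, in_vlr):
    # lambdas: (lambda_0, ..., lambda_6) as in Eq. (8).
    l0, l1, l2, l3, l4, l5, l6 = lambdas
    # Supervised terms: classification, bbox regression, and DFL.
    loss = l0 * l_cls + l1 * l_reg + l2 * l_dfl
    # Distillation terms, gated by the region indicators.
    if in_main:
        loss += l3 * l_ld + l5 * l_kd
    if in_vlr:
        loss += l4 * l_ld + l6 * l_kd
    return loss
```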
EXPERIMENT
In this section, we conduct comprehensive ablation studies and analysis to demonstrate the superiority of the proposed LD and distillation scheme on the challenging large-scale MS COCO [71] benchmark, PASCAL VOC [72], and aerial image DOTA dataset [73].
Experiment Setup
MS COCO. The train2017 split (118K images) is utilized for training and val2017 (5K images) is used for validation. We also obtain evaluation results on MS COCO test-dev 2019 (20K images) by submitting to the COCO server. The experiments are conducted under the mmDetection [74] framework. Unless otherwise stated, we use ResNet [75] with FPN [76] as our backbone and neck networks, and the FCOS-style [17] anchor-free head for classification and localization. The training schedule for ablation experiments is set to the single-scale 1× mode (12 epochs). For other training and testing hyper-parameters, we exactly follow the GFocal [12] protocol, including the QFL loss for classification and the GIoU loss for bbox regression. We use the standard COCO-style measurement, i.e., average precision (AP), for evaluation. All the baseline models are retrained with the same settings so as to fairly compare them with our LD.
PASCAL VOC.
We also provide experimental results on another popular object detection benchmark, PASCAL VOC [72]. We use the VOC 07+12 training protocol, i.e., the union of the VOC 2007 trainval set and the VOC 2012 trainval set (16551 images) for training, and the VOC 2007 test set (4952 images) for evaluation. The initial learning rate is 0.01 and the total number of training epochs is 4. The learning rate decreases by a factor of 10 after the 3rd epoch. To comprehensively evaluate the localization performance, the average precision (AP) is reported along with five mAPs across different IoU thresholds, i.e., AP50, AP60, AP70, AP80, and AP90.
DOTA.
For the evaluation of rotated LD, we report detection results on the classic aerial image dataset DOTA [73]. We follow the standard mmRotate [61] training and testing protocol. The train set and validation set consist of 1403 images and 468 images, respectively, which are randomly selected in the literature. These huge images are cropped into smaller sub-images of shape 600 × 600, in line with the cropping protocol of the official implementation. In practice, we obtain about 15,700 training and 5,300 validation patches. Unless otherwise stated, all the hyperparameters follow the default settings of mmRotate for a fair comparison. We report results in terms of AP and five mAPs under different IoU thresholds, consistent with PASCAL VOC. Due to memory limitations, the teachers are ResNet-34 FPN with the 2× training schedule (24 epochs), and the students are ResNet-18 FPN with the 1× training schedule (12 epochs).
Ablation Study
Temperature τ in LD.
Our LD introduces a hyperparameter, i.e., the temperature τ . Tab. 1(a) reports the results of LD with various temperatures, where the teacher model is ResNet-101 with AP 44.7 and the student model is ResNet-50. Here, only the main distillation region is adopted. Compared to the first row in Tab. 1(a), different temperatures consistently lead to better results. In this paper, we simply set the temperature in LD as τ = 10, which is fixed in all the other experiments.
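As a reference for how the temperature enters, the distillation term over one edge's discretized bbox distribution can be sketched in a few lines of numpy. This is our own minimal sketch, not the released implementation; `softened` and `ld_loss` are hypothetical names:

```python
import numpy as np

def softened(z, tau=10.0):
    """Temperature-softened probability distribution of one edge's logits."""
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ld_loss(student_logits, teacher_logits, tau=10.0):
    """KL divergence between the teacher's and student's softened bbox
    distributions, scaled by tau**2 as is conventional for KD-style losses."""
    p_t = softened(teacher_logits, tau)
    p_s = softened(student_logits, tau)
    return float(tau ** 2 * np.sum(p_t * (np.log(p_t) - np.log(p_s))))
```

A larger τ flattens both distributions and softens the targets the student matches, which is consistent with the mild sensitivity to τ observed in Tab. 1(a).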
LD vs. Pseudo BBox Regression. The teacher bounded regression (TBR) loss [7] is a preliminary attempt to enhance the student on the localization head, i.e., the pseudo bbox regression in Fig. 1, which is represented as:
L_TBR = λ L_reg(B_s, B_gt),  if ℓ2(B_s, B_gt) + ε > ℓ2(B_t, B_gt),    (9)
where B_s and B_t denote the predicted boxes of the student and teacher respectively, B_gt denotes the ground-truth boxes, ε is a predefined margin, and L_reg represents the GIoU loss [52]. Here, only the main distillation region is adopted. From Tab. 1(b), we can see that the TBR loss does yield performance gains (+0.4 AP and +0.7 AP75) when using a proper margin ε = 0.1 in Eq. (9). However, it uses the coarse bbox representation, which does not contain any localization uncertainty information of the detector, leading to sub-optimal results. On the contrary, our LD directly produces 41.1 AP and 44.9 AP75, since it utilizes the probability distribution of bounding boxes, which contains rich localization knowledge.
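For comparison, the teacher-bounded condition of Eq. (9) can be sketched as follows. This is our own numpy approximation, with an l1 box loss standing in for the GIoU loss; `tbr_loss` is a hypothetical name:

```python
import numpy as np

def tbr_loss(b_s, b_t, b_gt, eps=0.1, lam=1.0):
    """Pseudo bbox regression (Eq. 9 sketch): the regression loss against the
    ground truth is applied only where the student's l2 error, within margin
    eps, exceeds the teacher's. An l1 box loss stands in for GIoU here."""
    b_s, b_t, b_gt = (np.asarray(x, dtype=float) for x in (b_s, b_t, b_gt))
    err_s = np.linalg.norm(b_s - b_gt, axis=-1)
    err_t = np.linalg.norm(b_t - b_gt, axis=-1)
    mask = err_s + eps > err_t              # teacher-bounded condition
    reg = np.abs(b_s - b_gt).sum(axis=-1)   # placeholder for L_reg (GIoU)
    return float(lam * (reg * mask).mean())
```

When the student already localizes better than the teacher (within the margin), the condition is false and no supervision is applied, which is exactly the "teacher-bounded" behavior.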
Various γ in VLR. The newly introduced VLR has a parameter γ which controls its range. As shown in Tab. 1(c), AP is stable when γ ranges from 0 to 0.5; the variation in AP in this range is around 0.1. As γ increases, the VLR gradually shrinks to empty, and the performance gradually drops to 41.1 AP, i.e., that of conducting LD on the main distillation region only. This sensitivity analysis on γ indicates that conducting LD on the VLR has a positive effect on performance. In the remaining experiments, we set γ to 0.25 for simplicity.
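Since Algorithm 1 is only partially recoverable in this extraction, the following is a hedged sketch of how such a DIoU-thresholded region could be selected. The interval [γ·α_pos, α_pos) is our assumption, chosen so that γ = 0 gives the widest region and γ → 1 empties it, matching the trend described above; `vlr_mask` is a hypothetical name:

```python
import numpy as np

def vlr_mask(diou, alpha_pos, gamma=0.25):
    """Hypothetical VLR selection: a location-GT pair (i, j) is kept when its
    DIoU lies in [gamma * alpha_pos, alpha_pos), i.e., below the positive
    threshold of label assignment but still close enough to a GT box to carry
    localization knowledge. gamma = 0 keeps every pair below the positive
    threshold with DIoU >= 0; gamma -> 1 shrinks the region to empty."""
    diou = np.asarray(diou, dtype=float)
    return (diou >= gamma * alpha_pos) & (diou < alpha_pos)
```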
Selective Region Distillation.
There are several interesting observations regarding the roles of KD and LD and their preferred regions. We report the relevant ablation results in Tab. 2, where "Main" means that the logit mimicking is conducted on the main distillation region, i.e., the positive locations of label assignment, and "VLR" denotes the valuable localization region. For MS COCO, it can be seen that conducting "Main LD", "VLR LD", and "Main KD" all benefits the student's performance. This indicates that the main distillation region contains valuable knowledge for both classification and localization, and that the classification KD benefits less compared to LD. Then, we impose the classification KD on a larger range, i.e., the VLR. However, we observe that further incorporating "VLR KD" yields no improvement (the last two rows of Tab. 2). This is the main reason why we adopt the proposed selective region distillation for COCO. Next, we check the roles of KD and LD on PASCAL VOC. Tab. 2 shows that it is beneficial to transfer the localization knowledge to both the main distillation region and the VLR. However, due to the different knowledge distribution patterns, the classification KD shows a similar degradation: comparing the 3rd row and the 4th row of Tab. 2, "Main KD" leads to a performance drop, while "VLR KD" produces a positive effect on the student. This indicates that the selective region distillation can take advantage of both KD and LD on their respective favorable regions.
Application to Other Dense Object Detectors.
Our LD can be flexibly applied to other dense object detectors, either anchor-based or anchor-free. We employ LD with the divide-and-conquer distillation scheme on several recently popular detectors, such as RetinaNet [63] (anchor-based), FCOS [17] (anchor-free) and ATSS [77] (anchor-based). According to the results in Tab. 4, our LD consistently improves the baselines by around 2 AP.
Arbitrary-Oriented Object Detectors.
As a direct extension of our LD, the rotated bounding box requires an additional probability distribution, i.e., the rotated angle distribution. We make the necessary and minimal modifications to two arbitrary-oriented object detectors: 1) Rotated-RetinaNet [63], the foundation of dense regression-based rotated detectors, and 2) GWD [38], a recently popular detector based on 2D Gaussian distribution modeling. We follow the mmRotate [61] training and testing protocols. We use ResNet-34 as the teacher and ResNet-18 as the student for GPU memory saving. The results are reported on the validation set of DOTA-v1.0 [73].
The results are shown in Tab. 5, which demonstrates that our LD can also be successfully applied to rotated object detectors and attains considerable improvement in aerial image detection. Particularly, we obtain impressive improvements in mAP under more rigorous IoU thresholds, e.g., AP70, AP80 and AP90. This shows the excellent compatibility of our LD, which can be applied not only to horizontal bounding boxes but also to rotated ones. In addition, it is worth mentioning that our LD does not rely on the representation of bounding boxes or on the way the modeling is optimized (IoU-based losses for horizontal bounding box prediction [52], [53] and 2D Gaussian modeling for rotated bounding box prediction [38]).
Logit Mimicking vs. Feature Imitation.
Thus far, we have validated the effectiveness of our LD and the selective region distillation in distilling different types of object detectors. The proposed LD along with the classification KD provides a unified logit mimicking framework. It naturally raises several interesting questions:
• In terms of detection performance, how does logit mimicking perform compared to feature imitation? Does feature imitation stay ahead of logit mimicking?
• What are the characteristics of these two different distillation techniques? Are the deep feature representations and logits learned different?
In this subsection, we shall provide answers to the above questions.
Quantitative Comparison on Numerical Results.
We first compare our proposed LD with several state-of-the-art feature imitation methods. We adopt the selective region distillation, i.e., performing KD and LD on the main distillation region, and performing LD on the VLR. Since modern detectors are usually equipped with FPN [76], following previous works [9], [27], [28], we re-implement their methods and impose all the feature imitations on the multi-level FPN for a fair comparison. Here, "FitNets" [2] distills the whole feature maps. "DeFeat" [28] means the loss weights of feature imitation outside the GT boxes are larger than those inside the GT boxes. "Fine-Grained" [9] distills the deep features at the close anchor box locations. "GI Imitation" [27] selects the distillation regions according to the discriminative predictions of the student and the teacher. "Inside GT Box" means we select the ground-truth boxes with the same stride on the FPN layers as the feature imitation regions. "Main Region" means we imitate the features within the main distillation region.
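The methods above differ mainly in where they imitate; a generic masked ℓ2 feature-imitation loss (our own sketch, hypothetical names) makes that explicit, with each method corresponding to a different choice of mask:

```python
import numpy as np

def masked_feature_imitation(f_s, f_t, mask):
    """Generic masked l2 feature-imitation loss: the student map f_s (H, W, C)
    imitates the (adapted) teacher map f_t only inside the region given by
    mask (H, W). FitNets uses an all-ones mask; DeFeat re-weights inside vs.
    outside GT boxes; Fine-Grained / GI imitation build sparser masks."""
    f_s, f_t = np.asarray(f_s, float), np.asarray(f_t, float)
    m = np.asarray(mask, float)[..., None]            # broadcast over channels
    sq = (f_s - f_t) ** 2
    return float((sq * m).sum() / (m.sum() * f_s.shape[-1] + 1e-12))
```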
From Tab. 6, we can see that distillation within the whole feature maps attains +0.6 AP gains. By setting a larger loss weight for the locations outside the GT boxes (DeFeat [28]), the performance is slightly better than that using the same loss weight for all locations. Fine-Grained [9], focusing on the locations near GT boxes, produces 41.1 AP, which is comparable to the result of feature imitation using the Main Region. GI imitation [27] searches the discriminative patches for feature imitation and gains 41.5 AP. Due to the large gap in predictions between student and teacher, the imitation regions may appear anywhere.
Despite the notable improvements of these feature imitation methods, they do not explicitly consider the knowledge distribution patterns. On the contrary, our method can transfer the knowledge via a selective region distillation, which directly produces 42.1 AP. It is worth noting that our method operates on logits instead of deep features, indicating that our LD is a critical component for logit mimicking to outperform the feature imitation. Moreover, our method is orthogonal to the aforementioned feature imitation methods. Tab. 6 shows that with these feature imitation methods, our performance can be further improved. Particularly, with GI imitation, we improve the strong GFocal baseline by +2.3 AP and +3.1 AP 75 .
Teacher-Student Error Comparison. We first check the average teacher-student errors of the classification scores and the box probability distributions, as shown in Fig. 4. One can see that the Fine-Grained feature imitation [9] and GI imitation [27] reduce the two errors as expected, since the classification knowledge and localization knowledge are mixed on the feature maps. Our "Main LD" and "Main LD + VLR LD" have comparable or larger classification score average errors than Fine-Grained [9] and GI imitation [27], but lower box probability distribution average errors. This indicates that these two settings with only LD can significantly reduce the box probability distribution distance between the teacher and the student, but cannot reduce this error for the classification head. If we impose the classification KD on the main distillation region, yielding "Main LD + VLR LD + Main KD", both the classification score average error and the box probability distribution average error can be reduced. We also visualize the L1 error summation of the localization head logits between the student and the teacher for each location at the P5 and P6 FPN levels. As shown in Fig. 5, compared to "Without Distillation", the GI imitation [27] does decrease the localization discrepancy between the teacher and the student. Notice that we particularly choose a model ("Main LD + VLR LD") with slightly better AP performance than GI imitation for visualization. Our method can clearly reduce this error and alleviate the localization ambiguity.
In Fig. 6, we plot the average errors between the student and the teacher in terms of deep features, class logits and bbox logits, respectively. It can be seen that these three types of errors show an almost consistent trend as the test resolution changes. Interestingly, we find that even though logit mimicking can shrink the errors of both the bbox logits and the classification ones, it learns completely different feature representations from the teacher's. From the left side of Fig. 6, our method enlarges the distance between the student's feature representations and those of the teacher.
Moreover, Tab. 7 shows that logit mimicking produces a nearly zero Pearson correlation coefficient for the feature representations between the teacher-student pair. This indicates that if the student is trained only with logit mimicking, it produces feature representations that are far from and not linearly correlated with the teacher's. Be that as it may, we can still attain well-performing logits with good generalization. The last column of Tab. 7 and Fig. 6 show that logit mimicking is able to approach the teacher's logits not only in distance but also in linear correlation.
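The statistic reported in Tab. 7 is a standard Pearson coefficient computed over flattened teacher/student responses; a minimal sketch (our naming) follows:

```python
import numpy as np

def avg_pearson(a, b):
    """Pearson correlation between two flattened response maps, e.g., the
    teacher's and student's deep features or bbox logits at matched locations."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float((a * b).sum() / denom)
```

A value near zero, as for the deep features of the logit-mimicking student, means no linear relation between the two representations, even though the logits themselves are highly correlated.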
AP Landscape. Distilling an object detector at either the feature level or the logit level is a high-dimensional non-convex optimization problem, which is easy in practice but hard in theory. To better understand the behavior of logit mimicking and feature imitation, we present a new visualization method, termed the AP landscape, especially designed for object detection to observe the AP changes caused by minute perturbations of the learnt feature representations. A canonical approach was taken by the authors of [78], who studied loss surface visualization by linearly interpolating the parameters of two networks.
In our visualization, we are particularly curious about the empirical characterization of the feature representations and how they affect the final performance. Considering two feature representations M_f and M_l, learnt by detectors trained with feature imitation and logit mimicking respectively, we visualize the AP landscapes within the 2D projected space M_f ⊕ M_l. We use two scalar parameters α and β to obtain a new feature representation as the weighted sum M(α, β) = αM_f + βM_l. Note that when α = 0 and β = 1, the feature representations are those predicted by the logit mimicking method, and inversely those of feature imitation when α = 1 and β = 0. Then, we feed M(α, β) to the downstream heads and plot the final AP score. Due to the computational burden, we set α, β ∈ [−0.5, 1.5] to visualize the 2D AP landscapes. From Fig. 7, we first see that logit mimicking learns robust feature representations, i.e., the red pentagram at (0, 1), which is surrounded by a flat and well-performing region of AP scores. Second, we observe that the GI imitation produces a much sharper AP landscape than logit mimicking. We attribute the landscape sharpness of the GI imitation to the hard ℓ2 loss supervision. In this case, it is hard for the student to imitate the high-level and advanced feature representations of the teacher, which corresponds to a heavy detector with a longer training schedule and higher accuracy. On the contrary, logit mimicking gives the feature representations much more liberty to learn, leading to better generalization. As shown in Fig. 8, logit mimicking can also reduce the optimization difficulty in the early training stage, while feature imitation converges more slowly and generalizes poorly in the early training stage.
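Each probe point of this landscape is just a linear blend of the two feature maps; a minimal sketch (our naming) is below. In the actual experiment, each blend would be fed to the frozen detection heads to obtain one AP score over the grid α, β ∈ [−0.5, 1.5]:

```python
import numpy as np

def blended_features(m_f, m_l, alpha, beta):
    """One probe point of the AP landscape: M(alpha, beta) = alpha*M_f + beta*M_l.
    (alpha, beta) = (1, 0) recovers the feature-imitation representation and
    (0, 1) the logit-mimicking one."""
    return alpha * np.asarray(m_f, float) + beta * np.asarray(m_l, float)
```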
Summary. Based on the above results and observations, we can draw the following conclusions:
• Logit mimicking can outperform feature imitation in object detection when the localization knowledge is explicitly distilled.
• Feature imitation can increase the consistency of the feature representations between the teacher-student pair, but comes with drawbacks such as less robust features and slow training convergence. Logit mimicking with the selective region distillation can significantly increase the consistency of the logits between the teacher-student pair, keep the learning liberty of the features, and thereby speed up the training process and benefit the KD performance more. This indicates that the consistency of feature representations between the teacher-student pair is not the crucial factor in improving KD performance.
Comparison with the State-of-the-Arts
We compare our LD with the state-of-the-art dense object detectors by using our LD to further boost GFocalV2 [57]. For COCO val2017, since most previous works use the ResNet-50-FPN backbone with the single-scale 1× training schedule (12 epochs) for validation, we also report results under this setting for a fair comparison. For COCO test-dev 2019, following a previous work [57], the LD models with the 1333 × [480 : 960] multi-scale 2× training schedule (24 epochs) are included. The training is carried out on a machine node with 8 GPUs, with a batch size of 2 per GPU and an initial learning rate of 0.01 for a fair comparison. During inference, single-scale testing (1333 × 800 resolution) is adopted. For the different students ResNet-50, ResNet-101 and ResNeXt-101-32x4d-DCN [79], [80], we choose different networks ResNet-101, ResNet-101-DCN and Res2Net-101-DCN [81] as their respective teachers. Tab. 8 reports the quantitative results. It can be seen that our LD improves the AP score of the SOTA GFocalV2 by +1.6 and the AP75 score by +1.8 when using the ResNet-50-FPN backbone. When using ResNet-101-FPN and ResNeXt-101-32x4d-DCN with multi-scale 2× training, we achieve the highest AP scores, 47.1 and 50.5, which outperform all existing dense object detectors under the same backbone, neck and test settings. More importantly, our LD does not introduce any additional network parameters or computational overhead and hence guarantees exactly the same inference speed as GFocalV2.
CONCLUSION
In this paper, we propose a flexible localization distillation for dense object detection and a selective region distillation based on a new valuable localization region. We show that 1) logit mimicking can be better than feature imitation; and 2) the selective region distillation for transferring the classification and localization knowledge is important when distilling object detectors. We hope our method can provide new research intuitions for the object detection community to develop better distillation strategies. In the future, the applications of LD to sparse object detectors (the DETR [87] series), heterogeneous detector pairs, and other relevant fields, e.g., instance segmentation, object tracking and 3D object detection, warrant further research. Besides, since our LD shares an equivalent optimization effect with classification KD, some improved KD methods may also bring gains to LD, e.g., Relational KD [23], Self-KD [88], [89], Teacher Assistant KD [24], and Decoupled KD [90]. Cross-architecture distillation using recent state-of-the-art classification models [91], [92], [93], [94], [95] as teachers is also an interesting direction to explore.

Notation. Let g_i = [g_1, g_2, ..., g_n] be the one-hot ground-truth vector whose element at position i is 1 and 0 otherwise, and let e = [e_1, e_2, ..., e_n] ∈ R^n be the uniformly discretized variable for the regression range [e_min, e_max].
The gradient of the cross-entropy (CE) loss w.r.t. one of the logits z_i ∈ z^S, i ∈ {1, 2, ..., n}, can be represented as:

∂L_CE/∂z_i = p_i − g_i,    (10)
where p_i is the predicted class probability at location i and z^S is the logit vector produced by the student network. The gradient of the KD loss along with the CE loss w.r.t. the logit z_i ∈ z^S can be represented as:

∂L_KD/∂z_i = γ(p_i − g_i) + (λ/τ)(p_i^τ − q_i^τ),    (11)
where γ and λ are the CE and KD loss weights and τ is the temperature. We follow the notations in [70], and denote ∂L_CE/∂z_i by ∂_i and ∂L_KD/∂z_i by ∂_i^KD. The ratio of Eq. 11 to Eq. 10 indicates that KD performs gradient rescaling to the CE loss in the logits space.

Definition 1. Let p ∈ R^n be a predicted probability vector of a network and let M_i > 0, i ∈ {1, 2, ..., n}, be predefined thresholds. p is called M_i-well-performed for a task T if the distance from p to its corresponding ground-truth vector g_i is bounded by M_i.

Lemma 1. If two predicted probability vectors p, q are respectively M_i-well-performed and M_j-well-performed for the integer position classification with ground-truth vectors g_i, g_j, then their linear combination u_1 p + u_2 q is M-well-performed for the float point number position localization with ground-truth value y = u_1 e_i + u_2 e_j, where M = max{M_i, M_j} and u_1 + u_2 = 1.
Proof. By Def. 1, the two distances satisfy d(p, g_i) ≤ M_i and d(q, g_j) ≤ M_j, where g_i ≠ g_j. Note that d(·, ·) here can be an arbitrary distance metric, e.g., the l2 distance.

A float point number position localization requires a probability, which can be linearly interpolated as l = u_1 p + u_2 q, and its ground-truth vector is g = u_1 g_i + u_2 g_j.
Then we get

d(l, g) = d(u_1 p + u_2 q, u_1 g_i + u_2 g_j)    (12)
        ≤ d(u_1 p + u_2 q, u_1 g_i + u_2 q) + d(u_1 g_i + u_2 q, u_1 g_i + u_2 g_j)    (13)
        = d(u_1 p, u_1 g_i) + d(u_2 q, u_2 g_j)    (14)
        ≤ u_1 M_i + u_2 M_j    (15)
        ≤ max{M_i, M_j}    (16)
        = M.    (17)

Hence the network is M-well-performed for the float point number position localization.
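The gradient expressions in Eqs. (10) and (11) can also be verified numerically; the sketch below (our naming) checks the analytic CE gradient against a finite-difference gradient and forms the combined CE + KD gradient:

```python
import numpy as np

def softmax(z, tau=1.0):
    z = np.asarray(z, float) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_grad(z, g):
    """Eq. (10): gradient of the CE loss w.r.t. the logits is p - g."""
    return softmax(z) - np.asarray(g, float)

def kd_grad(z, z_t, g, gamma=1.0, lam=1.0, tau=10.0):
    """Eq. (11): gamma*(p - g) + (lam/tau)*(p^tau - q^tau), where q^tau is the
    teacher's softened prediction."""
    return gamma * ce_grad(z, g) + (lam / tau) * (softmax(z, tau) - softmax(z_t, tau))
```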
Lemma 2. If l is a localization probability vector with ground-truth value y = u_1 e_i + u_2 e_j, where u_1 + u_2 = 1, then l can be decomposed into two classification probabilities p and q with ground-truth vectors g_i and g_j.
Proof. Let l ∈ R n be a predicted localization probability and g be its ground-truth vector. It is easy to decompose g into two integer position ground-truth vectors g i and g j , satisfying g = u 1 g i + u 2 g j .
Existence of the decomposition of l:
To decompose l into two classification probabilities p and q satisfying l = u_1 p + u_2 q, we solve the following linear equations:

AX = b  ⟺  Σ_i p_i = 1,  Σ_i q_i = 1,  u_1 p_1 + u_2 q_1 = l_1,  u_1 p_2 + u_2 q_2 = l_2,  ...,  u_1 p_n + u_2 q_n = l_n,

where X = (p_1, p_2, ..., p_n, q_1, q_2, ..., q_n)^T, and the augmented matrix (A, b) is given by:

| 1    1    ...  1    | 0    0    ...  0    | 1   |
| 0    0    ...  0    | 1    1    ...  1    | 1   |
| u_1  0    ...  0    | u_2  0    ...  0    | l_1 |
| 0    u_1  ...  0    | 0    u_2  ...  0    | l_2 |
| ...                 | ...                 | ... |
| 0    0    ...  u_1  | 0    0    ...  u_2  | l_n |,    (20)

and we obtain that the rank of the coefficient matrix A equals the rank of the augmented matrix (A, b), which is n + 1. Note that n + 1 < 2n when n > 1. Thus the above linear equations have infinitely many solutions.
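The rank argument can be checked numerically for a small n; the sketch below (our naming) builds A and b and confirms rank(A) = rank(A, b) = n + 1 < 2n, so the system is consistent and underdetermined:

```python
import numpy as np

def decomposition_system(l, u1, u2):
    """Augmented linear system for splitting a localization probability l into
    two classification probabilities p, q with l = u1*p + u2*q. Unknowns are
    X = (p_1..p_n, q_1..q_n); rows encode sum(p) = 1, sum(q) = 1, and
    u1*p_i + u2*q_i = l_i for each i."""
    l = np.asarray(l, float)
    n = l.size
    A = np.zeros((n + 2, 2 * n))
    A[0, :n] = 1.0            # sum of p equals 1
    A[1, n:] = 1.0            # sum of q equals 1
    for i in range(n):
        A[2 + i, i] = u1
        A[2 + i, n + i] = u2  # u1*p_i + u2*q_i = l_i
    b = np.concatenate(([1.0, 1.0], l))
    return A, b
```

The dependency is visible in code: the sum of the last n rows equals u1 times the first row plus u2 times the second, so exactly one row is redundant.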
The following proposition describes the relation between KD and LD: conducting LD to optimize a localization probability is equivalent to conducting KD to optimize two classification probabilities.

Proposition 1. Let s be the student's predicted probability vector, and let u_1 and u_2 be two constants whose sum is 1. We have: 1) If p and q are two classification probabilities, the LD effect on the linear combination l = u_1 p + u_2 q is equal to the linear combination of the KD effects on p and q. 2) If l is a localization probability, the LD effect on l is equal to the two KD effects on its decomposition p and q.

Proof. We first denote the derivative of the KD loss of the two probabilities s, p w.r.t. a given logit z_i by ∂KD_i^p, and likewise ∂LD_i^l for the LD loss. 1) According to Lemma 1, the linear combination l = u_1 p + u_2 q is well defined, and the derivative of the LD loss of s, l w.r.t. a given logit z_i is given by:
∂LD_i^l = s_i^τ − l_i^τ    (21)
        = u_1 s_i^τ + u_2 s_i^τ − (u_1 p_i^τ + u_2 q_i^τ)    (22)
        = u_1(s_i^τ − p_i^τ) + u_2(s_i^τ − q_i^τ)    (23)
        = u_1 ∂KD_i^p + u_2 ∂KD_i^q.    (24)
2) According to Lemma 2, the decomposition of l exists, which is written as l = u 1 p+u 2 q. Then Eq. 24 still holds.
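Part 1) of the proposition is a direct consequence of the linearity of Eqs. (21)-(24), which a tiny numerical check makes concrete (our naming; the constant factors γ, λ, τ are dropped):

```python
import numpy as np

def ld_grad(s_tau, target_tau):
    """Per-logit distillation gradient with constants dropped:
    s^tau - target^tau, as in Eqs. (21)-(24)."""
    return np.asarray(s_tau, float) - np.asarray(target_tau, float)
```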
A.2 Gradient Rescaling
We first give the lemma in [70].

Lemma 3. Let q_t^τ = p_t^τ + c_t + η, where c_t is the teacher's relative prediction confidence on the ground-truth class t and η is a zero-mean random noise. Then the logit's gradient rescaling factor by applying KD is given by:

E_η[∂_t^KD / ∂_t] = E_η[(Σ_{i≠t} ∂_i^KD) / (Σ_{i≠t} ∂_i)] = γ + (λ/τ) · c_t / (1 − p_t).    (25)
Next, we give a corollary of Lemma 3, which shows that LD performs gradient rescaling to the distribution focal loss (DFL) [12] in the logits space.

Corollary 1. Let q_i^τ = p_i^τ + c_i + η_i, where c_i is the teacher's relative prediction confidence at position i and η_i is a zero-mean random noise. Then the logit's gradient rescaling factor to DFL by applying LD is given by:

E_η[∂_i^LD / ∂_i] = E_η[(Σ_{s≠i} ∂_s^LD) / (Σ_{s≠i} ∂_s)] = γ + (λ/τ) · c_i / (u_i − p_i),    (26)

where ∂_i^LD denotes the gradient of the LD loss along with DFL w.r.t. the logit z_i, u_i and u_j are two constants whose sum is 1, γ and λ are the loss weights of DFL and the LD loss respectively, and τ is the temperature.
Proof. Following [12], DFL is defined as the linear combination of two CE losses at positions i and j:

L_DFL = u_i H(p, g_i) + u_j H(p, g_j),    (27)
where g_i ∈ {0, 1}^n is the ground-truth label vector whose value is 1 at position i and 0 otherwise. One can easily get the gradient of DFL w.r.t. the logit z_i:

∂L_DFL/∂z_i = u_i(p_i − g_i) + u_j p_i = p_i − u_i,    (28)
and we still use the notation ∂_i to represent ∂L_DFL/∂z_i. With LD, the total loss is given by:

L_LD = γ(u_i H(p, g_i) + u_j H(p, g_j)) + λ H(p^τ, q^τ).    (29)

The gradient of the LD loss along with DFL w.r.t. the logit z_i ∈ z^S can be represented as:

∂L_LD/∂z_i = γ u_i(p_i − g_i) + γ u_j p_i + (λ/τ)(p_i^τ − q_i^τ),    (30)
and we still denote ∂ LD i = ∂LLD ∂zi . According to Lemma 3, we have the ratio of Eq. 30 and Eq. 28,
E η ∂ LD i ∂ i = γu i p i − g i p i − u i + γ u j p i p i − u i − λ τ c i p i − u i (31) = γ + λ τ c i u i − p i .(32)
Thus, the sum of the incorrect position gradients is given by:

Σ_{s≠i} ∂_s^LD = γ u_i Σ_{s≠i} p_s + γ u_j Σ_{s≠i,j} p_s + γ u_j (p_j − g_j) + (λ/τ) Σ_{s≠i} (p_s^τ − q_s^τ)
              = γ u_i (g_i − p_i) + γ u_j (g_i − p_i) − γ u_j g_j + (λ/τ)(q_i^τ − p_i^τ)
              = γ u_i (g_i − p_i) − γ u_j p_i + (λ/τ)(q_i^τ − p_i^τ)
              = −∂_i^LD.    (33)
A similar argument applies to ∂_s, which completes the proof.
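Eqs. (27)-(28) can also be verified numerically; the sketch below (our naming) checks the analytic DFL gradient against finite differences of the loss:

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, float)
    e = np.exp(z - z.max())
    return e / e.sum()

def dfl_grad(z, i, j, u_i, u_j):
    """Analytic gradient of DFL (Eq. 27) w.r.t. the logits:
    p - (u_i * e_i + u_j * e_j); at position i it reduces to p_i - u_i,
    matching Eq. (28). Assumes u_i + u_j = 1."""
    grad = softmax(z)
    grad[i] -= u_i
    grad[j] -= u_j
    return grad
```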
Fig. 1. Existing KD pipelines for object detection.

Fig. 2. Bottom edge for "elephant" and right edge for "surfboard" are ambiguous to locate.

Fig. 3. Illustration of localization distillation (LD) for an edge e ∈ B.

Tab. 1(b). LD vs. Pseudo BBox Regression [7]: the localization knowledge can be more efficiently transferred by our LD compared to the pseudo bbox regression. The teacher is ResNet-101 and the student is ResNet-50.

Tab. 1(c). Role of γ in VLR: conducting LD on the valuable localization region has a positive effect on performance. We set γ = 0.25 by default. The teacher is ResNet-101 and the student is ResNet-50.

Fig. 4. Visual comparisons of SOTA feature imitation and our LD. We show the average L1 error of classification scores and box probability distributions between teacher and student at the P4, P5, P6 and P7 FPN levels. The teacher is ResNet-101 and the student is ResNet-50. The results are evaluated on MS COCO val2017.

Fig. 5. Visual comparisons between the state-of-the-art feature imitation and our LD. We show the per-location L1 error summation of the localization head logits between the teacher and the student at the P5 (first row) and P6 (second row) FPN levels. The teacher is ResNet-101 and the student is ResNet-50. We can see that compared to the GI imitation [27], our method ("Main LD + VLR LD") can significantly reduce the errors for almost all the locations. Darker is better. Best viewed in color.

Fig. 6. Average teacher-student error on (left) deep feature representations, (middle) class logits, and (right) bbox logits. "Ours" denotes "Main LD + VLR LD + Main KD". The curves are evaluated on MS COCO val2017.

Fig. 7. The 2D contour plots of AP landscapes in feature subspace. The AP landscapes are evaluated on MS COCO val2017.

Fig. 8. The average precision (AP) during the early training stage. Feature imitation significantly slows down the convergence and gets a sub-optimal generalization. Logit mimicking (Ours) can reduce the training difficulty in the early training stage.
Algorithm 1 Valuable Localization Region
Require: A set of anchor boxes B^a = {B^a_i} and a set of ground-truth boxes B^gt = {B^gt_j}, 1 ≤ i ≤ I, 1 ≤ j ≤ J; positive threshold α_pos of label assignment.
Ensure: V = {v_ij}_{I×J}, v_ij ∈ {0, 1}, encoding the final locations of the VLR, where 1 denotes VLR and 0 indicates ignore.
1: Compute the DIoU matrix between B^a and B^gt …
TABLE 1
Ablations. We show ablation experiments for LD and VLR on MS COCO val2017. (a) Temperature τ:

τ   AP   AP50 AP75 APS  APM  APL
-   40.1 58.2 43.1 23.3 44.4 52.5
1   40.3 58.2 43.4 22.4 44.0 52.4
5   40.9 58.2 44.3 23.2 45.0 53.2
10  41.1 58.7 44.9 23.8 44.9 53.6
15  40.7 58.5 44.2 23.5 44.3 53.3
20  40.5 58.3 43.7 23.8 44.1 53.5
TABLE 2
Evaluation of selective region distillation for KD and our LD.
LD for Lightweight Detectors. Tab. 3 reports the results of our distillation scheme ("Main LD + VLR LD + Main KD" on COCO), where a series of lightweight students are distilled, including ResNet-18, ResNet-34, and ResNet-50. For all given students, our LD can stably improve the detection performance without any bells and whistles. From these results, we can see that our LD improves the students ResNet-18, ResNet-34, ResNet-50 by +1.7, +2.1, +2.0 in AP, and +2.2, +2.4, +2.5 in AP75, respectively.

TABLE 3
Quantitative results of LD for lightweight detectors. The teacher is ResNet-101. The results are reported on MS COCO val2017.

Student    LD  AP    AP50  AP75  APS   APM   APL
ResNet-18      35.8  53.1  38.2  18.9  38.9  47.9
ResNet-18  ✓   37.5  54.7  40.4  20.2  41.2  49.4
ResNet-34      38.9  56.6  42.2  21.5  42.8  51.4
ResNet-34  ✓   41.0  58.6  44.6  23.2  45.0  54.2
ResNet-50      40.1  58.2  43.1  23.3  44.4  52.5
ResNet-50  ✓   42.1  60.3  45.6  24.5  46.2  54.8

TABLE 4
Quantitative results of LD on various popular dense object detectors. The teacher is ResNet-101 and the student is ResNet-50. The results are reported on MS COCO val2017.

Student         LD  AP    AP50  AP75  APS   APM   APL
RetinaNet [63]      36.9  54.3  39.8  21.2  40.8  48.4
RetinaNet [63]  ✓   39.0  56.4  42.4  23.1  43.2  51.1
FCOS [17]           38.6  57.2  41.5  22.4  42.2  49.8
FCOS [17]       ✓   40.6  58.4  44.1  24.3  44.1  52.3
ATSS [77]           39.2  57.3  42.4  22.7  43.1  51.5
ATSS [77]       ✓   41.6  59.3  45.3  25.2  45.2  53.3
TABLE 5
Quantitative results of rotated LD on the popular arbitrary-oriented object detectors. The teacher is ResNet-34 and the student is ResNet-18. The results are reported on the validation set of DOTA-v1.0.

Student            AP    AP50  AP60  AP70  AP80  AP90
R-RetinaNet [63]   33.7  58.0  54.5  42.3  22.9  4.7
LD (ours)          39.1  63.8  61.1  48.8  28.7  8.8
GWD [38]           37.1  63.1  60.1  46.7  24.7  6.2
LD (ours)          40.2  66.4  63.6  50.3  28.2  8.5
TABLE 6
Logit Mimicking vs. Feature Imitation. "Ours" means we use the selective region distillation, i.e., "Main LD + VLR LD + Main KD". "*" denotes we remove the "Main KD". The teacher is ResNet-101 and the student is ResNet-50 [75]. The results are reported on MS COCO val2017.

Method                 AP    AP50  AP75  APS   APM   APL
Baseline (GFocal [12]) 40.1  58.2  43.1  23.3  44.4  52.5
FitNets [2]            40.7  58.6  44.0  23.7  44.4  53.2
Inside GT Box          40.7  58.6  44.2  23.1  44.5  53.5
Main Region            41.1  58.7  44.4  24.1  44.6  53.6
Fine-Grained [9]       41.1  58.8  44.8  23.3  45.4  53.1
DeFeat [28]            40.8  58.6  44.2  24.3  44.6  53.7
GI Imitation [27]      41.5  59.6  45.2  24.3  45.7  53.6
Ours                   42.1  60.3  45.6  24.5  46.2  54.8
Ours + FitNets         42.1  59.9  45.7  25.0  46.3  54.4
Ours + Inside GT Box   42.2  60.0  45.9  24.3  46.3  55.0
Ours + Main Region     42.1  60.0  45.7  24.6  46.3  54.7
Ours + Fine-Grained    42.4  60.3  45.9  24.7  46.5  55.4
Ours* + Fine-Grained   42.1  59.7  45.6  24.8  46.1  54.8
Ours + DeFeat          42.2  60.0  45.8  24.7  46.1  54.4
Ours + GI Imitation    42.4  60.3  46.2  25.0  46.6  54.5
TABLE 7
The average Pearson correlation coefficient between the teacher-student pair. "GI": GI imitation. "Ours": our logit mimicking scheme with the selective region distillation. The results are evaluated on MS COCO val2017.

               w/o distillation  GI      Ours     Ours + GI
deep features  -0.0042           0.8175  -0.0031  0.8373
bbox logits     0.9222           0.9326   0.9733  0.9745
TABLE 8
Comparison with state-of-the-art methods on COCO val2017 and test-dev 2019. TS: Training Schedule. '1×': single-scale training 12 epochs. '2×': multi-scale training 24 epochs.
REFERENCES

[1] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
[2] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, "FitNets: Hints for thin deep nets," in Int. Conf. Learn. Represent., 2015.
[3] S. Zagoruyko and N. Komodakis, "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer," in Int. Conf. Learn. Represent., 2017.
[4] J. Kim, S. Park, and N. Kwak, "Paraphrasing complex network: Network compression via factor transfer," in Adv. Neural Inform. Process. Syst., 2018, pp. 2765-2774.
[5] X. Jin, B. Peng, Y. Wu, Y. Liu, J. Liu, D. Liang, J. Yan, and X. Hu, "Knowledge distillation via route constrained optimization," in Int. Conf. Comput. Vis., 2019.
[6] G.-H. Wang, Y. Ge, and J. Wu, "Distilling knowledge by mimicking features," IEEE Trans. Pattern Anal. Mach. Intell., 2021.
[7] G. Chen, W. Choi, X. Yu, T. Han, and M. Chandraker, "Learning efficient object detection models with knowledge distillation," in Adv. Neural Inform. Process. Syst., 2017.
[8] R. Sun, F. Tang, X. Zhang, H. Xiong, and Q. Tian, "Distilling object detectors with task adaptive regularization," arXiv preprint arXiv:2006.13108, 2020.
[9] T. Wang, L. Yuan, X. Zhang, and J. Feng, "Distilling object detectors with fine-grained feature imitation," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
[10] L. Zhang and K. Ma, "Improve object detection with feature-based knowledge distillation: Towards accurate and efficient detectors," in Int. Conf. Learn. Represent., 2020.
[11] Z. Kang, P. Zhang, X. Zhang, J. Sun, and N. Zheng, "Instance-conditional knowledge distillation for object detection," in Adv. Neural Inform. Process. Syst., 2021.
[12] X. Li, W. Wang, L. Wu, S. Chen, X. Hu, J. Li, J. Tang, and J. Yang, "Generalized Focal Loss: Learning qualified and distributed bounding boxes for dense object detection," in Adv. Neural Inform. Process. Syst., 2020.
Offset bin classification network for accurate object detection. H Qiu, H Li, Q Wu, H Shi, IEEE Conf. Comput. Vis. Pattern Recog. H. Qiu, H. Li, Q. Wu, and H. Shi, "Offset bin classification network for accurate object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2020.
You only look once: Unified, real-time object detection. J Redmon, S Divvala, R Girshick, A Farhadi, IEEE Conf. Comput. Vis. Pattern Recog. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2016.
Ssd: Single shot multibox detector. W Liu, D Anguelov, D Erhan, C Szegedy, S Reed, C.-Y Fu, A C Berg, Eur. Conf. Comput. Vis. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "Ssd: Single shot multibox detector," in Eur. Conf. Comput. Vis., 2016.
Faster R-CNN: Towards real-time object detection with region proposal networks. S Ren, K He, R Girshick, J Sun, Adv. Neural Inform. Process. Syst. S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Adv. Neural Inform. Process. Syst., 2015.
FCOS: Fully convolutional one-stage object detection. Z Tian, C Shen, H Chen, T He, Int. Conf. Comput. Vis. Z. Tian, C. Shen, H. Chen, and T. He, "FCOS: Fully convolutional one-stage object detection," in Int. Conf. Comput. Vis., 2019.
Automatic differentiation in pytorch. A Paszke, S Gross, S Chintala, G Chanan, E Yang, Z Devito, Z Lin, A Desmaison, L Antiga, A Lerer, Adv. Neural Inform. Process. Syst. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. Devito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differ- entiation in pytorch," in Adv. Neural Inform. Process. Syst., 2017.
Jittor: A novel deep learning framework with meta-operators and unified graph execution. S.-M Hu, D Liang, G.-Y Yang, G.-W Yang, W.-Y Zhou, Science China Information Sciences. 63222103S.-M. Hu, D. Liang, G.-Y. Yang, G.-W. Yang, and W.-Y. Zhou, "Jittor: A novel deep learning framework with meta-operators and unified graph execution," Science China Information Sciences, vol. 63, no. 222103, pp. 1-21, 2020.
Localization distillation for dense object detection. Z Zheng, R Ye, P Wang, D Ren, W Zuo, Q Hou, M Cheng, IEEE Conf. Comput. Vis. Pattern Recog. Z. Zheng, R. Ye, P. Wang, D. Ren, W. Zuo, Q. Hou, and M. Cheng, "Localization distillation for dense object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2022.
Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. S Zagoruyko, N Komodakis, Int. Conf. Learn. Represent. S. Zagoruyko and N. Komodakis, "Paying more attention to atten- tion: Improving the performance of convolutional neural networks via attention transfer," in Int. Conf. Learn. Represent., 2017.
Densely distilled flow-based knowledge transfer in teacher-student framework for image classification. J.-H Bae, D Yeo, J Yim, N.-S Kim, C.-S Pyo, J Kim, IEEE Transactions on Image Processing. 29J.-H. Bae, D. Yeo, J. Yim, N.-S. Kim, C.-S. Pyo, and J. Kim, "Densely distilled flow-based knowledge transfer in teacher-student frame- work for image classification," IEEE Transactions on Image Process- ing, vol. 29, pp. 5698-5710, 2020.
Relational knowledge distillation. W Park, D Kim, Y Lu, M Cho, IEEE Conf. Comput. Vis. Pattern Recog. W. Park, D. Kim, Y. Lu, and M. Cho, "Relational knowledge distillation," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Improved knowledge distillation via teacher assistant. S I Mirzadeh, M Farajtabar, A Li, N Levine, A Matsukawa, H Ghasemzadeh, Association for the Advancement of Artificial Intelligence. S. I. Mirzadeh, M. Farajtabar, A. Li, N. Levine, A. Matsukawa, and H. Ghasemzadeh, "Improved knowledge distillation via teacher assistant," in Association for the Advancement of Artificial Intelligence, 2020.
Densely guided knowledge distillation using multiple teacher assistants. W Son, J Na, J Choi, W Hwang, Int. Conf. Comput. Vis. W. Son, J. Na, J. Choi, and W. Hwang, "Densely guided knowledge distillation using multiple teacher assistants," in Int. Conf. Comput. Vis., 2021.
Mimicking very efficient network for object detection. Q Li, S Jin, J Yan, IEEE Conf. Comput. Vis. Pattern Recog. Q. Li, S. Jin, and J. Yan, "Mimicking very efficient network for object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2017.
General instance distillation for object detection. X Dai, Z Jiang, Z Wu, Y Bao, Z Wang, S Liu, E Zhou, IEEE Conf. Comput. Vis. Pattern Recog. X. Dai, Z. Jiang, Z. Wu, Y. Bao, Z. Wang, S. Liu, and E. Zhou, "General instance distillation for object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2021.
Distilling object detectors via decoupled features. J Guo, K Han, Y Wang, H Wu, X Chen, C Xu, C Xu, IEEE Conf. Comput. Vis. Pattern Recog. J. Guo, K. Han, Y. Wang, H. Wu, X. Chen, C. Xu, and C. Xu, "Distilling object detectors via decoupled features," in IEEE Conf. Comput. Vis. Pattern Recog., 2021.
Distilling object detectors with feature richness. D Zhixing, R Zhang, M Chang, S Liu, T Chen, Y Chen, Adv. Neural Inform. Process. Syst. D. Zhixing, R. Zhang, M. Chang, S. Liu, T. Chen, Y. Chen et al., "Distilling object detectors with feature richness," in Adv. Neural Inform. Process. Syst., 2021.
Knowledge distillation for object detection via rank mimicking and predictionguided feature imitation. G Li, X Li, Y Wang, S Zhang, Y Wu, D Liang, Association for the Advancement of Artificial Intelligence. G. Li, X. Li, Y. Wang, S. Zhang, Y. Wu, and D. Liang, "Knowledge distillation for object detection via rank mimicking and prediction- guided feature imitation," in Association for the Advancement of Artificial Intelligence, 2022.
Locnet: Improving localization accuracy for object detection. S Gidaris, N Komodakis, IEEE Conf. Comput. Vis. Pattern Recog. S. Gidaris and N. Komodakis, "Locnet: Improving localization accuracy for object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2016.
Region proposal by guided anchoring. J Wang, K Chen, S Yang, C C Loy, D Lin, IEEE Conf. Comput. Vis. Pattern Recog. J. Wang, K. Chen, S. Yang, C. C. Loy, and D. Lin, "Region proposal by guided anchoring," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Side-aware boundary localization for more precise object detection. J Wang, W Zhang, Y Cao, K Chen, J Pang, T Gong, J Shi, C C Loy, D Lin, Eur. Conf. Comput. Vis. J. Wang, W. Zhang, Y. Cao, K. Chen, J. Pang, T. Gong, J. Shi, C. C. Loy, and D. Lin, "Side-aware boundary localization for more precise object detection," in Eur. Conf. Comput. Vis., 2020.
Feature selective anchor-free module for single-shot object detection. C Zhu, Y He, M Savvides, IEEE Conf. Comput. Vis. Pattern Recog. C. Zhu, Y. He, and M. Savvides, "Feature selective anchor-free module for single-shot object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Grid R-CNN. X Lu, B Li, Y Yue, Q Li, J Yan, IEEE Conf. Comput. Vis. Pattern Recog. X. Lu, B. Li, Y. Yue, Q. Li, and J. Yan, "Grid R-CNN," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Deep feature pyramid reconfiguration for object detection. T Kong, F Sun, C Tan, H Liu, W Huang, Eur. Conf. Comput. Vis. T. Kong, F. Sun, C. Tan, H. Liu, and W. Huang, "Deep feature pyramid reconfiguration for object detection," in Eur. Conf. Comput. Vis., 2018.
Scrdet: Towards more robust detection for small, cluttered and rotated objects. X Yang, J Yang, J Yan, Y Zhang, T Zhang, Z Guo, X Sun, K Fu, Int. Conf. Comput. Vis. X. Yang, J. Yang, J. Yan, Y. Zhang, T. Zhang, Z. Guo, X. Sun, and K. Fu, "Scrdet: Towards more robust detection for small, cluttered and rotated objects," in Int. Conf. Comput. Vis., 2019.
Rethinking rotated object detection with gaussian wasserstein distance loss. X Yang, J Yan, Q Ming, W Wang, X Zhang, Q Tian, International Conference on Machine Learning (ICML). 2021X. Yang, J. Yan, Q. Ming, W. Wang, X. Zhang, and Q. Tian, "Rethinking rotated object detection with gaussian wasserstein distance loss," in International Conference on Machine Learning (ICML), 2021.
Learning high-precision bounding box for rotated object detection via kullback-leibler divergence. X Yang, X Yang, J Yang, Q Ming, W Wang, Q Tian, J Yan, Adv. Neural Inform. Process. Syst. X. Yang, X. Yang, J. Yang, Q. Ming, W. Wang, Q. Tian, and J. Yan, "Learning high-precision bounding box for rotated object detection via kullback-leibler divergence," in Adv. Neural Inform. Process. Syst., 2021.
Varifocalnet: An iou-aware dense object detector. H Zhang, Y Wang, F Dayoub, N Sünderhauf, IEEE Conf. Comput. Vis. Pattern Recog. H. Zhang, Y. Wang, F. Dayoub, and N. Sünderhauf, "Varifocalnet: An iou-aware dense object detector," in IEEE Conf. Comput. Vis. Pattern Recog., 2021.
Object detection with discriminatively trained part-based models. P F Felzenszwalb, R B Girshick, D Mcallester, D Ramanan, IEEE Trans. Pattern Anal. Mach. Intell. 329P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object detection with discriminatively trained part-based mod- els," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1627- 1645, 2009.
Livestock detection in aerial images using a fully convolutional network. L Han, P Tao, R R Martin, Computational Visual Media. 52L. Han, P. Tao, and R. R. Martin, "Livestock detection in aerial images using a fully convolutional network," Computational Visual Media, vol. 5, no. 2, pp. 221-228, 2019.
Cascade R-CNN: Delving into high quality object detection. Z Cai, N Vasconcelos, IEEE Conf. Comput. Vis. Pattern Recog. Z. Cai and N. Vasconcelos, "Cascade R-CNN: Delving into high quality object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2018.
Libra R-CNN: Towards balanced learning for object detection. J Pang, K Chen, J Shi, H Feng, W Ouyang, D Lin, IEEE Conf. Comput. Vis. Pattern Recog. J. Pang, K. Chen, J. Shi, H. Feng, W. Ouyang, and D. Lin, "Libra R-CNN: Towards balanced learning for object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Dynamic R-CNN: Towards high quality object detection via dynamic training. H Zhang, H Chang, B Ma, N Wang, X Chen, Eur. Conf. Comput. Vis. H. Zhang, H. Chang, B. Ma, N. Wang, and X. Chen, "Dynamic R-CNN: Towards high quality object detection via dynamic train- ing," in Eur. Conf. Comput. Vis., 2020.
Yolo9000: better, faster, stronger. J Redmon, A Farhadi, IEEE Conf. Comput. Vis. Pattern Recog. J. Redmon and A. Farhadi, "Yolo9000: better, faster, stronger," in IEEE Conf. Comput. Vis. Pattern Recog., 2017.
Yolov3: An incremental improvement. J Redmon, A Farhadi, arXiv:1804.02767arXiv preprintJ. Redmon and A. Farhadi, "Yolov3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
Yolov4: Optimal speed and accuracy of object detection. A Bochkovskiy, C.-Y. Wang, H.-Y M Liao, arXiv:2004.10934arXiv preprintA. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "Yolov4: Op- timal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
DSSD: Deconvolutional single shot detector. C.-Y Fu, W Liu, A Ranga, A Tyagi, A C Berg, arXiv:1701.06659C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg, "DSSD: Deconvolutional single shot detector," arXiv:1701.06659, 2017.
Scale-transferrable object detection. P Zhou, B Ni, C Geng, J Hu, Y Xu, IEEE Conf. Comput. Vis. Pattern Recog. P. Zhou, B. Ni, C. Geng, J. Hu, and Y. Xu, "Scale-transferrable object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2018.
UnitBox: an advanced object detection network. J Yu, Y Jiang, Z Wang, Z Cao, T Huang, ACM Int. Conf. Multimedia. J. Yu, Y. Jiang, Z. Wang, Z. Cao, and T. Huang, "UnitBox: an advanced object detection network," in ACM Int. Conf. Multimedia, 2016.
Generalized Intersection over Union: A metric and a loss for bounding box regression. H Rezatofighi, N Tsoi, J Gwak, A Sadeghian, I Reid, S Savarese, IEEE Conf. Comput. Vis. Pattern Recog. H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, "Generalized Intersection over Union: A metric and a loss for bounding box regression," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Distance-IoU Loss: Faster and better learning for bounding box regression. Z Zheng, P Wang, W Liu, J Li, R Ye, D Ren, Association for the Advancement of Artificial Intelligence. Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, and D. Ren, "Distance-IoU Loss: Faster and better learning for bounding box regression," in Association for the Advancement of Artificial Intelligence, 2020.
Enhancing geometric factors in model learning and inference for object detection and instance segmentation. Z Zheng, P Wang, D Ren, W Liu, R Ye, Q Hu, W Zuo, IEEE Transactions on Cybernetics. Z. Zheng, P. Wang, D. Ren, W. Liu, R. Ye, Q. Hu, and W. Zuo, "Enhancing geometric factors in model learning and inference for object detection and instance segmentation," IEEE Transactions on Cybernetics, 2021.
Bounding box regression with uncertainty for accurate object detection. Y He, C Zhu, J Wang, M Savvides, X Zhang, IEEE Conf. Comput. Vis. Pattern Recog. Y. He, C. Zhu, J. Wang, M. Savvides, and X. Zhang, "Bounding box regression with uncertainty for accurate object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Gaussian YOLOv3: An accurate and fast object detector using localization uncertainty for autonomous driving. J Choi, D Chun, H Kim, H.-J Lee, Int. Conf. Comput. Vis. J. Choi, D. Chun, H. Kim, and H.-J. Lee, "Gaussian YOLOv3: An accurate and fast object detector using localization uncertainty for autonomous driving," in Int. Conf. Comput. Vis., 2019.
Generalized focal loss v2: Learning reliable localization quality estimation for dense object detection. X Li, W Wang, X Hu, J Li, J Tang, J Yang, IEEE Conf. Comput. Vis. Pattern Recog. X. Li, W. Wang, X. Hu, J. Li, J. Tang, and J. Yang, "Generalized focal loss v2: Learning reliable localization quality estimation for dense object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2021.
Acquisition of localization confidence for accurate object detection. B Jiang, R Luo, J Mao, T Xiao, Y Jiang, Eur. Conf. Comput. Vis. B. Jiang, R. Luo, J. Mao, T. Xiao, and Y. Jiang, "Acquisition of localization confidence for accurate object detection," in Eur. Conf. Comput. Vis., 2018.
Mask scoring R-CNN. Z Huang, L Huang, Y Gong, C Huang, X Wang, IEEE Conf. Comput. Vis. Pattern Recog. Z. Huang, L. Huang, Y. Gong, C. Huang, and X. Wang, "Mask scoring R-CNN," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Polarmask: Single shot instance segmentation with polar representation. E Xie, P Sun, X Song, W Wang, X Liu, D Liang, C Shen, P Luo, IEEE Conf. Comput. Vis. Pattern Recog. E. Xie, P. Sun, X. Song, W. Wang, X. Liu, D. Liang, C. Shen, and P. Luo, "Polarmask: Single shot instance segmentation with polar representation," in IEEE Conf. Comput. Vis. Pattern Recog., 2020.
Mmrotate: A rotated object detection benchmark using pytorch. Y Zhou, X Yang, G Zhang, J Wang, Y Liu, L Hou, X Jiang, X Liu, J Yan, C Lyu, W Zhang, K Chen, ACM Int. Conf. Multimedia. Y. Zhou, X. Yang, G. Zhang, J. Wang, Y. Liu, L. Hou, X. Jiang, X. Liu, J. Yan, C. Lyu, W. Zhang, and K. Chen, "Mmrotate: A rotated object detection benchmark using pytorch," in ACM Int. Conf. Multimedia, 2022.
Arbitrary-oriented scene text detection via rotation proposals. J Ma, W Shao, H Ye, L Wang, H Wang, Y Zheng, X Xue, IEEE Transactions on Multimedia. 2011J. Ma, W. Shao, H. Ye, L. Wang, H. Wang, Y. Zheng, and X. Xue, "Arbitrary-oriented scene text detection via rotation proposals," IEEE Transactions on Multimedia, vol. 20, no. 11, pp. 3111-3122, 2018.
Focal loss for dense object detection. T.-Y Lin, P Goyal, R Girshick, K He, P Dollár, Int. Conf. Comput. Vis. T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," in Int. Conf. Comput. Vis., 2017.
Learning modulated loss for rotated object detection. W Qian, X Yang, S Peng, Y Guo, J Yan, Association for the Advancement of Artificial Intelligence. W. Qian, X. Yang, S. Peng, Y. Guo, and J. Yan, "Learning modulated loss for rotated object detection," in Association for the Advancement of Artificial Intelligence, 2021.
Arbitrary-oriented object detection with circular smooth label. X Yang, J Yan, Eur. Conf. Comput. Vis. X. Yang and J. Yan, "Arbitrary-oriented object detection with circular smooth label," in Eur. Conf. Comput. Vis., 2020.
TensorFlow: A system for large-scale machine learning. M Abadi, P Barham, J Chen, Z Chen, A Davis, J Dean, M Devin, S Ghemawat, G Irving, M Isard, 12th USENIX symposium on operating systems design and implementation. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., "TensorFlow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265-283.
Piou loss: Towards accurate oriented object detection in complex environments. Z Chen, K Chen, W Lin, J See, H Yu, Y Ke, C Yang, Eur. Conf. Comput. Vis. Z. Chen, K. Chen, W. Lin, J. See, H. Yu, Y. Ke, and C. Yang, "Piou loss: Towards accurate oriented object detection in complex environments," in Eur. Conf. Comput. Vis., 2020.
The kfiou loss for rotated object detection. X Yang, Y Zhou, G Zhang, J Yang, W Wang, J Yan, X Zhang, Q Tian, arXiv:2201.12558arXiv preprintX. Yang, Y. Zhou, G. Zhang, J. Yang, W. Wang, J. Yan, X. Zhang, and Q. Tian, "The kfiou loss for rotated object detection," arXiv preprint arXiv:2201.12558, 2022.
R3det: Refined single-stage detector with feature refinement for rotating object. X Yang, Q Liu, J Yan, A Li, Z Zhang, G Yu, Association for the Advancement of Artificial Intelligence. X. Yang, Q. Liu, J. Yan, A. Li, Z. Zhang, and G. Yu, "R3det: Refined single-stage detector with feature refinement for rotating object," in Association for the Advancement of Artificial Intelligence, 2021.
Understanding and improving knowledge distillation. J Tang, R Shivanna, Z Zhao, D Lin, A Singh, E H Chi, S Jain, arXiv:2002.03532arXiv preprintJ. Tang, R. Shivanna, Z. Zhao, D. Lin, A. Singh, E. H. Chi, and S. Jain, "Understanding and improving knowledge distillation," arXiv preprint arXiv:2002.03532, 2020.
Microsoft coco: Common objects in context. T.-Y Lin, M Maire, S Belongie, J Hays, P Perona, D Ramanan, P Dollár, C L Zitnick, Eur. Conf. Comput. Vis. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft coco: Common objects in context," in Eur. Conf. Comput. Vis., 2014.
The pascal visual object classes (voc) challenge. M Everingham, L Van Gool, C K I Williams, J Winn, A Zisserman, International Journal of Computer Vision. 882M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The pascal visual object classes (voc) challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338, 2010.
Dota: A large-scale dataset for object detection in aerial images. G.-S Xia, X Bai, J Ding, Z Zhu, S Belongie, J Luo, M Datcu, M Pelillo, L Zhang, IEEE Conf. Comput. Vis. Pattern Recog. G.-S. Xia, X. Bai, J. Ding, Z. Zhu, S. Belongie, J. Luo, M. Datcu, M. Pelillo, and L. Zhang, "Dota: A large-scale dataset for object detection in aerial images," in IEEE Conf. Comput. Vis. Pattern Recog., 2018, pp. 3974-3983.
MMDetection: Open mmlab detection toolbox and benchmark. K Chen, J Wang, J Pang, Y Cao, Y Xiong, X Li, S Sun, W Feng, Z Liu, J Xu, Z Zhang, D Cheng, C Zhu, T Cheng, Q Zhao, B Li, X Lu, R Zhu, Y Wu, J Dai, J Wang, J Shi, W Ouyang, C C Loy, D Lin, arXiv:1906.07155arXiv preprintK. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Xu, Z. Zhang, D. Cheng, C. Zhu, T. Cheng, Q. Zhao, B. Li, X. Lu, R. Zhu, Y. Wu, J. Dai, J. Wang, J. Shi, W. Ouyang, C. C. Loy, and D. Lin, "MMDetection: Open mmlab detection toolbox and benchmark," arXiv preprint arXiv:1906.07155, 2019.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, IEEE Conf. Comput. Vis. Pattern Recog. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conf. Comput. Vis. Pattern Recog., 2016.
Feature pyramid networks for object detection. T.-Y Lin, P Dollár, R Girshick, K He, B Hariharan, S Belongie, IEEE Conf. Comput. Vis. Pattern Recog. T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Be- longie, "Feature pyramid networks for object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2017.
Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. S Zhang, C Chi, Y Yao, Z Lei, S Z Li, IEEE Conf. Comput. Vis. Pattern Recog. S. Zhang, C. Chi, Y. Yao, Z. Lei, and S. Z. Li, "Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection," in IEEE Conf. Comput. Vis. Pattern Recog., 2020.
Visualizing the loss landscape of neural nets. H Li, Z Xu, G Taylor, C Studer, T Goldstein, Adv. Neural Inform. Process. Syst. H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein, "Visualizing the loss landscape of neural nets," Adv. Neural Inform. Process. Syst., 2018.
Aggregated residual transformations for deep neural networks. S Xie, R Girshick, P Dollár, Z Tu, K He, IEEE Conf. Comput. Vis. Pattern Recog. S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, "Aggregated residual transformations for deep neural networks," in IEEE Conf. Comput. Vis. Pattern Recog., 2017.
Deformable convnets v2: More deformable, better results. X Zhu, H Hu, S Lin, J Dai, IEEE Conf. Comput. Vis. Pattern Recog. X. Zhu, H. Hu, S. Lin, and J. Dai, "Deformable convnets v2: More deformable, better results," in IEEE Conf. Comput. Vis. Pattern Recog., 2019.
Res2net: A new multi-scale backbone architecture. S.-H Gao, M.-M Cheng, K Zhao, X.-Y Zhang, M.-H Yang, P Torr, IEEE Trans. Pattern Anal. Mach. Intell. 432S.-H. Gao, M.-M. Cheng, K. Zhao, X.-Y. Zhang, M.-H. Yang, and P. Torr, "Res2net: A new multi-scale backbone architecture," IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 2, pp. 652-662, 2021.
Soft anchor-point object detection. C Zhu, F Chen, Z Shen, M Savvides, IEEE Conf. Comput. Vis. Pattern Recog. C. Zhu, F. Chen, Z. Shen, and M. Savvides, "Soft anchor-point object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2020.
Borderdet: Border feature for dense object detection. H Qiu, Y Ma, Z Li, S Liu, J Sun, Eur. Conf. Comput. Vis. H. Qiu, Y. Ma, Z. Li, S. Liu, and J. Sun, "Borderdet: Border feature for dense object detection," in Eur. Conf. Comput. Vis., 2020.
Autoassign: Differentiable label assignment for dense object detection. B Zhu, J Wang, Z Jiang, F Zong, S Liu, Z Li, J Sun, arXiv:2007.03496arXiv preprintB. Zhu, J. Wang, Z. Jiang, F. Zong, S. Liu, Z. Li, and J. Sun, "Autoas- sign: Differentiable label assignment for dense object detection," arXiv preprint arXiv:2007.03496, 2020.
Probabilistic anchor assignment with iou prediction for object detection. K Kim, H S Lee, Eur. Conf. Comput. Vis. K. Kim and H. S. Lee, "Probabilistic anchor assignment with iou prediction for object detection," in Eur. Conf. Comput. Vis., 2020.
OTA: Optimal transport assignment for object detection. Z Ge, S Liu, Z Li, O Yoshie, J Sun, IEEE Conf. Comput. Vis. Pattern Recog. Z. Ge, S. Liu, Z. Li, O. Yoshie, and J. Sun, "OTA: Optimal transport assignment for object detection," in IEEE Conf. Comput. Vis. Pattern Recog., 2021.
End-to-end object detection with transformers. N Carion, F Massa, G Synnaeve, N Usunier, A Kirillov, S Zagoruyko, Eur. Conf. Comput. Vis. N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, "End-to-end object detection with transformers," in Eur. Conf. Comput. Vis., 2020.
Born again neural networks. T Furlanello, Z Lipton, M Tschannen, L Itti, A Anandkumar, International Conference on Machine Learning (ICML). T. Furlanello, Z. Lipton, M. Tschannen, L. Itti, and A. Anandku- mar, "Born again neural networks," in International Conference on Machine Learning (ICML), 2018, pp. 1607-1616.
Be your own teacher: Improve the performance of convolutional neural networks via self distillation. L Zhang, J Song, A Gao, J Chen, C Bao, K Ma, Int. Conf. Comput. Vis. L. Zhang, J. Song, A. Gao, J. Chen, C. Bao, and K. Ma, "Be your own teacher: Improve the performance of convolutional neural networks via self distillation," in Int. Conf. Comput. Vis., 2019, pp. 3713-3722.
Decoupled knowledge distillation. B Zhao, Q Cui, R Song, Y Qiu, J Liang, IEEE Conf. Comput. Vis. Pattern Recog. B. Zhao, Q. Cui, R. Song, Y. Qiu, and J. Liang, "Decoupled knowledge distillation," in IEEE Conf. Comput. Vis. Pattern Recog., 2022.
P2T: Pyramid pooling transformer for scene understanding. Y.-H Wu, Y Liu, X Zhan, M.-M Cheng, IEEE Trans. Pattern Anal. Mach. Intell. Y.-H. Wu, Y. Liu, X. Zhan, and M.-M. Cheng, "P2T: Pyramid pooling transformer for scene understanding," IEEE Trans. Pattern Anal. Mach. Intell., 2022.
Conv2former: A simple transformer-style convnet for visual recognition. Q Hou, C.-Z Lu, M.-M Cheng, J Feng, arXiv:2211.11943arXiv preprintQ. Hou, C.-Z. Lu, M.-M. Cheng, and J. Feng, "Conv2former: A simple transformer-style convnet for visual recognition," arXiv preprint arXiv:2211.11943, 2022.
Coatnet: Marrying convolution and attention for all data sizes. Z Dai, H Liu, Q Le, M Tan, Adv. Neural Inform. Process. Syst. 34Z. Dai, H. Liu, Q. Le, and M. Tan, "Coatnet: Marrying convolution and attention for all data sizes," Adv. Neural Inform. Process. Syst., vol. 34, 2021.
A convnet for the 2020s. Z Liu, H Mao, C.-Y Wu, C Feichtenhofer, T Darrell, S Xie, IEEE Conf. Comput. Vis. Pattern Recog. Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, "A convnet for the 2020s," in IEEE Conf. Comput. Vis. Pattern Recog., 2022.
Pct: Point cloud transformer. M.-H Guo, J.-X Cai, Z.-N Liu, T.-J Mu, R R Martin, S.-M Hu, Computational Visual Media. 72M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu, "Pct: Point cloud transformer," Computational Visual Media, vol. 7, no. 2, pp. 187-199, 2021.
| [
"https://github.com/HikariTJU/LD.",
"https://github.com/HikariTJU/LD."
] |
[
"Generating Functions with τ -Invariance and Vertex Representations of Quantum Affine Algebras U r,s ( g) (I): Simply-laced Cases",
"Generating Functions with τ -Invariance and Vertex Representations of Quantum Affine Algebras U r,s ( g) (I): Simply-laced Cases"
] | [
"Naihong Hu ",
"Honglian Zhang "
] | [] | [] | We put forward the exact version of two-parameter generating functions with τ -invariance, which allows us to give a unified and inherent definition for the Drinfeld realization of two-parameter quantum affine algebras for all the untwisted types. As verification, we first construct their level-one vertex representations of Ur,s( g) for simply-laced types, which in turn well-detect the effectiveness of our definitions both for (r, s)-generating functions and (r, s)-Drinfeld realization in the framework of establishing the two-parameter vertex representation theory. | null | [
"https://arxiv.org/pdf/1401.4925v1.pdf"
] | 119,138,078 | 1401.4925 | 064dcdfdb5943a53323e4cda54f29701e6606024 |
Generating Functions with τ -Invariance and Vertex Representations of Quantum Affine Algebras U r,s ( g) (I): Simply-laced Cases
20 Jan 2014
Naihong Hu
Honglian Zhang
We put forward the exact version of two-parameter generating functions with τ-invariance, which allows us to give a unified and inherent definition of the Drinfeld realization of two-parameter quantum affine algebras for all untwisted types. As verification, we first construct their level-one vertex representations of $U_{r,s}(\widehat{\mathfrak{g}})$ for the simply-laced types, which in turn demonstrates the effectiveness of our definitions of both the $(r,s)$-generating functions and the $(r,s)$-Drinfeld realization in the framework of establishing the two-parameter vertex representation theory.
Introduction
In 2000, Benkart and Witherspoon revitalized the research on two-parameter quantum groups. They studied the structure of the two-parameter quantum groups $U_{r,s}(\mathfrak{g})$ for $\mathfrak{g}=\mathfrak{gl}_n$ or $\mathfrak{sl}_n$ in [BW1], previously obtained by Takeuchi [T], and the finite-dimensional representations and Schur-Weyl duality for type $A$ in [BW2], and obtained new finite-dimensional pointed Hopf algebras $u_{r,s}(\mathfrak{sl}_n)$ in [BW3], which possess new ribbon elements under some conditions (and may yield new invariants of 3-manifolds). Since 2004, Bergeron, Gao and Hu [BGH1] further presented the structure of the two-parameter quantum groups $U_{r,s}(\mathfrak{g})$ for $\mathfrak{g}=\mathfrak{so}_{2n+1}$, $\mathfrak{sp}_{2n}$, $\mathfrak{so}_{2n}$, and investigated the conditions for the existence of Lusztig symmetries for types $A$, $B$, $C$, $D$ in [BGH1] and type $G_2$ in [HS]. A generalization of this fact to the multi-parameter cases arising from Drinfeld doubles of bosonizations of Nichols algebras of diagonal type has been obtained by Heckenberger [H]; it provides an explicit realization model for the abstract concept of a "Weyl groupoid", which plays a key role in the classification both of Nichols algebras of diagonal type (cf. [H1]) and of finite-dimensional pointed Hopf algebras with abelian group algebras as coradicals ([AS]). The finite-dimensional weight representation theory for type $A$ ([BW2]), types $B$, $C$, $D$ ([BGH2]), and type $E$ ([BH]) has been worked out.
A unified definition for all types, together with explicit formulae exhibiting two-parameter quantum groups as two kinds of 2-cocycle deformations of one-parameter quantum groups with doubled group-like elements in the generic case, and a general treatment of the deformed finite-dimensional representation category, were intrinsically described in [HP2] (for a multiparameter version, see Pei-Hu-Rosso [PHR]). Recently, this important observation on the explicit formula for the 2-cocycle deformation has served as a crucial point for categorifying two-parameter quantum groups in the generic case. In the roots-of-unity case, the structure of the small quantum groups $u_{r,s}(\mathfrak{g})$, together with convex PBW-type Lyndon bases, was studied explicitly for types $B$, $G_2$ ([HW1, HW2]), $C$ ([CH]), and $F_4$ ([CxH]). In particular, the isomorphism theorem for the $u_{r,s}(\mathfrak{g})$'s depending on the parameter pairs $(r,s)$ established in [HW2] was used to distinguish the isomorphism classes of new finite-dimensional pointed Hopf algebras in type $A$ for small orders of roots of unity (see Benkart et al. [BPW]). Surprisingly, the study of two-parameter small quantum groups brings new examples of non-semisimple and non-pointed Hopf algebras with non-pointed duals, found when García [G] studied the quantum subgroups of the two-parameter quantum general linear group $GL_{\alpha,\beta}(n)$.
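The 2-cocycle deformation mentioned here can be illustrated by the standard Hopf-algebraic construction (general background, not a formula specific to this paper): given a Hopf algebra $H$ and a convolution-invertible 2-cocycle $\sigma\colon H\otimes H\to k$, the deformed product is

```latex
a \cdot_{\sigma} b \;=\; \sigma\!\left(a_{(1)},\, b_{(1)}\right)\, a_{(2)}\, b_{(2)}\, \sigma^{-1}\!\left(a_{(3)},\, b_{(3)}\right),
```

in Sweedler notation $(\Delta\otimes\mathrm{id})\Delta(a)=a_{(1)}\otimes a_{(2)}\otimes a_{(3)}$, while the coalgebra structure is left unchanged. The cited result identifies the two-parameter quantum group, roughly speaking, with such a deformation of a one-parameter quantum group with doubled group-like elements.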
On the other hand, Hu-Rosso-Zhang first studied two-parameter quantum affine algebras associated to affine Lie algebras of type A^{(1)}_n, and gave descriptions of the structure and Drinfeld realization of U_{r,s}(sl_n^), as well as the quantum affine Lyndon basis (see [HRZ]). The discussions for the other untwisted affine cases, and the corresponding vertex operator constructions for all untwisted types, have been carried out in [HZ1, HZ2, Z]. Recently, using a combinatorial model of Young diagrams, [JZ1] gave a fermionic realization of the two-parameter quantum affine algebra of type A^{(1)}_n; while [JZ2] provided a group-theoretic realization of two-parameter quantum toroidal algebras using finite subgroups of SL_2(C) via the McKay correspondence.
In the present paper, we study two-parameter quantum affine algebras of untwisted types from a uniform approach, working out the exact two-parameter version of the generating functions with τ-invariance, and construct the level-one vertex operator representations for the simply-laced types; the other types will be discussed in forthcoming work.
The paper is organized as follows. We first give the definition of the two-parameter quantum affine algebras U_{r,s}(ĝ) (g of ADE type) in the sense of Hopf algebras in Section 2. The two-parameter quantum affine algebras U_{r,s}(ĝ) are characterized as Drinfeld doubles D(B̂, B̂') of Hopf subalgebras B̂ and B̂' with respect to a skew-dual pairing we give. In Section 3, we obtain the two-parameter Drinfeld realization via the generating functions we define in the two-parameter setting. It is worth mentioning that a 2-cocycle deformation between two-parameter and one-parameter quantum affine algebras analogous to the one above does not yet work for the Drinfeld generators, even though it does exist when one works with the Chevalley-Kac-Lusztig generators (see [HP2, PHR]). From this point of view, the explicit formulae defining the Drinfeld realization in the two-parameter setting are nontrivial. Note that, by comparison with [HRZ], the definition has been slightly revised: the canonical central element c of the affine Lie algebra now plays a well-connected role in the definition, so that, for instance, the product of the doubled group-like elements γ and γ' remains group-like (see Definitions 2.2 (X1) and 3.2 (3.2)); this also behaves well when one considers vertex representations of higher levels. In the case r = q = s^{-1}, this phenomenon degenerates and is invisible. In order to recover those inherent features in the two-parameter setting, representation theory serves here, to some extent, as a nice sample helping us achieve some further necessary insights into the algebra structure itself. In Section 4, we prove the Drinfeld isomorphism from the two-parameter quantum affine algebra U_{r,s}(ĝ) (g of types D and E) of Drinfeld-Jimbo type to the two-parameter Drinfeld realization U_{r,s}(ĝ), using the quantum calculations for (r,s)-brackets as developed in [HRZ] for type A^{(1)}_n.
Here we give an alternative proof that the homomorphism is injective. In Section 5, we start from the two-parameter quantum Heisenberg algebra to obtain the Fock space, and construct the level-one vertex representations of two-parameter quantum affine algebras for the simply-laced cases, which are irreducible. These constructions in turn confirm the effectiveness of the (r,s)-Drinfeld realization we have defined. We include detailed proofs of a few Lemmas in an appendix, through which readers can see how the quantum calculations for (r,s)-brackets work effectively in the two-parameter setting.
2. Quantum affine algebras U_{r,s}(ĝ) and Drinfeld double

2.1. Notations and preliminaries. From now on, denote by g a finite-dimensional simple Lie algebra of simply-laced type with rank n. Let K ⊃ Q(r, s) denote an algebraically closed field, where the two parameters r, s are nonzero complex numbers satisfying r² ≠ s². Let E be a Euclidean space Rⁿ with an inner product ( , ) and an orthonormal basis {ǫ_1, ..., ǫ_n}.
Let (a ij ) ij∈I (I = {1, 2, · · · , n}) be a Cartan matrix of simple Lie algebra g with Cartan subalgebra h. Let Φ be a root system of g and α i (i ∈ I) be the simple roots. It is possible to regard Φ as a subset of a Euclidean space E. Denote by θ the highest root of Φ.
Let ĝ be the affine Lie algebra associated to the simple Lie algebra g, with Cartan matrix (a_ij)_{i,j∈I_0}, where I_0 = {0} ∪ I. In the following we list the affine Dynkin diagrams of simply-laced types; the labels on the vertices fix an identification between I_0 and {0, 1, ..., n} such that I corresponds to {1, 2, ..., n}.
(Affine Dynkin diagrams of types A^{(1)}_{n−1}, D^{(1)}_n and E^{(1)}_6, with vertices labelled 0, 1, ..., n.)
Let δ denote the primitive imaginary root of ĝ. Take α_0 = δ − θ; then Π' = {α_i | i ∈ I_0} is a base of simple roots of the affine Lie algebra ĝ. Denote by c the canonical central element, and by h the Coxeter number of ĝ. We need the following data on (prime) root systems.
Type A n−1 :
Π = {α_i = ǫ_i − ǫ_{i+1} | 1 ≤ i ≤ n}, Ψ = {±(ǫ_i − ǫ_j) | 1 ≤ i < j ≤ n+1}, θ = α_1 + ··· + α_n, α_0 = δ − θ = δ − ǫ_1 + ǫ_{n+1}, Π' = {α_0, α_1, ..., α_n}.
Type D n :
Π = {α_i = ǫ_i − ǫ_{i+1} | 1 ≤ i < n} ∪ {α_n = ǫ_{n−1} + ǫ_n}, Ψ = {±ǫ_i ± ǫ_j | 1 ≤ i ≠ j ≤ n}, θ = α_1 + 2α_2 + ··· + 2α_{n−2} + α_{n−1} + α_n, α_0 = δ − θ = δ − ǫ_1 − ǫ_2, Π' = {α_0, α_1, ..., α_n}.
Type E_6:
Π = { α_1 = ½(ǫ_1 + ǫ_8 − (ǫ_2 + ··· + ǫ_7)), α_2 = ǫ_1 + ǫ_2, α_3 = ǫ_2 − ǫ_1, α_4 = ǫ_3 − ǫ_2, α_5 = ǫ_4 − ǫ_3, α_6 = ǫ_5 − ǫ_4 },
Ψ = { ±ǫ_i ± ǫ_j | 1 ≤ i ≠ j ≤ 5 } ∪ { ±( ½ Σ_{i=1}^{5} (−1)^{k(i)} ǫ_i − ½(ǫ_6 + ǫ_7 − ǫ_8) ) | k(i) = 0, 1, add up to an odd integer },
θ = α_1 + 2α_2 + 2α_3 + 3α_4 + 2α_5 + α_6, α_0 = δ − θ = δ − ½(ǫ_1 + ǫ_2 + ǫ_3 + ǫ_4 + ǫ_5 − ǫ_6 − ǫ_7 + ǫ_8), Π' = {α_0, α_1, ..., α_6}.
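The highest-root data above can be cross-checked numerically. The snippet below is our own verification script (all function names are ours, not from the paper): it confirms that for type D_n the listed combination of simple roots equals ǫ_1 + ǫ_2, and that for E_6 it equals ½(ǫ_1+ǫ_2+ǫ_3+ǫ_4+ǫ_5−ǫ_6−ǫ_7+ǫ_8), a root of squared length 2, matching the expression for α_0 = δ − θ.

```python
import numpy as np

def d_theta(n):
    """theta = a1 + 2a2 + ... + 2a_{n-2} + a_{n-1} + a_n for type D_n."""
    eps = np.eye(n)
    alpha = [eps[i] - eps[i + 1] for i in range(n - 1)] + [eps[n - 2] + eps[n - 1]]
    coeff = [1] + [2] * (n - 3) + [1, 1]
    return sum(c * a for c, a in zip(coeff, alpha))

def e6_theta():
    """theta = a1 + 2a2 + 2a3 + 3a4 + 2a5 + a6 for type E6 (coordinates in R^8)."""
    eps = np.eye(8)
    a1 = 0.5 * (eps[0] + eps[7] - (eps[1] + eps[2] + eps[3] + eps[4] + eps[5] + eps[6]))
    a2 = eps[0] + eps[1]
    a3 = eps[1] - eps[0]
    a4 = eps[2] - eps[1]
    a5 = eps[3] - eps[2]
    a6 = eps[4] - eps[3]
    return a1 + 2*a2 + 2*a3 + 3*a4 + 2*a5 + a6

# D_n: theta = eps_1 + eps_2
assert np.allclose(d_theta(6), [1, 1, 0, 0, 0, 0])

# E6: theta = (1/2)(e1+e2+e3+e4+e5-e6-e7+e8), of squared length 2
th6 = e6_theta()
assert np.allclose(th6, [0.5] * 5 + [-0.5, -0.5, 0.5])
assert np.isclose(th6 @ th6, 2.0)
```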
2.2. Two-parameter quantum affine algebras U_{r,s}(ĝ). In this paragraph, we give the definition of the two-parameter quantum affine algebras U_{r,s}(ĝ) (see [HRZ] for ĝ = A^{(1)}_n).
Assigned to Π', there are two sets of mutually commuting symbols W = {ω_i^{±1} | 0 ≤ i ≤ n} and W' = {ω'_i^{±1} | 0 ≤ i ≤ n}. Define a pairing ⟨ , ⟩ : W' × W → K as follows.

(1Â_{n−1}) J = (⟨ω'_i, ω_j⟩) = (⟨i, j⟩) for A^{(1)}_{n−1}, where
\[ J = \begin{pmatrix} rs^{-1} & r^{-1} & 1 & \cdots & 1 & s \\ s & rs^{-1} & r^{-1} & \cdots & 1 & 1 \\ \vdots & & \ddots & & & \vdots \\ 1 & 1 & 1 & \cdots & rs^{-1} & r^{-1} \\ r^{-1} & 1 & 1 & \cdots & s & rs^{-1} \end{pmatrix}. \]

(1D̂_n) J = (⟨ω'_i, ω_j⟩) = (⟨i, j⟩) for D^{(1)}_n, where
\[ J = \begin{pmatrix} rs^{-1} & (rs)^{-1} & r^{-1} & \cdots & 1 & (rs)^{2} \\ rs & rs^{-1} & r^{-1} & \cdots & 1 & 1 \\ \vdots & & \ddots & & & \vdots \\ 1 & 1 & 1 & \cdots & rs^{-1} & (rs)^{-1} \\ (rs)^{-2} & 1 & 1 & \cdots & rs & rs^{-1} \end{pmatrix}. \]

(1Ê_6) J = (⟨ω'_i, ω_j⟩) = (⟨i, j⟩) for E^{(1)}_6, where
\[ J = \begin{pmatrix} rs^{-1} & (rs)^{-1} & r^{-2}s^{-1} & (rs)^{-1} & rs & rs & rs \\ rs & rs^{-1} & 1 & r^{-1} & 1 & 1 & 1 \\ rs^{2} & 1 & rs^{-1} & 1 & r^{-1} & 1 & 1 \\ rs & s & 1 & rs^{-1} & r^{-1} & 1 & 1 \\ (rs)^{-1} & 1 & s & s & rs^{-1} & r^{-1} & 1 \\ (rs)^{-1} & 1 & 1 & 1 & s & rs^{-1} & r^{-1} \\ (rs)^{-1} & 1 & 1 & 1 & 1 & s & rs^{-1} \end{pmatrix}. \]

(2) ⟨ω'_i^{±1}, ω_j^{-1}⟩ = ⟨ω'_i^{±1}, ω_j⟩^{-1} = ⟨ω'_i, ω_j⟩^{∓1}, for any ĝ.
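The pairing matrices above are compatible with the Cartan matrix in the sense that ⟨i, j⟩⟨j, i⟩ = (rs^{-1})^{a_ij}; at r = s^{-1} = q this specializes to q^{2a_ij}, consistent with the one-parameter quantum affine Cartan matrix. The following sympy snippet is our own sanity check of this for the cyclic A-type pattern read off from the first matrix (the helper names are our assumptions, not notation from the paper):

```python
import sympy as sp

r, s = sp.symbols('r s', positive=True)

def J_affine_A(n):
    """(n+1)x(n+1) pairing matrix <w'_i, w_j> following the displayed A-type
    pattern: r s^{-1} on the diagonal, r^{-1} on the (cyclic) superdiagonal,
    s on the (cyclic) subdiagonal, all other entries 1."""
    N = n + 1
    J = sp.ones(N, N)
    for i in range(N):
        J[i, i] = r / s
        J[i, (i + 1) % N] = 1 / r
        J[(i + 1) % N, i] = s
    return J

def cartan_affine_A(n):
    """Affine Cartan matrix of type A^{(1)}_n (cyclic, n+1 nodes)."""
    N = n + 1
    A = sp.zeros(N, N)
    for i in range(N):
        A[i, i] = 2
        A[i, (i + 1) % N] = -1
        A[(i + 1) % N, i] = -1
    return A

n = 4
J, A = J_affine_A(n), cartan_affine_A(n)
for i in range(n + 1):
    for j in range(n + 1):
        assert sp.simplify(J[i, j] * J[j, i] - (r / s)**A[i, j]) == 0
```

The same relation can be verified entry-by-entry on the displayed E^{(1)}_6 matrix, e.g. ⟨0, 2⟩⟨2, 0⟩ = r^{-2}s^{-1} · rs² = (rs^{-1})^{-1} with a_{02} = −1.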
Remark 2.1. The structure constant matrix J above is called the two-parameter quantum affine Cartan matrix; it generalizes the classical quantum affine Cartan matrix, which is recovered under the condition r = s^{-1} = q.
Definition 2.2. Let U_{r,s}(ĝ) be the unital associative algebra over K generated by the elements e_j, f_j, ω_j^{±1}, ω'_j^{±1} (j ∈ I_0), γ^{±1/2}, γ'^{±1/2}, D^{±1}, D'^{±1}, satisfying the following relations (where c is the canonical central element of ĝ):

(X1) γ^{±1/2}, γ'^{±1/2} are central with γ = ω'_δ^{-1}, γ' = ω_δ^{-1}, γγ' = (rs)^c, such that ω_i ω_i^{-1} = ω'_i ω'_i^{-1} = 1 = DD^{-1} = D'D'^{-1}, and
[ω_i^{±1}, ω_j^{±1}] = [ω_i^{±1}, D^{±1}] = [ω'_j^{±1}, D^{±1}] = [ω_i^{±1}, D'^{±1}] = 0 = [ω_i^{±1}, ω'_j^{±1}] = [ω'_j^{±1}, D'^{±1}] = [D'^{±1}, D^{±1}] = [ω'_i^{±1}, ω'_j^{±1}].

(X2) For i, j ∈ I_0,
D e_i D^{-1} = r^{δ_{0i}} e_i,  D f_i D^{-1} = r^{−δ_{0i}} f_i,
ω_j e_i ω_j^{-1} = ⟨ω'_i, ω_j⟩ e_i,  ω_j f_i ω_j^{-1} = ⟨ω'_i, ω_j⟩^{-1} f_i.

(X3) For i, j ∈ I_0,
D' e_i D'^{-1} = s^{δ_{0i}} e_i,  D' f_i D'^{-1} = s^{−δ_{0i}} f_i,
ω'_j e_i ω'_j^{-1} = ⟨ω'_j, ω_i⟩^{-1} e_i,  ω'_j f_i ω'_j^{-1} = ⟨ω'_j, ω_i⟩ f_i.

(X4) For i, j ∈ I_0, we have
[e_i, f_j] = δ_{ij}/(r − s) · (ω_i − ω'_i).

(X5) For any i ≠ j, we have the (r, s)-Serre relations
(ad_l e_i)^{1−a_ij}(e_j) = 0,  (ad_r f_i)^{1−a_ij}(f_j) = 0,
where the left-adjoint action ad_l e_i and the right-adjoint action ad_r f_i are defined by
ad_l a (b) = Σ_{(a)} a_{(1)} b S(a_{(2)}),  ad_r a (b) = Σ_{(a)} S(a_{(1)}) b a_{(2)},  ∀ a, b ∈ U_{r,s}(ĝ),
where Δ(a) = Σ_{(a)} a_{(1)} ⊗ a_{(2)} is given by Proposition 2.3 below.
2.3. Hopf algebra and Drinfeld double.
The following is straightforward.
Proposition 2.3. The algebra U_{r,s}(ĝ) (ĝ = A^{(1)}_{n−1}, D^{(1)}_n and E^{(1)}_6) is a Hopf algebra under the comultiplication, counit and antipode defined below (0 ≤ i ≤ n):
Δ(ω_i^{±1}) = ω_i^{±1} ⊗ ω_i^{±1},  Δ(ω'_i^{±1}) = ω'_i^{±1} ⊗ ω'_i^{±1},
Δ(e_i) = e_i ⊗ 1 + ω_i ⊗ e_i,  Δ(f_i) = 1 ⊗ f_i + f_i ⊗ ω'_i,
ε(ω_i^{±1}) = ε(ω'_i^{±1}) = 1,  ε(e_i) = ε(f_i) = 0,
S(ω_i^{±1}) = ω_i^{∓1},  S(ω'_i^{±1}) = ω'_i^{∓1},  S(e_i) = −ω_i^{-1} e_i,  S(f_i) = −f_i ω'_i^{-1}.
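As a quick consistency check of these formulas (a verification we add; it is not part of the source), the antipode axiom m∘(S⊗id)∘Δ = η∘ε holds on the skew-primitive generator e_i:

```latex
% Antipode axiom on e_i, using \Delta(e_i)=e_i\otimes 1+\omega_i\otimes e_i,
% S(e_i)=-\omega_i^{-1}e_i, S(\omega_i)=\omega_i^{-1}, \varepsilon(e_i)=0:
m\circ(S\otimes\mathrm{id})\circ\Delta(e_i)
  = S(e_i)\cdot 1 + S(\omega_i)\,e_i
  = -\omega_i^{-1}e_i + \omega_i^{-1}e_i
  = 0 = \varepsilon(e_i)\,1 .
```

The check on f_i is symmetric, using Δ(f_i) = 1 ⊗ f_i + f_i ⊗ ω'_i and S(f_i) = −f_i ω'_i^{-1}.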
Remark 2.4. When r = s^{-1} = q, the Hopf algebra U_{r,s}(ĝ) modulo the Hopf ideal generated by the elements ω'_i − ω_i^{-1} (0 ≤ i ≤ n) is just the quantum group U_q(ĝ) of Drinfel'd-Jimbo type.
Definition 2.5. A skew-dual pairing of two Hopf algebras A and U is a bilinear form ⟨ , ⟩ : U × A → K such that
⟨f, 1_A⟩ = ε_U(f),  ⟨1_U, a⟩ = ε_A(a),
⟨f, a_1 a_2⟩ = ⟨Δ^{op}_U(f), a_1 ⊗ a_2⟩,  ⟨f_1 f_2, a⟩ = ⟨f_1 ⊗ f_2, Δ_A(a)⟩,
for all f, f_1, f_2 ∈ U and a, a_1, a_2 ∈ A, where ε_U and ε_A denote the counits of U and A, respectively, and Δ_U and Δ_A are their respective comultiplications.
Let B̂ = B̂(ĝ) (resp. B̂' = B̂'(ĝ)) denote the Hopf subalgebra of Û = U_{r,s}(ĝ) generated by e_j, ω_j^{±1} (resp. f_j, ω'_j^{±1}) with 0 ≤ j ≤ n. The following result was obtained for the type A^{(1)}_{n−1} case by [HRZ].

Proposition 2.6. There exists a unique skew-dual pairing ⟨ , ⟩ : B̂' × B̂ → K of the Hopf subalgebras B̂ and B̂' of U_{r,s}(ĝ) such that ⟨f_i, e_j⟩ = δ_{ij}/(s_i − r_i), the conditions (1_X) and (2) are satisfied, and all other pairs of generators pair to 0. Moreover, we have ⟨S(a), S(b)⟩ = ⟨a, b⟩ for a ∈ B̂', b ∈ B̂.
Definition 2.7. For any two Hopf algebras A and U skew-paired by ⟨ , ⟩, one may form the Drinfel'd double D(A, U) as in [KS, 8.2]: it is a Hopf algebra whose underlying coalgebra is A ⊗ U with the tensor product coalgebra structure, and whose algebra structure is defined by

(3) (a ⊗ f)(a' ⊗ f') = Σ ⟨S_U(f_{(1)}), a'_{(1)}⟩ ⟨f_{(3)}, a'_{(3)}⟩ a a'_{(2)} ⊗ f_{(2)} f', for a, a' ∈ A and f, f' ∈ U.

The antipode S is given by
S(a ⊗ f) = (1 ⊗ S_U(f)) (S_A(a) ⊗ 1).
Clearly, both mappings A ∋ a ↦ a ⊗ 1 ∈ D(A, U) and U ∋ f ↦ 1 ⊗ f ∈ D(A, U) are injective Hopf algebra homomorphisms. Let us denote the image a ⊗ 1 (resp. 1 ⊗ f) of a (resp. f) in D(A, U) by â (resp. f̂). By (3), we have the following cross commutation relations between elements â (for a ∈ A) and f̂ (for f ∈ U) in the algebra D(A, U):

(4) f̂ â = Σ ⟨S_U(f_{(1)}), a_{(1)}⟩ ⟨f_{(3)}, a_{(3)}⟩ â_{(2)} f̂_{(2)},
(5) Σ ⟨f_{(1)}, a_{(1)}⟩ f̂_{(2)} â_{(2)} = Σ â_{(1)} f̂_{(1)} ⟨f_{(2)}, a_{(2)}⟩.
In fact, as an algebra the double D(A, U) is the universal algebra generated by the algebras A and U with cross relations (4) or, equivalently, (5).
Theorem 2.8. The two-parameter quantum affine algebra U = U r,s (ĝ) is isomorphic to the Drinfel'd quantum double D(B,B ′ ).
2.4. Triangular decomposition of U_{r,s}(ĝ). Let U_0 = K[ω_0^{±1}, ..., ω_n^{±1}], U^0 = K[ω_0^{±1}, ..., ω_n^{±1}, ω'_0^{±1}, ..., ω'_n^{±1}], and U'_0 = K[ω'_0^{±1}, ..., ω'_n^{±1}] denote the Laurent polynomial subalgebras of U_{r,s}(ĝ), B̂, and B̂', respectively. Clearly, U^0 = U_0 U'_0 = U'_0 U_0. Denote by U_{r,s}(ñ) (resp. U_{r,s}(ñ⁻)) the subalgebra of B̂ (resp. B̂') generated by e_i (resp. f_i) for all i ∈ I_0. By definition, B̂ = U_{r,s}(ñ) ⋊ U_0 and B̂' = U'_0 ⋉ U_{r,s}(ñ⁻), so that the double D(B̂, B̂') ≅ U_{r,s}(ñ) ⊗ U^0 ⊗ U_{r,s}(ñ⁻) as vector spaces. On the other hand, if we consider ⟨ , ⟩⁻ : B̂' × B̂ → K given by ⟨b', b⟩⁻ := ⟨S(b'), b⟩, the convolution inverse of the skew-dual pairing ⟨ , ⟩ of Proposition 2.6, then composing with the flip map σ gives rise to a new skew-dual pairing ⟨ | ⟩ := ⟨ , ⟩⁻ ∘ σ : B̂ × B̂' → K, given by ⟨b | b'⟩ = ⟨S(b'), b⟩. As a byproduct of Theorem 2.8 (see [BGH1, Coro. 2.6]), we get the standard triangular decomposition of U_{r,s}(ĝ).

Corollary 2.9. U_{r,s}(ĝ) ≅ U_{r,s}(ñ⁻) ⊗ U^0 ⊗ U_{r,s}(ñ) as vector spaces.

There exists a Q-algebra antiautomorphism τ of U_{r,s}(ĝ) such that τ(r) = s, τ(s) = r, τ(⟨ω'_i, ω_j⟩^{±1}) = ⟨ω'_j, ω_i⟩^{∓1}, and
τ(e_i) = f_i, τ(f_i) = e_i, τ(ω_i) = ω'_i, τ(ω'_i) = ω_i, τ(γ) = γ', τ(γ') = γ, τ(D) = D', τ(D') = D.
Then B̂' = τ(B̂), with the defining relations induced from those of B̂, and the cross relations in (X2)-(X4) are antisymmetric with respect to τ.
3. Drinfeld Realization via Generating Functions with τ-invariance

3.1. Generating functions with τ-invariance and Drinfeld realization. In order to obtain the intrinsic definition of the Drinfeld realization of the two-parameter quantum affine algebra U_{r,s}(ĝ), we first need to construct the generating functions g^±_ij(z) (1 ≤ i, j ≤ n) with τ-invariance, which are due to the first author and defined as follows. (This was initially motivated in part by Section 1.1 of [Gr] in the one-parameter setting, regardless of τ-invariance there.)
Let α_i, α_j ∈ Δ. We set
g^±_ij(z) = Σ_{n∈Z_+} ±c^{(n)}_{α_i,α_j} z^n := Σ_{n∈Z_+} ±c^{(n)}_ij z^n,
a formal power series in z, where the coefficients ±c^{(n)}_ij are determined from the Taylor series expansion in the variable z at 0 ∈ C of the function
g^±_ij(z) = G^±_ij(z, 1) / F^±_ij(z, 1),
where, guided by some observations from the discussion of type A^{(1)}_n in [HRZ], we define F^±_ij(z, w) and G^±_ij(z, w) as follows:
F^±_ij(z, w) := z − (⟨i, j⟩⟨j, i⟩)^{±1/2} w,  G^±_ij(z, w) := ⟨j, i⟩^{±1} z − (⟨j, i⟩⟨i, j⟩^{-1})^{±1/2} w.
We thus have a uniform formula for g^±_ij(z):

(3.1) g^±_ij(z) = ( ⟨j, i⟩^{±1} z − (⟨j, i⟩⟨i, j⟩^{-1})^{±1/2} ) / ( z − (⟨i, j⟩⟨j, i⟩)^{±1/2} ) = (⟨j, i⟩⟨i, j⟩^{-1})^{±1/2} · ( (⟨i, j⟩⟨j, i⟩)^{±1/2} z − 1 ) / ( z − (⟨i, j⟩⟨j, i⟩)^{±1/2} ).

Then in both cases (whether i = j or i ≠ j), we have a uniform expansion formula g^±_ij(z) = Σ_{k≥0} ±c^{(k)}_ij z^k, with
±c^{(0)}_ij = ⟨i, j⟩^{∓1},  ±c^{(k)}_ij = ±c^{(0)}_ij ⟨i, i⟩^{∓(k−1)a_ij/2} ( ⟨i, i⟩^{∓a_ij/2} − ⟨i, i⟩^{±a_ij/2} ),  for k > 0.
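This expansion can be machine-checked. The sympy sketch below (our own verification; function and symbol names are ours) expands the closed form (3.1), upper signs, at z = 0 and compares with the stated coefficients, under the compatibility ⟨i, j⟩⟨j, i⟩ = ⟨i, i⟩^{a_ij} satisfied by the pairing matrices of Section 2 (here A = ⟨i, j⟩, t = ⟨i, i⟩, a = a_ij):

```python
import sympy as sp

z, t, A = sp.symbols('z t A', positive=True)

def check_expansion(a, K=6):
    """Expand g^+_{ij}(z) = (B z - sqrt(B/A)) / (z - sqrt(A B)) at z = 0 and
    compare with the closed-form coefficients, assuming <i,j><j,i> = <i,i>^{a_ij},
    i.e. B = t**a / A."""
    B = t**a / A
    g = (B * z - sp.sqrt(B / A)) / (z - sp.sqrt(A * B))
    ser = sp.series(g, z, 0, K).removeO()
    c0 = 1 / A                                   # +c^{(0)}_{ij} = <i,j>^{-1}
    for k in range(K):
        if k == 0:
            ck = c0
        else:                                    # +c^{(k)}_{ij}, k > 0
            ck = c0 * t**(-sp.Rational(k - 1, 2) * a) * (t**(-sp.Rational(a, 2)) - t**(sp.Rational(a, 2)))
        assert sp.simplify(ser.coeff(z, k) - ck) == 0
    return True

assert check_expansion(2)    # diagonal case a_ii = 2
assert check_expansion(-1)   # adjacent case a_ij = -1
```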
Define the other generating functions in a formal variable z (see also [HRZ]):
δ(z) = Σ_{n∈Z} z^n,  x^±_i(z) = Σ_{k∈Z} x^±_i(k) z^{-k},
ω_i(z) = Σ_{m∈Z_+} ω_i(m) z^{-m} = ω_i exp( (r−s) Σ_{ℓ≥1} a_i(ℓ) z^{-ℓ} ),
ω'_i(z) = Σ_{n∈Z_+} ω'_i(−n) z^{n} = ω'_i exp( −(r−s) Σ_{ℓ≥1} a_i(−ℓ) z^{ℓ} ).
The following property for the generating functions g ± ij (z) is rather crucial for deriving the inherent definition of Drinfeld realization in the two-parameter version.
Proposition 3.1 (τ-invariance of the generating functions). Assume τ(r) = s, τ(s) = r, τ(⟨i, j⟩) = ⟨j, i⟩^{-1}, τ(z) = z^{-1}, τ(x^±_i(k)) = x^∓_i(−k), τ(ω_i(m)) = ω'_i(−m), τ(ω'_i(−m)) = ω_i(m) for m ∈ Z_+, with ω_i(0) = ω_i, ω'_i(0) = ω'_i. Then
(i) τ(g^±_ij(z)) = g^±_ij(z), and g^±_ij(z)^{-1} = g^∓_ij(z), g^±_ij(z^{-1}) = g^∓_ji(z) = g^±_ji(z)^{-1};
(ii) τ(δ(z)) = δ(z), τ(x^±_i(z)) = x^∓_i(z), τ(ω_i(z)) = ω'_i(z) and τ(ω'_i(z)) = ω_i(z).
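Part (i) can be verified mechanically from the closed form (3.1). The sympy sketch below is our own check (gp, gm are our helper names for the upper- and lower-sign cases, with A = ⟨i, j⟩, B = ⟨j, i⟩):

```python
import sympy as sp

z, A, B = sp.symbols('z A B', positive=True)  # A = <i,j>, B = <j,i>

def gp(a, b, z):
    """g^+_{ij}(z), upper signs in (3.1), with a = <i,j>, b = <j,i>."""
    return (b * z - sp.sqrt(b / a)) / (z - sp.sqrt(a * b))

def gm(a, b, z):
    """g^-_{ij}(z), lower signs in (3.1)."""
    return (z / b - sp.sqrt(a / b)) / (z - 1 / sp.sqrt(a * b))

g = gp(A, B, z)

# tau(g_{ij}(z)) = g_{ij}(z): tau sends <i,j> -> <j,i>^{-1}, <j,i> -> <i,j>^{-1}, z -> 1/z
assert sp.simplify(gp(1 / B, 1 / A, 1 / z) - g) == 0
# g^+_{ij}(z)^{-1} = g^-_{ij}(z)
assert sp.simplify(g * gm(A, B, z) - 1) == 0
# g^+_{ij}(z^{-1}) = g^+_{ji}(z)^{-1}
assert sp.simplify(gp(A, B, 1 / z) * gp(B, A, z) - 1) == 0
```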
Now let us formulate the inherent definition of the Drinfeld realization of the two-parameter quantum affine algebra U_{r,s}(ĝ) via our generating functions with τ-invariance. Set r_i = r^{d_i}, s_i = s^{d_i}, where A = DB with D = diag{d_0, ..., d_n} and B symmetric.

Definition 3.2 (Theorem). The (r, s)-Drinfeld realization U_{r,s}(ĝ) associated to the two-parameter quantum affine algebra U_{r,s}(ĝ) is the associative algebra with unit 1 and generators
x^±_i(k), ω_i(m), ω'_i(−n), γ^{±1/2}, γ'^{±1/2}, D^{±1}, D'^{±1}  (i ∈ I, k ∈ Z, m, n ∈ Z_+),
satisfying the relations below with τ-invariance, written in terms of τ-invariant generating functions of formal variables z, w, with the Q-anti-involution τ such that τ(γ) = γ', τ(γ') = γ (where c is the canonical central element of ĝ, τ is defined as in Proposition 3.1, and we set g_ij(z) := g^+_ij(z)):

(3.2) γ^{±1/2}, γ'^{±1/2} are central and mutually inverse such that γγ' = (rs)^c,
(3.3) ω_i(0)^{±1}, ω'_j(0)^{±1} mutually commute, where ω_i(0) = ω_i, ω'_j(0) = ω'_j,
(3.4) ω_i(z) ω_j(w) = ω_j(w) ω_i(z),  ω'_i(z) ω'_j(w) = ω'_j(w) ω'_i(z),
(3.5) g_ij( (z/w)(γγ')^{1/2} γ ) ω'_i(z) ω_j(w) = g_ij( (z/w)(γγ')^{1/2} γ' ) ω_j(w) ω'_i(z),
(3.6) D f_i(z) D^{-1} = f_i(z/r_i),  D' f_i(z) D'^{-1} = f_i(z/s_i),  for f_i(z) = x^±_i(z), ω_i(z), ω'_i(z),
(3.7) ω'_i(z) x^±_j(w) ω'_i(z)^{-1} = g_ij( (z/w)(γγ')^{1/2} γ^{∓1/2} )^{±1} x^±_j(w),
(3.8) ω_i(z) x^±_j(w) ω_i(z)^{-1} = g_ji( (w/z)(γγ')^{1/2} γ'^{±1/2} )^{∓1} x^±_j(w),
(3.9) [x^+_i(z), x^-_j(w)] = δ_ij/(r_i − s_i) · ( δ(z w^{-1} γ') ω_i(w γ^{1/2}) − δ(z w^{-1} γ) ω'_i(z γ'^{-1/2}) ),
(3.10) F^±_ij(z, w) x^±_i(z) x^±_j(w) = G^±_ij(z, w) x^±_j(w) x^±_i(z),
(3.11) x^±_i(z) x^±_j(w) = ⟨j, i⟩^{±1} x^±_j(w) x^±_i(z), for a_ij = 0,
(3.12) Sym_{z_1,...,z_n} Σ_{k=0}^{n} (−1)^k (r_i s_i)^{±k(k−1)/2} [1−a_ij; k]_{±i} x^±_i(z_1) ··· x^±_i(z_k) x^±_j(w) x^±_i(z_{k+1}) ··· x^±_i(z_n) = 0, for a_ij < 0, 1 ≤ i < j < n, with n = 1 − a_ij,
(3.13) Sym_{z_1,...,z_n} Σ_{k=0}^{n} (−1)^k (r_i s_i)^{∓k(k−1)/2} [1−a_ij; k]_{∓i} x^±_i(z_1) ··· x^±_i(z_k) x^±_j(w) x^±_i(z_{k+1}) ··· x^±_i(z_n) = 0, for a_ij < 0, 1 ≤ j < i < n, with n = 1 − a_ij,

where Sym_{z_1,...,z_n} denotes symmetrization with respect to the indices (z_1, ..., z_n), and [m; k]_{±i} is the Gaussian binomial defined below. In particular, τ preserves each of the relations (3.2)-(3.6) and (3.9)-(3.11), but interchanges (3.7) with (3.8), and (3.12) with (3.13).
Remark 3.3. (1) When r = q = s^{-1}, g_ij(z) is the same as in the one-parameter quantum affine algebras (cf. [Gr]).

(2) When r = q = s^{-1}, the algebra U_{q,q^{-1}}(ĝ) modulo the ideal generated by the set { ω'_i − ω_i^{-1} (i ∈ I), γ'^{1/2} − γ^{-1/2} } is the usual Drinfeld realization U_q(ĝ).

(3) Denote ±c̄^{(k)}_ij := ±c^{(k)}_ij / ±c^{(0)}_ij and t := r − s. Relation (3.7) is equivalent to
exp( −t Σ_{ℓ>0} a_i(−ℓ) z^ℓ ) · x^±_j(w) · exp( t Σ_{ℓ>0} a_i(−ℓ) z^ℓ ) = Σ_{k≥0} ±c̄^{(k)}_ij ( (γγ')^{1/2} γ^{∓1/2} z/w )^k x^±_j(w),
and relation (3.8) is equivalent to
exp( t Σ_{ℓ>0} a_i(ℓ) z^{-ℓ} ) · x^±_j(w) · exp( −t Σ_{ℓ>0} a_i(ℓ) z^{-ℓ} ) = Σ_{k≥0} ∓c̄^{(k)}_ji ( (γγ')^{1/2} γ'^{±1/2} w/z )^k x^±_j(w).

In terms of the Drinfeld generators, the defining relations can be spelled out as follows (cf. [HRZ]):

(D1) γ^{±1/2}, γ'^{±1/2} are central with γγ' = (rs)^c, and ω_i^{±1}, ω'_j^{±1}, D^{±1}, D'^{±1} mutually commute.
(D2) [a_i(ℓ), a_j(ℓ')] = δ_{ℓ+ℓ',0} (γγ')^{|ℓ|/2} ( ⟨i, i⟩^{ℓ a_ij/2} − ⟨i, i⟩^{−ℓ a_ij/2} ) / ( |ℓ|(r_i − s_i) ) · ( γ^{|ℓ|} − γ'^{|ℓ|} ) / ( r_i − s_i ).
(D3) [a_i(ℓ), ω_j^{±1}] = [a_i(ℓ), ω'_j^{±1}] = 0.
(D4) D x^±_i(k) D^{-1} = r^k x^±_i(k),  D' x^±_i(k) D'^{-1} = s^k x^±_i(k),  D a_i(ℓ) D^{-1} = r^ℓ a_i(ℓ),  D' a_i(ℓ) D'^{-1} = s^ℓ a_i(ℓ).
(D5) ω_i x^±_j(k) ω_i^{-1} = ⟨ω'_j, ω_i⟩^{±1} x^±_j(k),  ω'_i x^±_j(k) ω'_i^{-1} = ⟨ω'_i, ω_j⟩^{∓1} x^±_j(k).
(D6_1) [a_i(ℓ), x^±_j(k)] = ± (γγ')^{ℓ/2} ( ⟨i, i⟩^{ℓ a_ij/2} − ⟨i, i⟩^{−ℓ a_ij/2} ) / ( ℓ(r_i − s_i) ) · γ'^{±ℓ/2} x^±_j(ℓ+k), for ℓ > 0,
(D6_2) [a_i(ℓ), x^±_j(k)] = ± (γγ')^{−ℓ/2} ( ⟨i, i⟩^{ℓ a_ij/2} − ⟨i, i⟩^{−ℓ a_ij/2} ) / ( ℓ(r_i − s_i) ) · γ^{±ℓ/2} x^±_j(ℓ+k), for ℓ < 0.
(D7) x^±_i(k+1) x^±_j(k') − ⟨j, i⟩^{±1} x^±_j(k') x^±_i(k+1) = −(⟨j, i⟩⟨i, j⟩^{-1})^{±1/2} ( x^±_j(k'+1) x^±_i(k) − ⟨i, j⟩^{±1} x^±_i(k) x^±_j(k'+1) ).
(D8) [x^+_i(k), x^-_j(k')] = δ_ij/(r_i − s_i) · ( γ'^{-k} γ^{−(k+k')/2} ω_i(k+k') − γ^{k'} γ'^{(k+k')/2} ω'_i(k+k') ),
where the ω_i(m), ω'_i(−m) (m ∈ Z_{≥0}), with ω_i(0) = ω_i and ω'_i(0) = ω'_i, are defined by
Σ_{m=0}^{∞} ω_i(m) z^{-m} = ω_i exp( (r_i − s_i) Σ_{ℓ=1}^{∞} a_i(ℓ) z^{-ℓ} ),  ω_i(−m) = 0, ∀ m > 0;
Σ_{m=0}^{∞} ω'_i(−m) z^{m} = ω'_i exp( −(r_i − s_i) Σ_{ℓ=1}^{∞} a_i(−ℓ) z^{ℓ} ),  ω'_i(m) = 0, ∀ m > 0.
(D9_1) x^±_i(m) x^±_j(k) = ⟨j, i⟩^{±1} x^±_j(k) x^±_i(m), for a_ij = 0,
(D9_2) Sym_{m_1,...,m_n} Σ_{k=0}^{n} (−1)^k (r_i s_i)^{±k(k−1)/2} [1−a_ij; k]_{±i} x^±_i(m_1) ··· x^±_i(m_k) x^±_j(ℓ) x^±_i(m_{k+1}) ··· x^±_i(m_n) = 0, for a_ij < 0, 1 ≤ i < j < n, with n = 1 − a_ij,
(D9_3) Sym_{m_1,...,m_n} Σ_{k=0}^{n} (−1)^k (r_i s_i)^{∓k(k−1)/2} [1−a_ij; k]_{∓i} x^±_i(m_1) ··· x^±_i(m_k) x^±_j(ℓ) x^±_i(m_{k+1}) ··· x^±_i(m_n) = 0, for a_ij < 0, 1 ≤ j < i < n, with n = 1 − a_ij,
where [m]_{±i} = (r_i^{±m} − s_i^{±m})/(r_i − s_i), [m]_{±i}! = [m]_{±i} ··· [2]_{±i} [1]_{±i}, [m; n]_{±i} = [m]_{±i}! / ( [n]_{±i}! [m−n]_{±i}! ), and Sym_{m_1,...,m_n} denotes symmetrization with respect to the indices (m_1, ..., m_n).
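The quantum integers and Gaussian binomials appearing in (D9) behave as expected: at s = r^{-1} they specialize to the classical balanced q-integers, and the (r, s)-binomials are Laurent polynomials in r, s. A small sympy check of this (ours; function names are our assumptions, with d_i = 1 for simply-laced types, so r_i = r, s_i = s):

```python
import sympy as sp

r, s = sp.symbols('r s', positive=True)

def qint(m, sign=+1):
    """[m]_{+-i} = (r^{+-m} - s^{+-m})/(r - s)."""
    return (r**(sign * m) - s**(sign * m)) / (r - s)

def qfact(m, sign=+1):
    """[m]_{+-i}! = [m] ... [2][1] (empty product for m = 0)."""
    return sp.Mul(*[qint(k, sign) for k in range(1, m + 1)])

def qbinom(m, n, sign=+1):
    """[m; n]_{+-i} = [m]!/([n]![m-n]!)."""
    return sp.cancel(qfact(m, sign) / (qfact(n, sign) * qfact(m - n, sign)))

# classical specialization: [3] at s = r^{-1} = q^{-1} equals q^2 + 1 + q^{-2}
q = sp.Symbol('q', positive=True)
assert sp.simplify(qint(3).subs({r: q, s: 1 / q}) - (q**2 + 1 + q**-2)) == 0

# [4; 2] is a polynomial in r and s
assert sp.expand(qbinom(4, 2)) == sp.expand(r**4 + r**3*s + 2*r**2*s**2 + r*s**3 + s**4)
```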
As one crucial observation on the compatibility of the defining system above, we have

Proposition 3.5. There exists a Q-algebra antiautomorphism τ of U_{r,s}(ĝ) such that τ(r) = s, τ(s) = r, τ(⟨ω'_i, ω_j⟩^{±1}) = ⟨ω'_j, ω_i⟩^{∓1}, and
τ(ω_i) = ω'_i, τ(ω'_i) = ω_i, τ(γ) = γ', τ(γ') = γ, τ(D) = D', τ(D') = D,
τ(x^±_i(m)) = x^∓_i(−m), τ(a_i(ℓ)) = a_i(−ℓ), τ(ω_i(m)) = ω'_i(−m), τ(ω'_i(−m)) = ω_i(m),
and τ preserves each of the defining relations (Dn) above.
3.2. Quantum Lie bracket.
In this paragraph, we prepare to establish an algebra isomorphism between the two realizations of the two-parameter quantum affine algebra U_{r,s}(ĝ) given above, the analogue of the Drinfeld isomorphism for one-parameter quantum affine algebras. We first need some preliminaries on a quantum Lie bracket whose definition does not depend on the degrees of the elements involved (see properties (3.16) and (3.17) below). This slightly generalized quantum Lie bracket, compared with the one used in the usual construction of the quantum Lyndon basis (for the definition, see [R2]), is consistent with that one when the bracketing is added on the corresponding Lyndon words, and it is crucial to our proofs later on.
Definition 3.6. For q_i ∈ K* = K\{0}, i = 1, 2, ..., s−1, the quantum Lie brackets [a_1, a_2, ..., a_s]_{(q_1, q_2, ..., q_{s−1})} and [a_1, a_2, ..., a_s]_{q_1, q_2, ..., q_{s−1}} are defined inductively by
[a_1, a_2]_{q_1} = a_1 a_2 − q_1 a_2 a_1,
[a_1, a_2, ..., a_s]_{(q_1, q_2, ..., q_{s−1})} = [a_1, [a_2, ..., a_s]_{(q_2, ..., q_{s−1})}]_{q_1},
[a_1, a_2, ..., a_s]_{q_1, q_2, ..., q_{s−1}} = [[a_1, ..., a_{s−1}]_{q_1, ..., q_{s−2}}, a_s]_{q_{s−1}}.
As consequences of the above definitions, the following identities hold:
(3.14) [a, bc]_v = [a, b]_q c + q b [a, c]_{v/q},
(3.15) [ab, c]_v = a [b, c]_q + q [a, c]_{v/q} b,
(3.16) [a, [b, c]_u]_v = [[a, b]_q, c]_{uv/q} + q [b, [a, c]_{v/q}]_{u/q},
(3.17) [[a, b]_u, c]_v = [a, [b, c]_q]_{uv/q} + q [[a, c]_{v/q}, b]_{u/q}.
In particular, we get immediately
(3.18) [a, [b_1, ..., b_s]_{(v_1, ..., v_{s−1})}] = Σ_i [b_1, ..., [a, b_i], ..., b_s]_{(v_1, ..., v_{s−1})},
(3.19) [a, a, b]_{(u, v)} = a²b − (u+v) aba + (uv) ba² = (uv) [b, a, a]_{u^{-1}, v^{-1}},
(3.20) [a, a, a, b]_{(u², uv, v²)} = a³b − [3]_{u,v} a²ba + (uv)[3]_{u,v} aba² − (uv)³ ba³,
(3.21) [a, a, a, a, b]_{(u³, u²v, uv², v³)} = a⁴b − [4]_{u,v} a³ba + uv [4; 2]_{u,v} a²ba² − (uv)³ [4]_{u,v} aba³ + (uv)⁶ ba⁴,
where [n]_{u,v} = (uⁿ − vⁿ)/(u − v), [n]_{u,v}! := [n]_{u,v} ··· [2]_{u,v} [1]_{u,v}, and [n; m]_{u,v} := [n]_{u,v}! / ( [m]_{u,v}! [n−m]_{u,v}! ).
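Identities such as (3.14), (3.16) and (3.19) can be verified mechanically by expanding in a free noncommutative algebra. A sympy sketch of this check (ours; `br` is our helper name for the one-step bracket):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', commutative=False)
u, v, q = sp.symbols('u v q', positive=True)

def br(x, y, t):
    """[x, y]_t = xy - t yx."""
    return sp.expand(x * y - t * y * x)

# (3.14): [a, bc]_v = [a,b]_q c + q b [a,c]_{v/q}
assert sp.expand(br(a, b * c, v) - (br(a, b, q) * c + q * b * br(a, c, v / q))) == 0

# (3.16): [a, [b,c]_u]_v = [[a,b]_q, c]_{uv/q} + q [b, [a,c]_{v/q}]_{u/q}
lhs = br(a, br(b, c, u), v)
rhs = sp.expand(br(br(a, b, q), c, u * v / q) + q * br(b, br(a, c, v / q), u / q))
assert sp.expand(lhs - rhs) == 0

# (3.19): [a,a,b]_{(u,v)} = a^2 b - (u+v) aba + uv ba^2 = uv [b,a,a]_{u^{-1},v^{-1}}
lhs = br(a, br(a, b, v), u)
mid = sp.expand(a*a*b - (u + v)*a*b*a + u*v*b*a*a)
rhs = sp.expand(u * v * br(br(b, a, 1 / u), a, 1 / v))
assert sp.expand(lhs - mid) == 0 and sp.expand(mid - rhs) == 0
```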
With the definition above, formula (D7) takes the convenient form
(3.22) [x^±_i(k), x^±_j(k'+1)]_{⟨i,j⟩^{∓1}} = −(⟨j, i⟩⟨i, j⟩^{-1})^{±1/2} [x^±_j(k'), x^±_i(k+1)]_{⟨j,i⟩^{∓1}}.
3.3. Quantum root vectors. Furthermore, for each α = α_{i_1} + α_{i_2} + ··· + α_{i_n} := α_{i_1,i_2,...,i_n} ∈ Δ⁺, following [R2] we can construct the quantum root vector x^+_α(0) as an (r, s)-bracketing in an inductive fashion (for more details, see [HRZ]):

(*) x^+_α(0) := [x^+_{α_{i_1,i_2,...,i_{n−1}}}(0), x^+_{i_n}(0)]_{⟨ω'_{α_{i_1,i_2,...,i_{n−1}}}, ω_{i_n}⟩^{-1}} = [ ··· [x^+_{i_1}(0), x^+_{i_2}(0)]_{⟨i_1,i_2⟩^{-1}}, ···, x^+_{i_n}(0) ]_{⟨ω'_{α_{i_1,i_2,...,i_{n−1}}}, ω_{i_n}⟩^{-1}}.

Applying τ to (*), we obtain the definition of the quantum root vector x^-_α(0):

x^-_α(0) := [x^-_{i_n}(0), x^-_{α_{i_1,i_2,...,i_{n−1}}}(0)]_{⟨ω'_{i_n}, ω_{α_{i_1,i_2,...,i_{n−1}}}⟩} = [ x^-_{i_n}(0), ··· [x^-_{i_2}(0), x^-_{i_1}(0)]_{⟨i_2,i_1⟩} ··· ]_{⟨ω'_{i_n}, ω_{α_{i_1,i_2,...,i_{n−1}}}⟩}.
Definition 3.7 (see Definition 3.9 in [HRZ]). For α = α_{i_1,i_2,...,i_n} ∈ Δ⁺, we define the quantum affine root vectors x^±_α(k) of nontrivial level k by
x^+_α(k) := [ ··· [x^+_{i_1}(k), x^+_{i_2}(0)]_{⟨i_1,i_2⟩^{-1}}, ···, x^+_{i_n}(0) ]_{⟨ω'_{α_{i_1,i_2,...,i_{n−1}}}, ω_{i_n}⟩^{-1}},
x^-_α(k) := [ x^-_{i_n}(0), ···, [x^-_{i_2}(0), x^-_{i_1}(k)]_{⟨i_2,i_1⟩} ··· ]_{⟨ω'_{i_n}, ω_{α_{i_1,i_2,...,i_{n−1}}}⟩},
where τ(x^±_α(±k)) = x^∓_α(∓k).

Remark 3.8. Using this definition, we fix an ordering for the maximal root θ and give the maximal quantum root vectors x^-_θ(1) and x^+_θ(−1) as follows. For the case of A^{(1)}_n, we fix the maximal root θ = α_1 + α_2 + ··· + α_n, and
x^-_θ(1) = [x^-_n(0), x^-_{n−1}(0), ..., x^-_2(0), x^-_1(1)]_{(s, ..., s)},  x^+_θ(−1) = [x^+_1(−1), x^+_2(0), ..., x^+_n(0)]_{r, ..., r}.
For the case of D^{(1)}_n, we fix the maximal root θ = α_1 + α_2 + ··· + α_{n−2} + α_n + α_{n−1} + ··· + α_2, and
x^-_θ(1) = [x^-_2(0), ..., x^-_n(0), x^-_{n−2}(0), ..., x^-_2(0), x^-_1(1)]_{(s, ..., s, r^{-1}, ..., r^{-1})},
x^+_θ(−1) = [x^+_1(−1), x^+_2(0), ..., x^+_{n−2}(0), x^+_n(0), ..., x^+_2(0)]_{r, ..., r, s^{-1}, ..., s^{-1}}.
For the case of E^{(1)}_6, we fix the maximal root θ = α_1 + α_3 + ··· + α_6 + α_2 + α_4 + α_3 + α_5 + α_4 + α_2, and
x^-_θ(1) = x^-_{α_{13456243542}}(1) = [x^-_2(0), x^-_{α_{1345624354}}(1)]_{r^{-2}s^{-1}} = ··· = [x^-_2(0), x^-_4(0), x^-_5(0), x^-_3(0), x^-_4(0), x^-_2(0), x^-_6(0), ..., x^-_3(0), x^-_1(1)]_{(s, ..., s, r^{-1}, s, r^{-1}, s, s, r^{-2}s^{-1})},
x^+_θ(−1) = x^+_{α_{13456243542}}(−1) = [x^+_{α_{1345624354}}(−1), x^+_2(0)]_{r^{-1}s^{-2}} = [x^+_1(−1), x^+_3(0), ..., x^+_6(0), x^+_2(0), x^+_4(0), x^+_3(0), x^+_5(0), x^+_4(0), x^+_2(0)]_{r, ..., r, s^{-1}, r, s^{-1}, r, r, r^{-1}s^{-2}}.

3.4. Two-parameter Drinfel'd Isomorphism Theorem.
We state the main theorem as follows.
Theorem 3.9. Given a simple Lie algebra g of simply-laced type, let θ = α_{i_1} + ··· + α_{i_{h−1}} be the maximal root with respect to a chosen prime root system Π. Then there exists an algebra isomorphism Ψ : U_{r,s}(ĝ) → U_{r,s}(ĝ) defined by: for i ∈ I,
ω_i ↦ ω_i,  ω'_i ↦ ω'_i,  ω_0 ↦ γ'^{-1} ω_θ^{-1},  ω'_0 ↦ γ^{-1} ω'_θ^{-1},
γ^{±1/2} ↦ γ^{±1/2},  γ'^{±1/2} ↦ γ'^{±1/2},  D^{±1} ↦ D^{±1},  D'^{±1} ↦ D'^{±1},
e_i ↦ x^+_i(0),  f_i ↦ x^-_i(0),
e_0 ↦ x^-_θ(1) · (γ'^{-1} ω_θ^{-1}),  f_0 ↦ a τ( x^-_θ(1) · (γ'^{-1} ω_θ^{-1}) ) = a (γ^{-1} ω'_θ^{-1}) · x^+_θ(−1),
where ω_θ = ω_{i_1} ··· ω_{i_{h−1}}, ω'_θ = ω'_{i_1} ··· ω'_{i_{h−1}}, and a = 1 for type A^{(1)}; a = (rs)^{n−2} for type D^{(1)}; a = (rs)^4 for type E^{(1)}_6.
Remark 3.10. Let E_i, F_i (i ∈ I_0) and ω_0, ω'_0 denote the images of e_i, f_i (i ∈ I_0) and ω_0, ω'_0 in the algebra U_{r,s}(ĝ), respectively. Denote by U'_{r,s}(ĝ) the subalgebra of U_{r,s}(ĝ) generated by E_i, F_i, ω_i^{±1}, ω'_i^{±1} (i ∈ I_0), γ^{±1/2}, γ'^{±1/2}, D^{±1}, D'^{±1}, that is,
U'_{r,s}(ĝ) := ⟨ E_i, F_i, ω_i^{±1}, ω'_i^{±1}, γ^{±1/2}, γ'^{±1/2}, D^{±1}, D'^{±1} | i ∈ I_0 ⟩.
Thereby, proving the Drinfeld isomorphism theorem (Theorem 3.9) is equivalent to proving the following three theorems:

Theorem A. Ψ : U_{r,s}(ĝ) → U'_{r,s}(ĝ) is an epimorphism.

Theorem B. U'_{r,s}(ĝ) = U_{r,s}(ĝ).

Theorem C. There exists a surjective homomorphism Φ : U'_{r,s}(ĝ) → U_{r,s}(ĝ) such that ΨΦ = ΦΨ = 1.
4. Proof of Drinfeld Isomorphism Theorem

For completeness, we check Theorem A for type D^{(1)}_n. To check relation (X4), we first consider the case i ≠ 0:
[E_0, F_i] = [x^-_θ(1)(γ'^{-1} ω_θ^{-1}), x^-_i(0)] = −[x^-_i(0), x^-_θ(1)]_{⟨ω'_i, ω_0⟩^{-1}} (γ'^{-1} ω_θ^{-1}).
Thus, relation (X4) in this case follows immediately from the following Lemma 4.1.
Lemma 4.1. Let i ∈ {1, ..., n}. Then we have
[x^-_i(0), x^-_θ(1)]_{⟨ω'_i, ω_0⟩^{-1}} = 0.
To show Lemma 4.1, the following lemmas, which will be proved in the appendix, play a crucial role. For our purpose, we need some notation: for 1 ≤ i < j ≤ n−1,
x^-_{α_{1,i}}(1) = [x^-_i(0), ..., x^-_2(0), x^-_1(1)]_{(s, ..., s)},
x^-_{β_{i,j}}(1) = [x^-_i(0), ..., x^-_n(0), x^-_{n−2}(0), ..., x^-_{j+1}(0), x^-_j(1)]_{(s, ..., s, r^{-1}, ..., r^{-1})}.
Consequently, we get x^-_θ(1) = x^-_{β_{1,2}}(1).

Lemma 4.2. [x^-_{i−1}(0), x^-_{β_{i−1,i+1}}(1)]_{s^{-1}} = 0, for 1 < i < n.

Lemma 4.3. [x^-_i(0), x^-_{β_{i,i+1}}(1)]_{(rs)^{-1}} = 0, for 1 ≤ i ≤ n−1.

Lemma 4.4. [x^-_2(0), x^-_{β_{1,4}}(1)] = 0.

Lemma 4.5. [x^-_i(0), x^-_{β_{1,i+2}}(1)] = 0, for 3 ≤ i ≤ n−2.
The following lemmas can be verified directly.

Lemma 4.6. [x^-_i(0), x^-_{β_{1,i}}(1)]_{s^{-1}} = 0, for 3 ≤ i ≤ n−1.

Lemma 4.7. [x^-_n(0), x^-_n(0), x^-_{α_{1,n}}(1)]_{(rs², r²s)} = 0.

Now, using the above lemmas, we turn to the proof of Lemma 4.1.
Proof of Lemma 4.1. (I) When i = 1, ⟨ω'_1, ω_0⟩ = rs and ⟨ω'_1, ω_θ⟩ = (rs)^{-1}. The claim follows from Lemma 4.3 in the case i = 1.
(II) When i = 2, ⟨ω'_2, ω_0⟩ = s, that is, ⟨ω'_2, ω_θ⟩ = s^{-1}. Let us first consider
x^-_θ(1) = x^-_{β_{1,2}}(1) = [x^-_2(0), x^-_3(0), x^-_{β_{1,4}}(1)]_{(r^{-1}, r^{-1})}   (by definition)
 = [[x^-_2(0), x^-_3(0)]_{r^{-1}}, x^-_{β_{1,4}}(1)]_{r^{-1}} + r^{-1}[x^-_3(0), [x^-_2(0), x^-_{β_{1,4}}(1)]]   (by (3.16); second term = 0 by Lemma 4.4)
 = [[x^-_2(0), x^-_3(0)]_{r^{-1}}, x^-_{β_{1,4}}(1)]_{r^{-1}}.
Our previous result leads us to
[x^-_2(0), x^-_θ(1)]_{s^{-1}} = [x^-_2(0), [x^-_2(0), x^-_3(0)]_{r^{-1}}, x^-_{β_{1,4}}(1)]_{(r^{-1}, s^{-1})}   (by definition)
 = [[x^-_2(0), x^-_2(0), x^-_3(0)]_{(r^{-1}, s^{-1})}, x^-_{β_{1,4}}(1)]_{r^{-1}}   (= 0 by (D9_2))
 + s^{-1}[[x^-_2(0), x^-_3(0)]_{r^{-1}}, [x^-_2(0), x^-_{β_{1,4}}(1)]]_{r^{-1}s}   (= 0 by Lemma 4.4)
 = 0.
(III) When 3 ≤ i ≤ n−1, ⟨ω'_i, ω_0⟩ = 1, that is, ⟨ω'_i, ω_θ⟩ = 1. We may use (3.16) and (D9_1) to show that
[x^-_i(0), x^-_θ(1)] = [x^-_i(0), [x^-_2(0), ..., x^-_{i−2}(0), x^-_{β_{1,i−1}}(1)]_{(r^{-1}, ..., r^{-1})}]   (by definition)
 = [x^-_2(0), ..., x^-_{i−2}(0), [x^-_i(0), x^-_{β_{1,i−1}}(1)]]_{(r^{-1}, ..., r^{-1})}.
For this purpose, it suffices to check that [x^-_i(0), x^-_{β_{1,i−1}}(1)] = 0. It is now straightforward to verify that
[x^-_i(0), x^-_{β_{1,i−1}}(1)]_{r^{-1}s} = [x^-_i(0), [x^-_{i−1}(0), x^-_i(0), x^-_{β_{1,i+1}}(1)]_{(r^{-1}, r^{-1})}]_{r^{-1}s}   (by definition)
 = [x^-_i(0), [x^-_{i−1}(0), x^-_i(0)]_{r^{-1}}, x^-_{β_{1,i+1}}(1)]_{(r^{-1}, r^{-1}s)}   (by (3.16))
 + r^{-1}[x^-_i(0), x^-_i(0), [x^-_{i−1}(0), x^-_{β_{1,i+1}}(1)]]_{(1, r^{-1}s)}   (= 0 by Lemma 4.5)
 = [[x^-_i(0), [x^-_{i−1}(0), x^-_i(0)]_{r^{-1}}]_s, x^-_{β_{1,i+1}}(1)]_{r^{-2}}   (= 0 by (D9_2))
 + s[[x^-_{i−1}(0), x^-_i(0)]_{r^{-1}}, [x^-_i(0), x^-_{β_{1,i+1}}(1)]_{r^{-1}}]_{(rs)^{-1}}   (by (3.17))
 = s[x^-_{i−1}(0), [x^-_i(0), x^-_{β_{1,i}}(1)]_{s^{-1}}]_{r^{-2}}   (= 0 by Lemma 4.6)
 + [[x^-_{i−1}(0), x^-_{β_{1,i}}(1)]_{r^{-1}}, x^-_i(0)]_{r^{-1}s}
 = [x^-_{β_{1,i−1}}(1), x^-_i(0)]_{r^{-1}s}   (by definition),
which implies that (1 + r^{-1}s)[x^-_i(0), x^-_{β_{1,i−1}}(1)] = 0. Since r ≠ −s, we get [x^-_i(0), x^-_{β_{1,i−1}}(1)] = 0.
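The final cancellation step (passing from [A, B]_{r^{-1}s} = [B, A]_{r^{-1}s} to (1 + r^{-1}s)[A, B] = 0) is a purely formal manipulation; it can be confirmed with noncommuting placeholders (our verification, with A, B standing in for the two root vectors):

```python
import sympy as sp

A, B = sp.symbols('A B', commutative=False)   # placeholders for the root vectors
r, s = sp.symbols('r s', positive=True)

def br(x, y, t):
    """[x, y]_t = xy - t yx."""
    return sp.expand(x * y - t * y * x)

# [A,B]_{r^{-1}s} - [B,A]_{r^{-1}s} = (1 + r^{-1}s)(AB - BA) = (1 + r^{-1}s)[A,B]
diff = sp.expand(br(A, B, s / r) - br(B, A, s / r))
target = sp.expand((1 + s / r) * (A * B - B * A))
assert sp.expand(diff - target) == 0
```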
(IV) When i = n, ⟨ω'_n, ω_0⟩ = (rs)^{-2}, that is, ⟨ω'_n, ω_θ⟩ = (rs)². By using (3.16) and (D9_1), we see that
[x^-_n(0), x^-_θ(1)]_{(rs)²} = [x^-_n(0), [x^-_2(0), ..., x^-_{n−3}(0), x^-_{β_{1,n−2}}(1)]_{(r^{-1}, ..., r^{-1})}]_{(rs)²}   (by definition)
 = [x^-_2(0), ..., x^-_{n−3}(0), [x^-_n(0), x^-_{β_{1,n−2}}(1)]_{(rs)²}]_{(r^{-1}, ..., r^{-1})}.
Hence, once we have verified that [x^-_n(0), x^-_{β_{1,n−2}}(1)]_{(rs)²} = 0, we obtain the desired conclusion. We actually have
x^-_{β_{1,n−2}}(1) = [x^-_{n−2}(0), x^-_{n−1}(0), x^-_n(0), x^-_{α_{1,n−1}}(1)]_{(s, r^{-1}, r^{-1})}
 = [x^-_{n−2}(0), [x^-_{n−1}(0), x^-_n(0)]_{(rs)^{-1}}, x^-_{α_{1,n−1}}(1)]_{(s², r^{-1})}   (by (3.16); = 0 by (D9_1))
 + (rs)^{-1}[x^-_{n−2}(0), x^-_n(0), [x^-_{n−1}(0), x^-_{α_{1,n−1}}(1)]_s]_{(rs², ..., r^{-1})}   (by (3.16))
 = (rs)^{-1}[[x^-_{n−2}(0), x^-_n(0)]_{r^{-1}}, x^-_{α_{1,n}}(1)]_{rs²}
 + r^{-2}s^{-1}[x^-_n(0), [x^-_{n−2}(0), x^-_{α_{1,n}}(1)]]_{(rs)²}   (= 0 by (3.16) and (D9_1))
 = (rs)^{-1}[[x^-_{n−2}(0), x^-_n(0)]_{r^{-1}}, x^-_{α_{1,n}}(1)]_{rs²}.
It follows from the above steps that x^-_{β_{1,n−2}}(1) = (rs)^{-1}[x^-_{n−2}(0), x^-_n(0), x^-_{α_{1,n}}(1)]_{(rs², ..., r^{-1})}. Hence, we may easily check that
[x^-_n(0), x^-_{β_{1,n−2}}(1)]_{rs³} = (rs)^{-1}[x^-_n(0), [x^-_{n−2}(0), x^-_n(0)]_{r^{-1}}, x^-_{α_{1,n}}(1)]_{(rs², rs³)}   (by (3.16))
 = (rs)^{-1}[[x^-_n(0), x^-_{n−2}(0), x^-_n(0)]_{(r^{-1}, s)}, x^-_{α_{1,n}}(1)]_{r²s⁴}   (= 0 by (D9_2))
 + r^{-1}[[x^-_{n−2}(0), x^-_n(0)]_{r^{-1}}, [x^-_n(0), x^-_{α_{1,n}}(1)]_{rs²}]_{rs}   (by (3.17))
 = r^{-1}[x^-_{n−2}(0), [x^-_n(0), x^-_n(0), x^-_{α_{1,n}}(1)]_{(rs², r²s)}]_{r^{-2}}   (= 0 by Lemma 4.7)
 + rs[[x^-_{n−2}(0), x^-_n(0), x^-_{α_{1,n}}(1)]_{(rs², r^{-1})}, x^-_n(0)]_{r^{-3}s^{-1}}
 = (rs)²[x^-_{β_{1,n−2}}(1), x^-_n(0)]_{r^{-3}s^{-1}}   (by definition).
So we have (1 + r^{-1}s)[x^-_n(0), x^-_{β_{1,n−2}}(1)]_{(rs)²} = 0. Since r ≠ −s, we get [x^-_n(0), x^-_{β_{1,n−2}}(1)]_{(rs)²} = 0.
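The twisted cancellation used in this last step can likewise be confirmed by formal expansion: from [A, B]_{rs³} = (rs)²[B, A]_{r^{-3}s^{-1}} one indeed gets (1 + r^{-1}s)[A, B]_{(rs)²} = 0 (our verification, with A, B noncommuting placeholders):

```python
import sympy as sp

A, B = sp.symbols('A B', commutative=False)   # placeholders for x^-_n(0), x^-_{beta_{1,n-2}}(1)
r, s = sp.symbols('r s', positive=True)

def br(x, y, t):
    """[x, y]_t = xy - t yx."""
    return sp.expand(x * y - t * y * x)

# [A,B]_{rs^3} - (rs)^2 [B,A]_{r^{-3}s^{-1}} = (1 + r^{-1}s)[A,B]_{(rs)^2}
diff = sp.expand(br(A, B, r * s**3) - (r * s)**2 * br(B, A, 1 / (r**3 * s)))
target = sp.expand((1 + s / r) * br(A, B, (r * s)**2))
assert sp.expand(diff - target) == 0
```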
This completes the proof of Lemma 4.1. We are now ready to prove relation (X4) for i = j = 0, that is,
Proposition 4.8. [ E 0 , F 0 ] = γ ′−1 ω −1 θ −γ −1 ω ′ θ −1 r−s .
Proof. Recalling the construction of E 0 and F 0 , we verify the statement step by step. First, using (D1) and (D5), we have
E 0 , F 0 = (rs) n−2 x − θ (1) γ ′−1 ω θ −1 , γ −1 ω ′ θ −1 x + θ (−1) = (rs) n−2 x − θ (1), x + θ (−1) · (γ −1 γ ′−1 ω θ −1 ω ′ θ −1 ).
τ -INVARIANT GENERATING FUNCTIONS, VERTEX REPRESENTATIONS OF Ur,s( g) 17
We may now use the result from the case of A (1) n−1 :
[ x − α1,n−1 (1), x + α1,n−1 (−1) ] = γω ′ α1,n−1 − γ ′ ω α1,n−1 r − s .
Applying the above result, it is now straightforward to verify that
[ x − β1,n (1), x + β1,n (−1) ] (by definition) = [ [ x − n (0), x − α1,n−1 (1) ] s , [ x + α1,n−1 (−1), x + n (0) ] r ] (by (3.16)) = [ [ [ x − n (0), x + α1,n−1 (−1) ], x − α1,n−1 (1) ] s , x + n (0) ] r (=0 by (3.16) & (D8)) + [ [ x − n (0), [ x − α1,n−1 (1), x + α1,n−1 (−1) ] ] s , x + n (0) ] r (by (D5) & (D8)) + [ x + α1,n−1 (−1), [ [ x − n (0), x + n (0) ], x − α1,n−1 (1) ] s ] r (by (D8) & (D5)) + [ x + α1,n−1 (−1), [ x − n (0), [ x − α1,n−1 (1), x + n (0) ] ] s ] r (=0 by (3.16) & (D8)) = γω ′ α1,n−1 · ω ′ n − ω n r − s + γω ′ α1,n−1 − γ ′ ω α1,n−1 r − s ω n = γω ′ β1,n − γ ′ ω β1,n r − s .
Repeating the above step, we then have
[ x − β1,n−1 (1), x + β1,n−1 (−1) ] (by definition) = [ [ x − n−1 (0), x − β1,n (1) ] r −1 , [ x + β1,n (−1), x + n−1 (0) ] s −1 ] (by (3.16)) = [ [ [ x − n−1 (0), x + β1,n (−1) ], x − β1,n (1) ] r −1 , x + n−1 (0) ] s −1 (=0 by (3.16) & (D8)) + [ [ x − n−1 (0), [ x − β1,n (1), x + β1,n (−1) ] ] r −1 , x + n−1 (0) ] s −1 (by (3.16) & (D8)) + [ x + β1,n (−1), [ [ x − n−1 (0), x + n−1 (0) ], x − β1,n (1) ] r −1 ] s −1 (by (3.16) & (D8)) + [ x + β1,n (−1), [ x − n−1 (0), [ x − β1,n (1), x + n−1 (0) ] ] r −1 ] s −1 (=0 by (3.16) & (D8)) = (rs) −1 γω ′ β1,n · ω ′ n−1 − ω n−1 r − s + (rs) −1 γω ′ β1,n − γ ′ ω β1,n r − s ω n−1 = (rs) −1 γω ′ β1,n−1 − γ ′ ω β1,n−1 r − s .
Furthermore, it follows from the above results that
[ x − β1,n−2 (1), x + β1,n−2 (−1) ] (by definition) = [ [ x − n−2 (0), x − β1,n−1 (1) ] r −1 , [ x + β1,n−1 (−1), x + n−2 (0) ] s −1 ] (by (3.16)) = [ [ [ x − n−2 (0), x + β1,n−1 (−1) ], x − β1,n−1 (1) ] r −1 , x + n−2 (0) ] s −1 (=0 by (D8) & (D5)) + [ [ x − n−2 (0), [ x − β1,n−1 (1), x + β1,n−1 (−1) ] ] r −1 , x + n−2 (0) ] s −1 (by (D5) & (D8)) + [ x + β1,n−1 (−1), [ [ x − n−2 (0), x + n−2 (0) ], x − β1,n−1 (1) ] r −1 ] s −1 (by (D8) & (D5)) + [ x + β1,n−1 (−1), [ x − n−2 (0), [ x − β1,n−1 (1), x + n−2 (0) ] ] r −1 ] s −1 (=0 by (D5) & (D8)) = (rs) −2 γω ′ β1,n−1 · ω ′ n−2 − ω n−2 r − s + (rs) −2 γω ′ β1,n−1 − γ ′ ω β1,n−1 r − s ω n−1 = (rs) −2 γω ′ β1,n−2 − γ ′ ω β1,n−2 r − s .
In the same way, we finally get
[ x − β1,2 (1), x + β1,2 (−1) ] = (rs) 2−n γω ′ β1,2 − γ ′ ω β1,2 r − s .
As a consequence, we obtain the required result
E 0 , F 0 = γ ′−1 ω −1 θ − γ −1 ω ′ θ −1 r − s .
For the rest of this subsection, we will focus on checking the Serre relations of U r,s (D (1) n ).
Lemma 4.9. (1) E n E 0 = (rs) 2 E 0 E n , (2) E 0 E 2 2 − (r + s)E 2 E 0 E 2 + rsE 2 2 E 0 = 0, (3) E 2 0 E 2 − (r + s)E 0 E 2 E 0 + rsE 2 E 2 0 = 0, (4) F 0 F n = (rs) 2 F n F 0 , (5) F 1 F 2 0 − (r + s) F 0 F 1 F 0 + rs F 2 0 F 1 = 0, (6) F 2 2 F 0 − (r + s)F 2 F 0 F 2 + rsF 0 F 2 2 = 0.
Proof.
Relations (4)-(6) follow from the action of τ on relations (1)-(3). Thus it suffices to consider the second and third relations.
(1) For the second equality, it is easy to see that
[ E 2 , x − θ (1) ] (by (3.16)) = [ [ x + 2 (0), x − 2 (0) ], x − β1,3 (1) ] r −1 (by (D8) & (D5)) + [ x − 2 (0), · · · , x − n (0), x − n−2 (0), · · · , x − 3 (0), [ x + 2 (0), x − 2 (0) ], x − 1 (1) ] (s,··· ,s,r −1 ··· ,r −1 ) (=0 by (D8), (D5) & (D9 1 )) = (rs) −1 x − β1,3 (1) ω 2 .
By the above observation, it can be proved in a straightforward manner that
E 0 E 2 2 − (r + s)E 2 E 0 E 2 + rs E 2 2 E 0 = rs E 2 2 x − θ (1) − (1 + r −1 s) E 2 x − θ (1)E 2 + r −1 s x − θ (1)E 2 2 (γ ′−1 ω −1 θ ) = rs E 2 , E 2 , x − θ (1) r −1 s (γ ′−1 ω −1 θ ) = E 2 , x − β1,3 (1) ω 2 (γ ′−1 ω −1 θ ) (=0 by (3.16) & (D8)) = 0.
(2) Using the formula for E 2 , x − θ (1) obtained in (1), we actually have
E 2 0 E 2 − (r + s)E 0 E 2 E 0 + (rs) E 2 E 2 0 = (rs) x − θ (1), x − θ (1), E 2 (1, rs −1 ) (γ ′−2 ω −2 θ ) = − x − θ (1), x − β1,3 (1)ω 2 rs −1 (γ ′−2 ω −2 θ ) = − x − θ (1), x − β1,3 (1) s −1 ω 2 (γ ′−2 ω −2 θ ) (=0 by Lemma 4.10) = 0.
Lemma 4.10. [ x − β1,3 (1), x − β1,2 (1) ] s = 0, for r ≠ −s.
Proof. See the appendix.
Proof of Theorem A for U r,s (E (1) 6 ). As usual, we need to verify some critical relations of Theorem A.
Note that the highest root of the simple Lie algebra E 6 is θ = α 13456243542 = α 1 + α 3 + · · · + α 6 + α 2 + α 4 + α 3 + α 5 + α 4 + α 2 .
The maximal quantum root vectors x − θ (1) and x + θ (−1) are defined as follows
x − θ (1) = x − α13456243542 (1) = [ x − 2 (0), x − α1345624354 (1) ] r −2 s −1 = · · · = [ x − 2 (0), x − 4 (0), x − 5 (0), x − 3 (0), x − 4 (0), x − 2 (0), x − 6 (0), · · · , , x − 3 (0), x − 1 (1) ] (s, ··· , s, r −1 , s, r −1 , s, s, r −2 s −1 ) . x + θ (−1) = x + α13456243542 (−1) = [ x + α1345624354 (−1), x + 2 (0) ] r −1 s −2 = [ x + 1 (−1), x + 3 (0), · · · , x + 6 (0), x + 2 (0), x + 4 (0), x + 3 (0), x + 5 (0), , x + 4 (0), x + 2 (0) ] r, ··· , r, s −1 , r, s −1 , r, r, r −1 s −2 .
Similarly, relation (X4) holds due to Lemma 4.11 below in the case of i = 0.
Lemma 4.11. [ x − i (0), x − θ (1) ] ω ′ i , ω θ = 0, for i = 1, 2, · · · , 6.
To verify Lemma 4.11, the following three Lemmas, which we will check in the appendix, will play a crucial role. To be precise, let x ± i1 i2··· in (k) = x ± αi 1 ,i 2 ,··· ,in (k).
Lemma 4.12. [ x − 2 (0), x − 134562435 (1) ] (rs) −1 = 0.
Lemma 4.13. [ x − 4 (0), x − 1345624 (1) ] r = 0, [ x − 3 (0), x − 1345624354 (1) ] (rs) −1 = 0.
Lemma 4.14. [ x − 1 (0), x − 1345624354 (1) ] (rs) −1 = 0.
Proof of Lemma 4.11. We may now use the previous Lemmas to show that (I) When i = 1, ω ′ 1 , ω 0 = rs and ω ′ 1 , ω θ = (rs) −1 ,
[ x − 1 (0), x − θ (1) ] (rs) −1 (by definition) = [ x − 1 (0), x − 2 (0), x − 1345624354 (1) ] (r −2 s −1 , (rs) −1 ) (by (3.16)) = [ [ x − 1 (0), x − 2 (0) ] , x − 1345624354 (1) ] r −3 s −2 (=0 by (D9 1 )) + [ x − 2 (0), [ x − 1 (0), x − 1345624354 (1) ] (rs) −1 ] r −2 s −1 (=0 by Lemma 4.14) = 0.
(II) When i = 2, ω ′ 2 , ω 0 = rs 2 and ω ′ 2 , ω θ = r −1 s −2 ,
[ x − 2 (0), x − θ (1) ] r −1 s −2 (by definition) = [ x − 2 (0), x − 2 (0), x − 4 (0), x − 134562435 (1) ] (s r −2 s −1 , r −1 s −2 ) (by (3.16)) = [ x − 2 (0), [ x − 2 (0), x − 4 (0) ] r −1 , x − 134562435 (1) ] (r −1 , r −1 s −2 ) (by (3.16)) + r −1 [ x − 2 (0), x − 4 (0), [ x − 2 (0), x − 134562435 (1) ] (rs) −1 ] (rs, r −1 s −2 ) (=0 by Lemma 4.12) = [ [ x − 2 (0), x − 2 (0), x − 4 (0) ] (r −1 , s −1 ) , x − 134562435 (1) ] r −2 s −1 (=0 by (D9 2 )) + s −1 [ [ x − 2 (0), x − 4 (0) ], [ x − 2 (0), x − 134562435 (1) ] (rs) −1 ] r −1 s (=0 by Lemma 4.12) = 0.
(III) When i = 3, ω ′ 3 , ω 0 = rs and ω ′ 3 , ω θ = (rs) −1 . We may easily check that
[ x − 3 (0), x − θ (1) ] (rs) −1 (by definition) = [ x − 3 (0), x − 2 (0), x − 1345624354 (1) ] (r −2 s −1 , (rs) −1 ) (by (3.16)) = [ [ x − 3 (0), x − 2 (0) ] , x − 1345624354 (1) ] r −3 s −2 (=0 by (D9 1 )) + [ x − 2 (0), [ x − 3 (0), x − 1345624354 (1) ] (rs) −1 ] r −2 s −1 (=0 by Lemma 4.13) = 0.
(IV) When i = 4, ω ′ 4 , ω 0 = (rs) −1 and ω ′ 4 , ω θ = rs. We have
[ x − 4 (0), x − θ (1) ] s 2 (by definition) = [ x − 4 (0), x − 2 (0), x − 4 (0), x − 134562435 (1) ] (s r −2 s −1 , s 2 ) (by (3.16)) = [ x − 4 (0), [ x − 2 (0), x − 4 (0) ] r −1 , x − 134562435 (1) ] (r −1 , s 2 ) (by (3.16)) + r −1 [ x − 2 (0), x − 4 (0), [ x − 2 (0), x − 134562435 (1) ] (rs) −1 ] (rs, s 2 ) (=0 by Lemma 4.12) = [ [ x − 4 (0), x − 2 (0), x − 4 (0) ] (r −1 , s) , x − 134562435 (1) ] r −1 s (=0 by (D9 2 )) + s[ [ x − 2 (0), x − 4 (0) ] r −1 , [ x − 4 (0), x − 134562435 (1) ] s ] (rs) −1 (by definition & (3.17)) = s [ x − 2 (0), [ x − 4 (0), x − 1345624354 (1) ] r ] r −3 s −1 (=0 by Lemma 4.13) + rs [ [ x − 2 (0), x − 1345624354 (1) ] r −2 s −1 , x − 4 (0) ] r −2 (by definition) = −r −1 s[ x − 4 (0), x − θ (1) ] r 2 , which implies that (1 + r −1 s) [ x − 4 (0), x − θ (1) ] rs = 0. That is to say, when r ≠ −s, [ x − 4 (0), x − θ (1) ] rs = 0.
(V) The proof of the cases i = 5 and 6 is similar to that of the cases i = 1 and 3, and is left to the reader.
We would like to point out that the proof of the relation [ E 0 , F 0 ] = ω0−ω ′ 0 r−s is the same as that of the case of D (1) n . We now proceed to show the Serre relations for E (1) 6 .
Proposition 4.15. We have the following Serre relations:
(1) E 0 E 2 2 − (rs)(r + s)E 2 E 0 E 2 + (rs) 3 E 2 2 E 0 = 0, (2) E 2 0 E 2 − (rs)(r + s)E 0 E 2 E 0 + (rs) 3 E 2 E 2 0 = 0, (3) F 2 2 F 0 − (rs)(r + s)F 2 F 0 F 2 + (rs) 3 F 0 F 2 2 = 0, (4) F 2 F 2 0 − (rs)(r + s)F 0 F 2 F 0 + (rs) 3 F 2 0 F 2 = 0,
Proof. Here we only give the proof of the first (r, s)-Serre relation; the others are left to the reader.
Let us first prove that
[ E 2 , x − θ (1) ] (by definition & (3.16)) = [ [ x + 2 (0), x − 2 (0) ], x − 1345624354 (1) ] r −2 s −1 (by (D8), (D5)) + [ x − 2 (0), x − 4 (0), x − 5 (0), x − 3 (0), x − 4 (0), [ x + 2 (0), x − 2 (0) ], , x − 13456 (1) ] (r −1 , s, r −1 , s, s, r −2 s −1 ) (=0 by (D8), (D5) & (D9 1 )) = (rs) −2 x − 1345624354
(1) ω 2 . By the above result, we get that
E 0 E 2 2 − (rs)(r + s)E 2 E 0 E 2 + (rs) 2 E 2 2 E 0 = (rs) −2 E 2 2 x − θ (0) − (1 + r −1 s)E 2 x − θ (1)E 2 + (r −1 s) x − θ (1)E 2 2 γ ω −1 θ = (rs) −2 [ E 2 , E 2 , x − θ (1) ] (1, r −1 s) γ ω −1 θ = (rs) −2 [ E 2 , x − θ (1) ] ω 2 = [ x − 4 (0), x − 5 (0), x − 3 (0), x − 4 (0), [ x + 2 (0), x − 2 (0) ], x − 13456 (1) ] (r −1 , s, r −1 , s, s) (=0 by (D8), (D5) & (D9 1 )) = 0.
Proof of Theorem B. This is similar to that of the case A (1) n−1 [HRZ]. We shall show that the algebra U r,s ( g) is generated by
E i , F i , ω ±1 i , ω ′ ±1 i , γ ± 1 2 , γ ′ ± 1 2 (i ∈ I 0 )
. More explicitly, all the generators of the algebra U r,s ( g) lie in the subalgebra U ′ r,s ( g). To do so, we also need the following two Lemmas, which can be checked similarly to [HRZ].
Lemma 4.16. (1) x − 1 (1) = [ E 2 , E 3 , · · · , E n−2 , E n , · · · , E 2 , E 0 ] (s −1 ,··· ,s −1 ,r,r,··· ,r) γ ′ ω 1 ∈ U ′ r,s (D (1) n ); then for any i ∈ I, x − i (1) ∈ U ′ r,s (D (1) n ).
(2) x + 1 (−1) = τ [ E 2 , E 3 , · · · , E n−2 , E n , · · · , E 2 , E 0 ] (s −1 ,··· ,s −1 ,r,r,··· ,r) γ ′ ω 1 = γω ′ 1 [ F 0 , F 2 , · · · , F n−2 , F n , · · · , F 2 , F 0 ] (r −1 ,··· ,r −1 ,r −1 ,s,··· ,s) ∈ U ′ r,s (D (1) n ); then for any i ∈ I, x + i (−1) ∈ U ′ r,s (D (1) n ).
Lemma 4.17. (1)
x − 1 (1) = [ E 3 , · · · , E 6 , E 2 , E 4 , E 3 , E 5 , E 4 , E 2 , E 0 ] (r −1 s −2 ,r,r,s −1 ,r,s −1 ,r,··· ,r) γ ′ ω 1 ∈ U ′ r,s (E (1) 6 ), then for any i ∈ I, x − i (1) ∈ U ′ r,s (E (1) 6 ). (2) x + 1 (−1) = τ E 3 , · · · , E 6 , E 2 , E 4 , E 3 , E 5 , E 4 , E 2 , E 0 ] (r −1 s −2 , r, r, s −1 , r, s −1 , r,··· , r) γ ′ ω 1 = γω ′ 1 [ F 0 , F 2 , F 4 , F 5 , F 3 , F 4 , F 2 , F 6 , · · · , F 3 ] (s,··· ,s,r −1 ,s,r −1 ,s,s,r −2 s −1 ) ∈ U ′ r,s (E (1) 6 ), then for any i ∈ I, x + i (−1) ∈ U ′ r,s (E (1) 6 ).
Furthermore, applying the above results and combining (D8) with (D6), we get the following
Lemma 4.18. (1) a i (l) ∈ U ′ r,s (ĝ), for l ∈ Z\{0}. (2) x ± i (k) ∈ U ′ r,s (ĝ), for k ∈ Z.
Therefore, by induction, all generators are in the subalgebra U ′ r,s ( g). So, this finishes the proof of Theorem B.
Proof of Theorem C. This subsection focuses on showing Theorem C.
Theorem C. There exists a surjective homomorphism Φ : U ′ r,s (ĝ) −→ U r,s (ĝ) such that ΨΦ = ΦΨ = 1.
Proof. We define Ψ on the generators as follows. For i ∈ I 0 ,
Ψ(E i ) = e i , Ψ(F i ) = f i , Ψ(ω i ) = ω i , Ψ(ω ′ i ) = ω ′ i , Ψ(γ) = γ, Ψ(γ ′ ) = γ ′ , Ψ(D) = D, Ψ(D ′ ) = D ′ .
Consequently, it is not difficult to see that ΨΦ = ΦΨ = 1.
This proves the Drinfeld Isomorphism Theorem for the two-parameter quantum affine algebras.
Vertex Representations
In this last section, we construct the level-one vertex representations of the two-parameter quantum affine algebras U r,s ( g) for the types X (1) n (where X = A, D, E). More precisely, in our construction we take c = 1 in the Drinfeld relations.
Two-parameter quantum Heisenberg algebra.
Definition 5.1. The two-parameter quantum Heisenberg algebra U r,s ( h) is the subalgebra of U r,s ( g) generated by { a j (l), γ ± 1 2 , γ ′± 1 2 | l ∈ Z\{0}, j ∈ I }, subject to the following relation for m, l ∈ Z\{0}:
[ a i (m), a j (l) ] = δ m+l,0 (rs) |m| 2 (r i s i ) − ma ij 2 [ ma ij ] i |m| γ |m| − γ ′ |m| r j − s j .
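A quick sanity check on this coefficient (our illustration, not part of the paper): the quantum integers [k] = (r^k − s^k)/(r − s) satisfy [−k] = −(rs)^{−k}[k], which is exactly the identity that makes the right-hand side of the relation antisymmetric under exchanging a_i(m) and a_j(l) in the simply-laced case (a_ij = a_ji, r_i = r, s_i = s). In exact arithmetic:

```python
from fractions import Fraction

def qint(k, r, s):
    """Two-parameter quantum integer [k] = (r^k - s^k)/(r - s), for any integer k."""
    return (r**k - s**k) / (r - s)

# Generic exact parameters with r != +-s.
r, s = Fraction(2), Fraction(3, 5)

# [-k] = -(rs)^(-k) [k]: swapping the two modes flips m -> -m in the relation,
# and this identity absorbs the resulting sign and power of rs.
for k in range(1, 8):
    assert qint(-k, r, s) == -(r * s) ** (-k) * qint(k, r, s)
```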
We denote by U r,s ( h + ) (resp. U r,s ( h − ) ) the commutative subalgebra of U r,s ( h) generated by a j (l) (resp. a j (−l)) with l ∈ Z >0 , j ∈ I. In fact, we have
U r,s ( h − ) = S( h − ), where S( h − ) is the symmetric algebra associated to h − . Then S( h − ) is a U r,s ( h)-module with the action defined by γ ± 1 2 · v = r ± 1 2 v, γ ′± 1 2 · v = s ± 1 2 v, a i (−l) · v = a i (−l) v, a i (l) · v = j (rs) l 2 (r i s i ) − la ij 2 [ la ij ] i l · r l − s l r j − s j · d v d a j (−l)
for any v ∈ S( h − ), l ∈ Z >0 and i ∈ I.
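As a toy model of this action (ours; a single node i = j with a_ii = 2, level one γ = r, γ′ = s, and an even mode l so that (rs)^{l/2} stays rational): a_i(−l) acts by multiplication and a_i(l) by a scaled derivative, so their commutator acts by exactly the scalar prescribed in Definition 5.1.

```python
from fractions import Fraction

# Generic exact parameters and a fixed even mode l.
r, s = Fraction(3), Fraction(1, 2)
l = 2

def qint(k):
    """Quantum integer [k] = (r^k - s^k)/(r - s)."""
    return (r**k - s**k) / (r - s)

# Scalar of Definition 5.1 for i = j (a_ii = 2) at level one (gamma = r, gamma' = s);
# it is also the coefficient in the Fock-space action of a_i(l).
c = (r * s) ** (l // 2) * (r * s) ** (-l) * (qint(2 * l) / l) * (r**l - s**l) / (r - s)

# Polynomials in the single variable a(-l), stored as coefficient lists.
def act_lower(p):
    """a(-l) acts by multiplication by a(-l)."""
    return [Fraction(0)] + p

def act_raise(p):
    """a(l) acts by c * d/d a(-l)."""
    return [k * c * p[k] for k in range(1, len(p))]

p = [Fraction(5), Fraction(-1), Fraction(7)]          # 5 - a(-l) + 7 a(-l)^2
commutator = [u - v for u, v in
              zip(act_raise(act_lower(p)), act_lower(act_raise(p)) + [Fraction(0)])]
assert commutator == [c * coeff for coeff in p]        # [a(l), a(-l)] acts by c
```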
Fock space.
Let Q = i∈I Zα i be the root lattice of g with the Killing form (· | ·). One can form the group algebra K[Q] with basis elements of the form e β (β ∈ Q) and the product e β e β ′ = e β+β ′ , β, β ′ ∈ Q.
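Concretely (a toy model of ours, not the paper's notation), K[Q] can be realized as finitely supported coefficient dictionaries on the lattice, with the product induced by e β e β ′ = e β+β ′ :

```python
from collections import defaultdict

def group_algebra_mul(u, v):
    """Product in K[Q]: elements are dicts {lattice vector (tuple of ints): coefficient}."""
    out = defaultdict(int)
    for beta, cu in u.items():
        for betap, cv in v.items():
            out[tuple(b + bp for b, bp in zip(beta, betap))] += cu * cv
    return {k: c for k, c in out.items() if c != 0}

# e^(alpha_1) * (2 e^(alpha_2) - e^(-alpha_1)) in Q = Z*alpha_1 + Z*alpha_2:
u = {(1, 0): 1}
v = {(0, 1): 2, (-1, 0): -1}
assert group_algebra_mul(u, v) == {(1, 1): 2, (0, 0): -1}
```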
We define the Fock space as
F := S( h − ) ⊗ K[Q], and make it into a U r,s ( h)-module (where U r,s ( h) is generated by { a i (±l), ω ±1 i , ω ′±1 i , γ ± 1 2 , γ ′± 1 2 | i ∈ I, l ∈ Z >0 }) via extending the action of U r,s ( h − ) to the Fock space F . Let z be a complex variable and add the action of α i (0) as follows:
z αi(0) (v ⊗ e β ) = z (αi | β) v ⊗ e β , e α (v ⊗ e β ) = v ⊗ e α+β , a i (±l) · (v ⊗ e β ) = (a i (±l) · v) ⊗ e β , l ∈ Z\{0}, ε i (0) · (v ⊗ e β ) = (ε i , β) v ⊗ e β ,
such that
ω i · (v ⊗ e β ) = β, i v ⊗ e β , ω ′ i · (v ⊗ e β ) = i, β −1 v ⊗ e β .
Vertex operators.
Let ǫ 0 ( , ) : Q × Q → K * be a cocycle such that ǫ 0 (α, β + θ) = ǫ 0 (α, β)ǫ 0 (α, θ) and ǫ 0 (α + β, θ) = ǫ 0 (α, θ)ǫ 0 (β, θ). We construct such a cocycle directly by
ǫ 0 (α i , α j ) = (−r i s i ) a ij /2 for i > j, (rs) 1/2 for i = j, and 1 for i < j.
For α, β ∈ Q, define K-linear operators as
ǫ α (v ⊗ e β ) = ǫ 0 (α, β) v ⊗ e β , D(r)(v ⊗ e β ) = r β v ⊗ e β , D(s)(v ⊗ e β ) = s β v ⊗ e β .
We can now introduce the main vertex operators.
E ± − (α i , z) = exp ± ∞ n=1 s ±n/2 [n] a i (−n)z n , E ± + (α i , z) = exp ∓ ∞ n=1 r ∓n/2 [n] a i (n)z −n , where [n] = r n −s n r−s , for n ∈ Z >0 , α i ∈ Q.
Theorem 5.2. For the simply-laced cases, we have the vertex representation (of level-1) π of U r,s ( g) on the Fock space F as follows:
γ ±1 → r ±1 , γ ′±1 → s ±1 , D → D(r), D ′ → D(s), x + j (z) → X + j (z) = E + − (α j , z)E + + (α j , z)e αj z αj (0)+1 r εj (0)− 1 2 ǫ αj , x − j (z) → X − j (z) = E − − (α j , z)E − + (α j , z)e −αj z −αj(0)+1 s εj (0)− 1 2 ǫ αj , ω ′ j (z) → Φ j (z) = ω ′ j exp − (r − s) k>0 a j (−k)z k , ω j (z) → Ψ j (z) = ω j exp (r − s) k>0 a j (k)z −k .
5.4. Proof of Theorem 5.2. We have to show that the operators X ± i (z), Φ i (z) and Ψ i (z) satisfy all the relations of Drinfeld's realization with γ = r, γ ′ = s. More explicitly, we want to show that X ± i (z), Φ i (z) and Ψ i (z) satisfy relations (3.2)-(3.13) in Definition 3.1. It is clear that (3.2)-(3.5) follow from the construction of the vertex operators Φ i (z) and Ψ i (z). We are going to divide the proof into several steps.
It is easy to see that Ψ i , Φ i and e αj satisfy the following commutation relations:
Φ i (z)e ±αj = i, j ∓1 e ±αj Φ i (z), (5.1) Ψ i (z)e ±αj = j, i ±1 e ±αj Ψ i (z). (5.2)
Note that the action of c is the identity, since here we construct a level-one representation. (3.6) follows from the following lemma.
Lemma 5.3. For i, j ∈ I, we have
g ij z w (rs) 1 2 r Φ i (z)Ψ j (w) = g ij z w (rs) 1 2 s Ψ j (w)Φ i (z). (5.3)
Proof: When a ij = 0, the proof is trivial, so we only check the relation for the case a ij ≠ 0 (i.e., a ij = −1 or a ii = 2).
Φ i (z)Ψ j (w) = ω ′ i exp − (r − s) k>0 a i (−k)z k · ω j exp (r − s) k>0 a j (k)w −k = Ψ j (w)Φ i (z) exp − (r − s) 2 k>0 [ a i (−k), a j (k) ]( z w ) k = Ψ j (w)Φ i (z) exp − k>0 (rs) k 2 i, i − a ij k 2 − i, i a ij k 2 k · [k]( z w ) k = Ψ j (w)Φ i (z) i, i − 1 2 ( z w (rs) 1 2 r)−1 i, i 1 2 ( z w (rs) 1 2 r)−1 −1 i, i − 1 2 ( z w (rs) 1 2 s)−1 i, i 1 2 ( z w (rs) 1 2 s)−1 , i = j, i, i ( z w (rs) 1 2 r)−1 i, i ( z w (rs) 1 2 r)−1 −1 i, i ( z w (rs) 1 2 s)−1 i, i ( z w (rs) 1 2 s)−1 , i = j, = Ψ j (w)Φ i (z)g ij z w (rs) 1 2 r −1 g ij z w (rs) 1 2 s ,
where we used the formal identity log(1 − z) = − ∞ n=1 z n n . Furthermore, relations (3.7) and (3.8) follow from the following Lemma 5.4.
Lemma 5.4. For i, j ∈ I, we have
Φ i (z)X ± j (w)Φ i (z) −1 = g ij z w (rs) 1 2 r ∓ 1 2 ±1 X ± j (w), Ψ i (z)X ± j (w)Ψ i (z) −1 = g ji w z (rs) 1 2 s ± 1 2 ∓1 X ± j (w).
(5.4)
Proof: Naturally, we only check the identities in (5.4) for the case a ij ≠ 0. The vertex operator is a product of two exponential operators and a middle term operator. So we first consider
Φ i (z)E ± + (α j , w) = exp − (r − s) k>0 a i (−k)z k r εi+1(0) i s εi(0) i exp ∓ k>0 r ∓ k 2 [k] a j (k)w −k = E ± + (α j , w)Φ i (z) exp ± (r − s)) k>0 r ∓ k 2 [k] [ a i (−k), a j (k) ]( z w ) k = E ± + (α j , w)Φ i (z) exp ± k>0 (rs) k 2 r ∓ k 2 i, i − a ij k 2 − i, i a ij k 2 k ( z w ) k = E ± + (α j , w)Φ i (z) i, i − 1 2 ( z w (rs) 1 2 r ∓ 1 2 )−1 i, i 1 2 ( z w (rs) 1 2 r ∓ 1 2 )−1 ±1 , i = j, i, i ( z w (rs) 1 2 r ∓ 1 2 )−1 i, i ( z w (rs) 1 2 r ∓ 1 2 )−1 ±1 , i = j, = E ± + (α j , w)Φ i (z)g ij z w (rs) 1 2 r ∓ 1 2 ±1 i, j ±1 .
Proceeding in the same way, we obtain the following result:
Ψ i (z)E ± − (α j , w) = exp (r − s) k>0 a i (k)z −k r εi(0) i s εi+1(0) i exp ± k>0 s ± k 2 [k] a j (−k)w k = E ± − (α j , w)Ψ i (z) exp ± (r − s) k>0 s ± k 2 [k] [ a i (k), a j (−k) ]( w z ) k = E ± − (α j , w)Ψ i (z) exp ± k>0 (rs) k 2 s ± k 2 i, i a ij k 2 − i, i − a ij k 2 k ( z w ) k = E ± − (α j , w)Ψ i (z) i, i − 1 2 ( z w (rs) 1 2 s ± 1 2 )−1 i, i 1 2 ( z w (rs) 1 2 s ± 1 2 )−1 ∓1 , i = j, i, i ( z w (rs) 1 2 s ± 1 2 )−1 i, i ( z w (rs) 1 2 s ± 1 2 )−1 ∓1 , i = j, = E ± − (α j , w)Ψ i (z)g ji z w (rs −1 ) ∓ 1 4 ±1 j, i ∓1 .
Applying (5.1) and (5.2), we arrive at the required results. This completes the proof of the lemma.
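The exponential reshuffling in the two computations above is the special case of the Baker-Campbell-Hausdorff formula e^A e^B = e^{[A,B]} e^B e^A, valid when [A,B] commutes with both A and B (here the bracket [a_i(−k), a_j(k)] is central). A minimal finite-dimensional illustration of this identity (our sketch, with E_pq denoting 3×3 matrix units, computed in exact arithmetic):

```python
from fractions import Fraction

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def exp_nilpotent3(X):
    """exp(X) = I + X + X^2/2 for a strictly upper-triangular 3x3 matrix (exact)."""
    I = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
    X2 = mat_mul(X, X)
    return [[I[i][j] + X[i][j] + X2[i][j] / 2 for j in range(3)] for i in range(3)]

Z = Fraction(0)
a, b = Fraction(3), Fraction(-5, 2)
A = [[Z, a, Z], [Z, Z, Z], [Z, Z, Z]]          # A = a*E_12
B = [[Z, Z, Z], [Z, Z, b], [Z, Z, Z]]          # B = b*E_23
C = [[Z, Z, a * b], [Z, Z, Z], [Z, Z, Z]]      # C = [A, B] = ab*E_13, central here

lhs = mat_mul(exp_nilpotent3(A), exp_nilpotent3(B))
rhs = mat_mul(exp_nilpotent3(C), mat_mul(exp_nilpotent3(B), exp_nilpotent3(A)))
assert lhs == rhs                               # e^A e^B = e^[A,B] e^B e^A
```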
Before checking the relations (3.9) and (3.10) and the quantum Serre relations, we need to introduce a useful notion, normal ordering, which plays an important role in the theory of ordinary vertex operator calculus. Define : α i (n)α j (−n) : = : α j (−n)α i (n) : = α j (−n)α i (n), : α i (0)a j : = : a j α i (0) : = 1 2 (α i (0)a j + a j α i (0)).
We can extend the notion to the vertex operators. For example, we define
: X ± i (z)X ± j (w) : = E ± − (α i , z)E ± − (α j , w)E ± + (α i , z)E ± + (α j , w) · e ±(αi+αj ) z ±αi(0)+1 w ±αj (0)+1 ǫ ±αi ǫ ±αj , : X ± i (z)X ∓ j (w) : = E ± − (α i , z)E ∓ − (α j , w)E ± + (α i , z)E ∓ + (α j , w) · e ±(αi−αj ) z ±αi(0)+1 w ∓αj (0)+1 ǫ ±αi ǫ ∓αj .
Therefore, the following formulas hold : X ± i (z)X ± j (w) : = : X ± j (w)X ± i (z) : , : X ± i (z)X ∓ j (w) : = : X ± j (w)X ∓ i (z) : .
Using the above notation, we give the operator product expansions as follows, where we have used the formula log(1 − x) = − n>0 x n n . Now we turn to check the relation (3.10).
Lemma 5.6. The vertex operators satisfy the following
F ± ij (z, w) X ± i (z) X ± j (w) = G ± ij (z, w) X ± j (w) X ± i (z),
where F ± ij (z, w) and G ± ij (z, w) are defined as in Definition 3.1.
Proof. Similarly, if a ij = 0, it is trivial, so we only show the Lemma for a ij ≠ 0. First we notice that
G ± ij (z, w) F ± ij (z, w) = j, i ±1 z − ( j, i i, j −1 ) ± 1 2 w z − ( i, j j, i ) ± 1 2 w = (rs) ∓ 1 2 (r −1 s) ± 1 2 z−w z−(r −1 s) ± 1 2 w , i > j; (rs) ± 1 2 (r −1 s) ± 1 2 z−w z−(r −1 s) ± 1 2 w , i < j, (rs −1 ) ± 1 2 z−w z−(rs −1 ) ± 1 2 w , i = j, = X ± i (z) X ± j (w) X ± j (w) X ± i (z) = X ± i (z) X ± j (w) X ± j (w) X ± i (z)
.
On the other hand, we have : X ± i (z) X ± j (w) :=: X ± j (w) X ± i (z) : Thus we get the required statement immediately.
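All the contraction factors appearing above come from the same formal identity, exp( n>0 (cx) n /n) = (1 − cx) −1 , i.e. exponentiating −log(1 − cx). A truncation check in exact arithmetic (our sketch; an N-term truncation determines the result up to order N):

```python
from fractions import Fraction

def exp_trunc(coeffs, N):
    """Coefficients of exp(f(x)) mod x^(N+1), where coeffs[n] is the x^n
    coefficient of f and coeffs[0] == 0 (so the sum over f^k/k! is finite)."""
    out = [Fraction(0)] * (N + 1)
    out[0] = Fraction(1)
    term = [Fraction(0)] * (N + 1)
    term[0] = Fraction(1)
    fact = 1
    for k in range(1, N + 1):
        # term <- term * f, truncated; accumulates f^k / k!
        new = [Fraction(0)] * (N + 1)
        for i, ti in enumerate(term):
            if ti == 0:
                continue
            for j in range(1, N + 1 - i):
                new[i + j] += ti * coeffs[j]
        term = new
        fact *= k
        for n in range(N + 1):
            out[n] += term[n] / fact
    return out

N = 10
c = Fraction(2, 3)
f = [Fraction(0)] + [c**n / n for n in range(1, N + 1)]   # -log(1 - c x)
g = exp_trunc(f, N)
# Geometric series: the coefficients of 1/(1 - c x) are c^n.
assert g == [c**n for n in range(N + 1)]
```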
Furthermore, before verifying the relation (3.9), we need the following lemma.
Lemma 5.7. We claim that
: X + i (z)X − i (zr) : = Φ i (zs − 1 2 )(rs) − 1 2 , : X + i (ws −1 )X − i (w) : = Ψ i (wr 1 2 )(rs) − 1 2 , which can be easily verified directly.
Proof. Continuing to use the above notations, we get immediately
: X + i (z)X − i (zr) : = exp ∞ n=1 s n/2 [n] a i (−n)(z) n exp − ∞ n=1 s −n/2 [n] a i (−n)(zr) n exp − ∞ n=1 r −n/2 [n] a i (n)(z) −n exp
We would proceed in the same way with the second formula : X + i (ws −1 )X − i (w) : .
Now, we are ready to check the following proposition, which will yield the relation (3.9).
Proposition 5.8. For i, j ∈ I, one has
[ X + i (z), X − j (w) ] = δ ij r i − s i δ(zw −1 s)ψ i (wr 1 2 ) − δ(zw −1 r)φ i (zs − 1 2 ) .
The quantum Serre relations then reduce to identities of the form (· · ·)(1 − z 1 z 2 ) = 0, where each term is zero. Consequently, we complete the proof of Theorem 5.2.
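The property of the formal δ-function δ(x) = Σ n∈Z x n used in the proof, namely f(x)δ(x) = f(1)δ(x) for a Laurent polynomial f, can be seen on truncations: after multiplying f against Σ |n|≤N x n , every coefficient away from the truncation edges equals f(1). A small pure-Python check (ours):

```python
from fractions import Fraction
from collections import defaultdict

def laurent_mul(f, g):
    """Multiply Laurent polynomials given as {exponent: coefficient} dicts."""
    out = defaultdict(Fraction)
    for m, cf in f.items():
        for n, cg in g.items():
            out[m + n] += cf * cg
    return out

N = 12
delta_N = {n: Fraction(1) for n in range(-N, N + 1)}       # truncation of delta(x)
f = {-1: Fraction(2), 0: Fraction(-3), 2: Fraction(7, 2)}  # f(x) = 2/x - 3 + (7/2)x^2
f_at_1 = sum(f.values())

prod = laurent_mul(f, delta_N)
# In the stable middle range every coefficient equals f(1).
for n in range(-N + 2, N - 1):
    assert prod[n] == f_at_1
```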
Appendix: Proofs of Some Lemmas via Quantum Calculations
Here we provide more details for the proofs of some Lemmas, through which readers can see how the quantum calculations with (r, s)-brackets work.
Proof of Lemma 4.2. It is easy to get that x − βi−1,i+1 (1) = x − i+1 (0), · · · , x − n (0), x − n−2 (0), · · · , x − i+1 (0), [ x − i (0), x − i−1 (1) ] s (s, ··· , s, r −1 , ··· , r −1 ) (by (3.22)) = −(rs) 1 2 x − i+1 (0), · · · , x − n (0), x − n−2 (0), · · · , x − i+1 (0), [ x − i−1 (0), x − i (1) ] r −1 (s, ··· , s, r −1 , ··· , r −1 ) .
Using the above result, one has the following. Suppose Lemma 4.3 is true for the case of i; then for the case of i − 1, we note that
[ x − i−1 (0), x − βi−1,i+1 (1) ] s −1 = −(rs) 1 2 x − i−1 (0), x − i+1 (0), · · · , x − n (0), x − n−2 (0), · · · , x − i+1 (0), [ x − i−1 (0), x − i (1) ] r −1 (s, ··· , s, r −1 , ··· , r −1 ) s −1 (by (3.16) & (D9 1 )) = −(rs) 1 2 x − i+1 (0), · · · , x − n (0), x − n−2 (0), · · · , x − i+1 (0), [ x − i−1 (0), x − i−1 (0), x − i(x − βi−1,i (1) = [ x − i (0)
, · · · , x − n (0), x − n−2 (0), · · · , x − i (0), x − i−1 (1) ] (s, ··· , s, r −1 , ··· , r −1 ) (by (3.22)) = r −1 s x − i (0), · · · , x − n (0), x − n−2 (0), · · · , [ x − i (1), x − i−1 (0) ] r (s, ··· , s, r −1 , ··· , r −1 ) (by (3.16) & (D9 1 )) = · · · = r −1 s x − i (0), [ x − i+1 (0), · · · , x − n (0), x − n−2 (0), · · · , x − i (1) ] (s, ··· , s, r −1 , ··· , r −1 ) ,
x − i−1 (0) (r,r −1 ) (by definition) = r −1 s [ x − i (0), x − βi,i+1 (1), x − i−1 (0) ] (r,r −1 ) (by (3.16)) = r −1 s [ [ x − i (0), x − βi,i+1 (1) ] (rs) −1 , x − i−1 (0) ] rs (=0 by inductive hypothesis) + r −2 [ x − βi,i+1 (1), [ x − i (0), x − i−1 (0) ] s ] r 2 s = r −2 [ x − βi,i+1 (1), [ x − i (0), x − i−1 (0) ] s ] r 2 s .
Using the above identity, then we have
[ x − i−1 (0), x − βi−1,i (1) ] r −2 = r −2 x − i−1 (0), x − βi,i+1 (1), [ x − i (0), x − i−1 (0) ] s r 2 s r −2 (by (3.16)) = r −2 [ x − i−1 (0), x − βi,i+1 (1) ] r −1 , [ x − i (0), x − i−1 (0) ] s rs + r −3 x − ηi,i+1 (1), [ x − i−1 (0), x − i (0), x − i−1 (0) ] (s, r −1 ) (rs) 2 (=0 by (D9 3 )) = r −2 [ x − i−1 (0), x − βi,i+1 (1) ] r −1 , [ x − i (0), x − i−1 (0) ] s rs .
At the same time, we can also get (0), · · · , x − n (0), x − n−2 (0), · · · , x − i+1 (0), x − i (1) ] (s, ··· , s, r −1 , ··· , r −1 ) ] r −1 (by (D9 1 )) = [ x − i+1 (0), · · · , x − n (0), x − n−2 (0), · · · , x − i+1 (0), [ x − i−1 (0), x − i (1) ] r −1 ] (s, ··· , s, r −1 , ··· , r −1 ) ] r −1 (by (3.22) and definition) Added in proof. Part of the work started initially 10 years ago when the first author visited l'DMA, l'Ecole Normale Supéieure de Paris from October to November, 2004, the Fachbereich Mathematik der Universität Hamburg from November 2004 to February 2005. It was not until his visit to ICTP (Trieste, Italy) from March to August, 2006 that Hu found out the explicit formula of the generating function g ij (z) with τ -invariance which leads to the inherent definition for the Drinfeld realization in two-parameter setting. A reason for preventing the submission of this work for 10 years is the newly found and amending constraint: γγ ′ = (rs) c . The original constraint γγ ′ = rs is apparently dissatisfied since product of two group-likes should be still group-like. The current change was realized and made by the first author when the second author tried to construct level-two vertex operator representation and found that the constraint should be γγ ′ = (rs) 2 in that module. Finally, the authors would like to express their thanks to Dr. Yunnan Li, who pointed out a necessary revision for the definition of γ, γ ′ in Definition 2.2, which is crucial for our main Theorem 3.9.
[ x − i−1 (0), x − βi,i+1 (1) ] r −1 = [ x − i−1 (0), [ x − i+1= −(rs) − 1 2 [ x − i+1 (0), · · · , x − n (0), x − n−2 (0), · · · , x − i+1 (0), [ x − i(
i (n)(zr) −n r −αi(0) (rs) εi − 1 2 = exp − (r − s) n>0 a i (−n)(zs − 1 2 ) n r εi+1(0) s εi(0) (rs) − 1 2 = Φ i (zs − 1 2 )(rs) − 1 2 .
1) ] (r −1 , s −1 ) (s, ··· , s, r −1 , ··· , r −1 ) Use induction on i. For the case of i = n−1, by definition, it is easy to see that [ x − n−1 (0), x − βn−1,n (1) ] (rs) −1 = [ x − n−1 (0), x − n (1) ] (rs) −1 = 0.
0), x − i−1 (1) ] s ] (s, ··· , s, r −1 , ··· , r −1 ) ] r −1 (by definition) = −(rs) − 1 2 x − βi−1,i+1 (1).
as vector spaces. Definition 2.10. (Prop. 3.2 [HRZ]) Let τ be the Q-algebra anti-automorphism of U r,s
( z1 w ) n and similar fraction for other formal power series.
(0), A ] (rs) −1 = 0. So, under the condition r ≠ −s, we get [ x − 3 (0), A ] (rs) −1 = 0. Therefore, this completes the proof.
ACKNOWLEDGMENTHu is supported in part by the NNSF of China (No. 11271131), the RSFDP from the MOE of China. Zhang would like to thank the support of NSFC grant (No. 11101258) and Shanghai Leading Academic Discipline Project (J50101). Both authors are indebted to Marc Rosso for his recommending Grossé's arXiv-preprint earlier in 2004, from where the initial motivation stemmed.τ -INVARIANT GENERATING FUNCTIONS, VERTEX REPRESENTATIONS OF Ur,s( g) 27Lemma 5.5. For i = j ∈ I such that a ij = 0, we have the following operator product expansions.where the two factors are called contraction factors, denoted as X ± i (z)X ∓ j (w) and X ± i (z)X ± j (w) respectively in the sequel.Proof. By the definition of normal ordering, the first and third formula follow from the following equalities.It remains to deal with the rest two formulas, which hold from the following results.Proof. Firstly, observe thatSimilarly, we get for i < j,It suffices to show the case of i = j. Firstly, we get directly,On the other hand, it is clear to see thatAs a consequence, the bracket becomesApplying the above Lemma 5.7, then we arrive atwhere we have used the property of δ-function:Lastly, we are left to show the quantum Serre-relations (3.11)-(3.13).For simplicity, we only check the " + " case of a ij = −1, 1 ≤ i < j < n, i.e.,The others can be obtained similarly. Let us review the following formulas for further reference.By the properties of normal ordering, we then have.Note that. Let us proceed to show this formal series identity. Both as formal series and fractions, we can express the contraction factors as follows.Changing the positions of z 1 and z 2 , we obtain the other three expressions for the part {z 1 ↔ z 2 } in the quantum Serre relation. Substituting the above expressions into the right hand side of (5.6) and pulling out the common factors, we obtain thatExpanding the two sides of the above identity, one gets Proof of Lemma 4.4. 
Firstly, note that So it suffices to check the relation [ x − 2 (0), x − α1,4 (1) ] = 0. In fact, it is easy to see thatThen, we obtain (1 + r −1 s)[ x − 2 (0), x − α1,4 (1) ] = 0. When r = −s, we arrive at our required conclusion [ x − 2 (0), x − α1,4 (1) ] = 0.Proof of Lemma 4.6. Repeatedly using(3.16), it is easy to get thatProof of Lemma 4.7. By direct calculation, one hasTherefore, it suffices to show the relation [ x − n (0), x − n (0), x − α1,n−1 (1) ] (s, r) = 0. Indeed, it is easy to get the following conclusionProof of Lemma 4.10. First, we shall check thatIn fact, we get directlyUsing the above fact, we would like to show thatThe above result means that if r = −s, thenwhich will be used in the sequel.Using the above result, we get easily Applying the above statement, we are ready to derive thatProof of Lemma 4.12. Firstly, we need to considerUsing the above result, now we turn to checkTo get our required conclusion, we also need to deal withCombining the definition of quantum root vector with the above relation, one has(1) ] r −1 Therefore, by definition,(3.16)and Serre relations, we arrive atSo, to show Lemma 4.12, it suffices to check that [ x − 2 (0), x − 1345624 (1) ] (rs) −1 = 0. Actually,Expanding the two sides of the above relation, we get immediately In fact, we observe that(1) ] (s, s, s, r −1 , s, r −1 , r −1 ) (by(3.16(=0 by the following conclusion)Applying the result, we obtain immediately Finally, we are left to verify [ x − 3 (0), A ] (rs) −1 = 0. Indeed, it is easy to see that
+ r[ [ x − 4 (0), x − 2 (0) ] s , [ x − 4 (0), x − 13456 (1) ] 1 ] r −2 (=0 by the proof of Lemma 4.12)
(s, 1) = 0. Before giving the proof of the above two conclusions, we also need the following claims, whose proofs are easy and left to the reader. x − 1345624 (1). x − 3 (0), x − 1345620), x − 1345624 (1) ] (s, 1) = 0. Before giving the proof of the above two conclusions, we also need the following claims, whose proofs are easy and left to the reader. [ x − 3 (0), x − 134562 (
Therefore, using the above claims, one has [ x − 3 (0), x − 13456243 (1) ] s −1 (by definition & (3.16)) 0), x − 1345624 (1) ] (s, r −1 s) (by definition & (3.16))
As a consequence, it suffices to verify that [ x − 1 (0), x − 13456243 (1) ] (rs) −1 = 0.
N. Andruskiewitsch, H-J. Schneider, On the classification of finite-dimensional pointed Hopf algebras, Ann. of Math. (2), 171 (1) (2010), 375-417.
X. Bai, N. Hu, Two-parameter quantum groups of exceptional type E-series and convex PBW-type basis, Algebra Colloq., 15 (4) (2008), 619-636.
J. Beck, Braid group action and quantum affine algebras, Comm. Math. Phys., 165 (1994), 555-568.
G. Benkart, M. Pereira and S. Witherspoon, Yetter-Drinfeld modules under cocycle twists, J. Algebra, 324 (11) (2010), 2990-3006.
G. Benkart and S. Witherspoon, Two-parameter quantum groups and Drinfel'd doubles, Algebr. Represent. Theory, 7 (2004), 261-286.
G. Benkart and S. Witherspoon, Representations of two-parameter quantum groups and Schur-Weyl duality, Hopf algebras, pp. 65-92, Lecture Notes in Pure and Appl. Math., 237, Dekker, New York, 2004.
G. Benkart and S. Witherspoon, Restricted two-parameter quantum groups, Fields Institute Communications, "Representations of Finite Dimensional Algebras and Related Topics in Lie Theory and Geometry", vol. 40, Amer. Math. Soc., Providence, RI, 2004, pp. 293-318.
N. Bergeron, Y. Gao and N. Hu, Drinfel'd doubles and Lusztig's symmetries of two-parameter quantum groups, J. Algebra, 301 (2006), 378-405.
N. Bergeron, Y. Gao and N. Hu, Representations of two-parameter quantum orthogonal and symplectic groups, arXiv:math.QA/0510124, AMS/IP Studies in Advanced Mathematics, "Proceedings of the International Conference on Complex Geometry and Related Fields", Vol. 39, pp. 1-21, 2007.
R. Chen, N. Hu, Convex PBW-type Lyndon basis and restricted two-parameter quantum groups of type C, Preprint (2009).
. -Invariant Generating Functions, Vertex Representations Of, Ur, 43τ -INVARIANT GENERATING FUNCTIONS, VERTEX REPRESENTATIONS OF Ur,s( g) 43
Convex PBW-type Lyndon basis and restricted two-parameter quantum groups of type F. X Chen, N Hu, 4PreprintX. Chen, N. Hu, Convex PBW-type Lyndon basis and restricted two-parameter quantum groups of type F 4 , Preprint (2011).
Monomial bases of quantized enveloping algebras, in 'Recent developments in quantum affine algebras and related topics. V Chari, N Xi, Amer. Math. Soc. Contemp. Math. 216V. Chari, N. Xi, Monomial bases of quantized enveloping algebras, in 'Recent developments in quantum affine algebras and related topics', Amer. Math. Soc. Contemp. Math., 216, (2001), 23-57.
A basis of type Poincaré-Birkhoff-Witt for the quantum algebra of sl 2. I Damiani, J. Algebra. 161I. Damiani, A basis of type Poincaré-Birkhoff-Witt for the quantum algebra of sl 2 , J. Algebra, 161 (1993), 291-310.
V G Drinfeld, Quantum groups, in "Proceedings ICM. Berkeley, AmerMath. SocV.G. Drinfeld, Quantum groups, in "Proceedings ICM", Berkeley, Amer. Math. Soc., pp. 798-820, 1987.
A new realization of Yangians and quantized affine algebras. V G Drinfeld, Soviet Math. Dokl. 36V.G. Drinfeld, A new realization of Yangians and quantized affine algebras, Soviet Math. Dokl., 36 (1988), 212-216.
Vertex representations of quantum affine algebras. I Frenkel, N Jing, Proc. Nat'l. Acad. Sci. USA. 85I. Frenkel and N. Jing, Vertex representations of quantum affine algebras, Proc. Nat'l. Acad. Sci. USA., 85 (1998), 9373-9377.
Quantum subgroups of GL α,β (n). G A García, J. Algebra. 3246G. A. García,Quantum subgroups of GL α,β (n), J. Algebra 324 (6) (2010), 1392-1428.
On quantum shuffle and quantum affine algebras. P Grossé, arXiv:math/0107176J. Algebra. 3182P. Grossé, On quantum shuffle and quantum affine algebras, J. Algebra 318 (2) (2007), 495-519. (arXiv:math/0107176).
Lusztig isomorphisms for Drinfeld doubles of bosonizations of Nichols algebras of diagonal type. I Heckenberger, J. Algebra. 323I. Heckenberger, Lusztig isomorphisms for Drinfeld doubles of bosonizations of Nichols alge- bras of diagonal type, J. Algebra, 323 (2010), 2130C-2182.
The Weyl groupoid of a Nichols algebra of diagonal type. Invent. Math. 164-, The Weyl groupoid of a Nichols algebra of diagonal type, Invent. Math. 164 (2006), 175C-188.
N Hu, Y Pei, arXiv.math.QA/0702298Notes on two-parameter quantum groups, (I). 51N. Hu, Y. Pei, Notes on two-parameter quantum groups, (I), arXiv.math.QA/0702298, Sci. in China, Ser. A., 51 (6) (2008), 1101-1110.
Notes on two-parameter quantum groups. arXiv:0908.1635v2Comm. Algebra. 40II-, Notes on two-parameter quantum groups, (II), arXiv: 0908.1635v2, Comm. Algebra, 40 (2012), 3202-3220.
Two-parameter Quantum Affine Algebra Ur,s( sln), Drinfeld Realization and Quantum Affine Lyndon Basis. N Hu, M Rosso, H Zhang, Comm. Math. Phys. 278N. Hu, M. Rosso and H. Zhang, Two-parameter Quantum Affine Algebra Ur,s( sln), Drin- feld Realization and Quantum Affine Lyndon Basis, Comm. Math. Phys., 278 (2008), 453- 486.
The two-parameter quantum group of exceptional type G 2 and Lusztig's symmetries. N Hu, Q Shi, Pacific J. Math. 230N. Hu, Q. Shi, The two-parameter quantum group of exceptional type G 2 and Lusztig's symmetries, Pacific J. Math., 230 (2007), 327-345.
Convex PBW-type Lyndon basis and restricted two-parameter quantum groups of type G 2. N Hu, X Wang, Pacific J. Math. 2412N. Hu, X. Wang, Convex PBW-type Lyndon basis and restricted two-parameter quantum groups of type G 2 , Pacific J. Math., 241 (2) (2009), 243-273.
Convex PBW-type Lyndon basis and restricted two-parameter quantum groups of type B. J. Geom. Phys. 60-, Convex PBW-type Lyndon basis and restricted two-parameter quantum groups of type B, J. Geom. Phys., 60 (2010), 430-453.
Vertex representations of two-parameter quantum affine algebras Ur,s( g): the simply-laced cases. N Hu, H Zhang, ECNU-PreprintN. Hu, H. Zhang, Vertex representations of two-parameter quantum affine algebras Ur,s( g): the simply-laced cases, ECNU-Preprint, 2006.
Vertex representations of two-parameter quantum affine algebras Ur,s( g): the nonsimply-laced cases. N Hu, H Zhang, ECNU-PreprintN. Hu, H. Zhang, Vertex representations of two-parameter quantum affine algebras Ur,s( g): the nonsimply-laced cases, ECNU-Preprint, 2006.
J C Jantzen, Lectures on Quantum Groups. RIAmer. Math. Soc. Providence6J. C. Jantzen, Lectures on Quantum Groups, vol. 6, Graduate Studies in Math., Amer. Math. Soc. Providence, RI, 1996.
Twisted vertex representations of quantum affine algebras. N Jing, Invent. Math. 102N. Jing, Twisted vertex representations of quantum affine algebras, Invent. Math., 102 (1990), 663-690.
On Drinfeld realization of quantum affine algebras. N Jing, Ohio State Univ. Math. Res. Inst. Publ. de Gruyter. 7N. Jing, On Drinfeld realization of quantum affine algebras, Ohio State Univ. Math. Res. Inst. Publ. de Gruyter, Berlin, 7 (1998), 195-206.
Two-parameter quantum vertex representations via finite groups and the McKay correspondence. N Jing, H Zhang, Trans. Amer. Math. Soc. 3637N. Jing, H. Zhang, Two-parameter quantum vertex representations via finite groups and the McKay correspondence, Trans. Amer. Math. Soc., 363 (7) (2011), 3769-3797.
Fermionic realization of two-parameter quantum affine algebra Ur,s( sln). N Jing, H Zhang, Lett. Math. Phys. 89N. Jing, H. Zhang, Fermionic realization of two-parameter quantum affine algebra Ur,s( sln), Lett. Math. Phys., 89 (2009), 159-170.
Addendum to "Drinfeld realization of twisted quantum affine algebras. N Jing, H Zhang, Comm. Algebra. 38N. Jing, H. Zhang, Addendum to "Drinfeld realization of twisted quantum affine algebras", Comm. Algebra, 38 (2010), 3484-3488.
Infinite Dimensinal Lie Algebras. V G Kac, Cambridge University Press3rd editionV. G. Kac, Infinite Dimensinal Lie Algebras, 3rd edition, Cambridge University Press, 1990.
A Klimyk, K Schmüdgen, Quantum Groups and Their Representations. Berlin Heidelberg New YorkSpringer-VerlagA. Klimyk, K. Schmüdgen, Quantum Groups and Their Representations, Springer-Verlag Berlin Heidelberg New York, 1997.
Standard Lyndon bases of Lie algebras and enveloping algebras. M Lalonde, A Ram, Trans. Amer. Math. Soc. 347M. Lalonde, A. Ram, Standard Lyndon bases of Lie algebras and enveloping algebras, Trans. Amer. Math. Soc., 347 (1995), 1821-1830.
Vertex representations for toroidal Lie algebra of type G 2. D Liu, N Hu, J. Pure Appl. Algebra. 1981-3D. Liu, N. Hu, Vertex representations for toroidal Lie algebra of type G 2 , J. Pure Appl. Algebra, 198 (1-3) (2005), 257-279.
Quantum Weyl group and universal quantum R-matrix for affine Lie algebra A (1) 1. S Levendorskii, Y Soibel'man, V Stukopin, Lett. Math. Phys. 274S. Levendorskii, Y. Soibel'man and V. Stukopin, Quantum Weyl group and universal quan- tum R-matrix for affine Lie algebra A (1) 1 , Lett. Math. Phys., 27 (4) (1993), 253-264.
Introduction to Quantum Groups. G Lusztig, Progress in Math. 110G. Lusztig, Introduction to Quantum Groups, Progress in Math. Vol. 110, Birkhäuser Boston, 1993.
Multi-parameter quantum groups and quantum shuffle, (I). Y Pei, N Hu, M Rosso, Comptemp. Math. 506Y. Pei, N. Hu and M. Rosso, Multi-parameter quantum groups and quantum shuffle, (I), Comptemp. Math., 506 (2010), 145-171.
A two-parameter quantization of GL(n). M Takeuchi, Proc. Japan Acad. Japan AcadM. Takeuchi, A two-parameter quantization of GL(n), Proc. Japan Acad., 66 Ser. A (1990), 112-114.
Drinfeld realizations, quantum affine Lyndon bases and vertex representations of two-parameter quantum affine algebras. H Zhang, Ph. Doctoral Dissertation, ECNUH. Zhang, Drinfeld realizations, quantum affine Lyndon bases and vertex representations of two-parameter quantum affine algebras, Ph. Doctoral Dissertation, ECNU, 2007.
Drinfeld realization of twisted quantum affine algebras. H Zhang, N Jing, Comm. Algebra. 35H. Zhang, N. Jing, Drinfeld realization of twisted quantum affine algebras, Comm. Algebra, 35 (2007), 3683-3698.
Two-parameter quantum affine algebra Ur,s( sl 2 ), Algebra Colloq. H Zhang, R Pang, to appearH. Zhang, R. Pang, Two-parameter quantum affine algebra Ur,s( sl 2 ), Algebra Colloq. (to appear).
| [] |
X-ray imaging and multiferroic coupling of cycloidal magnetic domains in ferroelectric monodomain BiFeO3

R. D. Johnson, P. Barone, A. Bombardi, R. J. Bean, S. Picozzi, P. G. Radaelli, Y. S. Oh, S.-W. Cheong, and L. C. Chapon

Department of Physics, Clarendon Laboratory, University of Oxford, Oxford OX1 3PU, United Kingdom
ISIS Facility, Rutherford Appleton Laboratory-STFC, Chilton, Didcot OX11 0QX, United Kingdom
Consiglio Nazionale delle Ricerche, Istituto Superconduttori, materiali innovativi e dispositivi (CNR-SPIN), 67010 L'Aquila, Italy
Diamond Light Source, Harwell Science and Innovation Campus, Didcot OX11 0DE, United Kingdom
Department of Physics and Astronomy (C.M.M.P.), University College London, Gower Street, London WC1E 6BT, United Kingdom
Rutgers Center for Emergent Materials and Department of Physics and Astronomy, 136 Frelinghuysen Road, Piscataway, New Jersey 08854, USA
Institut Laue-Langevin, BP 156, 6 rue Jules Horowitz, 38042 Grenoble Cedex 9, France

arXiv:1303.6987v2 (24 May 2013); DOI: 10.1103/PhysRevLett.110.217206; https://arxiv.org/pdf/1303.6987v2.pdf
Magnetic domains at the surface of a ferroelectric monodomain BiFeO3 single crystal have been imaged by hard X-ray magnetic scattering. Magnetic domains up to several hundred microns in size have been observed, corresponding to cycloidal modulations of the magnetization along the wavevector k = (δ,δ,0) and symmetry-equivalent directions. The rotation direction of the magnetization in all magnetic domains, determined by diffraction of circularly polarized light, was found to be unique and in agreement with predictions of a combined approach based on a spin model complemented by relativistic density-functional simulations. Imaging of the surface shows that the largest adjacent domains display a 120° vortex structure.

PACS numbers: 75.85.+t, 75.60.Ch

The seminal work of I. Dzyaloshinsky [1] on the relativistic origin of weak ferromagnetism in antiferromagnetic substances is intimately connected to various emergent physical phenomena in condensed matter. For example, in the skyrmion lattice the very presence of antisymmetric exchange interactions (Dzyaloshinskii-Moriya [1,2]) in a non-centrosymmetric crystal stabilizes the long-period helical structure in zero magnetic field. Also, for some spin-driven ferroelectrics (multiferroics), the electric polarization is driven by non-collinear magnetic orders: the inverse Dzyaloshinskii-Moriya effect. In this case, a phenomenological formulation [3] shows that for cycloidal magnetic structures, i.e. spins rotating in a plane that contains the magnetic wave-vector (k), the electric polarization (P) transforms as a product involving the magnetization density and its gradient; the so-called Lifshitz invariant of the form P · λ, where λ = (∇ · L)L − (L · ∇)L, and L is the antiferromagnetic order parameter. In a complementary view, the magnetic polarity can be thought of as arising locally from spin currents [4], as λ = k × (S_i × S_j), where S_i and S_j are spins on adjacent sites. Like P, λ is a polar vector, and will be called the magnetic polarity in the remainder.
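The sign behavior of the spin-current expression λ = k × (S_i × S_j) can be checked with a toy numerical example. The vectors and the small rotation angle below are illustrative choices, not values from the paper; the point is only that reversing the rotation direction of the cycloid reverses λ.

```python
import math

# Toy check of lambda = k x (S_i x S_j) for a cycloid whose spins rotate
# in the plane containing k (here +x) and z. Illustrative numbers only.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def magnetic_polarity(rotation_sign, alpha=0.1):
    """lambda for two adjacent spins of a cycloid propagating along +x.

    rotation_sign = +1 tilts the spin from +z toward +x on moving along k;
    rotation_sign = -1 is the opposite rotation direction.
    """
    k = (1.0, 0.0, 0.0)
    s_i = (0.0, 0.0, 1.0)
    s_j = (rotation_sign * math.sin(alpha), 0.0, math.cos(alpha))
    return cross(k, cross(s_i, s_j))

lam_a = magnetic_polarity(+1)  # points along +z
lam_b = magnetic_polarity(-1)  # points along -z: opposite magnetic polarity
```

Reversing either the rotation direction or the propagation direction flips λ, which is why the circular-polarization measurement described below can pin down the absolute rotation sense.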
In BiFeO3, arguably the most studied multiferroic owing to room-temperature magnetoelectric coupling [5], ferroelectricity is the consequence of an improper structural transition at Tc ∼ 1100 K to the polar space group R3c. In bulk samples, the magnetic ordering transition occurs at TN ∼ 640 K. While the two do not coincide, the respective order parameters are coupled through antisymmetric exchange, i.e., P drives the appearance of the inhomogeneous magnetization through a coupling term γλP, where γ is a coupling constant, a scenario originally proposed by Kadomtseva [6]. The magnetic structure can be described locally as canted G-type, but with a long-period modulation (∼620 Å) in the hexagonal basal plane [7]. Subsequent studies [8,9] determined that the modulation is cycloidal, with the spins rotating in the (k, z)-plane, where k can take the three symmetry-equivalent directions k1 = (δ, δ, 0), k2 = (δ, −2δ, 0) and k3 = (−2δ, δ, 0) in the hexagonal setting of the R3c group (employed throughout), and δ = 0.0045 at 300 K.
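The ∼620 Å period is consistent with δ = 0.0045 via the hexagonal d-spacing formula. A quick check (the lattice parameter a ≈ 5.58 Å is an assumed approximate value, not quoted in the text):

```python
import math

# Hexagonal d-spacing: 1/d^2 = (4/3) * (h^2 + h*k + k^2) / a^2.
# For the satellite (h, k) = (delta, delta) this reduces to d = a / (2*delta).
a_hex = 5.58    # approximate hexagonal lattice parameter of BiFeO3, Angstrom (assumed)
delta = 0.0045  # incommensurability at 300 K (from the text)

period = a_hex / (2.0 * delta)  # cycloid period in Angstrom, ~620 A
assert math.isclose(period, a_hex * math.sqrt(3.0 / (4.0 * 3.0 * delta**2)))
```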
In this letter, we study the magnetic domains at the surface of a millimeter-sized single crystal of BiFeO3 with a single ferroelectric (FE) domain. Using the high momentum and spatial resolution of synchrotron X-ray diffraction, combined with circular polarization of the beam, we determine the absolute rotation direction of the magnetization in individual magnetic domains, which are found to have the same magnetic polarity. The sign of γ is determined and compared to model-Hamiltonian and ab initio calculations. The large domains observed appear to form vortex structures, with a closure of the wavevector for three adjacent 120° domains.
Single crystals of several mm^3 were grown from a Bi2O3/Fe2O3/B2O3 flux by slow cooling from 870 °C to 620 °C. A selected crystal was mechanically cut and polished perpendicular to the c-axis, and then annealed to remove any induced strain. A piezoresponse force microscopy (PFM) image of the polished face (not shown) indicated that the surface had a single FE domain, with the electrical polarization pointing down into the sample. We label this domain FE↓, with the opposite polar domain labelled FE↑.

The synchrotron X-ray experiments were performed at Diamond Light Source (UK) on Beamline I16 [10]. A horizontally polarized beam with a flux of ∼10^12 photons per second was delivered by a linear undulator and tuned to an energy of 5.8 keV, off resonance of the chemical elements present in BiFeO3. Circular polarization of the beam was achieved by transmission through a 100 µm thick diamond phase plate, reducing the incident flux by ∼40%. The diamond crystal was aligned to scatter near the (111) reflection in transmission. For a certain deviation ∆θ from the Bragg condition, the crystal behaves as a quarter-wave plate, giving circular light. The handedness of the light is determined by the sign of ∆θ, which was calculated by dynamical scattering theory and confirmed through experimental calibration of the beamline by measuring the X-ray dichroism of a standard ferromagnet.
To prevent contamination of the magnetic signal by charge scattering from neighboring structural reflections (δ is extremely small), we focussed on magnetic satellites of the N = (0,0,9) reflection, which is extinct by the presence of c-glide planes. Additionally, contamination from multiple scattering was fully eliminated by positioning the sample at an azimuthal angle φ = −170.0° with respect to [1,0,0]. Diffraction of λ/2 X-rays was made negligible by employing up-stream harmonic-rejection mirrors. The magnetic signal was clearly identified using the full X-ray beam size (100 µm vertical × 350 µm horizontal) with linearly polarized light, scanning in reciprocal space around the positions of the six satellites N ± k1, N ± k2, N ± k3, for various positions on the crystal surface. The high momentum resolution allows the full separation of the six satellites, shown in Fig. 1, in contrast to previous neutron experiments [8,9]. The beam size was subsequently reduced using slits to create a footprint of 50×50 µm^2 on the crystal surface. An image of the magnetic domains (Fig. 2) was then constructed by step-scanning the sample position with a step size of 50 ± 1 µm, recording the intensities of the magnetic Bragg peaks N + k1, N + k2, N + k3 using rocking-curve scans. This procedure led to the identification of three large magnetic k-domains corresponding to k1, k2 and k3, shown in Fig. 2, and to some smaller domains at the edges of the scanned surface and around a sizeable crystal imperfection in the center of the specimen. The three main domains are extremely large, reaching up to 500 µm in some directions. Note that the average penetration depth of the X-ray beam is 3.3 µm at this energy, placing a lower bound on the domain thickness. Despite the long period of the modulation (620 Å), this result indicates that each domain corresponds to several hundred magnetic periods. The real-space directions of the wave-vectors are shown in Fig. 2b. It appears that the modulation of the magnetization follows a 120° vortex structure, described by the path k1 → k3 → k2 when rotating anticlockwise on the crystal surface.
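The domain-map construction described above reduces, at each raster position, to assigning the pixel to whichever of the three satellites is strongest. A minimal sketch, with synthetic stand-in intensities rather than measured rocking-curve data:

```python
import numpy as np

# Sketch of a k-domain map: at each step of the 50 um raster, record the
# integrated intensity of the three satellites N+k1, N+k2, N+k3 and assign
# the pixel to the strongest one. Intensities here are synthetic.
rng = np.random.default_rng(0)
ny, nx = 8, 8
intensities = rng.random((3, ny, nx))        # axes: [satellite, y, x]

domain_map = np.argmax(intensities, axis=0)  # 0 -> k1, 1 -> k2, 2 -> k3
```

In practice one would also threshold on total intensity so that pixels off the sample (or on crystal imperfections) are left unassigned.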
To determine the absolute rotation direction of the magnetization in each domain (magnetic polarity), scattering data were collected using circularly polarized light.
For alternate chiralities of the X-ray beam (left/right handed), the intensities of the magnetic signals N + k1, N + k2, and N + k3 were recorded after analysis with a pyrolytic graphite crystal as a function of the analyzer angle η, where η = 0° and η = 90° correspond to the σ′ and π′ polarization channels (perpendicular and parallel to the scattering plane), respectively. The incident-light polarization is described by the Stokes vector P_s = (P1, P2, P3) [11], where P1, P2 and P3 represent, respectively, the degree of linear polarization along σ and π, oblique polarization (±45°), and left or right circular polarization. P1 and P2 have been determined by fitting the variation with η of the Thomson scattering intensity for the reflection (0,0,6), taking into account the cross-channel leakage of the analyzer. |P3| was determined by supposing a fully polarized beam, i.e. |P3| = sqrt(1 − P1^2 − P2^2). In our measurements, right- and left-handed light was 93% and 92% circularly polarized, respectively (see supplementary information for the detailed calculations and conventions used). For each magnetic domain, the intensity (I_M) of the corresponding diffraction peak was evaluated using the density-matrix formalism [12]:

I_M(Q, P_s, η) = tr[ D(η) · V_m(Q) · ρ(P_s) · V_m(Q)† ]     (1)
where ρ is the density matrix representing the polarization of the incident beam, and D is the matrix representing the analyser configuration. V_m = B · M(Q) is the scattering amplitude, where B is expressed as a two-by-two matrix on the basis of the σ and π polarizations [11] and M(Q) is the magnetic unit-cell structure factor. For the peaks at Q = (0,0,9) + k_i (i = 1,2,3):

M(Q) = 6 f(Q) [ M − β i M_z ] e^{−i·18πz}     (2)
where f(Q) is the magnetic form factor for Fe3+, calculated in the dipolar approximation from [13], M and M_z are the magnetization vectors of the cycloid along k_i and the c-axis, respectively, and z is the fractional coordinate of Fe in the unit cell (z = 0.2208 at 300 K). In our conventions, β = +1 and β = −1 correspond to cycloids rotating counterclockwise (CCW) and clockwise (CW), respectively, when the structure is viewed propagating along k_i with c up. Comparison of the intensities collected on the three main domains with calculations assuming circular cycloids (Fig. 3) unambiguously demonstrates that all magnetic configurations rotate CW following our definition. This is inferred from the η-positions of the I_M extrema obtained with both light polarizations, which would be interchanged for a structure of opposite magnetic polarity. Within our conventions, λ is oriented in the +c direction, antiparallel to P. Refining the ellipticity of the cycloid (M_z/M) does not lead to significant improvements. This, and the failure to observe higher-order magnetic satellites, supports the picture of a harmonic modulation at 300 K, discussed in [14,15]. No improvement of the fit was obtained by considering a slight tilt of the cycloidal plane, as recently suggested [16].

Fig. 3 caption (fragment): "... −2δ,δ,0). The red (blue) color corresponds to the signal observed with a right-handed (left-handed) X-ray incident polarization. The solid lines show the results of a least-squares refinement of the BiFeO3 magnetic structure assuming β = −1 (CW, see text for details). Bottom: calculated variation of the scattered X-ray intensity with the analyser angle η assuming β = +1 (CCW, see text for details). The direction of the electric polarization P is shown as a green arrow."
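The logic of Eq. (1) can be sketched numerically. The Stokes-to-density-matrix convention and the toy chiral amplitude below are assumptions for illustration only (the real V_m is the full BiFeO3 structure factor of Eq. (2)); the sketch simply shows how flipping the beam helicity moves the scattered intensity between the σ′ and π′ analyzer channels, the signature used in Fig. 3 to fix the rotation direction.

```python
import numpy as np

# Numerical sketch of Eq. (1): I = tr[ D(eta) V rho V^dagger ].
# Conventions (Stokes density matrix, toy amplitude) are assumed.

def stokes_density(p1, p2, p3):
    """Density matrix of the incident beam on the (sigma, pi) basis (assumed convention)."""
    return 0.5 * np.array([[1 + p1, p2 - 1j * p3],
                           [p2 + 1j * p3, 1 - p1]])

def analyzer(eta):
    """Projector onto the analyser direction; eta in radians, 0 = sigma' channel."""
    u = np.array([np.cos(eta), np.sin(eta)])
    return np.outer(u, u)

def intensity(v, rho, eta):
    return np.real(np.trace(analyzer(eta) @ v @ rho @ v.conj().T))

# Toy chiral scattering amplitude: mixes sigma and pi with a 90-degree phase.
v_chiral = np.array([[1.0, 1j], [1j, 1.0]])

i_circ_plus = intensity(v_chiral, stokes_density(0, 0, +1), 0.0)
i_circ_minus = intensity(v_chiral, stokes_density(0, 0, -1), 0.0)
# For one helicity the sigma' channel is dark, for the other it is bright;
# reversing the chirality of v_chiral would interchange the two, which is
# how the sign of beta is determined.
```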
The relationship between ferroelectric and magnetic polarity was further investigated through ab initio spin-constrained calculations in the framework of density-functional theory (DFT). The VASP code [17] with PAW pseudopotentials [18] was employed within the GGA+U approach [19,20] (U ranging between 3 and 7 eV and J = 1 eV for Fe d-states), including spin-orbit coupling, with a plane-wave cutoff of 450 eV. The total polarization was calculated via the Berry-phase formalism [21,22]. Structural parameters for the FE phase were taken from Ref. [23]. Due to its long periodicity, the true modulation of the magnetization is currently inaccessible to DFT. The modulation angle of the antiferromagnetic order parameter is given by θ = 2π(q_x x + q_y y), where q = k_1, k_2, or k_3. Choosing q = k_3, corresponding to a cycloidal modulation of spins rotating in the ac-plane, one needs a supercell na × 2nb × c in order to accommodate θ = 2π/na. The largest possible supercell, 2a × 4b × c, contains 240 atoms (just within the capabilities of state-of-the-art DFT simulations) and has modulation angle π/a. Accordingly, we considered a hypothetical spin configuration where the cycloidal period is reduced to two unit cells along a, with spins rotating CW (see Fig. 4, left panels) or CCW. The total energies of the two states are then compared in two symmetry-equivalent FE states with opposite polarization, and in a reference paraelectric (centrosymmetric, R3̄c) structure. As shown in Table I, the paraelectric state is degenerate with respect to magnetic polarity; this degeneracy is lifted in both FE states. Furthermore, the energy-favored state switches when the polarization is switched. The reliability of this trend has been checked for different values of U, as well as within a conventional local-density approximation, giving |∆E| between 1.1 and 4.7 meV/Fe.
These findings strongly point to a tight relationship between the magnetic polarity of the cycloidal modulation and the FE polarization. However, the rather large energy difference ∆E, as well as the disagreement of the predicted magnetic polarity with the experimental finding, are most probably due to the artificially short modulation of the magnetic configuration imposed in the DFT calculations. Testing this hypothesis by mapping the energy evolution as a function of the modulation vector would require very demanding (if at all possible) DFT calculations. Instead, we adopted a different strategy, as follows.
We introduce a Heisenberg-like spin model with nearest-neighbor (nn) and next-nearest-neighbor (nnn) symmetric, as well as antisymmetric, exchange interactions. The symmetric exchange interactions have been estimated by mapping the DFT energy of collinear ferro- and antiferromagnetic spin configurations onto the Heisenberg model, giving J_nnn/J_nn ∼ 0.03, consistent with the value extrapolated from spin-wave dispersions [24,25]. The antisymmetric exchange interactions for a given direction of P are captured through the magnetoelectric coupling constants γ (nn) and γ_2 (nnn weight). γ and γ_2 are then estimated by imposing the following constraints on the mean-field Heisenberg energy: i) the minimum of the energy occurs at the experimental modulation angle θ_exp ∼ 3.24°, and ii) the energy difference at θ = π (i.e., the spin configuration simulated in our DFT calculations) is equal to ∆E, as evaluated from first principles. Under these assumptions we can estimate γ ≃ 2.38 · 10^−4 V and γ_2^opt = 0.6, with |∆E(θ_exp)| ≃ 0.11 meV/Fe, for U = 5 eV (the same order of magnitude was obtained for U = 3 eV and U = 7 eV). Following Ref. [6], the inhomogeneous magnetoelectric coefficient in the framework of the Landau theory of phase transitions would be γ = 4πA/lP_c ∼ 5.8 · 10^−4 V (with exchange stiffness A = 1.87 · 10^5 eV/cm [6], modulation period l = 620 Å and assuming the calculated P_c = 105.17 µC/cm²), in good qualitative agreement with our estimate. Our model analysis also underlines the relevant role of nnn interactions, as
including J_nnn, the mean-field Heisenberg energy almost reproduces the DFT results even at θ = π, where the only constraint has been imposed on ∆E (Fig. 4). As anticipated, the energy-favored magnetic polarity appears to depend strongly on the modulation angle of the cycloidal configuration and on the relative weight of the nn and nnn antisymmetric exchange interactions, which give rise to opposite energy contributions with a different dependence on θ (as detailed in the supplementary information). For γ_2 ≲ 0.7, the energetic competition between the nn and nnn interactions causes the favored magnetic polarity to change sign when moving from short to long modulation periods, thereby reconciling the DFT and experimental results.
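The Landau-theory estimate quoted above is essentially a unit-conversion exercise and can be checked directly (a sketch, not the authors' code: the numerical inputs are the ones quoted in the text, and the mixed eV/cm, Å and µC/cm² units are converted to volts by hand):

```python
import math

# Inputs quoted in the text (mixed units)
A = 1.87e5          # exchange stiffness, eV/cm
l = 620e-8          # modulation period: 620 Angstrom, expressed in cm
Pc = 105.17         # calculated polarization, microC/cm^2

# gamma = 4*pi*A/(l*Pc) comes out in eV/microC;
# 1 eV/microC = 1.602e-19 C*V / 1e-6 C = 1.602e-13 V
gamma_eV_per_uC = 4 * math.pi * A / (l * Pc)
gamma_volts = gamma_eV_per_uC * 1.602e-19 / 1e-6

print(gamma_volts)  # ~5.8e-4 V, matching the quoted estimate
```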
In summary, magnetic domains of up to 500 µm have been observed at the surface of a single crystal of BiFeO 3 consisting of a single ferroelectric domain. The magnetic cycloids in each domain were found to propagate with a unique rotation direction imposed by the electric polarity of the crystal, in agreement with the predictions of our theoretical study if nnn interactions are taken into account. In future studies, it would be of interest to observe the switching of the rotation direction of the magnetic cycloids upon switching of the ferroelectric polarization by an applied electric field, as observed in TbMnO 3 [26], and predicted by our calculations.
PACS numbers: 75.85.+t, 75.60.Ch, 75.25.-j
FIG. 1. (Color online) Reciprocal-space scans showing magnetic Bragg intensities of the six satellites of the (0,0,9) parent reflection, with k_1 = (δ, δ, 0) (red), k_2 = (δ, −2δ, 0) (green) and k_3 = (−2δ, δ, 0) (blue), and δ ∼ 0.0045. The x- and y-axes are taken along the reciprocal a* direction and the real-space b-direction, respectively.
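As a consistency check between the measured incommensurability δ ∼ 0.0045 and the ~620 Å cycloid period quoted later in the text, the conversion can be done in a few lines (a sketch; the hexagonal lattice parameter a ≈ 5.58 Å for BiFeO3 is an assumed input, and the geometric relation period = a/(2δ) for a (δ, δ, 0) satellite of a hexagonal cell, |a* + b*| = 4π/a, is worked out here rather than taken from the paper):

```python
import math

a_hex = 5.58e-10   # hexagonal lattice parameter of BiFeO3, m (assumed)
delta = 0.0045     # incommensurability of the (delta, delta, 0) satellites

# For hexagonal axes |a* + b*| = 4*pi/a, so k = delta*(a* + b*) has
# |k| = 4*pi*delta/a and the real-space modulation period is 2*pi/|k|:
period = 2 * math.pi / (4 * math.pi * delta / a_hex)   # = a/(2*delta)

print(period * 1e10)   # ~620 Angstrom
```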
FIG. 2. (Color online) a) Photograph of the polished crystal surface of BiFeO3 normal to the (001) axis (hexagonal setting; see text for details). A v-shaped defect is seen in the center of the surface. The downward direction of the electric polarization P determined by PFM is shown (cross), together with the reciprocal a*, b* axes (yellow lines). b) Distribution of antiferromagnetic domains with wave-vectors k_1 = (δ, δ, 0) (red), k_2 = (δ, −2δ, 0) (green) and k_3 = (2δ, −δ, 0) (blue). The direction of propagation of the cycloidal modulation in real-space coordinates is shown for each domain. Each pixel is colored according to the diffraction signal (domain) present; where multiple diffraction peaks overlap (overlapping domains), the pixels are shaded with mixed colors.
FIG. 3. (Color online) Top: Variation of the scattered X-ray intensity with the analyser angle η (circle symbols) for three magnetic reflections (δ, δ, 0), (δ, −2δ, 0), (−2δ, δ, 0). The red (blue) color corresponds to the signal observed with a right-handed (left-handed) X-ray incident polarization. The solid lines show the results of a least-squares refinement of the BiFeO3 magnetic structure assuming β = −1 (CW, see text for details). Bottom: Calculated variation of the scattered X-ray intensity with the analyser angle η assuming β = +1 (CCW, see text for details). The direction of electric polarization P is shown as a green arrow.
FIG. 4. (Color online) Sketch of the considered magnetic configuration in the 2a × 4b × c hexagonal cell of BiFeO3. Upper left: side view. Bottom left: spin configuration for a selected layer of Fe ions. Right panel: mean-field energy as a function of the modulation angle for the FE↓ domain, with all the parameters estimated from DFT with U = 5 eV (see text); the vertical dotted line marks the experimental θ_exp; thick (thin) lines correspond to the total energy with (without) the next-nearest-neighbor contribution J_nnn. A zoom for small modulation angles is also shown. The inset shows the energy difference between CW and CCW configurations for the optimal γ, γ_2 and for artificially modified nnn contribution γ_2.
TABLE I. DFT results obtained for U = 5 eV, J = 1 eV. The energy difference is defined as ∆E = E_CW − E_CCW. FE↑ and FE↓ are characterized by opposite collective displacements, τ, respectively upward and downward, of the Bi sublattice with respect to the O layers perpendicular to the c axis.

      τ (Å)    Pc (µC/cm²)   ∆E (meV/Fe)   Favored rotation
FE↑   0.668    105.17        -2.34         CW
PE    0        0             0             -
FE↓   -0.668   -105.17       2.34          CCW
The work done at the University of Oxford was funded by an EPSRC grant, number EP/J003557/1, entitled "New Concepts in Multiferroics and Magnetoelectrics", and the work at Rutgers was supported by DOE DE-FG02-07ER46328. Work in L'Aquila was supported by the European Research Council (ERC-StG No.203523 BISMUTH) and by the CARIPLO Foundation (No. 2010-0584 ECOMAG).
[1] I. Dzyaloshinsky, J. Phys. Chem. Solids 4, 241 (1958).
[2] T. Moriya, Phys. Rev. 120, 91 (1960).
[3] M. Mostovoy, Phys. Rev. Lett. 96, 067601 (2006).
[4] H. Katsura, N. Nagaosa, and A. V. Balatsky, Phys. Rev. Lett. 95, 057205 (2005).
[5] T. Zhao, A. Scholl, F. Zavaliche, K. Lee, M. Barry, A. Doran, M. P. Cruz, Y. H. Chu, C. Ederer, N. A. Spaldin, R. R. Das, D. M. Kim, S. H. Baek, C. B. Eom, and R. Ramesh, Nat. Mater. 5, 823 (2006).
[6] A. Kadomtseva, A. Zvezdin, Y. Popov, A. Pyatakov, and G. Vorobev, JETP Lett. 79, 571 (2004).
[7] I. Sosnowska, T. P. Neumaier, and E. Steichele, J. Phys. C: Solid State Phys. 15, 4835 (1982).
[8] D. Lebeugle, D. Colson, A. Forget, M. Viret, A. M. Bataille, and A. Goukasov, Phys. Rev. Lett. 100, 227602 (2008).
[9] S. Lee, T. Choi, W. Ratcliff, R. Erwin, S.-W. Cheong, and V. Kiryukhin, Phys. Rev. B 78, 100101 (2008).
[10] S. P. Collins, A. Bombardi, A. R. Marshall, J. H. Williams, G. Barlow, A. G. Day, M. R. Pearson, R. J. Woolliscroft, R. D. Walton, G. Beutier, and G. Nisbet, AIP Conf. Proc. 1234, 303 (2009).
[11] F. de Bergevin and M. Brunel, Acta Crystallogr. Sect. A 37, 314 (1981).
[12] U. Fano, Rev. Mod. Phys. 29, 74 (1957).
[13] P. J. Brown, in International Tables for Crystallography, Vol. C (2006), pp. 454-461.
[14] I. Sosnowska and R. Przeniosło, Phys. Rev. B 84, 144404 (2011).
[15] M. Ramazanoglu, W. Ratcliff, Y. J. Choi, S. Lee, S.-W. Cheong, and V. Kiryukhin, Phys. Rev. B 83, 174434 (2011).
[16] M. Ramazanoglu, M. Laver, W. Ratcliff, S. M. Watson, W. C. Chen, A. Jackson, K. Kothapalli, S. Lee, S.-W. Cheong, and V. Kiryukhin, Phys. Rev. Lett. 107, 207206 (2011).
[17] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
[18] P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
[19] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[20] S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, and A. P. Sutton, Phys. Rev. B 57, 1505 (1998).
[21] R. D. King-Smith and D. Vanderbilt, Phys. Rev. B 47, 1651 (1993).
[22] R. Resta, Rev. Mod. Phys. 66, 899 (1994).
[23] A. Palewicz, I. Sosnowska, R. Przeniosło, and A. Hewat, Acta Phys. Pol. A 117, 296 (2010).
[24] M. Matsuda, R. S. Fishman, T. Hong, C. H. Lee, T. Ushiyama, Y. Yanagisawa, Y. Tomioka, and T. Ito, Phys. Rev. Lett. 109, 067205 (2012).
[25] J. Jeong, E. A. Goremychkin, T. Guidi, K. Nakajima, G. S. Jeon, S.-A. Kim, S. Furukawa, Y. B. Kim, S. Lee, V. Kiryukhin, S.-W. Cheong, and J.-G. Park, Phys. Rev. Lett. 108, 077202 (2012).
[26] F. Fabrizi, H. C. Walker, L. Paolasini, F. de Bergevin, A. T. Boothroyd, D. Prabhakaran, and D. F. McMorrow, Phys. Rev. Lett. 102, 237205 (2009).
Isospin symmetry in mirror α-decays

N. K. Timofeyuk (Department of Physics, School of Electronics and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH, UK),
P. Descouvemont (Physique Nucléaire Théorique et Physique Mathématique, Université Libre de Bruxelles, CP229, B-1050 Brussels, Belgium),
R. C. Johnson (Department of Physics, School of Electronics and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH, UK)

arXiv:nucl-th/0610012, 4 Oct 2006; DOI: 10.1103/PhysRevC.75.034302
PACS numbers: 21.10.Jx, 21.60.Gx, 27.20.+n, 27.30.+t

Abstract. We show that a consequence of isospin symmetry, recently discovered in mirror-conjugated one-nucleon decays, can be extended to mirror-conjugated α-particle decays, both virtual and real. For virtual α-decays of bound mirror pairs this symmetry manifests itself as a relation between the Asymptotic Normalization Coefficients (ANCs) of α-particle overlap integrals. This relation is given by a simple analytical formula which involves the α-particle separation energies and the charges of the residual nuclei. For bound-unbound mirror pairs, the ANC of the bound nucleus is related to the α-width of the mirror unbound level. For unbound mirror pairs we obtain a new analytical formula that relates the widths of mirror resonances. We test the validity of these analytical formulae against the predictions of a two-body potential model and of a many-body microscopic cluster model for several mirror states in the 7Li-7Be, 11B-11C and 19F-19Ne isotopes. We show that these analytical formulae are valid in many cases, but that some deviations can be expected for isotopes with strongly deformed and easily excited cores. In general, the results of the microscopic model are not very sensitive to model assumptions and can be used to predict unknown, astrophysically relevant cross sections using known information about mirror systems.
I. INTRODUCTION
In the last few years, it has been realised that the charge symmetry of the nucleon-nucleon (NN) interaction leads to specific relations between the amplitudes of the mirror-conjugated one-nucleon decays A N Z → A−1 N−1 Z + n and A Z N → A−1 Z−1 N + p [1]. In a mirror pair of bound states this symmetry links the Asymptotic Normalization Coefficients (ANCs) for the mirror-conjugated overlap integrals ⟨A N Z | A−1 N−1 Z ⊗ n⟩ and ⟨A Z N | A−1 Z−1 N ⊗ p⟩. In bound-unbound mirror states, it manifests itself as a link between the neutron ANC and the width of the mirror proton resonance. In both cases this link can be represented by an approximate, simple, model-independent analytical formula that contains only the nucleon binding energies, the nuclear charges and the range of the strong nucleon-core interaction [1]. Comparison with microscopic cluster model calculations [2,3] has shown that the average accuracy of this formula is about 7% for bound mirror pairs [2] and 10% for bound-unbound mirror pairs [3].
The knowledge of the link between mirror ANCs can be beneficial for predicting unknown ANCs using the information about known mirror ANCs. The latter can be used in nuclear astrophysics to predict or verify nucleon capture cross sections at stellar energies. Thus, the proton ANCs for 8B, 9C, 12N and 27P have been determined using the measured neutron ANCs for their mirror analogs 8Li [4], 9Li [5], 12B [6] and 27Mg [7], respectively, and have then been used to predict the astrophysical S-factors for the corresponding non-resonant (p,γ) reactions on 7Be, 8B, 11C and 26Si at low energies. Also, the isospin symmetry in bound-unbound mirror pairs has been used to predict the neutron ANC for the halo nucleus studied in [8].
In this paper, we show that similar consequences of isospin symmetry are present in mirror-conjugated α-decays. Their knowledge may be used in nuclear astrophysics to predict important (α,γ), (α,N) and (N,α) cross sections.
In Sec. II.A we consider bound mirror pairs and derive a simple analytical formula for the ratio of the mirror ANCs squared. As in the case of nucleon decays, the formula depends only on the mirror α-particle binding energies, the nuclear charges and the range of the α-core potential. We test this formula in the two-body model, where exact numerical solutions are available. In Sec. II.B we make predictions in the microscopic cluster model (MCM) for the ANCs of the bound mirror pairs 7Li-7Be, 11B-11C and 19F-19Ne, in which the α-decay threshold is the lowest. All three mirror pairs are important for nuclear astrophysics applications. In Sec. III we consider bound-unbound mirror states of the same pairs of nuclei, both in a two-body model and in the MCM. In Sec. IV we discuss isospin symmetry in mirror resonance states, and in Sec. V we summarise the results obtained and draw conclusions.
II. BOUND MIRROR PAIRS
A. Two-body model with charge-independent α-core strong interaction
We consider (1) a bound system A−4 Z−2 (N−2) + α and (2) its bound mirror analog A−4 N−2 (Z−2) + α in a two-body model. We order these systems in such a way that the binding energy ε_1 of the first system is larger than the second binding energy ε_2. We denote the two cores as X_1 and X_2 and assume that the nuclear α−X_i interaction V_N in the mirror systems is exactly the same, so that all the difference between the wave functions Ψ_1 and Ψ_2 of these mirror systems is determined by the different Coulomb interactions V_C1 and V_C2. In practice, the two mirror α-particle wave functions are close to each other both in the internal nuclear region and on the surface, where the α−X_i potential decreases strongly.
The wave function Ψ i , where i = 1,2, satisfies the Schrödinger equation
(T + V_N + V_Ci + ε_i) Ψ_i = 0    (1)
with binding energy ε_i. The radial part Ψ_l^(i)(r), corresponding to orbital momentum l, behaves asymptotically as

Ψ_l^(i)(r) ≈ C_l^(i) W_{−η_i, l+1/2}(2κ_i r)/r.    (2)

Here C_l^(i) is the α-particle ANC, W is the Whittaker function, κ_i = (2με_i)^{1/2}/ħ, μ is the reduced mass for the α + X_i system (we neglect the i-dependence of μ) and η_i = Z_i Z_α e²μ/(ħ²κ_i).
The ANC C_l^(i) can be represented by the integral

C_l^(i) = −(2μ/ħ²) ∫_0^∞ dr r² φ̃_l^(i)(r) (V_N + V_Ci − Ṽ_i) Ψ_l^(i)(r),    (3)

where the function φ̃_l^(i) is the regular solution of the Schrödinger equation with an arbitrary potential Ṽ_i,
(T_l + Ṽ_i + ε_i) φ̃_l^(i) = 0,    (4)

with the boundary condition

φ̃_l^(1)(r) → φ_l^(1)(r) = e^{−(πi/2)(l+1+η_1)} F_l(iκ_1 r)/κ_1 r,    (5)
for r → ∞, where F is the regular Coulomb function. The only requirement on the potential Ṽ_i is that at large distances r it should cancel the long-range Coulomb interaction potential V_Ci between α and X_i, in order to provide convergence of the integral (3). We exploit the freedom in choosing Ṽ_1 to separate out from the formula (3) for C_l^(2) a term which looks as close as possible to the corresponding formula for C_l^(1). We choose Ṽ_1 to be the Coulomb interaction V_C0^(1) between a point α-particle and a point core X_1, so that
φ̃_l^(1)(r) = φ_l^(1)(r) = e^{−(πi/2)(l+1+η_1)} F_l(iκ_1 r)/κ_1 r    (6)
for all r. We next choose Ṽ_2 so that φ̃_l^(2)(r) is proportional to φ̃_l^(1)(r) for a range of values r < a that will be specified later. For r > a the general requirement on Ṽ_2 at large distances must be satisfied, so we define

Ṽ_2 = ε_1 − ε_2 + V_C0^(1),  r < a;
Ṽ_2 = V_C0^(2),  r ≥ a.    (7)
With this choice in Eq. (4), the function φ̃_l^(2)(r) is the regular solution of the Schrödinger equation

(T_l + V_C0^(1) + ε_1) φ̃_l^(2)(r) = 0,  r < a;
(T_l + V_C0^(2) + ε_2) φ̃_l^(2)(r) = 0,  r ≥ a,    (8)

and is therefore proportional to φ_l^(1)(r) for r < a. Its explicit form is

φ̃_l^(2)(r) = A φ_l^(1)(r),  r ≤ a;
φ̃_l^(2)(r) = φ_l^(2)(r) + B W_{−η_2,l+1/2}(2κ_2 r)/r,  r ≥ a.    (9)
The coefficients A and B are found from the continuity of φ̃_l^(2)(r) and its derivative at r = a:

A = A_0(a) + B W_2/aφ_l^(1),    (10)

where

A_0(a) = φ_l^(2)(a)/φ_l^(1)(a),    (11)

B = A_0′(a)/(W_2/aφ_l^(1))′.    (12)

Here the notation W_2 for W_{−η_2,l+1/2}(2κ_2 r) is introduced, and the prime denotes differentiation with respect to a. With these choices for the Ṽ_i, the formula (3) becomes
−(ħ²/2μ) C_l^(2) = A ∫_0^a dr r² φ_l^(1) (V_N + ∆V_C1) Ψ_l^(2) + ∫_a^∞ dr r² φ̃_l^(2) (V_N + ∆V_C2) Ψ_l^(2) + R_C(a),    (13)

where

∆V_Ci = V_Ci − V_C0^(i)    (14)
and
R_C(a) = A ∫_0^a dr r² φ_l^(1) (V_C2 − V_C1 − ε_1 + ε_2) Ψ_l^(2).    (15)
Introducing the new functions

∆Ψ_12 = Ψ_l^(2) − Ψ_l^(1)    (16)

and

δφ_12(r, a) = φ_l^(2)(r) − A_0(a) φ_l^(1)(r),    (17)

and rearranging all terms in Eq. (13) in such a way that the integrals from a to ∞ do not contain the products φ_l^(1)(r)Ψ_l^(2)(r), which increase with r, we get

−(ħ²/2μ) C_l^(2) = A_0(a) ∫_0^∞ dr r² φ_l^(1) (V_N + ∆V_C1) Ψ_l^(1) + R_C(a) + R_∆Ψ + R_δφ(a) + R_B(a) + R_∆VC(a),    (18)
where the first term on the r.h.s. of Eq. (18) is nothing but −(ħ²/2μ) A_0(a) C_l^(1). We will show that all five remainder terms in Eq. (18) are small compared with either −(ħ²/2μ) A_0(a) C_l^(1) or −(ħ²/2μ) C_l^(2), provided the radius a is chosen in a specific way.
The term R_C(a) is negligible for a < R_N, where R_N is the radius of the nuclear interior, because both the Coulomb difference V_C2 − V_C1 and the binding-energy difference ε_1 − ε_2 are small compared with the nuclear potential V_N. For a > R_N, R_C(a) grows because the function φ_l^(1) increases faster than Ψ_l^(2) decreases. The contribution from R_∆Ψ, where

R_∆Ψ = ∫_0^∞ dr r² φ_l^(2) (V_N + ∆V_C1) ∆Ψ_12,    (19)
does not depend on a and is determined by the difference between the functions Ψ_l^(2) and Ψ_l^(1) in the region that gives the largest contribution to the integral on the r.h.s. of Eq. (19). In the cases considered below, this difference is about 2%.
The term R_δφ(a), defined as

R_δφ(a) = ∫_a^∞ dr r² δφ_12(r, a) V_N Ψ_l^(1) − ∫_0^a dr r² δφ_12(r, a) V_N ∆Ψ_12,    (20)

contains the function δφ_12(r, a), which is equal to zero at r = a. Therefore, if a is at a point where V_N Ψ_l^(1) reaches its maximum and is a decreasing function for r > a, then the contribution from R_δφ(a) will be small. This point can be chosen to be the nuclear radius R_N, which for an α + X system is about (1.1-1.3)(4^{1/3} + X^{1/3}) fm. If at the same time φ_l^(2)(r)/φ_l^(1)(r) varies slowly with r around a, then δφ_12(r, a) ≈ 0, which guarantees that R_δφ(a) is negligible. However, R_δφ(a) increases if a < R_N and φ_l^(2)/φ_l^(1) at r = R_N differs from A_0(a). On the other hand, R_δφ(a) is very small for a > R_N.
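The matching radius prescription quoted above is straightforward to evaluate (a sketch using r_0 = 1.2 fm, one choice within the stated 1.1-1.3 fm range):

```python
R0 = 1.2   # fm, chosen within the quoted (1.1-1.3) fm range

def nuclear_radius(core_mass_number):
    """R_N ~ r0*(4**(1/3) + X**(1/3)) fm for an alpha + X system."""
    return R0 * (4 ** (1 / 3) + core_mass_number ** (1 / 3))

# cores t/3He (A=3), 7Li/7Be (A=7), 15N/15O (A=15)
radii = {core: nuclear_radius(core) for core in (3, 7, 15)}
```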
The next term,
R_B(a) = B ∫_a^∞ dr r W_2 (V_N + ∆V_C2) Ψ_l^(2) + B (W_2/aφ_l^(1)) ∫_0^a dr r² φ_l^(1) (V_N + ∆V_C1) Ψ_l^(2),    (21)
depends on B. B is zero at two points: at a = 0 and at a = a_m, where the function A_0(a) reaches its maximum (in other words, A_0′(a_m) = 0). At all other points the contribution from R_B(a) depends on how large BW_2/aφ_l^(1) is with respect to A_0(a). We show in the Appendix that

BW_2 / (aφ_l^(1) A_0(a)) = (p_2(a) − p_1(a)) / (p_2(a) + p_1(a)),    (22)

where

p_i(a) = [2η_iκ_i/a + l(l+1)/a² + κ_i²]^{1/2}.    (23)

For mirror α states p_2(a) does not differ much from p_1(a), especially near a ≈ R_N. Thus BW_2/aφ_l^(1) ≪ A_0(a) and, therefore, R_B(R_N) will be small compared with −(ħ²/2μ) A_0(a) C_l^(1). The last term,
R_∆VC(a) = ∫_a^∞ dr r² (φ_l^(2) ∆V_C2 − A_0(a) φ_l^(1) ∆V_C1) Ψ_l^(1) − ∫_0^a dr r² (φ_l^(2) ∆V_C2 − A_0(a) φ_l^(1) ∆V_C1) ∆Ψ_12,    (24)

is zero for all a greater than the radius R_c of the α-core Coulomb interaction, and is small for a < R_c if ∆V_Ci ≪ V_N. For all cases considered below, this condition is satisfied.
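The smallness of the B-term correction can be checked numerically for the lightest pair (a sketch, not from the paper: the separation energies 2.467 MeV for 7Li → α + t and 1.587 MeV for 7Be → α + 3He, masses A · 931.494 MeV, e² = 1.440 MeV·fm and r_0 = 1.2 fm are assumed inputs, and p_i is taken in the form p_i(a) = [2η_iκ_i/a + l(l+1)/a² + κ_i²]^{1/2}):

```python
import math

HBARC = 197.327                  # hbar*c, MeV*fm
MU = 931.494 * 4 * 3 / 7         # alpha + A=3 reduced mass, MeV

def kappa_eta(eps, z1z2):
    """kappa (fm^-1) and eta for a bound alpha + core state."""
    kappa = math.sqrt(2 * MU * eps) / HBARC
    eta = z1z2 * 1.440 * MU / (HBARC ** 2 * kappa)
    return kappa, eta

def p(a, kappa, eta, l):
    """Assumed local-momentum form of p_i(a) entering Eq. (23)."""
    return math.sqrt(2 * eta * kappa / a + l * (l + 1) / a ** 2 + kappa ** 2)

k1, e1 = kappa_eta(2.467, 2)     # 7Li = alpha + t  (Z1*Z2 = 2)
k2, e2 = kappa_eta(1.587, 4)     # 7Be = alpha + 3He (Z1*Z2 = 4)
a = 1.2 * (4 ** (1 / 3) + 3 ** (1 / 3))   # R_N ~ 3.6 fm
ratio = (p(a, k2, e2, 1) - p(a, k1, e1, 1)) / (p(a, k2, e2, 1) + p(a, k1, e1, 1))
```

The resulting (p_2 − p_1)/(p_2 + p_1) is at the per-cent level, consistent with R_B(R_N) being a small correction.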
Thus, if Ψ_l^(1) ≈ Ψ_l^(2) is a good approximation and if a is chosen near R_N, then the contributions from all the remainder terms R_i(a) are very small and Eq. (18) reduces to

(ħ²/2μ) C_l^(2) = A_0(a) (ħ²/2μ) C_l^(1).    (25)

Then the ratio

R = (C_l^(2)/C_l^(1))²    (26)

of the mirror squared ANCs can be approximated by the model-independent analytical expression

R ≈ R_0 = A_0²(R_N) = (κ_1 F_l(iκ_2 R_N) / κ_2 F_l(iκ_1 R_N))².    (27)
The accuracy of this approximation depends on how rapidly A 0 (R N ) changes over the region of uncertainty of R N . In all cases considered below this function varies slowly around R N (see the insets in Fig.1 where A 0 (a)/A 0 (a m ) is plotted). The approximation (27) is similar to the formula,
(C_p/C_n)² ≈ (F_l(iκ_p R_N) / κ_p R_N j_l(iκ_n R_N))²,    (28)
obtained in Ref. [1] for the ANCs C_p and C_n of mirror proton and neutron virtual decays, respectively. In principle, Eq. (27) could be obtained from (28) by replacing the spherical Bessel function j_l(iκ_n r) by F_l(iκ_1 R_N)/κ_1 R_N. However, Eq. (28) was obtained in Ref. [1] starting from different assumptions. Namely, it was explicitly assumed that the main contribution to the ANC comes only from the internal nuclear region, r ≤ R_N, that the Coulomb interactions inside the nuclear region can be replaced by constants, and that the difference between these constants is equal to the difference in the proton and neutron binding energies. Our exact two-body calculations have shown that the accuracy of these assumptions is much worse than the accuracy of the formula (27) itself. In particular, all α-particle wave functions have nodes because of the Pauli principle, which causes cancellations between some contributions to the ANC from the internal region, so that the contributions from the surface become important. For large orbital momentum l the surface region, in which the nuclear potential decreases, is even more important. We illustrate this in the insets of Fig. 1. In Fig. 1 we also show the deviations ∆_i from C_l^(2), defined as

∆_i = −(2μ/ħ²) R_i(a)/C_l^(2),    (29)
where i = C, ∆Ψ, δφ, B and ∆V_C, together with the total deviation ∆ = Σ_i ∆_i, for the three mirror pairs 7Li(= α+t)-7Be(= α+3He), 11B(= α+7Li)-11C(= α+7Be) and 19F(= α+15N)-19Ne(= α+15O). The calculations have been done using a Woods-Saxon potential with a diffuseness of 0.65 fm, the radius and depth of which have been adjusted to fit the α-particle energies in the mirror systems. The total spin-parity in all three cases is 3/2⁻ (the second 3/2⁻ state was considered for 11B-11C to enhance the difference in the mirror wave functions), but the orbital momenta l and the numbers of nodes are different. The ratio A_0(a)/A_0(a_m), shown in the insets of Fig. 1, does not change much near R_N. The total deviation ∆ is minimal at r = R_N and is determined mainly by ∆_δφ for r < R_N, and by ∆_C + ∆_B for r > R_N, with ∆_C significantly larger than ∆_B. The contribution from ∆_∆VC is too small to be shown in these figures.
We have performed exact two-body calculations for other states of the mirror pairs 7Li-7Be, 11B-11C and 19F-19Ne using Woods-Saxon potentials with diffuseness varying from 0.35 to 0.95 fm. The sensitivity of the ratio R to the choice of potential was less than 2%. Both the exact ratios R_PM and the analytical approximations R_0 are given in Table I. Since in all cases a_m was very close to R_N and A_0(a) changed very slowly around R_N, the R_0 values in Table I were calculated at R_N = a_m. The ratio R_PM/R_0 is also plotted in Fig. 2. One can see that R_PM and R_0 agree on average to within 2% or less. For 7Li-7Be this agreement is slightly worse, about 3-4%, which can be explained by the larger difference in the internal wave functions due to the smaller Coulomb interaction.
[Fig. 1 (panels a-c): deviations ∆_i(a) (∆_C, ∆_∆Ψ, ∆_δφ, ∆_B) and the total deviation ∆ as functions of a for (a) 7Li(3/2⁻)-7Be(3/2⁻), 1 node, l = 1; (b) 11B(3/2⁻)-11C(3/2⁻), 2 nodes, l = 0; (c) 19F(3/2⁻)-19Ne(3/2⁻). The insets show A_0(a)/A_0(a_m), ∆(a) and C²(a)/C².]
B. Mirror ANCs in a microscopic cluster model
The relation (27) for mirror ANCs obtained in the two-body model can be extended to many-body systems. The expression for an ANC in the many-body case is [9]

C_l^(i) = −(2μ/ħ²) ∫_0^∞ dr r² φ̃_l^(i)(r) ⟨[Φ_Xi^{J_Xi} ⊗ Y_l(r̂)]_{J_A} Φ_α || V_N + V_Ci − Ṽ_i || Ψ_A^{J_A}⟩,    (30)

where Ψ_A^{J_A}, Φ_α and Φ_Xi^{J_Xi} are the many-body wave functions of the nucleus A, the α-particle and the decay product X_i, and J_A and J_Xi are the total spins of A and X_i. The integration in the source term ⟨[Φ_Xi^{J_Xi} ⊗ Y_l(r̂)]_{J_A} Φ_α || V_N + V_Ci − Ṽ_i || Ψ_A^{J_A}⟩ is carried out over the internal coordinates of α and X_i, and the potentials V_N and V_C are the sums of the two-body nuclear and Coulomb interactions. Following the reasoning of Sec. II.A, we get the formula (27). The deviation from this formula will be determined by the remainder terms R_C(a), R_∆Ψ, R_δφ(a), R_B(a) and R_∆VC(a), defined by equations similar to (15), (19), (20), (21) and (24), but in the integrands of which VΨ is replaced by matrix elements of the ⟨[Φ_Xi^{J_Xi} ⊗ Y_l(r̂)]_{J_A} Φ_α || V || Ψ_A^{J_A}⟩ type.
The main difference between the two-body and many-body cases is that V_C − V_{C0} is not zero at r > R_N. It contains long-range contributions from the r^{−λ} (λ ≥ 2) terms, the strengths of which are determined by the matrix elements ⟨[Φ^{J_{X_i}}_{X_i} ⊗ Y_l(r̂)]^{J_A} Φ_α ‖ M(Eλ) ‖ Ψ^{J_A}_A⟩, where M(Eλ) is the electromagnetic operator of multipolarity λ [2]. If these matrix elements are large, then all the remainder terms that contain ΔV_{C_i} may cause significant differences between R and R_0. This is expected for nuclei with strongly deformed and/or easily excited cores.
Another factor that may lead to additional differences between R and R_0 in many-nucleon systems is that the condition Ψ^(1)_l ≈ Ψ^(2)_l for the validity of Eq. (27) in the two-body case is replaced by the equality of the projections ⟨[Φ^{J_{X_i}}_{X_i} ⊗ Y_l(r̂)]^{J_A} Φ_α | Ψ^{J_A}_A⟩ (or overlap integrals) of the mirror wave functions for the nuclei ^A_N Z and ^A_Z N onto the mirror channels X_i + α. If the norms of these overlap integrals (or spectroscopic factors) differ, then the terms R_{ΔΨ}, R_{δφ}(a) and R_{ΔV_C}(a) will increase. This can be especially important for weak components of overlap integrals, where symmetry breaking in the spectroscopic factors may become large.
Our previous study of many-body effects in mirror virtual nucleon decays suggests that they are on average of the order of 7% [2], although stronger deviations in some individual cases were observed as well. Here, we study the many-body effects in mirror α-particle ANCs using a multi-cluster model of the same type as in Ref. [2] for the same mirror pairs 7Li-7Be, 11B-11C and 19F-19Ne considered above in the two-body model.

TABLE I: Microscopic calculations for R_MCM, the analytical estimate R_0 and the potential model estimate R_PM for the mirror pairs from the first column, with the spin-parity J^π and the orbital momentum l of the α-particle. Also shown are the ratios R^MCM_bα = (b_α(2)/b_α(1))², where b_α(i) = C_α(i)/√S_α(i) is the normalized ANC for the nucleus i, and S_α is the spectroscopic factor. The significance of these ratios is discussed in the text. For R_MCM and R^MCM_bα, average values and the range of variations between calculations with the V2 and MN potentials and two different oscillator radii are presented. R_PM is averaged over the choice of different parameters of the Woods-Saxon potentials and shown together with the range of its variation. Columns: Mirror pair, J^π, l, R_MCM, R_0, R_PM, R^MCM_bα.

The multi-channel cluster wave function for a nucleus A consisting of a core X and an α-particle can be represented as follows:

Ψ^{J_A M_A}_A = Σ_{lωJ_X} 𝒜 Φ_α [g^{J_X J_A}_{ωl}(r) ⊗ Φ^{J_X}_X]^{J_A M_A},   (31)
where 𝒜 is the antisymmetrization operator which permutes nucleons between the α-particle and the core. Both the α-particle wave function and the "core" wave function Φ^{J_X}_X corresponding to the total spin J_X are defined in the translation-invariant harmonic-oscillator shell model. In addition, for 11C we used the three-cluster model of Ref. [10], in which Φ^{J_X}_X is defined in a two-cluster model. The quantum number l labels the orbital momentum of the α-particle. The relative wave function g^{J_X J_A}_{ωlm}(r) = g^{J_X J_A}_{ωl}(r) Y_{lm}(r̂) is determined using the microscopic R-matrix method [16] to provide the correct asymptotic behaviour
g^{J_X J_A}_{ωl}(r) ≈ C^{J_X J_A}_{l,ω} W_{−η, l+1/2}(2κr)/r,   r → ∞,   (32)
determined by the Whittaker function and the ANC C^{J_X J_A}_{l,ω}. The MCM requires some choice of the oscillator radius b to describe the internal structure of the clusters. In all three mirror pairs considered in this paper, the oscillator radius that provides a good description of the α-particle differs significantly from that of the core. Dealing with a different b for each of the clusters would create big difficulties in using the MCM. Therefore, we use the same value of b for both clusters but do the calculations twice. The first time we use b = 1.36 fm, which reproduces the r.m.s. radius of the α-particle and minimises its binding energy, and the second time we use either b = 1.5 fm (to describe the triton and/or 3He core for the 7Li-7Be mirror pair) or b = 1.6 fm (for 11B-11C and 19F-19Ne). Our previous calculations for 17O-17F have shown that different oscillator radii change strongly the absolute values of the neutron and proton ANCs but do not change their ratio very much [2]. In the three-cluster calculations for the 11B-11C mirror pair we used only one value of the oscillator radius, b = 1.36 fm, the same as in Ref. [10].
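The matching to the Whittaker tail implied by Eq. (32) can be sketched numerically. The following is not from the paper: it uses mpmath's `whitw` (W_{k,m}(z), with k = −η and m = l + 1/2) on a model tail with arbitrary values of the ANC, η, l and κ, and also checks the large-argument behaviour W_{−η,m}(z) ≈ e^{−z/2} z^{−η}:

```python
# Sketch (not from the paper): numerical check of the Whittaker-function tail
# used in Eq. (32).  mpmath.whitw(k, m, z) gives W_{k,m}(z); for a bound state
# k = -eta and m = l + 1/2.  The ANC extracted as r*g(r)/W(2*kappa*r) must be
# independent of the matching radius once g(r) is purely asymptotic.
import mpmath as mp

eta, l, kappa = 3.0, 1, 0.5   # illustrative Sommerfeld parameter, l, kappa (fm^-1)
C_true = 2.7                  # assumed ANC of a model tail (fm^-1/2)

def tail(r):
    """Model radial function that is purely asymptotic: g(r) = C W(2 kappa r)/r."""
    return C_true * mp.whitw(-eta, l + 0.5, 2 * kappa * r) / r

# Extract the ANC at several matching radii; all estimates must coincide.
estimates = [r * tail(r) / mp.whitw(-eta, l + 0.5, 2 * kappa * r) for r in (8, 12, 20)]
print([float(c) for c in estimates])

# Large-argument behaviour: W_{-eta,m}(z) approaches exp(-z/2) z^(-eta) slowly.
z = 200.0
ratio = mp.whitw(-eta, l + 0.5, z) / (mp.e ** (-z / 2) * mp.mpf(z) ** (-eta))
print(float(ratio))
```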
For each oscillator radius, we use two NN potentials, the Volkov potential V2 [11] and the Minnesota (MN) potential [12], except in the three-cluster calculations for 11B-11C where only V2 is used. The two-body spin-orbit force [13] with S_0 = 30 MeV·fm^5 and the Coulomb interaction are also included. Both V2 and MN have one adjustable parameter that gives the strength of the odd NN potentials V_11 and V_33. We fit this parameter in each case to reproduce the experimental values of the α-particle separation energies. The slightly different adjustable parameters in mirror nuclei, needed to reproduce these energies, simulate charge symmetry breaking of the effective NN interactions, which could be a consequence of charge symmetry breaking in realistic NN interactions.
The range of change of the squared ANCs C²_α(2) and C²_α(1) in the mirror nuclei 2 and 1 is given in Table II. Similar to previous studies of one-nucleon ANCs in Refs. [2,8,14], the V2 potential gives larger C²_α values than the MN potential (up to a factor of two) at a fixed oscillator radius b, and the different choices of b give a comparable change (up to a factor of two) in C²_α at a fixed NN potential. The range of change of the ratio R_MCM with different choices of oscillator radius and NN potential is also given in Table II. For 11B-11C, this range includes changes with different numbers of clusters. In Table I the average value of R_MCM is compared to the analytical estimate R_0 and to the predictions R_PM of the potential model. To visualise the deviation between R_MCM and R_0 we plot the ratio R_MCM/R_0 in Fig. 2.
We have also calculated the α-particle spectroscopic factors S_α, defined as

S_α = \binom{A}{4} ∫₀^∞ dr r² |⟨[Φ^{J_{X_i}}_{X_i} ⊗ Y_l(r̂)]^{J_A} Φ_α | Ψ^{J_A}_A⟩|²,   (33)
and have shown their range of variation in Table II. The ratio R^MCM_S = S_α(2)/S_α(1) of these spectroscopic factors is given in Table II as well and is plotted in Fig. 3. We also calculate the ratio R^MCM_bα = (b_α(2)/b_α(1))² of the normalized squared ANCs, where b_α = C_α/√S_α. As in the case of the mirror virtual nucleon decays studied in Refs. [2,15], the approximate equality R^MCM_bα ≈ R_PM means that in mirror nuclei the effective local nuclear α-core interaction can be considered to be the same.
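As a toy illustration of the normalized-ANC ratio just defined (the numbers below are invented, not taken from Table I), note that R^MCM_bα factorizes into the ratio of squared ANCs divided by the ratio of spectroscopic factors:

```python
# Sketch (toy numbers, not from the paper): the normalized-ANC ratio
# R_b = (b2/b1)^2 with b = C/sqrt(S) equals (C2^2/C1^2) / (S2/S1).
import math

C2_sq, C1_sq = 80.0, 12.0   # assumed squared ANCs C_alpha^2(2), C_alpha^2(1) (fm^-1)
S2, S1 = 0.95, 0.90         # assumed spectroscopic factors

b2 = math.sqrt(C2_sq) / math.sqrt(S2)
b1 = math.sqrt(C1_sq) / math.sqrt(S1)
R_b = (b2 / b1) ** 2

# Equivalent factorized form: R_MCM divided by R_S
R_factored = (C2_sq / C1_sq) / (S2 / S1)
print(R_b, R_factored)
```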
We now discuss the individual mirror pairs in more detail.

7Li-7Be. The squared ANCs in these mirror nuclei change by about 55% with different oscillator radii and NN potentials. However, the ratio C_α(7Be)/C_α(7Li) changes by only about 1.5%, both in the ground and the first excited state. This ratio differs from the analytical estimate R_0 by no more than 3% and 4% for the ground and the first excited state respectively and agrees reasonably well with the potential model calculations. The mirror symmetry in spectroscopic factors is also clearly seen. Some minor differences between R^MCM_bα and R_PM are present, which means that the effective local nuclear t+α and 3He+α interactions differ slightly. Since the 7Li and 7Be ANCs determine the cross sections of the 3H(α,γ)7Li and 3He(α,γ)7Be capture reactions at zero energy, the mirror symmetry of the α-particle ANCs means that relations should exist between the astrophysical S-factors of these reactions. Thus, with our value of R_MCM the ratio S34(7Be)/S34(7Li) at zero energy is 6.6 and 5.9 for the ground and the first excited state respectively.

11B-11C. The calculations for this mirror pair have been performed for all excited states that are below the α-particle emission threshold in 11C. In the two-cluster model, only the ground and the 1/2⁻ first excited state of the 7Li-7Be mirror cores have been taken into account. In the three-cluster model, both the 7Li+α (7Be+α) and t+8Be (3He+8Be) partitions are taken into account, with the first excited state 1/2⁻ in 7Li (7Be) and the first 0⁺ and 2⁺ states in 8Be included [10].
The squared ANCs calculated in the two-cluster MCM change with different NN potential and oscillator radius choices by a factor of four on average (see Table II). Taking the two-cluster nature of 7Li and 7Be into account in most cases significantly increases the ANCs, thus increasing the range of their variation with model assumptions. However, in all cases the ratio R_MCM changes by no more than 9%. The R_MCM values obtained in the two-cluster model are close to the analytical estimate R_0 and to the potential model prediction R_PM, the agreement being within 1-5% (see Fig. 2). For the second 3/2⁻ state with l = 2, a larger deviation from R_0 and R_PM (5-10%) coincides with larger symmetry breaking in the mirror spectroscopic factors (see Fig. 3).

TABLE II: The range of changes in the squared ANCs C²_α(2) and C²_α(1) (in fm⁻¹) for mirror nuclei 2 and 1 (Z₂ > Z₁) and in their ratio R_MCM with the choice of oscillator radius and NN potential. For 11B-11C, this range also includes changes with different numbers of clusters. Also shown are the spectroscopic factors S_α(2), S_α(1) and their ratio R^MCM_S = S_α(2)/S_α(1).
The R_MCM values obtained in the three-cluster MCM are significantly larger than the predictions of the two-cluster model. This is caused mainly by the influence of the t+8Be and 3He+8Be channels. When these channels are removed, so that only the 7Li+α and 7Be+α partitions are left, both the two-cluster and three-cluster MCM predict very similar results for the ratio R_MCM (see Fig. 2). At the same time, the ratio of mirror spectroscopic factors is not much influenced by the t+8Be (3He+8Be) clustering, although for the 5/2⁺ state with l = 3 it is slightly reduced. This happens because the effective local α-7Li and α-7Be interactions differ. This can be seen by comparing the R^MCM_bα obtained in the three-cluster calculations with R_PM. In the two-body calculations these quantities agree with each other, within the uncertainties of their calculation, for most of the mirror states.

19F-19Ne. The two-cluster MCM calculations for this mirror pair have been performed for all excited states that are below the α-particle emission threshold in 19Ne.
III. BOUND-UNBOUND MIRROR PAIRS
The symmetry in mirror α-decays can be extended to bound-unbound mirror pairs. As in the case of nucleon decays [1,3], such a symmetry would manifest itself as a link between the ANC of a bound α-particle state and the width of its analog resonant state. This follows from the possibility of representing the resonance width by an integral similar to (3) and (30). For isolated narrow resonances, the generalization of Eq. (17) of Ref. [3] to the two-body α-particle case gives the width Γ^0_l as
Γ^0_l ≈ (2κ_R/E_R) [∫₀^{R_m} dr r F_l(κ_R r) (V_N − ΔV_C) Ψ^{BSA}_l(r)]²,   (34)
where E_R is the resonance energy, κ_R = √(2μE_R)/ħ, F_l is the regular Coulomb wave function and Ψ^{BSA}_l is the wave function of the α-particle resonance in the bound-state approximation. This function has the dimension of a bound-state wave function and is defined and normalized within some channel radius R_m taken well outside the range of the α-core interaction. The width Γ^0_l defined by Eq. (34) is related to the residue γ²_l at the R-matrix pole by [17]
Γ^0_l = 2κ_R R_m γ²_l / |O_l(κ_R R_m)|²,   (35)
where O l is the outgoing Coulomb function. It determines the observable width Γ l by
Γ_l = Γ^0_l (1 + γ²_l S′_l)⁻¹,   (36)
where S_l = Re(κ_R R_m O′_l/O_l) and the differentiation is performed over the energy E. For very narrow resonances, such that γ²_l S′_l ≪ 1, the observed width Γ_l and the width Γ^0_l related to the residue at the R-matrix pole are the same. It is for such cases that an analytical expression for the ratio
R_Γ = Γ_α/C²_α   (37)
can be derived. Following the reasoning of Sec. II.A we get the approximate model-independent formula
R_Γ ≈ R^res_0 = (ħ²k_R/μ) (ε_b.s./E_R) |F_l(k_R R_N)/F_l(iκ_b.s. R_N)|²,   (38)
where ε_b.s. is the binding energy of the bound α-particle state and κ_b.s. = √(2με_b.s.)/ħ. As in the case of bound mirror pairs, the difference between R^res_0 and the exact value of R_Γ will be determined by remainder terms similar to those given in Eqs. (15), (19), (20), (21) and (24), and their magnitude will depend on how similar the bound-state α-particle wave function and its mirror analog Ψ^{BSA}_l are. As for bound mirror pairs, the formula (38) will be more accurate if the function |F_l(k_R r)/F_l(κ_b.s. r)| varies slowly near r ≈ R_N. This function changes most slowly near its maximum, at r = a_m. In Fig. 4 we plot the function |F_l(k_R a)F_l(κ_b.s. a_m)/F_l(κ_b.s. a)F_l(k_R a_m)| for three mirror pairs of excited states in 11B-11C and 19F-19Ne. The α-particle in the chosen states of 11B and 19F is weakly bound, and its mirror states in 11C and 19Ne are resonances which are important for some astrophysical applications. This ratio is almost constant for r ∼ 4-6 fm, which is close to R_N.
We compare R^res_0, calculated assuming R_N = a_m, to the R_Γ obtained in exact two-body calculations. To perform the two-body calculations, we have chosen an α-core potential of the Woods-Saxon form and varied its diffuseness from 0.35 fm to 0.95 fm. For each diffuseness the depth and the radius of this potential were adjusted to reproduce simultaneously both the α-particle separation energy ε_b.s. in a chosen state and the position E_R of the resonance in its mirror analog. The width has been determined from the behaviour of the resonant phase shift, tan δ_l = Γ_l(E)/[2(E_R − E)], near E_R. The range of change of the squared ANCs and of the resonance widths with the potential geometry is presented in Table III. The widths change by a factor of 1.65 to 4.1, and the squared ANCs in the mirror states change by the same amount, so that R^res_PM changes by less than 2% with respect to an average value. These average values are very close to R^res_0 when l_α ≠ 0 (see Table IV). In the l_α = 0 case, when the centrifugal barrier is absent, the approximation (38) becomes less accurate, with R^res_PM being smaller than R^res_0 by 12%. This loss of accuracy is probably caused by a larger difference in the mirror s-wave functions when one of the α-particles is loosely bound. In all cases, the agreement between R^res_PM and R^res_0 is much better than for nucleon decays in bound-unbound mirror pairs [3].
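The width extraction from the resonant phase shift can be sketched as follows; the resonance energy and width below are illustrative assumptions, and the phase shift is generated from the Breit-Wigner form itself rather than from a potential model:

```python
# Sketch (not from the paper): extracting a resonance width from the behaviour
# of a Breit-Wigner phase shift near E_R.  The "data" here are synthetic;
# E_R and Gamma_true are assumed values.
import math

E_R, Gamma_true = 4.033, 0.012     # assumed resonance energy and width (MeV)

def delta(E):
    """Breit-Wigner phase shift: tan(delta) = (Gamma/2) / (E_R - E)."""
    return math.atan2(Gamma_true / 2.0, E_R - E)

# Near the resonance, d(delta)/dE = 2/Gamma, so Gamma = 2 / delta'(E_R).
h = 1e-6
slope = (delta(E_R + h) - delta(E_R - h)) / (2 * h)
Gamma_extracted = 2.0 / slope
print(Gamma_extracted)
```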
To check the validity of the approximation (38) for many-body systems, we have calculated R_Γ for the bound-unbound mirror states from Tables III and IV using the MCM of the previous section. The width Γ_α has been calculated by solving the Schrödinger-Bloch equation, as described in Ref. [16]. The calculations have been done using two oscillator radii for the potential V2 and only one oscillator radius, 1.36 fm, for the potential MN, because the larger radius, b = 1.6 fm, caused numerical problems. The resulting ratio R^MCM_Γ is presented in Table IV.
IV. UNBOUND MIRROR PAIRS
The ideas of Secs. II and III about mirror symmetry can be immediately applied to the widths of two mirror narrow resonances 2 and 1. For the ratio
R_ΓΓ = Γ_α(2)/Γ_α(1)   (39)
Eqs. (27) and (38) can be generalised straightforwardly to give
R_ΓΓ ≈ R^0_ΓΓ = (k₁/k₂) |F_l(k₂R_N)/F_l(k₁R_N)|²,   (40)
where k_i = √(2μE_i)/ħ and E_i is the resonance energy of the i-th α-particle.
The idea that the widths of two mirror resonances are related has already been used many times to predict unknown widths of resonances whose mirror analogs have known widths. The relation between mirror widths is usually obtained from the relation of the width Γ_α to the Coulomb barrier penetration factor P_l(E, R_N) and the reduced width θ²_α [17]:
Γ_α = (2ħ²/μR²_N) θ²_α P_l(E, R_N),   (41)
where
P_l(E, R_N) = kR_N/[F²_l(kR_N) + G²_l(kR_N)],   (42)

G_l(kR_N)
is the irregular Coulomb function, and R_N is located somewhere on the surface. Assuming that the reduced widths θ_α(1) and θ_α(2) of mirror resonances are equal, one obtains from Eqs. (39), (42) and (41)
R_ΓΓ ≈ R^θ_ΓΓ ≡ (k₂/k₁) [F²_l(k₁R_N) + G²_l(k₁R_N)]/[F²_l(k₂R_N) + G²_l(k₂R_N)].   (43)
Eqs. (40) and (43) are not identical and cannot be deduced one from the other. First, we investigate numerically the difference between the approximations (40) and (43) in a two-body model for a hypothetical mirror pair 19F-19Ne with an arbitrary resonance energy E₁ in the α+15N channel and an energy E₂ in the α+15O channel such that E₂ = E₁ + 0.5 MeV, for all l_α ≤ 5. A difference of about 0.5 MeV is typical for low-lying α-particle resonances in 19F-19Ne. The ratio |F_l(k₂a)/F_l(k₁a)| for such a system is presented in Fig. 5 for the lowest resonance energy in the real α+15N system, E₁ = 0.350 MeV, as a function of a. This ratio varies very slowly for 5 < a < 8 fm and reaches its maximum at about 6-7 fm, which is beyond the nuclear surface radius R_N. To compare (40) and (43) we calculate them both at the surface, R_N = 5 fm, as has been done in other studies of mirror symmetry in the 19F-19Ne resonances [18,19]. The ratio R^0_ΓΓ/R^θ_ΓΓ is plotted in Fig. 6 for different energies E₁ taken below the Coulomb barrier. According to Fig. 6, R^0_ΓΓ and R^θ_ΓΓ are the same for E₁ ≤ 2 MeV, but at higher energies a difference appears. This difference increases with decreasing orbital momentum. The largest difference, about 12%, is seen for l_α = 0 at E₁ ≈ 4 MeV. The most likely reason for this effect is the growth of the resonance width with the resonance energy. At some point, the integral representation (34) loses its accuracy, making the approximation (40) invalid. The higher the centrifugal barrier, the higher the resonance energy can be before this happens.
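A minimal numerical version of this comparison can be sketched with mpmath's regular and irregular Coulomb functions; the charges and reduced mass below are assumptions chosen to mimic α+15N and α+15O, and l is taken as 1 for illustration:

```python
# Sketch (not from the paper): evaluating Eq. (40) and Eq. (43) at R_N = 5 fm
# for a 19F-19Ne-like mirror pair.  mpmath.coulombf(l, eta, z) and
# mpmath.coulombg(l, eta, z) are the regular/irregular Coulomb functions.
import mpmath as mp

hbarc = 197.327                     # MeV fm
alpha_fs = 1.0 / 137.036            # fine-structure constant
mu = 4.0 * 15.0 / 19.0 * 931.494    # reduced mass (MeV/c^2), alpha + A=15 core
R_N, l = 5.0, 1.0

def k_eta(E, Z1Z2):
    """Wave number (fm^-1) and Sommerfeld parameter for c.m. energy E (MeV)."""
    k = mp.sqrt(2 * mu * E) / hbarc
    eta = Z1Z2 * alpha_fs * mu / (hbarc * k)
    return k, eta

E1, E2 = 0.350, 0.850               # resonance energies (MeV)
k1, eta1 = k_eta(E1, 2 * 7)         # alpha + 15N
k2, eta2 = k_eta(E2, 2 * 8)         # alpha + 15O

F1, G1 = mp.coulombf(l, eta1, k1 * R_N), mp.coulombg(l, eta1, k1 * R_N)
F2, G2 = mp.coulombf(l, eta2, k2 * R_N), mp.coulombg(l, eta2, k2 * R_N)

R0 = (k1 / k2) * (F2 / F1) ** 2                          # Eq. (40)
Rtheta = (k2 / k1) * (F1**2 + G1**2) / (F2**2 + G2**2)   # Eq. (43)
print(float(R0), float(Rtheta), float(R0 / Rtheta))
```

At this sub-barrier energy the two prescriptions should agree closely, as stated above for E₁ ≤ 2 MeV.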
Next, we compare R^0_ΓΓ and R^θ_ΓΓ to the results of potential model and MCM calculations for some realistic narrow mirror resonances in 7Li-7Be, 11B-11C and 19F-19Ne. Unlike in the previous sections, only one value of the diffuseness, 0.65 fm, has been used in the potential model calculations. As for the MCM, the conditions of the calculations are the same as in the previous sections.
The calculated widths Γ_α of the mirror resonances and their ratio are presented in Table V. In Table VI these ratios are compared to R^0_ΓΓ and R^θ_ΓΓ. In all cases studied, Γ_α depends strongly on the choice of the model and its parameters. For the 7Li-7Be and 19F-19Ne mirror pairs, the ratios R^MCM_ΓΓ and R^PM_ΓΓ agree very well with the analytical predictions R^0_ΓΓ and R^θ_ΓΓ. For 7Li-7Be they also agree with the experimental value R^exp_ΓΓ = Γ^exp_α(7Be)/Γ^exp_α(7Li) obtained using the 7Li and 7Be widths of the 7/2⁻ resonance from [21]. For the 5/2⁻₂ resonance in 19F-19Ne, the value R^exp_ΓΓ = 121 ± 55, determined by using Γ^exp_α from [19], is much smaller than the theoretical values of 203-211. The most likely reason for this is that the 19Ne(5/2⁻₂) width was determined in Ref. [19] indirectly, using the measured 19Ne(5/2⁻₂) branching ratio Γ_α/Γ and its γ-width, assuming that Γ_γ(19F) = Γ_γ(19Ne). Such an assumption is not always valid.
For 11B-11C, R^PM_ΓΓ agrees very well with the analytical predictions R^0_ΓΓ and R^θ_ΓΓ. The two-cluster MCM predictions also agree with them, except for the 5/2⁻₂ state with l_α = 2, where a 10% increase in the ratio of mirror widths can be seen. The three-cluster MCM increases this ratio, which could be due to the 8Be+t and 8Be+3He clustering effects. Both the two- and three-cluster predictions agree with the ratio of the experimentally determined widths taken from [22]. In all cases, the difference between the microscopic calculations and the analytical approximations (40) and (43) does not exceed 10%.
V. SUMMARY AND CONCLUSION
In this paper, we have shown that structureless two-body bound mirror systems α+X₁ and α+X₂, with the same strong nuclear attraction but different Coulomb repulsion, should have ANCs that are related by the model-independent analytical approximation (27). This expression involves the ratio of the regular Coulomb wave functions calculated at imaginary momentum at some distance a between α and X. We have demonstrated that if this distance is taken at the point where the product of the α-X potential and the α-X wave function is largest, which occurs around R_N ≈ (1.1-1.3)(4^{1/3} + X^{1/3}) fm, the deviation from this approximation should be small provided the nuclear wave functions of these mirror systems are similar to each other in the region that gives the largest contribution to the ANC in Eq. (3). The analytical approximation (27) remains valid for mirror systems with a many-body internal structure if the mirror spectroscopic factors are approximately the same and if X₁ and X₂ are not too strongly deformed and/or do not have easily excited low-lying states.
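For orientation, the quoted surface-radius estimate can be evaluated for the three cores used in this work; the coefficient r0 = 1.2 fm below is an assumed value from the quoted (1.1-1.3) range:

```python
# Sketch (illustrative, not the paper's numerics): the surface-radius estimate
# R_N ≈ r0 * (4^(1/3) + X^(1/3)) for the core mass numbers X = 3, 7, 15
# (t/3He, 7Li/7Be and 15N/15O cores).
r0 = 1.2  # fm, assumed value from the quoted (1.1 - 1.3) range

values = {X: r0 * (4 ** (1 / 3) + X ** (1 / 3)) for X in (3, 7, 15)}
for X, R_N in values.items():
    print(f"X = {X:2d}: R_N = {R_N:.2f} fm")
```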
The isospin symmetry between mirror α-decays extends to bound-unbound and unbound mirror pairs. In the first case, a link between the α-particle ANC of a bound state and the width of its mirror unbound analog is given by the formula (38). In the second case, the link between the widths of mirror resonances can be given by a new formula (40) that at the energies well below the combined Coulomb and centrifugal barrier complements the old formula (43) obtained using the concept of the penetrability of the Coulomb barrier and assuming equality of the reduced widths of mirror resonances.
The comparison of the approximations (27), (38) and (40) to the results of exact calculations, either in a two-body potential model or in a microscopic cluster model, for the three mirror pairs 7Li-7Be, 11B-11C and 19F-19Ne has confirmed their validity for many mirror nuclear states. The deviations from these approximations are smaller than those seen in mirror nucleon decays in Refs. [2,3], because the differences in mirror α-particle wave functions are much smaller than the differences in mirror proton and neutron wave functions, especially for loosely-bound states. The largest deviations from the analytical estimates have been seen for the three-cluster 11B-11C mirror states with excited 7Li and 7Be cores. Also, a noticeable deviation has been seen for the second 7/2⁻ state in 19F-19Ne. This state has tiny spectroscopic factors for the decay channels α+15N_g.s. and α+15O_g.s. (about 0.001), and the probability of symmetry breaking in such weak components is always large.
The ANCs and α-widths calculated in our microscopic approach are sensitive to the model assumptions. In particular, they change within a factor of four for different choices of the effective NN potential and oscillator parameters, the smallest values being produced by combining the MN potential with the oscillator parameter b = 1.36 fm and the largest values predicted by V2 with b = 1.6 fm. The variation of ANCs and α-widths with model assumptions can be even stronger if the mirror states have a specific structure, for example, the t+8Be and 3He+8Be configurations in 11B and 11C. However, the ratios R, R_Γ and R_ΓΓ calculated in the MCM do not change much with different choices of the input model parameters. This fact can be used to predict unknown ANCs or α-widths if the corresponding mirror quantities have been measured. Such predictions can be beneficial for nuclear astrophysics. Many low-energy (α,γ), (α,N) and (N,α) reactions proceed via the population of isolated narrow α-particle resonances, the widths of which determine the corresponding reaction rates. It is not always possible to measure such widths because of the very small reaction cross sections involved. In this case, using isospin symmetry in mirror α-decays may be helpful. For unbound mirror states this symmetry has already been used. For another class of mirror pairs, when the mirror analogs of the resonances are bound, α-widths can be determined by measuring the ANCs of the bound states in α-transfer reactions and using the relation Γ_α = R_Γ C²_α. As an example, we point out that the width of the astrophysically important resonance 19Ne(3/2⁺₂) at 4.033 MeV could be determined if the ANC of its mirror analog in 19F was known.
This is illustrated in Fig. 1 by plotting some examples of C²(a)/C², where the ANC C²(a) has been calculated neglecting the contributions from r > a in Eq. (3). Quite often the r ≤ R_N region gives only half the contribution to the ANC. The derivation of Eq. (27) in the present paper is quite general, and it suggests that Eq. (27) should be valid even when the contribution from r ≤ R_N is small. Also, this equation should be valid for all shapes of nuclear potentials, even with unphysically diffuse edges, and does not depend on the exact functional form of the Coulomb potential in the internal region. The only criterion of its applicability is the similarity of the wave functions of the mirror nuclei.
FIG. 1: The deviations ∆_i and ∆ = Σ_i ∆_i as a function of the matching radius a for the 3/2⁻ states in the mirror pairs 7Li-7Be (a), 11B-11C (b) and 19F-19Ne (c). Also shown in the insets are the ratios A_0(a)/A_0(a_m) and C²(a)/C².
FIG. 2: Ratio of the potential model estimate R_PM to the analytical estimate R_0 (open circles) and ratio of the MCM predictions R_MCM to the analytical estimate R_0 calculated in the two-cluster (filled circles) and three-cluster microscopic cluster model, in which both the 7Li+α (7Be+α) and t+8Be (3He+8Be) (triangles) or only the 7Li+α (7Be+α) partitions (crosses) have been taken into account.
The mirror cores 15N and 15O were considered both in the ground state and in the first excited state 3/2⁻. We have found that different choices of the oscillator radius strongly influence the mixture of the α+15N(1/2⁻) and α+15N(3/2⁻) configurations in all the states of 19F, leading to large changes in the spectroscopic factors and ANCs. The same is true for the α+15O configurations in 19Ne. However, despite the 3-5 times change in the squared ANCs, the ratio R_MCM of the mirror squared ANCs changes by less than 3.5%. This ratio is close to both the analytical estimate R_0 and the predictions of the potential model R_PM; the deviation between R_MCM and these estimates does not exceed 5%. The mirror symmetry in spectroscopic factors is also clearly seen.
TABLE III: Range of change of the width Γ_α (in MeV) of an α-particle resonance and of its mirror squared ANC C²_α (in fm⁻¹) with different model parameters. The results of the calculation are given both in the potential model and in the MCM.

FIG. 4: The ratios |F_l(k_R a)F_l(κ_b.s. a_m)/F_l(κ_b.s. a)F_l(k_R a_m)| as a function of a.
The resulting R^MCM_Γ values are in most cases close to R^res_PM, deviating from R^res_PM by up to 12%. A larger deviation is seen for the second 7/2⁻ state in 19F (19Ne), which is mostly built on the second excited state 3/2⁻ of the 15N (15O) core with an orbital momentum l = 2. The spectroscopic factor for the configuration ⟨19F|15N_g.s.⊗α⟩ is very small, about 10⁻³. The spectroscopic factor of the mirror configuration, defined using the concept of the bound-state approximation for the narrow resonance wave function, is also very small. In such weak components, effects due to charge symmetry breaking could be large. When the 15N(3/2⁻)⊗α (15O(3/2⁻)⊗α) configuration in 19F (19Ne) is neglected, the MCM gives for the 7/2⁻₂ state R^MCM_Γ values that are close both to R^res_0 and R^res_PM. For example, with V2 and an oscillator radius of 1.6 fm, R^MCM_Γ =
FIG. 5: The ratio |F_l(k₂a)/F_l(k₁a)| for E_R(α+15N) = 0.350 MeV and E_R(α+15O) = 0.850 MeV for different orbital momenta l as a function of a.

FIG. 6: The ratio R^0_ΓΓ/R^θ_ΓΓ for different orbital momenta l as a function of the resonance energy E₁ in α+15N.
In most cases R^MCM_bα and R_PM agree within the uncertainties of their definition, which means that mirror symmetry of the effective local α+15N and α+15O interactions is a good assumption.
FIG. 3: Ratio R_S of the α-particle spectroscopic factors in the 7Li-7Be, 11B-11C and 19F-19Ne mirror pairs, calculated in the two-cluster (filled circles) and three-cluster microscopic cluster models, in which both the 7Li+α (7Be+α) and t+8Be (3He+8Be) (triangles) or only the 7Li+α (7Be+α) partitions (crosses) have been taken into account.
TABLE IV: Analytical estimate R^res_0, the two-body potential model prediction R^res_PM and the MCM prediction R^MCM_Γ for the ratio R_Γ (all in MeV·fm) for some mirror states in 11B-11C and 19F-19Ne. ε_b.s. is the binding energy of the bound α-particle state and E_R is the resonance energy of its mirror analog, while l is the orbital momentum.
TABLE V: Resonance widths Γ_α for mirror nuclei 1 and 2 (in MeV) and their ratio calculated in the MCM, R^MCM_ΓΓ, and in the potential model, R^PM_ΓΓ, for mirror states with spin-parity J^π and orbital momentum l. Columns: J^π, l; Γ_α(2), Γ_α(1), R^MCM_ΓΓ (microscopic cluster model); Γ_α(2), Γ_α(1), R^PM_ΓΓ (potential model).
TABLE VI: Analytical estimates R^θ_ΓΓ and R^0_ΓΓ, the MCM ratio R^MCM_ΓΓ, the potential model prediction R^PM_ΓΓ and the ratio R^exp_ΓΓ of the experimentally known widths of mirror states in 7Li-7Be, 11B-11C and 19F-19Ne with spin-parity J^π and orbital momentum l.
Unfortunately, available data on the 15N(6Li,d)19F*(3/2⁺₂) reaction do not allow the extraction of the ANC of interest because of a strong sensitivity to the optical potentials and to the geometry of the bound-state potential well that arises due to the angular momentum mismatch. An alternative possibility to measure this ANC with high precision is to use the reaction 15N(19F,15N)19F*. This reaction involves the same optical potentials in the entrance and exit channels and would not suffer from the angular momentum mismatch.

Acknowledgements

Support from the UK EPSRC via grant GR/T28577 is gratefully acknowledged.

VI. APPENDIX

We prove here that BW₂/aφ̃^(1)_l is small with respect to A₀(a). The coefficients A and B that are found from the continuity of φ̃^(2)_l(r) and its derivative at r = a can alternatively be presented as follows:

where ′ means differentiation with respect to a. When expressed in terms of F₁, F₂ and W₂ we find

where δ₂ = −(l + 1 + iη₂)π/2. Therefore the quantity BW₂/(aφ̃^(1)_l A₀(a)) is

We can get a good idea about the magnitude of this term by using semiclassical expressions for the F_i and W₂. For our purposes we can write

where the local wave numbers p_i(r) are given by

and b is an arbitrary point in the region where the semiclassical approximation is valid. We also assume that a and b lie in the region where the exponentially decreasing components of the F_i can be ignored. Using these expressions and evaluating the derivatives in a way which consistently respects the semiclassical approximation (see [23], pages 23-24), we find

BW₂/(aφ̃^(1)_l A₀(a)) = (p₂(a) − p₁(a))/(p₂(a) + p₁(a)).
[1] N.K. Timofeyuk, R.C. Johnson and A.M. Mukhamedzhanov, Phys. Rev. Lett. 91, 232501 (2003).
[2] N.K. Timofeyuk and P. Descouvemont, Phys. Rev. C 71, 064305 (2005).
[3] N.K. Timofeyuk and P. Descouvemont, Phys. Rev. C 72, 064324 (2005).
[4] L. Trache, A. Azhari, F. Carstoiu, H.L. Clark, C.A. Gagliardi, Y.-W. Lui, A.M. Mukhamedzhanov, X. Tang, N. Timofeyuk and R.E. Tribble, Phys. Rev. C 67, 062801(R) (2003).
[5] B. Guo et al., Nucl. Phys. A 761, 162 (2005).
[6] N.K. Timofeyuk and S.B. Igamov, Nucl. Phys. A 713, 217 (2003).
[7] B. Guo, Z.H. Li, X.X. Bai, W.P. Liu, N.C. Shu and Y.S. Chen, Phys. Rev. C 73, 048801 (2006).
[8] N.K. Timofeyuk, D. Baye, P. Descouvemont, R. Kamouni and I.J. Thompson, Phys. Rev. Lett. 96, 162501 (2006).
[9] N.K. Timofeyuk, Nucl. Phys. A 632, 19 (1998).
[10] P. Descouvemont, Nucl. Phys. A 584, 532 (1995).
[11] A.B. Volkov, Nucl. Phys. 74, 33 (1965).
[12] D.R. Thompson, M. LeMere and Y.C. Tang, Nucl. Phys. A 286, 53 (1977).
[13] D. Baye and N. Pecher, Bull. Sc. Acad. Roy. Belg. 67, 835 (1981).
[14] D. Baye and N.K. Timofeyuk, Phys. Lett. B 293, 13 (1992).
[15] N.K. Timofeyuk, P. Descouvemont and R.C. Johnson, Eur. Phys. J. A 27, Suppl. 1, 269 (2006).
[16] P. Descouvemont and M. Vincke, Phys. Rev. A 42, 3835 (1990).
[17] A.M. Lane and R.G. Thomas, Rev. Mod. Phys. 30, 257 (1958).
[18] F. de Oliveira et al., Phys. Rev. C 55, R3149 (1997).
[19] B. Davids, A.M. van den Berg, P. Dendooven, F. Fleurot, M. Hunyadi, M.A. de Huu, R.H. Siemssen, H.W. Wilschut, H.J. Wortche, M. Hernanz, J. Jose, K.E. Rehm, A.H. Wuosmaa and R.E. Segel, Phys. Rev. C 67, 065808 (2003).
[20] Z.Q. Mao, H.T. Fortune and A.G. Lacaze, Phys. Rev. C 53, 1197 (1996).
[21] D.R. Tilley, C.M. Cheves, J.L. Godwin, G.M. Hale, H.M. Hofmann, J.H. Kelley, C.G. Sheu and H.R. Weller, Nucl. Phys. A 708, 3 (2002).
[22] F. Ajzenberg-Selove and J.H. Kelley, Nucl. Phys. A 506, 1 (1990).
[23] D.M. Brink, Semi-classical Methods in Nucleus-Nucleus Scattering, Cambridge University Press, 1985. Note that the condition p_1(a) − p_2(a) = 0 is exactly the condition (in the semiclassical approximation) that A_0(a) be a stationary function of a.
| [] |
[
"Semilinear elliptic equations with Dirichlet operator and singular nonlinearities"
] | [
"Tomasz Klimsiak "
] | [] | [] | In the paper we consider elliptic equations of the form −Au = u −γ · µ, where A is the operator associated with a regular symmetric Dirichlet form, µ is a positive nontrivial measure and γ > 0. We prove the existence and uniqueness of solutions of such equations as well as some regularity results. We also study stability of solutions with respect to the convergence of measures on the right-hand side of the equation. For this purpose, we introduce some type of functional convergence of smooth measures, which in fact is equivalent to the quasi-uniform convergence of associated potentials.Mathematics Subject Classification (2010), 35J75, 60J45. | 10.1016/j.jfa.2016.10.029 | [
"https://arxiv.org/pdf/1612.07283v1.pdf"
] | 119,681,284 | 1612.07283 | 359b5d8c0d61eb562c37cfe52c91cb8b5154a638 |
Semilinear elliptic equations with Dirichlet operator and singular nonlinearities
21 Dec 2016
In the paper we consider elliptic equations of the form −Au = u^{−γ} · µ, where A is the operator associated with a regular symmetric Dirichlet form, µ is a positive nontrivial measure and γ > 0. We prove the existence and uniqueness of solutions of such equations as well as some regularity results. We also study stability of solutions with respect to the convergence of measures on the right-hand side of the equation. For this purpose, we introduce some type of functional convergence of smooth measures, which in fact is equivalent to the quasi-uniform convergence of associated potentials.

Mathematics Subject Classification (2010): 35J75, 60J45.
Introduction
Let E be a separable locally compact metric space, let (E, D[E]) be a regular symmetric Dirichlet form on L²(E; m) and let µ be a nontrivial (i.e. µ(E) > 0) positive Borel measure on E. In the present paper we study elliptic equations of the form

−Au = g(u) · µ,  u > 0,  (1.1)

where A is the operator associated with (E, D[E]) and g : R_+ \ {0} → R_+ is a continuous function satisfying

c_1 ≤ g(u) · u^γ ≤ c_2,  u > 0,  (1.2)
for some c_1, c_2, γ > 0. The model example of (1.1) is the Dirichlet problem

−∆^{α/2} u = u^{−γ} · µ, u > 0, on D,
u = 0 on R^d \ D,  (1.3)

where α ∈ (0, 2], γ > 0 and D is a bounded open subset of R^d. The paper consists of two parts. In the first part we address the problem of existence, uniqueness and regularity of solutions of (1.1). In the second part we study the stability of solutions of (1.1) with respect to the convergence of measures on the right-hand side of the equation. The above problems were treated in [4] in the case A = ∆ and in [3] in the case where A is a uniformly elliptic divergence form operator. Some different but related problems are studied in [21] in the case where A is a Leray-Lions type operator. The main aim of the present paper is to generalize the results of [3, 4] to equations with general (possibly nonlocal) operators corresponding to symmetric Dirichlet forms. We also refine some results proved in [3, 4, 21] for equations with local operators.
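For α = 2 in dimension one, problem (1.3) can be sketched with finite differences, using the regularization g_n(u) = (u + 1/n)^{−γ} that appears later in the paper; the grid size, the damping and the choice µ = Lebesgue measure (f ≡ 1) are illustrative assumptions, not taken from the text:

```python
import numpy as np

def solve_singular(f, gamma, m=199, shift=1e-6, iters=400, damp=0.5):
    """Damped fixed-point iteration for -u'' = (u + shift)^(-gamma) * f
    on (0, 1) with u(0) = u(1) = 0, discretized on m interior points.
    The shift mimics the paper's approximation g_n(u) = g(u + 1/n)."""
    h = 1.0 / (m + 1)
    # Tridiagonal matrix of -d^2/dx^2 with Dirichlet boundary conditions.
    L = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    u = np.full(m, 1e-3)
    for _ in range(iters):
        u_new = np.linalg.solve(L, (u + shift)**(-gamma) * f)
        u = (1.0 - damp) * u + damp * np.maximum(u_new, 0.0)
    return u

u = solve_singular(np.ones(199), gamma=0.5)
```

The computed profile is strictly positive in the interior and symmetric about x = 1/2, in line with the positivity statement u > 0 in (1.1).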
In the first part of the paper (Sections 3 and 4) we assume that µ belongs to the class R of smooth (with respect to the capacity associated with (E, D[E])) positive Borel measures on E whose potential is m-a.e. finite (see Section 2 for details). It is known (see [16, Proposition 5.13]) that M_{0,b} ⊂ R, where M_{0,b} is the class of bounded smooth measures on E. In general, the inclusion is strict. For instance, in the case of (1.3), R includes smooth Radon measures µ such that ∫_D δ^{α/2}(x) µ(dx) < ∞, where δ(x) = dist(x, ∂D) (see [14, Example 5.2]).
The first difficulty we encounter when considering equation (1.1) is to define properly a solution. Here we give a probabilistic definition of a solution of (1.1) via the Feynman-Kac formula. Namely, by a solution of (1.1) we mean a quasi-continuous function u on E such that u > 0 quasi-everywhere (q.e. for short) with respect to the capacity Cap naturally associated with (E, D[E]) and for q.e. x ∈ E,
u(x) = E_x ∫_0^ζ g(u)(X_t) dA^µ_t.
Here {(X_t)_{t≥0}, (P_x)_{x∈E}} is a Hunt process with lifetime ζ associated with the form (E, D[E]), E_x is the expectation with respect to P_x, and A^µ is the positive continuous additive functional in the Revuz correspondence with µ.
One reason for adopting here the probabilistic definition of a solution is that unlike problem (1.3), for general A one can not expect that inf x∈K u(x) > 0 for every compact K ⊂ E. Therefore the variational definition of a solution considered in [3] is not (at least directly) applicable to general equations of the form (1.1), because we do not know whether g(u) · µ is a Radon measure. The probabilistic approach allows one to overcome the difficulty. Another advantage lies in the fact that it allows one to cope with the uniqueness problem.
In Section 3 we prove several results on existence and uniqueness of solutions of (1.1) and its generalization (equation with mixed nonlinearities). It is worth pointing out that the rather delicate problem of uniqueness (see [23]) was not addressed in [3].
Regularity of solutions of (1.1) is studied in Section 4. First, in Proposition 4.5, we generalize a result proved in [18], and then we use this generalization to prove that if µ is bounded then for every γ > 0 the function u^{(γ+1)/2} belongs to the extended Dirichlet space D_e[E] and there exists c(γ) > 0 such that

E(u^{(γ+1)/2}, u^{(γ+1)/2}) ≤ c(γ) c_2 ‖µ‖_{TV},

where ‖µ‖_{TV} denotes the total variation norm of µ. In the case of (1.3) the above inequality gives an estimate of u^{(γ+1)/2} in the norm of the fractional Sobolev space H^{α/2}_0(D).

In the second part of the paper (Sections 5-7), we study the stability of solutions u_n of the problems

−Au_n = g(u_n) · µ_n,  u_n > 0,  (1.4)

under different assumptions on the type of convergence of the measures µ_n and on the limit measure µ. We always assume that {µ_n} is a sequence of smooth nontrivial Borel measures on E such that sup_{n≥1} ‖µ_n‖_{TV} < ∞. As for µ, we distinguish two cases: µ ∈ M_{0,b}, i.e. µ is bounded and smooth, and µ ∈ M_b, i.e. µ is a general bounded Borel measure on E.
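A minimal numerical sketch of the energy estimate E(u^{(γ+1)/2}, u^{(γ+1)/2}) ≤ c(γ) c_2 ‖µ‖_{TV}: for A = d²/dx² on (0,1), µ = Lebesgue measure, g(u) = u^{−γ} (so c_2 = 1) and γ = 1/2, integrating −u''·u^γ by parts suggests the candidate constant c(γ) = (γ+1)²/(4γ); the discrete Dirichlet energy of u^{(γ+1)/2} stays below this value (all concrete choices here are illustrative assumptions, not the paper's constant):

```python
import numpy as np

gamma, m = 0.5, 399
h = 1.0 / (m + 1)
L = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
Linv = np.linalg.inv(L)

# Damped fixed-point iteration for -u'' = (u + eps)^(-gamma) on (0,1),
# u(0) = u(1) = 0 (the regularization scheme used in Section 3).
u = np.full(m, 1e-3)
for _ in range(400):
    u = 0.5 * u + 0.5 * (Linv @ (u + 1e-8)**(-gamma))

v = np.concatenate(([0.0], u, [0.0]))**((gamma + 1.0) / 2.0)
energy = np.sum(np.diff(v)**2) / h        # discrete Dirichlet energy E(v, v)
bound = (gamma + 1.0)**2 / (4.0 * gamma)  # candidate c(gamma); c2 = mu(E) = 1
```

Here the discrete energy lands just below the bound, as the integration-by-parts heuristic predicts near-equality for this data.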
In Section 5 we start with the study of the general case µ ∈ M_b. Our main result (Theorem 5.4) says that if µ_n → µ vaguely then the sequence {ν_n := g(u_n) · µ_n} is tight in the vague topology and its every limit point is a smooth measure. Moreover, if ν_n → ν vaguely, then, up to a subsequence, u_n → u m-a.e., where −Au = ν.
In Section 6 we address the case µ ∈ M_{0,b}. We first introduce some type of convergence of smooth measures which is stronger than the vague and the narrow convergence. At the same time, it is weaker than convergence in the variation norm, but nevertheless it preserves the smoothness property. This new concept of convergence of {µ_n} to µ is defined via some sort of uniform convergence of the sequence of additive functionals {A^{µ_n}} to A^µ, so we denote it by →^{uAF}. We prove (see Proposition 4.3, Proposition 6.1) that, up to a subsequence, the convergence µ_n →^{uAF} µ is equivalent to the quasi-uniform convergence of {u_n} to u, where u_n, u are solutions of the problems

−Au_n = µ_n,  −Au = µ,  (1.5)

respectively. Therefore it is possible to define the convergence µ_n →^{uAF} µ analytically, without recourse to the notion of additive functional from probabilistic potential theory. Note that this analytical characterization of the convergence µ_n →^{uAF} µ may be viewed as a significant generalization of the stability result proved in [4]. Our main theorem on the stability of (1.4) (Theorem 6.3) says that if µ_n →^{uAF} µ then (up to a subsequence) u_n → u q.e., where u is a solution of (1.1). We also show (see Proposition 6.7) that if µ_n →^{uAF} µ then {µ_n} is locally equidiffuse, which again confirms the usefulness of our new notion of convergence of measures.
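The link between convergence of measures and quasi-uniform convergence of potentials can be illustrated in the simplest one-dimensional setting, where points have positive capacity and δ_{1/2} is therefore smooth: the potentials of its mollifications (approximated below by uniform densities of width 1/n, an illustrative stand-in for j_{1/n} * δ_{1/2}) converge uniformly to the potential of δ_{1/2}, computed from the Green function of −d²/dx² on (0,1):

```python
import numpy as np

def green(x, y):
    """Green function of -d^2/dx^2 on (0,1) with zero boundary values."""
    return np.minimum(x, y) * (1.0 - np.maximum(x, y))

x = np.linspace(0.0, 1.0, 1001)
u = green(x, 0.5)                 # potential of the Dirac mass at 1/2

sup_err = []
for n in (4, 16, 64):
    # uniform density on (1/2 - 1/(2n), 1/2 + 1/(2n)) in place of j_{1/n}*delta
    y = np.linspace(0.5 - 0.5 / n, 0.5 + 0.5 / n, 401)
    u_n = green(x[:, None], y[None, :]).mean(axis=1)
    sup_err.append(float(np.max(np.abs(u_n - u))))
```

The sup-norm errors decrease roughly like 1/n, so here the uniform (hence quasi-uniform) convergence of the potentials is visible directly.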
In Section 7 we return to the case of a general measure µ ∈ M_b, but we assume that E ⊂ R^d and that µ is approximated by mollification, i.e. µ_n = j_{1/n} * µ, where j_{1/n} is a mollifier. In our main result we also restrict our attention to a class of operators including ∆^{α/2}, α ∈ (0, 2], as a special case. It is known that µ ∈ M_b admits a unique decomposition µ = µ_c + µ_d into a singular part µ_c with respect to Cap (the so-called concentrated part) and an absolutely continuous part µ_d with respect to Cap (the so-called diffuse part). The case µ_c = 0 is covered by the results of Section 6, because we show that j_{1/n} * µ_d →^{uAF} µ_d. The case µ_c ≠ 0 is much more involved, but can be handled by combining the results of Section 5 with those of Section 6. Before describing our main result, we first make some comments on the simplest case A = ∆.
If A = ∆ then from the inverse maximum principle (see [8]) one can deduce that the singular part µ_c (with respect to the Newtonian capacity cap_2) is responsible for explosions of the solution u of (1.1). When u explodes, g(u) is formally equal to zero, so it seems that in (1.1) the absorption term g forces some reduction of µ_c. Several natural questions arise here. The first one is whether such reduction really occurs, and whether the whole singular part µ_c is reduced. Another question is whether in investigating (1.3) one should consider the Newtonian capacity cap_2 or, maybe, it is better to consider other capacities (for example, p-capacities). What happens if ∆ is replaced by a general Dirichlet operator A? In [3] partial answers to these questions are given in the case A = ∆. Let u_n be a solution of (1.4) with A = ∆ and µ_n = g_n · m with {g_n} ⊂ L^∞(D; m), where m is the Lebesgue measure on D. In [3] it is proved that if µ is orthogonal to cap_2, (1.2) is satisfied with γ ≥ 1 and µ_n → µ in the narrow topology, then u_n → 0. For γ ∈ (0, 1) a similar result is proved in the case where µ is orthogonal to the p-capacity with p > 2 being the Hölder conjugate of q = d(γ+1)/(d−1+γ). Finally, let us mention that the same problem of reduction of the singular part of µ forced by the absorption g is considered in [21] in the case where g is bounded and A is a Leray-Lions type operator (i.e. a local operator). In Theorem 7.3, the main result of Section 7, we prove that in fact g forces the reduction of the whole singular part µ_c of µ for every γ > 0. To be more specific, we prove that if u_n is a solution of (1.4) with µ_n = j_{1/n} * µ, then, up to a subsequence,
u_n → u m-a.e., where −Au = g(u) · µ_d, u > 0.
The above result makes it legitimate to define solutions of (1.1) with a bounded Borel measure µ as the solutions of (1.1) with µ replaced by µ_d. With this definition, Theorem 7.3 is an existence theorem for (1.1) with bounded Borel measure µ. Finally, note that Cap = cap_2 if A = ∆ and that the capacity cap_2 is absolutely continuous with respect to the p-capacity for p ≥ 2. Therefore in the case γ ∈ (0, 1) our result strengthens the corresponding result from [3]. It should be stressed, however, that in [3] more general approximations {µ_n} of µ are considered.
Preliminaries
In the paper E is a locally compact separable metric space and m is a positive Radon measure on E such that Supp[m] = E. By (E, D[E]) we denote a symmetric Dirichlet form on L 2 (E; m). Recall that this means that
(E.1) E : D[E] × D[E] → R, where D[E] is a dense linear subspace of L²(E; m),

(E.2) E is bilinear, E(u, v) = E(v, u) and E(u, u) ≥ 0, u, v ∈ D[E],

(E.3) E is closed, i.e. D[E] equipped with the inner product generated by the form E_1 is a Hilbert space (here, as usual, for α > 0 we set E_α(u, v) = E(u, v) + α(u, v), u, v ∈ D[E], where (·, ·) is the usual inner product in L²(E; m)),

(E.4) E is Markovian, i.e. if u ∈ D[E] then v := (0 ∨ u) ∧ 1 ∈ D[E] and E(v, v) ≤ E(u, u).
By Riesz's theorem, for every α > 0 and f ∈ L²(E; m) there exists a unique function G_α f ∈ D[E] such that

E_α(G_α f, g) = (f, g),  g ∈ D[E].

It is an elementary check that {G_α, α > 0} is a strongly continuous contraction resolvent on L²(E; m). By {T_t, t ≥ 0} we denote the associated semigroup and by (A, D(A)) the operator generated by {T_t}. It is well known (see [9, Section 1.3]) that D(A) ⊂ D[E] and

E(u, v) = (−Au, v),  u ∈ D(A), v ∈ D[E].
In the whole paper we assume that (E, D[E]) is regular and transient, i.e.

(E.5) (regularity) the space D[E] ∩ C_0(E) is dense in D[E] with respect to the E_1-norm and in C_0(E) with respect to the supremum norm,

(E.6) (transience) there exists a strictly positive function g on E such that

∫_E |u(x)| g(x) m(dx) ≤ ‖u‖_E,  u ∈ D[E],

where ‖u‖_E = E(u, u)^{1/2}, u ∈ D[E].
Given a Dirichlet form (E, D[E]) we define the capacity Cap: 2 E → R + as follows: for an open U ⊂ E we set
Cap(U ) = E 1 (h U , h U ),
where h U is the reduced function of h on U (see [19, Chapter III]), and for arbitrary
A ⊂ E we set Cap(A) = inf{Cap(U ); A ⊂ U ⊂ E, U open}.
An increasing sequence {F_n} of closed subsets of E is called a nest if Cap(E \ F_n) → 0 as n → ∞. A subset N ⊂ E is called exceptional if Cap(N) = 0. We say that some property P holds quasi-everywhere (q.e. for short) if the set on which it does not hold is exceptional.
We say that a function u defined q.e. on E is quasi-continuous if there exists a nest {F n } such that u |Fn is continuous for every n ≥ 1. It is known that each function u ∈ D[E] has a quasi-continuous m-version. From now on for u ∈ D[E] we always consider its quasi-continuous version.
A Borel measure µ on E is called smooth if it does not charge exceptional sets and there exists a nest {F n } such that |µ|(F n ) < ∞, n ≥ 1. By S we denote the set of all positive smooth measures on E.
In the paper we also use the capacity CAP considered in [9,Chapter 2]. We would like to stress that the notions of exceptional sets, quasi-continuity and smooth measures defined with respect to Cap and with respect to CAP are equivalent. Therefore in the paper we may use the results of [9,19] interchangeably.
By S^{(0)}_0 we denote the set of all measures µ ∈ S for which there exists c > 0 such that

∫_E |u| dµ ≤ c E(u, u)^{1/2},  u ∈ D[E].  (2.1)

In the sequel we say that u : E → R is measurable if it is universally measurable, i.e. measurable with respect to the σ-algebra

B*(E) = ⋂_{µ∈P(E)} B_µ(E),
where P(E) is the set of all probability measures on E and B µ (E) is the completion of B(E) with respect to the measure µ.
By M b we denote the set of all bounded Borel measures on E and by M 0,b the subset of M b consisting of smooth measures. We say that a positive Borel measure µ on E is nontrivial if µ(E) > 0.
Given a Borel measurable function η on E and a Borel measure µ on E we write

(η, µ) = ∫_E η dµ.

By u · µ we denote the Borel measure on E defined by

(f, u · µ) = (f · u, µ),  f ∈ B(E),
whenever the integrals exist. Let us recall that for given measurable spaces (S, S), (T, T) a function κ : S × T → R_+ ∪ {∞} is called a kernel (from S to T) if for every B ∈ T the mapping S ∋ s → κ(s, B) is S-measurable and for every fixed s ∈ S the mapping T ∋ B → κ(s, B) is a measure. Let us also recall that for a given measure µ on S and a kernel κ from S to T one can consider their product µ ⊗ κ, which by definition is the measure on S ⊗ T defined by

(µ ⊗ κ)(f) = ∫_S ∫_T f(s, t) κ(s, dt) µ(ds).
With a regular symmetric Dirichlet form (E, D[E]) one can associate uniquely a Hunt process X = ((X_t)_{t≥0}, (P_x)_{x∈E}, (F_t)_{t≥0}, ζ) (see [9]). It is related to (E, D[E]) by the formula

T_t f(x) = E_x f(X_t),  t ≥ 0, m-a.e.,

where E_x stands for the expectation with respect to the measure P_x. For α, t ≥ 0 and f ∈ B_+(E) we write

R_α f(x) = E_x ∫_0^ζ e^{−αt} f(X_t) dt,  p_t f(x) = E_x f(X_t),  x ∈ E.
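For f ≡ 1 the quantity R_0 f(x) = E_x ∫_0^ζ f(X_t) dt is the expected lifetime. A discrete sketch for Brownian motion killed on leaving (0,1): the expected exit time of the simple random walk solves a tridiagonal linear system, and with the diffusive time scaling h² per step it reproduces the classical formula x(1−x) exactly (the grid size is an arbitrary illustrative choice):

```python
import numpy as np

m = 99
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)

# E[steps to exit] for the simple random walk: T(x) = 1 + (T(x-h) + T(x+h))/2,
# i.e. (2*T(x) - T(x-h) - T(x+h))/2 = 1, with T = 0 outside (0,1).
L = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
steps = np.linalg.solve(L / 2.0, np.ones(m))
exit_time = steps * h**2          # each step carries time h^2 (diffusive scaling)
```

The identity exit_time = x(1−x) holds exactly on the grid, since x(1−x)/h² solves the discrete equation term by term.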
It is well known (see [9, Section 5.1]) that for each µ ∈ S there exists a unique positive continuous additive functional A^µ in the Revuz duality with µ. For µ ∈ S we write

(R_α µ)(x) = E_x ∫_0^ζ e^{−αt} dA^µ_t,  x ∈ E.

For a nearly Borel set B ⊂ E we set

σ_B = inf{t > 0 : X_t ∈ B},  D_B = inf{t ≥ 0 : X_t ∈ B},  τ_B = σ_{E\B},

i.e. σ_B is the first hitting time of B, D_B is the first debut time of B and τ_B is the first exit time of B. By B^r we denote the set of regular points of B, i.e.

B^r = {x ∈ E : P_x(σ_B > 0) = 0}.
By T we denote the set of all stopping times with respect to the filtration (F t ) t≥0 and by D the set of all measurable functions u on E for which the family {u(X τ ), τ ∈ T } is uniformly integrable with respect to the measure P x for q.e. x ∈ E.
For a Borel measure µ on E and α ≥ 0, by µ • R_α we denote the measure defined by

(f, µ • R_α) = (R_α f, µ),  f ∈ B(E),

and by P_µ the measure

P_µ(A) = ∫_E P_x(A) µ(dx),  A ∈ F_∞.
Finally, let us recall that a positive measurable function u on E is called excessive if p_t u ≤ u, t ≥ 0, and u is called a potential if it is excessive and for every sequence {T_n} ⊂ T such that T_n ↑ T ≥ ζ,

lim_{n→∞} E_x u(X_{T_n}) = 0

for q.e. x ∈ E.
Existence and uniqueness of solutions
Let us recall that in the whole paper we assume that (E, D[E]) satisfies (E.1)-(E.6).
As for µ and g, unless otherwise stated, in the paper we assume that µ ∈ S and g : R + \ {0} → R + is a continuous function satisfying (1.2). We also adopt the convention that g(0) = +∞, g(+∞) = 0.
Remark 3.1. The class of forms satisfying (E.1)-(E.6) is quite wide. For instance, it includes forms generated by divergence form operators considered in [3], i.e. operators of the form
Au(x) = div(a(x)∇u(x)), x ∈ D,
where D is a bounded open subset of R^d and a is a symmetric, bounded, uniformly elliptic d-dimensional matrix. A model example of a nonlocal operator associated with a form satisfying (E.1)-(E.6) is the fractional Laplacian ∆^{α/2} on D with α ∈ (0, 2). For these and some other interesting examples see, e.g., [9, Chapter 1].
Definition. We say that a measurable function u : E → R_+ is a solution of (1.1) if

(a) u is quasi-continuous and 0 < u(x) < ∞ q.e.,

(b) for q.e. x ∈ E,

u(x) = E_x ∫_0^ζ g(u(X_t)) dA^µ_t.  (3.1)
We will need the following hypothesis:
(H) g : R + \ {0} → R + is nonincreasing.
Existence and uniqueness of solutions of (1.1)
We begin with a comparison and uniqueness result.
Proposition 3.2. Assume that µ 1 , µ 2 are smooth measures such that 0 ≤ µ 1 ≤ µ 2 and g 1 , g 2 : R + \ {0} → R + are measurable functions such that g 1 (y) ≤ g 2 (y) for y > 0. Moreover, assume that either g 1 or g 2 satisfies (H). If u 1 is a solution of (1.1) with data g 1 , µ 1 and u 2 is a solution of (1.1) with data g 2 , µ 2 then u 1 ≤ u 2 q.e.
Proof. Without loss of generality we may assume that g 2 is nonincreasing. By the Meyer-Tanaka formula, for q.e. x ∈ E we have
(u_1 − u_2)^+(x) ≤ E_x ∫_0^ζ 1_{{u_1−u_2>0}}(X_t) (g_1(u_1)(X_t) dA^{µ_1}_t − g_2(u_2)(X_t) dA^{µ_2}_t)
= E_x ∫_0^ζ 1_{{u_1−u_2>0}}(X_t) g_1(u_1)(X_t) d(A^{µ_1}_t − A^{µ_2}_t)
+ E_x ∫_0^ζ 1_{{u_1−u_2>0}}(X_t) (g_1(u_1) − g_2(u_1))(X_t) dA^{µ_2}_t
+ E_x ∫_0^ζ 1_{{u_1−u_2>0}}(X_t) (g_2(u_1) − g_2(u_2))(X_t) dA^{µ_2}_t.

Since µ_1 ≤ µ_2, we have dA^{µ_1} ≤ dA^{µ_2} under P_x for q.e. x ∈ E by the properties of the Revuz duality. Therefore the first integral on the right-hand side of the above equality is nonpositive. The second one is nonpositive since g_1 ≤ g_2 and µ_2 ≥ 0. Finally, the third term is nonpositive due to the fact that g_2 is nonincreasing and µ_2 ≥ 0. Hence (u_1 − u_2)^+(x) = 0 for q.e. x ∈ E, which implies that u_1 ≤ u_2 q.e. ✷

Corollary 3.3. Assume that µ ∈ S and g satisfies (H). Then there exists at most one solution of (1.1).
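Proposition 3.2 can be sanity-checked in the one-dimensional finite-difference model of (1.3): taking g_1(u) = ½(u + ε)^{−1/2} ≤ g_2(u) = (u + ε)^{−1/2} (both nonincreasing) and µ_1 = µ_2 = Lebesgue measure on (0,1), the discrete solutions come out ordered, u_1 ≤ u_2 (all concrete data here are illustrative assumptions):

```python
import numpy as np

def solve(scale, gamma=0.5, m=199, eps=1e-6, iters=400):
    """Damped fixed-point iteration for -u'' = scale*(u + eps)^(-gamma)
    on (0,1) with u(0) = u(1) = 0 (finite differences, illustrative)."""
    h = 1.0 / (m + 1)
    L = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    u = np.full(m, 1e-3)
    for _ in range(iters):
        u = 0.5 * u + 0.5 * np.linalg.solve(L, scale * (u + eps)**(-gamma))
    return u

u1 = solve(0.5)    # g1(u) = 0.5*(u + eps)^(-1/2)
u2 = solve(1.0)    # g2(u) = (u + eps)^(-1/2) >= g1
```

The ordering u_1 ≤ u_2 holds at every grid point, matching the comparison principle.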
In what follows we will also need the following two hypotheses. The first one was introduced by P.A. Meyer and is called Meyer's hypothesis (L).
(L) For some (and hence for every) α > 0, δ {x} • R α ≪ m for every x ∈ E, where δ {x} is the Dirac measure on E concentrated at x.
(E.7) For every nearly Borel set B such that Cap(B) > 0, P x (σ B < ∞) > 0 for q.e.
x ∈ E.

Remark 3.5. Observe that if (E.7) is satisfied then Rµ > 0 q.e. for every nontrivial µ ∈ S. Indeed, let F be a quasi-support of A^µ. Then by [9, Theorem 5.1.5] it is also a quasi-support of µ. Since µ is nontrivial, Cap(F) > 0. Therefore, by (E.7), P_x(σ_F < ζ) > 0 q.e. Since F is a quasi-support of A^µ, E_x ∫_0^ζ dA^µ_t > 0 for q.e. x ∈ F. Hence for q.e. x ∈ E we have

0 < E_x E_{X_{σ_F}} ∫_0^ζ dA^µ_t ≤ Rµ(x).

Remark 3.6. (i) It is known that (E.7) is satisfied if the form (E, D[E]) is irreducible (see [9, Theorem 4.7.1]).

(ii) (E.7) is satisfied if the form (E, D[E]) satisfies Meyer's hypothesis (L) and r_α(·, ·), defined by r_α(x, ·) · m = δ_{x} • R_α, is strictly positive. Indeed, let F be a closed set such that Cap(F) > 0. Then
0 < ∫_E r_α(x, y) dµ_F(y) = R_α µ_F(x) = e^α_F(x) = E_x e^{−ασ_F},  (3.2)

where µ_F is the smooth measure associated with the equilibrium e_F (see [9, Theorem 2.1.5]). The first inequality in (3.2) follows from the fact that µ_F is nontrivial (since Cap(F) > 0) and r_α(·, ·) is strictly positive. By (3.2) we have P_x(σ_F < ∞) > 0 for q.e.
x ∈ E.
(iii) From (ii) and Remark 3.5 it follows that the operators from Remark 3.1 satisfy (E.7). Moreover, if µ is nontrivial, g is strictly positive and (E.7) is satisfied then u > 0 q.e.
Proof. First let us assume that µ ∈ S^{(0)}_{00}. Let us put V = (D_e[E], ‖·‖_E) and define Φ : V → V, A : V → V' by

Φ(u) = R(g(u) · µ),  Au = −Au − g(u) · µ,  u ∈ V.

That Φ(u) ∈ V follows from the fact that S^{(0)}_{00} ⊂ S^{(0)}_0 and R(S^{(0)}_0) ⊂ D_e[E], while the fact that Au ∈ V' is a consequence of the inclusion S^{(0)}_0 ⊂ V'. Now we will show some properties of the mappings A, Φ. If g is nonincreasing then

⟨Au − Av, u − v⟩ = ‖u − v‖²_E − ((g(u) − g(v)) · µ, u − v) ≥ ‖u − v‖²_E,  u, v ∈ V,

where ⟨·, ·⟩ is the duality pairing between V and V'. Thus A is strongly monotone, hence coercive. It is also clear that A is hemicontinuous and bounded. As for Φ, let us
first observe that ‖Φ(u)‖_∞ ≤ ‖g‖_∞ ‖Rµ‖_∞, u ∈ V. Moreover, Φ is continuous. Indeed, let u_n → u and let v_n = Φ(u_n), v = Φ(u). Then

‖v − v_n‖²_E = (v − v_n, (g(u) − g(u_n)) · µ) ≤ 2‖Rµ‖_∞ ‖g‖_∞ ∫_E |g(u) − g(u_n)| dµ.

Since u_n → u in E, there exists a subsequence (n') ⊂ (n) such that u_{n'} → u q.e. (see [9, Theorem 2.1.4]). From this and the above inequality it follows that v_{n'} → v in E.
The above argument shows that for every subsequence (n') ⊂ (n) there exists a further subsequence (n'') ⊂ (n') such that v_{n''} → v in E, which implies that v_n → v in E. Also observe that if (E, D[E]) satisfies Meyer's hypothesis (L), then Φ is compact. Indeed, let {u_n} ⊂ V. Then

|v_n(x) − p_t v_n(x)| ≤ ‖g‖_∞ E_x ∫_0^t dA^µ_r,  t ≥ 0,  (3.4)

for q.e. x ∈ E. Hence, by [15], there exists a subsequence (still denoted by (n)) such that {v_n} is convergent q.e. Let v = lim_{n→∞} v_n. Then

‖v − v_n‖²_E = (v − v_n, (g(u) − g(u_n)) · µ) ≤ 2‖g‖_∞ ∫_E |v − v_n| dµ,

which converges to zero as n → ∞. Now we may conclude the existence result. In case g is nonincreasing the existence of a solution of (3.3) follows from the theory of monotone operators; under Meyer's hypothesis (L) it follows from the compactness of Φ. In the general case there exists a nest {F_n} such that 1_{F_n} · µ ∈ S^{(0)}_{00}, n ≥ 1 (see [9, Section 2.2]). By what has already been proved, for each n ≥ 1 there exists a solution u_n ∈ V of the equation
−Au n = g(u n ) · µ n .
By the definition of a solution,

u_n(x) = E_x ∫_0^ζ g(u_n) 1_{F_n}(X_t) dA^µ_t

for q.e. x ∈ E. Since {F_n} is a nest, 1_{E\F_n}(X_t) → 0, t ∈ [0, ζ), P_x-a.s. for q.e. x ∈ E (see [19, Proposition IV.5.30]). If g is nonincreasing then by Proposition 3.2 the sequence {u_n} is nondecreasing. Therefore u := lim_{n→∞} u_n is a solution of (3.3). If (E, D[E])
satisfies Meyer's hypotheses (L) then by (3.4), which holds with v n replaced by u n , and by [15, Theorem 2.2, Propositions 2.4 and 4.3], there exists a subsequence (n ′ ) ⊂ (n) such that {u n ′ } is convergent q.e. It is clear that u := lim n ′ →∞ u n ′ is a solution of (3.3). The second assertion of the theorem follows immediately from the assumptions and Remark 3.5. ✷ Lemma 3.8. Let µ ∈ R and let u be defined as
u(x) = E_x ∫_0^ζ dA^µ_t,  x ∈ E.

Then lim_{n→∞} CAP({u > n}) = 0.

Proof. Let A_n = {u > n}. If σ_{A_n} < ∞ then σ_{A_n} < ζ.
Therefore by the Markov property and the fact that µ ∈ R, for q.e. x ∈ E we have
P_x(σ_{A_n} < ∞) ≤ P_x(u(X_{σ_{A_n} ∧ ζ}) ≥ n) ≤ n^{−1} E_x ∫_0^ζ dA^µ_t,
which converges to zero as n → ∞. Therefore applying [9, Corollary 4.3.1] we get the desired result. ✷ Theorem 3.9. Assume that (E, D[E]) satisfies (E.7), µ ∈ R is nontrivial and g satisfies (H). Then there exists a solution of (1.1).
Proof. By Corollary 3.3 and Proposition 3.7, for every n ≥ 1 there exists a unique solution u n of the problem
−Au_n = g_n(u_n) · µ,  u_n > 0,  (3.5)

with g_n(u) = g(u + 1/n) for u > 0 and g_n(u) = g(1/n) for u ≤ 0. By Proposition 3.2, {u_n} is nondecreasing. Hence u_1 ≤ u_n for n ≥ 1. Since (E, D[E]
) satisfies (E.7) and µ is nontrivial, u_1 > 0 q.e. Hence u_n ≥ u_1 > 0, n ≥ 1, q.e. Put u = lim sup_{n→∞} u_n. Then u > 0 q.e. By the definition of a solution of (3.5),
u_n(x) = E_x ∫_0^ζ g_n(u_n(X_t)) dA^µ_t  (3.6)
for q.e. x ∈ E. By the Meyer-Tanaka formula and (1.2),
u_n^{γ+1}(x) ≤ (γ + 1) E_x ∫_0^ζ g_n(u_n) u_n^γ (X_t) dA^µ_t ≤ (γ + 1) c_2 E_x ∫_0^ζ dA^µ_t.

Hence

u_n^{γ+1}(x) ≤ (γ + 1) c_2 E_x ∫_0^ζ dA^µ_t < ∞ for q.e.
x ∈ E. From the above inequality we conclude that u is a potential and u ∈ D.
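The a priori bound u_n^{γ+1} ≤ (γ+1) c_2 E_x ∫_0^ζ dA^µ_t can be observed in the one-dimensional model with µ = Lebesgue measure, c_2 = 1 and γ = 1/2: here the potential of µ is Rµ(x) = x(1−x)/2, since it solves −(Rµ)'' = 1, and the discrete solution of −u'' = u^{−γ} satisfies u^{γ+1} ≤ (γ+1) Rµ pointwise (an illustrative check, not part of the proof):

```python
import numpy as np

gamma, m = 0.5, 199
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)
L = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2

u = np.full(m, 1e-3)
for _ in range(400):   # damped iteration for -u'' = (u + eps)^(-gamma)
    u = 0.5 * u + 0.5 * np.linalg.solve(L, (u + 1e-8)**(-gamma))

r_mu = x * (1.0 - x) / 2.0    # potential of Lebesgue measure: -(Rmu)'' = 1
```

The inequality u^{3/2} ≤ (3/2)·x(1−x)/2 holds at every grid point, with a comfortable margin in the interior.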
Let τ_k = τ_{G_k}, G_k = {u_1 ≥ k^{−1}}. Observe that for every x ∈ G_k,

g(u_n(x) + 1/n) ≤ g(u_1(x) + 1/n) ≤ c_2 u_1^{−γ}(x) ≤ c_2 k^γ.

Therefore, by the Lebesgue dominated convergence theorem,

E_x ∫_0^{τ_k} g_n(u_n)(X_t) dA^µ_t → E_x ∫_0^{τ_k} g(u)(X_t) dA^µ_t as n → ∞.

Since for each k ≥ 1,

u_n(x) = E_x u_n(X_{τ_k}) + E_x ∫_0^{τ_k} g_n(u_n)(X_t) dA^µ_t

for q.e. x ∈ E, it follows that

u(x) = E_x u(X_{τ_k}) + E_x ∫_0^{τ_k} g(u)(X_t) dA^µ_t

for q.e. x ∈ E.
Since u is a potential, from Lemma 3.8 and [9, Lemma 5.1.6] it follows that lim k→∞ τ k ≥ ζ. Therefore letting k → ∞ in the above equation we conclude that
(3.1) is satisfied for q.e. x ∈ E. ✷ Theorem 3.10. Assume that (E, D[E]) satisfies (E.7)
and Meyer's hypothesis (L) and that µ ∈ R is nontrivial. Then there exists a solution of (1.1).
Proof. By Proposition 3.7, for every n ≥ 1 there exists a solution u n of (3.5) with g n (u) = g(u + 1 n ) for u > 0. By (1.2) and Proposition 3.2, v n ≤ u n ≤ w n , n ≥ 1 q.e.,
where v n , w n are solutions of the problems
−Av_n = c_1 (v_n + 1/n)^{−γ} · µ, v_n > 0,  −Aw_n = c_2 (w_n + 1/n)^{−γ} · µ, w_n > 0.  (3.7)

Hence

g(u_n + 1/n) ≤ c_2 (u_n + 1/n)^{−γ} ≤ c_2 (v_n + 1/n)^{−γ}  q.e.
Let v, w be solutions of the problems
−Av = c_1 v^{−γ} · µ, v > 0,  −Aw = c_2 w^{−γ} · µ, w > 0.
From the proof of Theorem 3.9 it follows that {v n } converges q.e. to v. Hence
c_2 (v_n + 1/n)^{−γ}(X) → c_2 v^{−γ}(X),  P_x ⊗ dA^µ-a.s.
for q.e. x ∈ E, where P x ⊗ dA µ is the product of the measure P x and the kernel dA µ from Ω to B(R + ). Moreover,
v_n(x) = E_x ∫_0^ζ c_2 (v_n + 1/n)^{−γ}(X_t) dA^µ_t → E_x ∫_0^ζ c_2 v^{−γ}(X_t) dA^µ_t = v(x)
for q.e. x ∈ E, which implies that the family {c_2(v_n(X) + 1/n)^{−γ}} is uniformly integrable with respect to the measure P_x ⊗ dA^µ for q.e. x ∈ E. From this we conclude that

lim_{t→0+} sup_{n≥1} E_x ∫_0^t g_n(u_n)(X_r) dA^µ_r = 0

for q.e. x ∈ E. Therefore

lim_{t→0+} sup_{n≥1} |u_n(x) − p_t u_n(x)| = lim_{t→0+} sup_{n≥1} E_x ∫_0^t g_n(u_n)(X_r) dA^µ_r = 0  (3.8)

for q.e. x ∈ E. Since u_n ≤ w for n ≥ 1, by [15] there exists a subsequence (still denoted by (n)) such that {u_n} converges q.e. The rest of the proof runs as the proof of Theorem 3.9. ✷
Existence and uniqueness of solutions with mixed nonlinearities
In this subsection we study problems of the form
−Au = (g(u) + h(u)) · µ,  u > 0.  (3.9)

Theorem 3.11. Assume that (E, D[E]) satisfies (E.7), µ ∈ R is nontrivial, g, h satisfy (H) and h : R_+ \ {0} → R_+ is a continuous function such that

c_1 ≤ h(s) · s^β ≤ c_2,  s > 0,  (3.10)

for some β > 0. Then there exists a unique solution u of problem (3.9). Moreover,

u ≤ (c_2/c_1)(2^γ v + 2^β w),  (3.11)

where v, w are solutions of the problems

−Av = c_1 v^{−γ} · µ, v > 0,  −Aw = c_1 w^{−β} · µ, w > 0.
Proof. Uniqueness follows from Proposition 3.2. To prove the existence of solutions, let u n denote the solution of the problem
−Au_n = (g_n(u_n) + h_n(u_n)) · µ,  u_n > 0,  (3.12)

with g_n(u) = g(u + 1/n), h_n(u) = h(u + 1/n) for u > 0. By Proposition 3.2, {u_n} is nondecreasing and

v_n ≤ u_n,  w_n ≤ u_n  q.e.,  (3.13)
where v n , w n are solutions of (3.7). Therefore for each n ≥ 1, v n + w n ≤ 2u n q.e.
By Proposition 3.2 the sequences {w n }, {v n } are also nondecreasing. Furthermore,
g(u_n + 1/n) + h(u_n + 1/n) ≤ c_2 (u_n + 1/n)^{−γ} + c_2 (u_n + 1/n)^{−β}
≤ c_2 ((1/2)w_n + (1/2)v_n + 1/n)^{−γ} + c_2 ((1/2)w_n + (1/2)v_n + 1/n)^{−β}
≤ c_2 2^γ (v_n + 1/n)^{−γ} + c_2 2^β (w_n + 1/n)^{−β}.  (3.14)
From the proof of Theorem 3.9 it follows that the sequences {(v n + 1 n ) −γ (X)} and {(w n + 1 n ) −γ (X)} are uniformly integrable with respect to the measure P x ⊗ dA µ . Let u = lim sup n→∞ u n . By the definition of a solution of (3.12),
u n (x) = E x ζ 0 (g n (u n )(X t ) + h n (u n )(X t )) dA µ t (3.15)
for q.e. x ∈ E. By (3.14) the sequence {g n (u n )(X) + h n (u n )(X)} is uniformly integrable with respect to the measure P x ⊗ dA µ . Therefore letting n → ∞ in (3.15) we get
u(x) = E x ζ 0 (g(u)(X t ) + h(u)(X t )) dA µ t .
Inequality (3.11) follows easily from (3.14). ✷
Theorem 3.12. Assume that (E, D[E]) satisfies (E.7)
and Meyer's hypothesis (L), µ ∈ R is nontrivial and h : R + \ {0} → R + is a continuous function satisfying (3.10) for some β > 0. Then there exists a solution of (3.9) such that estimate (3.11) holds true.
Proof. In the proof of Theorem 3.11 monotonicity of g, h was used only to prove q.e. convergence of {u n }. As in the proof of Theorem 3.11 we show that the sequence {g n (u n )(X) + h n (u n )(X)} is uniformly integrable with respect to the measure P x ⊗ dA µ . Therefore the assertion follows as in the proof of Theorem 3.11. ✷

Remark 4.1. Let u, u n , n ≥ 1, be quasi-continuous. Let us consider the following condition: for every ε > 0,
lim n→∞ P x (sup t≥0 |u n (X t ) − u(X t )| > ε) = 0 (4.2)
for m-a.e. x ∈ E. Condition (4.2) is equivalent to the quasi-uniform, up to a subsequence, convergence of {u n } to u. To see this, let us set A ε n = {|u n − u| > ε} and for arbitrary nearly
Borel set B ⊂ E put p B (x) = P x (σ B < ∞), x ∈ E.
Assume that (4.2) holds. By the diagonal method there exists a subsequence (still denoted by (n)) such that p B ε n (x) → 0, m-a.e. for every ε > 0, where B ε n = k≥n A ε k . Hence, by [9, Corollary 4.3.1], CAP(B ε n ) → 0 for every ε > 0, which implies that u n → u quasi-uniformly. Now assume that u n → u quasi-uniformly. Then by [9, Theorem 2.1.5], E(p A ε n , p A ε n ) = CAP(A ε n ) → 0. Therefore, up to a subsequence, p A ε n → 0, m-a.e. Let us also mention that by the standard argument "m-a.e." in condition (4.2) may be replaced by "q.e."

Remark 4.2. Replacing CAP by Cap in (4.1) we get a notion of convergence which is weaker than the quasi-uniform convergence. In fact, if
lim n→∞ Cap({|u n − u| > ε}) = 0 (4.3)
for every ε > 0 then by [19, Lemma IV.4.5], u n → u quasi-uniformly on every compact set K ⊂ E. Therefore the convergence defined by (4.3) may be called a locally quasi-uniform convergence.
Proposition 4.3. Let µ, µ n ∈ R and let u = Rµ, u n = Rµ n . If u n → u quasi-uniformly then there exists a subsequence (still denoted by (n)) such that for q.e. x ∈ E,
lim n→∞ E x sup t≥0 |A µn t − A µ t | = 0.

Proof. Since u n (x) = E x ζ 0 dA µn t → u(x), we have sup n≥1 E x ζ 0 dA µn t < ∞,
which when combined with the quasi-uniform convergence of {u n } implies that {u n (X)} satisfies the condition UT under P x for q.e. x ∈ E (see [13,Proposition 3.2]). Therefore by [10, Theorem 1.8] (see also [13,Corollary 2.8]), for every ε > 0,
lim n→∞ P x (sup t≥0 |A µn t − A µ t | > ε) = 0 for q.e.
x ∈ E. This and the fact that u n → u, m-a.e. implies that the family {A µn ζ } is uniformly integrable with respect to P x for m-a.e. x ∈ E. Applying the Vitali theorem yields the desired result. ✷

Proof of Lemma 4.4. Since µ n → µ in S (0) 0 , it is easy to see that Rµ n → Rµ in the E-norm. Therefore by [9, Lemma 5.1.1] there exists a subsequence (still denoted by (n)) such that Rµ n → Rµ quasi-uniformly. By this and Proposition 4.3, E x sup t≥0 |A µn t − A µ t | → 0 for q.e. x ∈ E. Consequently,
E x ζ 0 e −αt u n (X t ) dA µn t → E x ζ 0 e −αt u(X t ) dA µ t
for q.e. x ∈ E, so (4.4) follows by the Lebesgue dominated convergence theorem. ✷
The following proposition is a generalization of [18, Theorem 1].
Proposition 4.5. Let µ ∈ M + 0,b and u = Rµ. If E u p−1 dµ < ∞ for some p > 1 then u p/2 ∈ D e [E] and there exists c p > 0 such that
E(u p/2 , u p/2 ) ≤ c p (u p−1 , µ).
Proof. Let θ ∈ D(A) be such that 0 ≤ θ ≤ 1 and θ ∈ L 1 (E; m). Let us choose a nest {F n } such that 1 Fn · µ, 1 Fn u p−1 · µ ∈ S (0) 00 , n ≥ 1, and by u n (·; λ, θ, α) denote a solution of −A λ u n (λ, θ, α) = θαR α µ n with µ n = 1 Fn · µ, α > 0 and A λ = A − λI, λ > 0. Observe that θαR α µ n ∈ L 2 (E; m) ∩ L ∞ (E; m). By [18, Theorem 1], u p/2 n (λ, θ, α) ∈ D[E] and there exists c p > 0 such that E(u p/2 n (λ, θ, α), u p/2 n (λ, θ, α)) ≤ c p (u p−1 n (λ, θ, α), θαR α µ n ). (4.5)
Let u n (·; λ, θ) be a solution of
−A λ u n (λ, θ) = θ · µ n .
By the very definition of a solution,
u n (x; λ, θ, α) = E x ζ 0 e −λr E Xr ζ 0 αe −αt dA µn t θ(X r ) dr and u n (x; λ, θ) = E x ζ 0 e −λt θ(X t ) dA µn t = E x ζ 0 e −λt θ(X t )1 Fn (X t ) dA µ t (4.6)
for q.e. x ∈ E. Therefore by the Markov property and Fubini's theorem,
u n (x; λ, θ, α) = E x ζ 0 e −λr E x ζ r αe −α(t−r) dA µn t θ(X r ) dr = E x ζ 0 αe −αt t 0 e (α−λ)r θ(X r ) dr dA µ t .
Since θ ∈ D[E], t → θ(X t ) is càdlàg. Therefore by standard calculations, u n (x; λ, θ, α) → u n (x; λ, θ) for q.e. x ∈ E as α → ∞. Moreover, sup α>0 u n (·; λ, θ, α) ∞ ≤ c(n) 1/(p−1) , where c(n) := Rµ n p−1 ∞ . Indeed, we have
u n (x; λ, θ, α) ≤ R λ (αR α (µ n )) = αR α (R λ (µ n )) ≤ αR α ( R λ µ n ∞ ) ≤ Rµ n ∞ .
From this and (4.5) it follows that
E(u p/2 n (λ, θ, α), u p/2 n (λ, θ, α)) ≤ c p (u p−1 n (λ, θ, α), θαR α (µ n )) = c p (αR α (u p−1 n (λ, θ, α) · θ), µ) ≤ c p c(n) µ T V (4.8)
and E(u n (λ, θ, α), u n (λ, θ, α)) ≤ E λ (u n (λ, θ, α), u n (λ, θ, α))
≤ (u n (λ, θ, α), αR α µ n ) ≤ c(n) 1/(p−1) µ n T V . (4.9)
Let us fix a sequence {α k } ⊂ (0, ∞) such that α k ր ∞ and set
S k (u n (x; λ, θ, α k )) = 1 k k i=1 u n (x; λ, θ, α i ).
By (4.9) and Mazur's theorem we may assume that S k (u n (·; λ, θ, α k )) → u n (·; λ, θ) in E. Therefore by [9, Lemma 5.1.1] and Remark 4.1 there exists a subsequence (still denoted by (k)) such that S k (u n (λ, θ, α k )) → u n (λ, θ) quasi-uniformly as k → ∞. It is an elementary check that α k R α k (µ n ) → µ n weakly in S (0) 0 as k → ∞. So, again by Mazur's theorem we may assume that S k (α k R α k µ n ) → µ n strongly in S (0) 0 . Therefore by Lemma 4.4, up to a subsequence we have (S p−1 k (u n (λ, θ, α k )) · θ, S k (α k R α k µ n )) → (u p−1 n (λ, θ) · θ, µ n ) (4.10)
as k → ∞. By [18, Theorem 1], E(S p/2 k (u n (λ, θ, α k )), S p/2 k (u n (λ, θ, α k ))) ≤ c p (S p−1 k (u n (λ, θ, α k )), S k (α k R α k (µ n )) · θ). (4.11)
From this and (4.10) we conclude that sup k≥1 S p/2 k (u n (λ, θ, α k )) E < ∞, which implies that, up to a subsequence, {S p/2 k (u n (λ, θ, α k ))} is weakly convergent in E to some v ∈ D e [E]. Since by [9, Lemma 5.1.1] and Remark 4.1 strong, up to a subsequence, convergence in E implies quasi-uniform convergence, by standard reasoning we get v = u p/2 n (λ, θ). Therefore by (4.10) and [18, Theorem 1],
E(u p/2 n (λ, θ), u p/2 n (λ, θ)) ≤ lim inf k→∞ E(S p/2 k (u n (λ, θ, α k )), S p/2 k (u n (λ, θ, α k ))) ≤ c p lim inf k→∞ (S p−1 k (u n (λ, θ, α k )), S k (α k R α k (µ n )) · θ) = c p (u p−1 n (λ, θ) · θ, µ n ) ≤ c p (u p−1 n (λ, θ) · θ, µ). (4.12)
Let us choose θ l ∈ D(A) such that 0 ≤ θ l ≤ 1 and θ l ր 1. For instance, one can take θ l = lR l e F l , where e F l is the equilibrium function for the set F l (see [9,Chapter 2]) and {F l } is defined at the beginning of the proof. From (4.6) and the fact that
u(x) = E x ζ 0 dA µ t , x ∈ E one can deduce that u n (x; λ, θ l ) ≤ u, lim l→∞ lim λ→0 lim n→∞ u n (x; λ, θ l ) = u(x)
for q.e. x ∈ E. This when combined with (4.12) and the assumptions of the proposition gives the desired result. ✷

Theorem 4.6. Assume that u is a solution of (1.1).
(i) If µ ∈ S (0) 00 then u ∈ L ∞ (E; m) and u ∞ ≤ (c 2 (γ + 1)) 1/(γ+1) Rµ 1/(γ+1) ∞ .
(ii) If µ ∈ M + 0,b (E) then u (γ+1)/2 ∈ D e [E] and u (γ+1)/2 2 E ≤ c(γ)c 2 µ T V .
Proof. (i) By the very definition of the space S (0) 00 , Rµ ∞ < ∞. By the Meyer-Tanaka formula and (1.2),
u γ+1 (x) ≤ (γ + 1)E x ζ 0 (g(u)u γ )(X t ) dA µ t ≤ c 2 (γ + 1)Rµ(x),
from which the desired estimate immediately follows.
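Explicitly, taking the supremum over x in the preceding display makes the constant in assertion (i) visible:

```latex
\|u\|_\infty^{\gamma+1} \le c_2(\gamma+1)\,\|R\mu\|_\infty,
\qquad\text{i.e.}\qquad
\|u\|_\infty \le \bigl(c_2(\gamma+1)\bigr)^{1/(\gamma+1)}\,\|R\mu\|_\infty^{1/(\gamma+1)}.
```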
(ii) Let us put ν = g(u) · µ and p = 1 + γ. Then p > 1 and
E u p−1 dν ≤ c 2 E u p−1 · 1 u p−1 dµ = c 2 µ T V .
By the above estimate and Proposition 4.5, u (γ+1)/2 ∈ D e [E] and there exists c(γ) > 0 such that
E(u (γ+1)/2 , u (γ+1)/2 ) ≤ c(γ) E u γ dν ≤ c 2 c(γ) · µ T V ,
which completes the proof. ✷
Stability: General results I
In Sections 5-7 we study stability of solutions of the problem −Au n = g(u n ) · µ n , u n > 0 (5.1) under different assumptions on the convergence of measures µ n and the limit measure µ. It is known that each measure µ ∈ M b admits a unique decomposition of the form
µ = µ d + µ c ,(5.2)
where µ d ∈ M 0,b , µ c ∈ M b and µ c ⊥Cap. The measure µ d is called the diffuse part of µ, whereas µ c the concentrated part of µ.
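For orientation, a standard instance of the decomposition (5.2), stated in our words and assuming the classical Dirichlet form of Brownian motion on R d with d ≥ 2 (so that singletons have zero capacity):

```latex
% E = \mathbb{R}^d,\ d \ge 2, classical Dirichlet form of Brownian motion;
% then \mathrm{Cap}(\{x_0\}) = 0 for every point x_0, hence
\mu = f\,dx + \delta_{x_0}, \qquad f \in L^1(\mathbb{R}^d),\ f \ge 0,
\quad\Longrightarrow\quad
\mu_d = f\,dx, \qquad \mu_c = \delta_{x_0} \perp \mathrm{Cap}.
```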
In the present section we prove some general results on stability in case µ c ≠ 0. Then in Section 6 we investigate the case where µ is smooth, i.e. µ c = 0. Finally, in Section 7 we turn back to the case µ c ≠ 0 but we assume that µ n are of the form µ n = j 1/n * µ, where j is some mollifier, and that A corresponds to some form E on L 2 (D; dx) with D ⊂ R d .
Lemma 5.1. Let {u n } be a sequence of excessive functions on E such that u n → 0, m-a.e. Then there exists a subsequence (n ′ ) ⊂ (n) such that u n ′ → 0 q.e.
Proof. Without loss of generality we may assume that u n ≤ 1, n ≥ 1. Let (n ′ ) ⊂ (n) be such that n ′ ≥1 E u n ′ dπ < ∞ (for the definition of π see Section 2). Let E \ B = {x ∈ E; u n ′ (x) → 0} and let F be a compact subset of E such that F ⊂ B. Then
P π (D F < ζ) ≤ P π (lim sup n ′ →∞ u n ′ (X D F ) > 0) = 0.
Indeed, since u n is an excessive function,
P π (u n ′ (X D F ) > ε) ≤ ε −1 E π u n ′ (X D F ) ≤ ε −1 E u n ′ dπ.
Therefore u n ′ (X D F ) → 0, P π -a.e. by the Borel-Cantelli lemma. Hence Cap(F ) = 0 by [19, Theorem IV.5.28]. Since F ⊂ B was arbitrary, Cap(B) = 0. ✷

Let us recall that a sequence {µ n } of Radon measures on E converges to some Radon measure µ on E in the narrow topology if E f µ n (dx) → E f µ(dx) for every bounded continuous f : E → R. If the last convergence holds true for every continuous f having compact support then we say that {µ n } converges to µ in the vague topology.

Theorem 5.2. Let µ ∈ M + b be such that µ⊥Cap and let {µ n } ⊂ M + 0,b be a sequence such that sup n≥1 µ n T V < ∞ and µ n → µ in the narrow topology. If u n is a solution of the problem (5.1) then there exists a subsequence (still denoted by (u n )) such that u n → 0 q.e.
Proof. Let ε > 0. Since (E, D[E]) is regular, there exists ψ ε ∈ D[E] ∩ C c (E) such that 0 ≤ ψ ε ≤ 1, 0 ≤ E (1 − ψ ε ) dµ ≤ ε and E(ψ ε , ψ ε ) ≤ ε. Since µ n → µ in the narrow topology,
lim n→∞ E (1 − ψ ε ) dµ n = E (1 − ψ ε ) dµ ≤ ε. (5.3)
Since u n is a solution of (5.1),
E(u n , η) = (η, g(u n ) · µ n )
for every η ∈ D[E] ∩ B + (E). For arbitrary but fixed k > 0 set
η = T k (u n )(1 − ψ ε ) = T k (u n ) − ψ ε T k (u n ) ∈ D e [E],
where T k is the truncature operator, i.e. T k (y) = ((−k) ∨ y) ∧ k, y ∈ R. Then
E(u n , T k (u n )(1 − ψ ε )) = (T k (u n )(1 − ψ ε ), g(u n ) · µ n ) ≤ c 2 E T k (u n )(1 − ψ ε ) u n dµ n ≤ c 2 E (1 − ψ ε ) dµ n . Also E(u n , T k (u n )(1 − ψ ε )) = E(u n , T k (u n )) − E(u n , T k (u n )ψ ε ).
Since (E, D[E]) is a Dirichlet form, it is Markovian. Hence
E(T k (u n ), T k (u n )) ≤ E(u n , T k (u n ))
for n ≥ 1 and consequently,
E(T k (u n ), T k (u n )) ≤ c 2 E (1 − ψ ε ) dµ n + E(u n , T k (u n )ψ ε ).
Since u n is a potential, E(u n , T k (u n )ψ ε ) ≤ kE(u n , ψ ε ).
Therefore E(u n , T k (u n )ψ ε ) ≤ k(E(u n , u n ) E(ψ ε , ψ ε )) 1/2 ≤ kε 1/2 E(u n , u n ) 1/2 .
By Theorem 4.6, c := sup n≥1 E(u n , u n ) 1/2 < ∞. Hence
E(T k (u n ), T k (u n )) ≤ c 2 E (1 − ψ ε ) dµ n + kcε 1/2 .
Letting n → ∞ in the above inequality, along a subsequence for which {u n } converges m-a.e. to some function u, and using (5.3), we obtain
E(T k (u), T k (u)) ≤ c 2 ε + kcε 1/2 .
Since k, ε > 0 were arbitrary, u ≡ 0. The result now follows from Lemma 5.1. ✷

Let T ∈ T and Λ ∈ F T . Write
T Λ (ω) = T (ω), ω ∈ Λ, ∞, ω / ∈ Λ.
It is well known (see [22, Section III.2]) that T Λ ∈ T .

Lemma 5.3. Let u ∈ D e [E] be quasi-continuous. Then u ∈ D and for every sequence {T n } ⊂ T such that T n ր T ≥ ζ we have u(X Tn ) → 0, P x -a.s. for q.e. x ∈ E.

Proof. Let Λ = {ω ∈ Ω : T n (ω) < ζ(ω), n ≥ 1}. Then Λ ∈ F T because Λ = n≥1 {T n < ζ} and {T n < ζ} ∈ F Tn ∩ F ζ ⊂ F T . Also observe that T = T Λ ∧ T Λ c and that T Λ is predictable. Since u + , u − ∈ D[E], we may assume that u ≥ 0. Let v ∈ D e [E] be an excessive function such that v ≥ u q.e. (for the existence of such a function see [
M t = E x ζ 0 dA µ r |F t − v(X 0 ), t ≥ 0. By (5.4), v ∈ D. Consequently, u ∈ D. Since A µ is continuous, ζ Tn∧ζ dA µ t → 0, P x -a.s. for q.e. x ∈ E. Moreover, ζ Tn∧ζ dM r → ∆M ζ 1 Λ = ∆M T 1 Λ = ∆M T Λ 1 Λ , P x -a.s.
for q.e. x ∈ E. Since the filtration (F t ) t≥0 is quasi-left continuous, every martingale with respect to it has only totally inaccessible jumps (see, e.g., [9, Theorem A.3.6]). Hence ∆M T Λ 1 Λ = 0, P x -a.s. since T Λ is predictable. This proves the lemma. ✷
The next general stability result will play an important role in the proof of Theorem 7.1, which in turn is used in the proof of our main Theorem 7.3 on existence of solutions of (1.1) with general bounded Borel measure on the right-hand side. Perhaps it is also appropriate to make here the following general comments.
In most papers devoted to stability of solutions of semilinear equations with measure data the following equation
−∆u = f (x, u) + µ (5.5)
is considered. Let {µ n } be an approximation of a nonnegative measure µ in the narrow topology and let u n be a solution of (5.5) with µ replaced by µ n . Usually the limit u of {u n } depends on the form of the approximation of µ (see [20]). To be more precise, the limit u solves (5.5) with µ replaced by some nonnegative Borel measure µ # , depending on {µ n }, such that µ # ≤ µ (µ # is called the reduced limit of {µ n }). The question naturally arises whether a similar phenomenon takes place in the case of equations of the form (1.1). In [3] it is observed that in the particular case of equation (1.1) with A = ∆, g satisfying (1.2) with γ ≥ 1 and singular µ (i.e. µ = µ c ) we have that u n → 0 for any approximation of µ by uniformly bounded measures µ n such that µ n → µ in the narrow topology. In other words, for any approximation of µ, in the limit equation the whole singular part of µ disappears. We do not know whether a similar result holds true for any γ > 0 and/or general Dirichlet operator A. However, in Theorem 5.4 below we are able to prove a related result for general A and bounded measure. It says that the limit function u satisfies an equation with a measure ν on the right-hand side which is always smooth, independently of the approximation of µ. But let us stress that Theorem 5.4 does not imply the result of [3], because even in case µ = µ c we do not know whether ν = 0. It is also worth mentioning that in Theorem 5.4 we consider the convergence in the vague topology.
In the proof of Theorem 5.4 we will need the following additional notation. For every open set U ⊂ E and β ∈ S (0) 00 we write R U β = Rβ − R(β) E\U , where (β) E\U is the sweeping out of β on E \ U .

Theorem 5.4. Let µ ∈ M + b and let {µ n } ⊂ M + 0,b be such that sup n≥1 µ n T V < ∞ and µ n → µ vaguely. Let u n be a solution of (5.1) and let ν n = g(u n ) · µ n . Then
(i) {ν n } is tight in the vague topology and each of its limit points ν belongs to R,
(ii) if ν n ′ → ν vaguely for some subsequence (n ′ ) ⊂ (n) then there is a further subsequence (n ′′ ) ⊂ (n ′ ) such that u n ′′ → u, m-a.e., where u is a solution of
−Au = ν.
Proof. Since u n is a solution of (5.1), it is quasi-continuous, u n ∈ D and by the Markov property there is a martingale additive functional M n of X such that
u n (X t ) = ζ t g(u n )(X r ) dA µn r − ζ t dM n r , t ∈ [0, ζ], P x -a.s.
for q.e. x ∈ E. By the Meyer-Tanaka formula, u γ+1 n (X t ) + ζ t dK γ r = (γ + 1) ζ t u γ n · g(u n )(X r ) dA µn r − (γ + 1) ζ t u γ n (X r− ) dM n r , t ∈ [0, ζ], P x -a.s. for some increasing process K γ such that K γ 0 = 0. Therefore by (1.2),
u γ+1 n (X t ) ≤ c 2 (γ + 1)E x ζ 0 dA µn r |F t , t ∈ [0, ζ], P x -a.s. (5.6)
for q.e. x ∈ E. In particular, for every β ∈ S
(0) 00 , E u (γ+1) n (x) dβ(x) ≤ c 2 (γ + 1) Rβ ∞ µ n T V . (5.7)
Observe that
u n (x) = E x ζ 0 dA νn t for q.e.
x ∈ E. Therefore from (5.7) it follows that for every β ∈ S (0) 00 ,
sup n≥1 E Rβ dν n = sup n≥1 E u n dβ < ∞. (5.8)
Let K ⊂ E be a compact set. By [9, Lemma 2.2.6], e K = Rβ K for some β K ∈ S (0) 00 , where e K is the equilibrium function for K. Since e K is positive and e K (x) = 1 q.e. on K, we conclude from (5.8) that {ν n } is tight in the vague topology. Let ν denote a limit point of {ν n }. By (5.6) and [5, Lemma 6.1], for every q ∈ (0, 1) we have
E x sup t≥0 u q(γ+1) n (X t ) ≤ c q 2 (γ + 1) q (1 − q) E x ζ 0 dA µn t q
for q.e. x ∈ E. It follows that for every β ∈ S (0) 00 such that β(E) = 1,
E β sup t≥0 |u n (X t )| α ≤ c q 2 (γ + 1) q 1 − q E β ζ 0 dA µn t q ≤ c q 2 (γ + 1) q 1 − q Rβ q ∞ µ n q T V , (5.9)
where α = q(γ + 1). By Theorem 4.6(ii),
sup n≥1 u (γ+1)/2 n E < ∞, (5.10)
so up to a subsequence,
u n → u, m-a.e., (5.11)
where u is an excessive function. By (5.10), u (γ+1)/2 ∈ D e [E]. Therefore by Lemma 5.3, u (γ+1)/2 (X Tn ) → 0 for every sequence {T n } of stopping times such that T n ր T ≥ ζ. This implies that for q.e. x ∈ E,
u(X Tn ) → 0, P x -a.s. (5.12)
A key step in showing that ν is smooth is the proof that u is a potential. We first prove the last property in the simpler case where (1.2) is satisfied for some γ ≥ 1. Since
u (γ+1)/2 ∈ D[E]
, it belongs to D by Lemma 5.3. Therefore by (5.12),
E x u (γ+1)/2 (X Tn ) → 0 for q.e.
x ∈ E. From this we conclude that if γ ≥ 1 then for q.e. x ∈ E,
(E x u(X Tn )) (γ+1)/2 ≤ E x u (γ+1)/2 (X Tn ) → 0, (5.13)
so if (1.2) with γ ≥ 1 is satisfied then u is a potential. Now we turn to the case γ ∈ (0, 1). It is perhaps worth explaining why it differs from the case γ ≥ 1. To show that u is a potential we have to know that E x u(X Tn ) → 0. This may be concluded from (5.12) if u ∈ D. Unfortunately, the last assertion cannot be concluded from the fact that u (γ+1)/2 ∈ D[E] when (1.2) is satisfied with γ ∈ (0, 1). Now we give an alternative way to prove that u ∈ D. It is independent of the value of γ > 0. For x ∈ E write
λ α x = δ {x} • R α .
Since (E, D[E]) satisfies Meyer's hypothesis (L), λ α x ≪ m for every x ∈ E. Moreover, since u n is a quasi-continuous excessive function, αR α u n (x) ≤ u n (x) for q.e. x ∈ E. From this and (5.11) it follows that for q.e. x ∈ E,
lim inf n→∞ u n (x) ≥ lim inf n→∞ αR α u n (x) = lim inf n→∞ E αu n (y)λ α x (y) m(dy) ≥ E αu(y)λ α x (y) m(dy) = αR α u(x). (5.14) Since u (γ+1)/2 ∈ D[E], u is quasi-continuous. Hence αR α u(x) ր u(x) for q.e.
x ∈ E as α ր ∞. Therefore (5.14) implies that
lim inf n→∞ u n (x) ≥ u(x) (5.15)
for q.e. x ∈ E. By the above, for q.e.
x ∈ E we have
u α (X t ) ≤ lim inf n→∞ u α n (X t ), t ≥ 0, P x -a.s. Hence sup t≥0 u α (X t ) ≤ sup t≥0 lim inf n→∞ u α n (X t ) ≤ lim inf n→∞ sup t≥0 u α n (X t ).
By Fatou's lemma,
E β sup t≥0 u α (X t ) ≤ lim inf n→∞ E β sup t≥0 u α n (X t ) ≤ sup n≥1 E β sup t≥0 u α n (X t ),
so by (5.9),
E β sup t≥0 u α (X t ) ≤ c q 2 (γ + 1) q 1 − q Rβ q ∞ (sup n≥1 µ n q T V )
for every β ∈ S (0) 00 such that β(E) = 1. Since α > 1, we get in particular that u ∈ D. Therefore by (5.12), for q.e. x ∈ E, E x u(X Tn ) → 0 for every {T n } ⊂ T such that T n ր T ≥ ζ, which implies that u is a potential. Therefore by [
R U β = Rβ − R(β) E\U , where (β) E\U is the sweeping out of β on E \ U . Since R(β) E\U ≤ Rβ, we have that (β) E\U ∈ S(0)
00 because by [16, Lemma 5.4], (β) E\U (E) ≤ β(E). Therefore from (5.17) it follows that (R U β, ν n ) → (R U β, ν̄). For F ∈ Π let F ε = {x ∈ E; dist(x, F ) < ε}. Since E is locally compact, there exists ε > 0 such that F ε is relatively compact. By [9, Lemma 2.2.6] and comments following it, e Fε F = R Fε β for some β ∈ S (0) 00 , so by (5.18) we have
ν̄(F ε ) ≥ (e Fε F , ν̄) ≥ lim inf n→∞ ν n (Int F ) ≥ ν(Int F ) = ν(F ).
Since ε > 0 can be made arbitrarily small, it follows that ν̄(F ) ≥ ν(F ) for F ∈ Π. On the other hand, again by (5.18),
ν̄(F ) ≤ (e Fε F , ν̄) ≤ lim sup n→∞ ν n (F ε ) ≤ ν(F ε ).
Hence ν̄(F ) ≤ ν(F ), F ∈ Π. Therefore ν̄(F ) = ν(F ) for F ∈ Π, which implies that ν̄ = ν. ✷
Stability: General results II
In the further study of stability an important role will be played by a new type of convergence of measures of the class R, which we define below. Since this convergence is related to the uniform convergence of associated additive functionals, we will denote it by uAF −→.

Definition. Let µ n , µ ∈ R. We say that µ n uAF −→ µ if for every sequence (n ′ ) ⊂ (n) there exists a further subsequence (n ′′ ) ⊂ (n ′ ) such that
lim n ′′ →∞ E x sup t≥0 |A µ n ′′ t − A µ t | = 0 for q.e. x ∈ E.
Proposition 6.1. Assume that (E, D[E]) satisfies hypothesis (L). Let µ n , µ ∈ R and let u n , u be solutions of −Au n = µ n , −Au = µ.
(6.1)
If µ n uAF −→ µ then there exists a subsequence (still denoted by (u n )) such that u n → u quasi-uniformly.
Proof. By the assumption, up to a subsequence we have
E x sup t≥0 |A µn t − A µ t | → 0 (6.2)
for q.e. x ∈ E. By (6.1) and the definition of a solution,
sup t≥0 |u n (X t ) − u(X t )| ≤ sup t≥0 E x (sup r≥0 |A µn r − A µ r ||F t ).
From this and [5, Lemma 6.1], for every q ∈ (0, 1) we have
E x sup t≥0 |u n (X t ) − u(X t )| q ≤ 1 1 − q (E x sup t≥0 |A µn t − A µ t |) q .
By (6.2), for q.e. x ∈ E the right-hand side of the above inequality converges to zero as n → ∞, which by Remark 4.1 completes the proof. ✷
Lemma 6.2. Let u be quasi-continuous and finite q.e. Then Cap(u > k) → 0 as k → ∞.

Proof. Let {F n } be a nest such that u is continuous on each F n and let {F̃ m } be a nest consisting of compact sets. Then for all n, m, k ≥ 1,
Cap(u > k) ≤ Cap(E \ F n ) + Cap(E \ F̃ m ) + Cap(F n ∩ F̃ m , u > k).
Since F n ∩ F̃ m is compact and u is continuous on F n , u is bounded on F n ∩ F̃ m . Hence Cap(F n ∩ F̃ m , u > k) → 0 as k → ∞. The other two terms converge to zero by the definition of the nest. ✷

Theorem 6.3 below will play a key role in the proof of our main result on existence of solutions of (1.1) with general µ ∈ M b (Theorem 7.3). It is worth pointing out that Theorem 6.3 is new even in case A = ∆.

Theorem 6.3. Assume that (E, D[E]) satisfies (E.7) and Meyer's hypothesis (L). Let µ ∈ M + 0,b be nontrivial and let {µ n } ⊂ M + 0,b be a sequence such that sup n≥1 µ n T V < ∞ and µ n uAF −→ µ. If u n , u are solutions of
−Au n = g(u n ) · µ n , u n > 0, −Au = g(u) · µ, u > 0 (6.3)
with g satisfying (H) then there exists a subsequence (still denoted by (u n )) such that u n → u q.e.
Proof. Let g 1 (u) = g(u) ∧ 1, u > 0, and let w n be a solution of the problem −Aw n = g 1 (w n )µ n , w n > 0.
By Proposition 3.2, w n ≤ u n q.e. and w n ≤ v n q.e., where
v n (x) = E x ζ 0 dA µn t , x ∈ E. Put v(x) = E x ζ 0 dA µ t , x ∈ E.
By Proposition 6.1, up to a subsequence, v n → v quasi-uniformly. By the Meyer-Tanaka formula and (1.2), for k ≥ c 1 1/γ we have
w n (x) ∧ k ≥ E x ζ 0 1 {wn≤k} (X t ) c 1 w −γ n (X t ) dA µn t .
Then for n ≥ n(ε, m),
0 ≤ (c 1 /k γ ) ψ m n,ε (x) ≤ u n (x) ∧ k for q.e.
x ∈ E. By the assumptions, up to a subsequence,
lim n→∞ E x sup t≥0 | t 0 η m k,ε (X r ) d(A µn r − A µ r )| = 0 (6.5)
for q.e. x ∈ E. By [5, Lemma 6.1],
E x sup t≥0 |ψ m n,ε (X t ) − ψ m ε (X t )| q ≤ 1 1 − q E x sup t≥0 | t 0 η m k,ε (X r ) d(A µn r − A µ r )| q .
This together with (6.5) and Remark 4.1 shows that up to a subsequence, ψ m n,ε → ψ m ε quasi-uniformly as n → ∞. (6.6) Therefore for every ε > 0, m ≥ 1 there exists a nest {F ε,m j , j ≥ 1} such that ψ m n,ε → ψ m ε uniformly on F ε,m j for every j ≥ 1. By [7, Lemma 94, page 306] there exists a subsequence (still denoted by (n)) such that {u n } converges m-a.e. Now we will show that one can choose a subsequence such that {u n } converges q.e. To this end, for a > 0 set
B n,m a = {u n ≥ 1 a } ∩ F m , A m a,ε = { (c 1 /k γ ) ψ m ε ≥ 1 a − ε} ∩ F m ∩ F ε,m j(ε,m)
and D m a,ε = A m a,ε \ (E \ A m a,ε ) r , where (E \ A m a,ε ) r is the set of regular points for E \ A m a,ε (see [9]) and j(ε, m) is such that Cap(E \ F ε,m j(ε,m) ) < ε/m. It is known that D m a,ε is the fine interior of A m a,ε . Then
for every bounded Borel function u on E put p D m a,ε t u(x) = E x u(X t )1 {t<τ D m a,ε } , x ∈ D m a,ε .
By the probabilistic definition of a solution of (6.3) we have
|u n (x) − E x u n (X t∧τ D m a,ε )| = E x t∧τ D m a,ε 0 g(u n )(X r ) dA µn r ≤ c 2 t∧τ D m a,ε 0 1 |u n | γ (X r ) dA µn r ≤ c 2 a γ t∧τ D m a,ε 0 dA µn r . Hence lim t→0 + sup n≥1 |u n (x) − E x u n (X t∧τ D m a,ε )| ≤ c 2 · a γ lim t→0 + sup n≥1 E x t 0 dA µn r .
Since µ n uAF −→ µ, for every δ > 0 there exists n 0 ∈ N such that for every t ≥ 0,
|E x t 0 dA µn r − E x t 0 dA µ r | ≤ δ, n ≥ n 0 . Therefore lim t→0 + sup n≥1 |u n (x) − E x u n (X t∧τ D m a,ε )| ≤ c 2 a γ lim t→0 + sup n≥1 E x t 0 dA µn r ≤ c 2 a γ lim t→0 + sup n≥1 (δ + E x A µ t ) = c 2 a γ δ.
Since δ > 0 was arbitrary, we get
lim t→0 + sup n≥1 |u n (x) − E x u n (X t∧τ D m a,ε )| = 0. (6.7)
By the definition of the set (E \ A m a,ε ) r ,
P x (τ D m a,ε > 0) = 1, x ∈ D m a,ε . (6.8)
By the Tanaka-Meyer formula and (1.2), for every stopping time τ we have
u γ+1 n (X τ ) ≤ c 2 (1 + γ)E x ζ 0 dA µn t |F τ . (6.9)
It is clear that the family {A µn ζ } is uniformly integrable under P x for q.e. x ∈ E. Therefore the family {E x ( ζ 0 dA µn t |F τ ), n ≥ 1} is uniformly integrable under P x , and hence for fixed τ ∈ T the family {u n (X τ ), n ≥ 1} is uniformly integrable under P x for q.e. x ∈ E. From this and (6.8) it follows that for every x ∈ D m a,ε ,
lim t→0 + sup n≥1 |E x u n (X t∧τ D m a,ε ) − E x u n (X t )1 {t<τ D m a,ε } | ≤ lim t→0 + sup n≥1 E x 1 {t≥τ D m a,ε } |u n (X τ D m a,ε )| = 0. (6.10)
By (6.9),
u γ+1 n (x) ≤ c 2 (1 + γ)v n (x) (6.11)
for q.e. x ∈ E. Since {v n } converges quasi-uniformly, there exists a nest, and we may assume that it is {F n }, such that {u n } is uniformly bounded on F k for every k ≥ 1. Therefore by [15, Theorem 2.2, Proposition 2.4], {u n } has a subsequence (still denoted by (n)) such that {u n } is convergent and its limit is finite for q.e. x ∈ D m a,ε . Let a n ր ∞ and let A n = A n an,(2an) −1 , D n = D n an,(2an) −1 . By F let us denote the fine support of µ. Since µ is nontrivial, Cap(F ) > 0. Therefore by (6.4) there exists n 0 such that Cap(η n 0 Dn 0 , 1 2an 0 , F ) > 0. Since (E, D[E]) satisfies (E.7), we have ψ n (2an) −1 > 0, n ≥ n 0 q.e.
Therefore, by (6.6) and Lemma 6.2, Cap(ψ n n,(2an) −1 < a −1 n ) ≤ Cap(ψ n 0 n,(2an 0 ) −1 < a −1 n ) → 0, n → ∞.
Since {A n } is a nest, it follows that
lim n→∞ Cap(E \ A n ) = lim n→∞ E π ∞ D E\An e −t ϕ(X t ) dt = 0, (6.12)
Cap(E \ D n ) = E π ∞ D E\Dn e −t ϕ(X t ) dt (6.13)
for n ≥ 1. Without loss of generality we may assume that the sequence {A n } is increasing, and consequently that {D n } is increasing, for otherwise we can replace {A n } by {Ã n }, where Ã n = n k=1 A k , and consider D̃ n = Ã n \ (E \ Ã n ) r in place of D n . Therefore by (6.12), P π (lim n→∞ D E\An < ζ) = 0. Since for every B ∈ B(E),
σ B = D B on {D B > 0}, (6.14)
we deduce from (6.12) that
P π ( lim n→∞ τ An < ζ) = 0. (6.15)
By [24, Proposition 10.6], τ An = τ Dn , P π -a.s.
Hence P π (lim n→∞ τ Dn < ζ) = 0 and by (6.14), P π (lim n→∞ D E\Dn < ζ) = 0. This and (6.13) show that lim n→∞ Cap(E \ D n ) = 0. (6.16)
We have proved that for every m ≥ 1 there exists a subsequence (still denoted by (n)) such that {u n } converges q.e. and its limit is finite q.e. on D m . Therefore by (6.16) one can find a further subsequence (still denoted by (n)) such that {u n } converges q.e. and its limit is finite q.e. on E. Let w = lim n→∞ u n q.e. Since {u n } is q.e. convergent,
sup n≥1 E x A νn ζ = sup n≥1 u n (x) < ∞.
Therefore by [12,Section 4] the sequence {u n (X)} is uniformly S-tight under P x for q.e. x ∈ E. It is also clear that for every t ≥ 0, u n (X t ) → w(X t ) in probability P x for q.e. x ∈ E. Therefore by [11,Theorem 1], the definition of the sets {A m a,ε } and the Lebesgue dominated convergence theorem,
E x τ A m a,ε 0 g(u n )(X t ) dA µn t → E x τ A m a,ε 0 g(w)(X t ) dA µ t (6.17)
as n → ∞ for q.e. x ∈ E. Moreover, since u n → w q.e., u n (X τ A m a,ε ) → w(X τ A m a,ε ), P x -a.s. (6.18) for q.e. x ∈ E. By the definition of a solution of (6.3),
u n (X t ) = E x u n (X τ A m a,ε ) + E x τ A m a,ε 0 g(u n )(X t ) dA µn t for q.e.
x ∈ E. By the above, (6.17) and (6.18),
w(x) = E x w(X τ A m a,ε ) + E x τ A m a,ε 0 g(w)(X t ) dA µ t (6.19)
for q.e. x ∈ E. By (6.11), w is a potential. Therefore replacing A m a,ε in (6.19) by A n , letting n → ∞ and using (6.15) we obtain
w(x) = E x ζ 0 g(w)(X t ) dA µ t for q.e. x ∈ E. By uniqueness, w = u. ✷

Proposition 6.4. Let µ n , µ ∈ M 0,b . If µ n − µ T V → 0 then µ n uAF −→ µ.

Proof. Let u n (x) = E x A µn ζ , u(x) = E x A µ ζ .
By [5, Lemma 6.1], for every q ∈ (0, 1),
E x sup t≥0 |u n (X t ) − u(X t )| q ≤ 1 1 − q E x ζ 0 dA |µn−µ| t q for q.e.
x ∈ E, where |µ n − µ| stands for the total variation of the measure µ n − µ. Let β ∈ S (0) 00 be such that β(E) = 1. Then from the above inequality we conclude that for every q ∈ (0, 1),
E β sup t≥0 |u n (X t ) − u(X t )| q ≤ 1 1 − q E β ζ 0 dA |µn−µ| t q ≤ 1 1 − q R β q ∞ µ − µ n q T V .
By Remark 4.1 there exists a subsequence (still denoted by (n)) such that u n → u quasi-uniformly. Therefore the proposition follows from Proposition 4.3. ✷
The following theorem answers the question raised in [3, Remark 3.6].

Theorem 6.5. Let g satisfy (H), let {µ n }, {ν n } ⊂ M + 0,b and let u n , v n denote solutions of the problems
−Au n = g(u n ) · µ n , u n > 0, −Av n = g(v n ) · (ν n + µ n ), v n > 0.
If u n → 0 in the topology of m-a.e. convergence, sup n≥1 ν n T V < ∞ and ν n uAF −→ ν for some nontrivial ν ∈ M + 0,b then v n → v in the topology of m-a.e. convergence, where v is a solution of
−Av = g(v) · ν, v > 0. (6.20)

Proof. By Proposition 3.2, u n ≤ v n , so by monotonicity of g,
E x ζ 0 g(v n )(X t ) dA µn t ≤ E x ζ 0 g(u n )(X t ) dA µn t = u n (x).
By the assumptions of the proposition, up to a subsequence,
E x ζ 0 g(v n )(X t ) dA µn t → 0 (6.21)
as n → ∞ for m-a.e. x ∈ E. Let w n be a solution of −Aw n = g(w n ) · ν n , w n > 0.
By the Meyer-Tanaka formula,
|w n (x) − v n (x)| ≤ E x ζ 0 sgn(w n − v n )(X t )(g(w n )(X t ) − g(v n )(X t )) dA νn t − E x ζ 0 sgn(w n − v n )g(v n )(X t ) dA µn t ≤ E x ζ 0 g(v n )(X t ) dA µn t .
By the above estimate and (6.21), up to a subsequence we have |w n − v n | → 0, m-a.e.
Since sup n≥1 ν n T V < ∞, applying Theorem 6.3 shows that, up to a subsequence, w n → v, m-a.e., which completes the proof. ✷

Proposition 6.6. Let g, {µ n } satisfy the assumptions of Theorem 6.5 and u n , v n be as in Theorem 6.5, with {ν n } ⊂ M + 0,b such that ν n − ν T V → 0 for some nontrivial ν ∈ M + 0,b . Then v n → v in the topology of m-a.e. convergence.
Proof. Follows from Proposition 6.4 and Theorem 6.5. ✷
We close this section with some results showing that {µ n } ⊂ M 0,b is "locally equidiffuse" if it converges in the uAF sense. These results will not be needed later on in our study of stability of solutions of (5.1). However, we find them interesting and we think that they shed a new light on the nature of the convergence in the uAF sense.
Let us recall that a family {µ t , t ∈ T } ⊂ M 0,b is called equidiffuse if for every ε > 0 there exists δ > 0 such that for every A ∈ B(E), if Cap(A) < δ then |µ t |(A) < ε for every t ∈ T .

Proposition 6.7. Assume that (E, D[E]) satisfies (E.7). Let µ, µ n ∈ M 0,b be such that sup n≥1 µ n T V < ∞ and µ n uAF −→ µ. Then there exists a bounded excessive function η ∈ D e [E] such that η > 0 and the family {η · µ n } is equidiffuse.
Proof. By Proposition 6.1, if u n , u are defined by (6.1) then, up to a subsequence, u n → u quasi-uniformly. It follows that there exists a nest {F k } such that for every
k ≥ 1, sup n≥1 sup x∈F k u n (x) < ∞. (6.22)
Since m is a smooth measure, there exists a nest {F n } such that R1F n ∞ < ∞ and m(F n ) < ∞ for n ≥ 1. Therefore there exists a closed set F such that η := R1 F > 0, R1 F ∞ < ∞, m(F ) < ∞ and F ⊂ F k for some k ≥ 1. It is clear that η is excessive and η ∈ D e [E]. Let β := 1 F · m. Then for every B ∈ B(E),
B η dµ n = B R1 F dµ n = E β ζ 0 1 B (X t ) dA µn t ≤ E β ζ D B dA µn t . (6.23)
The family {A µn ζ } is uniformly integrable under the measure P β . To see this, let us first observe that by (6.22),
sup n≥1 E |u n (x)| 2 β(dx) = sup n≥1 F |u n (x)| 2 m(dx) < ∞.
Since u n → u, m-a.e., it follows that
E β A µn ζ = E u n (x) β(dx) → E u(x) β(dx) = E β A µ ζ .
On the other hand, since µ n uAF −→ µ, A µn ζ → A µ ζ in measure P β , which proves that {A µn ζ } is uniformly integrable under P β . The uniform integrability implies that
lim n→∞ E β sup t≥0 |A µn t − A µ t | = 0. (6.24)
Suppose that, contrary to our claim, the family {η · µ n } is not equidiffuse. Then there exist ε > 0 and a sequence {B k } of Borel subsets of E such that Cap(B k ) → 0 and sup n≥1 B k ηdµ n ≥ ε, k ≥ 1. Then by Theorem IV.5.28 and Lemma 2.19 in [19],
P β ( lim k→∞ D B k ∧ ζ = ζ) = 1.
From this, (6.23) and (6.24) it follows that sup n≥1 B k η dµ n → 0 as k → ∞, which contradicts our assumption. Hence {η · µ n } is equidiffuse. ✷

Corollary 6.8. Let {µ n } be as in Proposition 6.7. If (E, D[E]) is strongly Feller then for every compact K ⊂ E the family {1 K · µ n } is equidiffuse.
Proof. Follows from the fact that every excessive function with respect to a strongly Feller Dirichlet form is lower semi-continuous. ✷
Stability: Approximation of measures by mollification
In this section we assume that µ is a nontrivial Borel measure on a subset E of R d . By putting µ(R d \ E) = 0 we may and will assume that µ is a Borel measure on R d . We study stability of solutions u n of (5.1) in case µ n = j 1/n * µ, n ≥ 1,
(7.1) where j ε (x) = ε −d j(ε −1 x) for x ∈ R d , ε > 0, and j(x) = c exp(1/(|x| 2 − 1)) for |x| < 1, j(x) = 0 for |x| ≥ 1,
with c > 0 chosen so that R d j(x) dx = 1. By (5.2), u n is a solution of the equation −Au n = g(u n ) · (j 1/n * µ d + j 1/n * µ c ). (7.2) We shall show that for some class of operators Theorem 6.5 is applicable to (7.2). To this end, we first consider the case µ d = 0 in Theorem 7.1 below, and then we show that j 1/n * µ d uAF − −− → µ d . In the proof of the following theorem a key role is played by Theorem 5.4.
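As a numerical aside (ours, not part of the paper), the standard mollifier j and the convolution j_ε * f can be sanity-checked in dimension d = 1; the helper names below are hypothetical.

```python
import math

def j(x):
    """Unnormalized bump: exp(1/(|x|^2 - 1)) for |x| < 1, and 0 for |x| >= 1 (d = 1)."""
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

def riemann(f, a, b, n=4000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

C = 1.0 / riemann(j, -1.0, 1.0)  # normalizing constant c, so that integral of C*j is 1

def j_eps(x, eps):
    """Rescaled mollifier j_eps(x) = eps^{-1} c j(x/eps): mass 1, support in [-eps, eps]."""
    return C * j(x / eps) / eps

def mollify(f, eps, x):
    """(j_eps * f)(x): integrate j_eps(x - y) f(y) over the support |x - y| < eps."""
    return riemann(lambda y: j_eps(x - y, eps) * f(y), x - eps, x + eps)
```

Mollifying the Heaviside step, for instance, reproduces the step away from the jump and smooths it on a window of width 2ε around it.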
Theorem 7.1. Assume that (E, D[E]
) is a form on E ⊂ R d satisfying (E.7) and Meyer's hypothesis (L). Let µ ∈ M + b be a nontrivial measure such that µ⊥Cap. Let u n denote a solution of the problem −Au n = g(u n ) · µ n , u n > 0 with µ n defined by (7.1). Then u n → 0 in the topology of m-a.e. convergence as n → ∞.
Proof. Let B ∈ B(E) be such that Cap(B) = 0 and µ(E \ B) = 0. Since µ is finite, there exists an increasing sequence {F k } of closed subsets of E such that µ(B \ ∞ k=1 F k ) = 0. Let µ k = 1 F k · µ, µ k n = j 1/n * µ k . Then µ = lim k→∞ µ k and µ n = lim k→∞ µ k n in the total variation norm. Without loss of generality we may assume that µ k − µ T V ≤ k −1 for k ≥ 1. Let ν k n = g(u k n ) · µ k n , k, n ≥ 1, where u k n is a solution of −Au k n = g(u k n ) · µ k n , u k n > 0. By Theorem 5.4, for every sequence (n ′ ) there exists a subsequence (still denoted by (n ′ )) and a smooth measure ν k such that ν k n ′ → ν k vaguely and u k n ′ → u k , m-a.e. as n ′ → ∞, where −Au k = ν k . For a closed set F ⊂ E and n ≥ 1 write B(F, n) = {x ∈ E : dist(x, F ) ≤ 1/n}. By the properties of the vague convergence, for every n ≥ 1 we have 0 = lim inf
n ′ →∞ ν k n ′ (E \ B(F k , n)) ≥ ν k (E \ B(F k , n)).
Since this holds for every n ≥ 1, ν k (E \ F k ) = 0. Hence ν k ≡ 0, because Cap(F k ) = 0 and ν k is a smooth measure. As a consequence, u k = 0. By Proposition 3.2, u k n ≤ u n . By the Meyer-Tanaka formula, (H) and (1.2),
|u n (x) − u k n (x)| γ+1 ≤ (1 + γ)E x ζ 0 (u n − u k n ) γ (X t )(g(u n )(X t ) dA µn t − g(u k n )(X t ) dA µ k n t ) = (1 + γ)E x ζ 0 (u n − u k n ) γ (X t )(g(u n ) − g(u k n ))(X t ) dA µ k n t + (1 + γ)E x ζ 0 g(u n )(X t )(u n − u k n ) γ (X t ) (dA µn t − dA µ k n t ) ≤ (1 + γ)E x ζ 0 g(u n )(u n ) γ (X t ) (dA µn t − dA µ k n t ) ≤ (1 + γ)c 2 E x ζ 0 dA |µn−µ k n | t .
Let β ∈ S (0) 00 . From the above inequality we conclude that
E |u n − u k n | 1+γ dβ ≤ (1 + γ)c 2 E β ζ 0 dA |µn−µ k n | t .
Letting n ′ → ∞ and then k → ∞ in the above inequality we see that E u n ′ dβ → 0 for every finite β ∈ S (0) 00 , which implies that, up to a subsequence, u n ′ → 0, m-a.e. ✷
In the rest of the section we confine ourselves to the class of forms defined below. Let ψ : R d → R be defined as
ψ(x) = (1/2)(Bx, x) + R d (1 − cos(x, y)) J(dy), (7.3)
where B is a d-dimensional nonnegative definite symmetric matrix and J is a symmetric Borel measure on R d \ {0} satisfying R d \{0} (1 ∧ |x| 2 ) J(dx) < ∞.

Let us recall that in the whole paper we assume that the forms under consideration satisfy (E.5), (E.6). We have already mentioned that the form defined by (7.5) satisfies (E.5), i.e. is regular. It is known that it satisfies (E.6) if ψ −1 is locally integrable on R d (see [9, Example 1.5.2]).
For a given Hunt process X on E and an open set D ⊂ E we denote by X D the Hunt process on D which is a part of the process X on D (see [9, Appendix A.2] for details).

Proof of Proposition 7.2. Let us first observe that the proof can be reduced to the case E = R d . This follows from the fact that if X is a Hunt process associated with the form (E, D[E]) on E = R d and if A µ is an additive functional of X D associated with a smooth measure µ on D then A µ t = A μ̃ t∧τ D , t ≥ 0, where A μ̃ is the additive functional of X associated with the measure μ̃ being the extension of µ to R d by putting zero on R d \ D.
Let u ∈ D e [E] and let u ε = j ε * u. Then
E(u ε , u ε ) = R dû ε (x) ·ū ε (x) · ψ(x) dx = R d |û| 2 (x) · |ĵ ε | 2 (x)ψ(x) dx ≤ R d |û| 2 (x)ψ(x) dx = E(u, u). (7.6)
Observe that for every α ≥ 0, R α u(x) = R d G α (x − y)u(y) dy = (G α * u)(x). (7.7)
Hence R α u ε = G α * u ε = G α * (u * j ε ) = j ε * (G α * u) = j ε * (R α u).
In particular, for every u ∈ D(A), j ε * u ∈ D(A) and −A(j ε * u) = j ε * (−Au). (7.8) Assume that u ∈ D(A) and write u n = j 1/n * u. Applying (7.8) we get u n → u in E. Now assume that u ∈ D[E]. Then by (7.6), u n − u E ≤ u n − j 1/n * (αR α u) E + j 1/n * (αR α u) − αR α u E + αR α u − u E ≤ 2 αR α u − u E + j 1/n * (αR α u) − αR α u E .
Letting n → ∞ and then α → ∞ we conclude from the above inequality that j 1/n * u → u in E. Finally, assume that u ∈ D e [E]. Then there exists a sequence {u k } ⊂ D[E] such that u k − u E → 0. Using once again (7.6) we obtain u n − u E ≤ u n − j 1/n * u k E + j 1/n * u k − u k E + u − u k E ≤ 2 u − u k E + j 1/n * u k − u k E .
Letting n → ∞ and then k → ∞ shows that j 1/n * u → u in E. Accordingly, for every u ∈ D e [E], j 1/n * u → u in E. (7.9) Let µ ∈ S (0) 0 and let u n , u be solutions of the problems −Au n = j 1/n * µ, −Au = µ. (7.10)
Then by (7.7), u = Rµ, u n = R(j 1/n * µ) = j 1/n * (Rµ).
Since µ ∈ S (0) 0 , we have Rµ ∈ D e [E], so by (7.9), u n → u in E.

Now let u, u n be solutions of (7.10) with µ ∈ M 0,b . Let {F k } be a nest such that 1 F k · µ ∈ S (0) 0 for k ≥ 1, and let u k n , u k be solutions of −Au k n = j 1/n * (1 F k · µ), −Au k = 1 F k · µ. (7.13)
From the probabilistic interpretation of equations (7.10), (7.13) and calculations leading to (5.9) it follows that for every β ∈ S (0) 00 such that β(E) = 1 and every q ∈ (0, 1), E β sup t≥0 |u k n (X t ) − u n (X t )| q ≤ c(q, β) µ k − µ T V , E β sup t≥0 |u(X t ) − u k (X t )| q ≤ c(q, β) µ k − µ T V , (7.14)
where µ k = 1 F k · µ. For u ∈ B(E) put |u| [q] sup = E β sup t≥0 |u(X t )| q . Then by (7.12) and (7.14), lim n→∞ |u n − u| [q] sup ≤ lim
n→∞ (|u n − u k n | [q] sup + |u k n − u k | [q] sup + |u k − u| [q] sup ) ≤ 2c(q, β) µ k − µ T V + lim n→∞ |u k n − u k | [q] sup = 2c(q, β) µ k − µ T V .
Letting k → ∞ shows that |u n − u| [q] sup → 0 as n → ∞, which completes the proof of Proposition 7.2. ✷

Theorem 7.3. Let (E, D[E]) be the form defined by (7.5) such that (E.7) and Meyer's hypothesis (L) are satisfied. Assume that g satisfies (H), µ ∈ M b , µ d is nontrivial and by u n , u denote solutions of the problems −Au n = g(u n ) · (j 1/n * µ), u n > 0, −Au = g(u) · µ d , u > 0. (7.15) Then u n → u in the topology of m-a.e. convergence as n → ∞.
Proof. Follows from Theorem 6.5 applied to the sequences {µ n = j 1/n * µ c } and {ν n = j 1/n * µ d }. The assumptions of Theorem 6.5 for {µ n } are satisfied by Theorem 7.1, whereas the assumptions for {ν n } are satisfied thanks to Proposition 7.2. ✷

We see that in the limit equation (7.15) the whole concentrated part of µ disappears. This and the fact that (7.15) has a unique solution makes it legitimate to call u satisfying (7.15) the solution of (1.1). With this definition in mind, Theorem 7.3 may be viewed as an existence theorem for equation (1.1).
Remark 3.4. (i) Hypothesis (L) is satisfied if there exists a Borel measurable function r α : E × E → R + such that for every f ∈ L 2 (E; m), R α f = E f (y)r α (·, y) m(dy), m-a.e. It is therefore clear that operators from Remark 3.1 satisfy (L). (ii) Hypothesis (L) is also called "absolute continuity condition". For equivalents for this property see [9, Theorems 4.1.2, 4.2.4].
Proposition 3.7. Assume that µ ∈ R and g : R → R + is continuous and bounded. Then if g is nonincreasing or (E, D[E]) satisfies Meyer's hypothesis (L) then there exists a solution of the equation −Au = g(u) · µ. (3.3)
(3.8) is satisfied, which when combined with [15, Theorem 2.2, Propositions 2.4 and 4.3] implies that {u n } has a subsequence convergent q.e. ✷

4 Regularity of solutions

Definition. We say that a sequence {u n } of measurable functions is convergent quasi-uniformly to a function u if for every ε > 0, lim n→∞ CAP({|u n − u| > ε}) = 0. (4.1)
Lemma 4.4. Assume that µ, µ n ∈ S. Let {u n } be a sequence of quasi-continuous functions such that 0 ≤ u n ≤ c for some c > 0 and u n → u quasi-uniformly. Then for every positive η ∈ L 2 (E; m) and every α > 0, E u n R α η dµ n → E uR α η dµ. (4.4)
lim α→∞ t 0 αe −α(t−r) e −λr θ(X r ) dr = e −λt θ(X t ) and t 0 αe −α(t−r) e −λr θ(X r ) dr ≤ 2e −λt for α ≥ λ. Therefore applying the Lebesgue dominated convergence theorem we get lim α→∞ u n (x; λ, θ, α) = u n (x; λ, θ) for q.e. x ∈ E. Observe that u p−1 n (λ, θ, α) ∞ ≤ Rµ n p−1 ∞ := c(n).
Rµ ∈ L ∞ (E; m). By the Meyer-Tanaka formula and (1.2),
Example 4.7. Let (E, D[E]) be the form defined by (7.5) with A = ∆ α/2 for some α ∈ (0, 2] and bounded domain D ⊂ R d . (i) We first give two examples of µ ∈ S (0) 00 . For instance, if µ = f · m and f ∈ L p (D; m) with p > d/α then f · m ∈ S (0) 00 . Now, let α = 2 and let µ denote the Riemannian volume measure on some (d − 1)-dimensional submanifold Σ of D. Then extending µ by zero to the whole D we get µ ∈ S (0) 00 . Therefore if µ ∈ M + 0,b and u is a solution of (1.1) then u (γ+1)/2 ∈ H α/2 0 (D), where H α/2 0 (D) = {u ∈ L 2 (R d ; dx); u = 0 on R d \ D and R d |û(x)| 2 |x| α dx < ∞} and û denotes the Fourier transform of u.
Proposition 5.2. Assume that (E, D[E]) satisfies Meyer's hypothesis (L) and g satisfies (1.2) with γ = 1. Let µ ∈ M +
Lemma 5.3. If u ∈ D e [E] then u ∈ D and for every {T n } ⊂ T such that T n ր T ≥ ζ, u(X Tn ) → 0, P x -a.s. for q.e. x ∈ E.
t , P x -a.s. for q.e. x ∈ E, where
D e,U [E] = {u ∈ D e [E] : u = 0 q.e. on E \ U }. It is known (see [9, Theorems 4.4.3, 4.4.4]) that the pair (E, D e,U [E]) is again a regular transient symmetric Dirichlet form. By {R U α , α ≥ 0} we denote the resolvent associated with (E, D e,U [E]). For a compact set F ⊂ U we denote by e U F the equilibrium function associated with (E, D e,U [E]) and F . By [9, Theorem 2.1.5], e U F is quasi-continuous and e U F = 1 q.e. on F, 0 ≤ e U F ≤ 1 q.e., e U F ∈ D e,U [E] ⊂ D e [E]. The last property implies that e U F = 0 q.e. on E \ U .

Theorem 5.4. Assume that (E, D[E]) satisfies Meyer's hypothesis (L). Let µ ∈ M + b , {µ n } ⊂ M + 0,b
By [7, Lemma 94, page 306] there exists a subsequence (still denoted by (n)) such that u n → u, m-a.e., (5.11)
By [2, Theorem IV.4.22] and [9, Theorem 5.1.4] there exists a smooth measure ν̃ such that (u γ 1 + . . . + u γ n )/n → u γ in (E, D e [E]) as n → ∞. By this and [9, Theorem 2.1.4] we may assume that the above convergence holds q.e. Hence (u γ 1 + . . . + u γ n )/n → u γ , β-a.e. From this we easily deduce that v = u, β-a.e. Consequently, (Rβ, ν n ) → (Rβ, ν̃). (5.17) By Dynkin's formula (see [9, (4.4.3)]) and [9, Section 2.3], for every open U ⊂ E,
Π = {F ⊂ E : F compact, ν(∂F ) = 0}. Then Π is a π-system and σ(Π) = B(E).
Lemma 6.2. Let u be a quasi-continuous function. Then lim k→∞ Cap({u > k}) = 0. Proof. Let {F n } be a nest such that u is continuous on F n for every n ≥ 1. Since (E, D[E]) is a regular Dirichlet form, the capacity Cap generated by (E, D[E]) is tight (see [19, Remark IV.3.2]), i.e. there exists a nest {F m } of compact subsets of E such that Cap(E \ F m ) → 0 as m → ∞. By subadditivity of the capacity Cap,
Theorem 6.5. Assume that g satisfies (H). Let {µ n } ⊂ R be nontrivial, {ν n } ⊂ M + 0,b
Consider the form (B, D[B]) on L 2 (R d ; dx) defined as B(u, v) = R d û(x) v̂(x) ψ(x) dx, D[B] = {u ∈ L 2 (E; dx); R d |û(x)| 2 ψ(x) dx < ∞}, (7.4) where û stands for the Fourier transform of u. It is well known (see [9, Example 1.4.1]) that (B, D[B]) is a symmetric regular Dirichlet form on L 2 (R d ; dx). An important example of ψ of the form (7.3) is ψ(x) = φ(|x| 2 ), x ∈ R d , where φ : (0, ∞) → [0, ∞) is a Bernstein function, i.e. a smooth function such that (−1) n D n φ ≤ 0 for n ≥ 1. In this case the operator A associated with the form (B, D[B]) is equal to φ(∆). For instance, A = ∆ α/2 for φ(x) = x α/2 with α ∈ (0, 2].

It is well known (see [9, Example 1.4.1]) that if (B, D[B]) satisfies Meyer's hypothesis (L) then the α-Green function G α (·, ·) has the property that G α (x, y) = G α (x − y), x, y ∈ R d for some real function G α defined on R d . For an arbitrary open set D ⊂ R d let (E, D[E]) denote the part of (B, D[B]) on D, i.e. D[E] = {u ∈ D[B] : u = 0, m-a.e. on R d \ D}, E(u, v) = B(u, v), u, v ∈ D[E]. (7.5) By [9, Theorems 4.4.3, 4.4.4], (E, D[E]) is a symmetric regular Dirichlet form on L 2 (D; dx). For instance, if (B, D[B]) is defined by (7.4) with ψ(x) = |x| α for some α ∈ (0, 2] and A D is the operator associated with (E, D[E]) then the solution of the problem −A D u = f with f ∈ L 2 (D; dx) may be interpreted as a solution of the Dirichlet problem −∆ α/2 u = f in D, u(x) = 0 for x ∈ R d \ D.
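As a numerical aside (ours, not from the paper), the Bernstein property (−1)^n D^n φ ≤ 0 stated above can be spot-checked for φ(x) = x^{1/2} (the case α = 1) via central finite differences; the helper names are hypothetical.

```python
import math

def nth_derivative(f, x, n, h=1e-2):
    """Central finite-difference estimate of the n-th derivative of f at x."""
    return sum((-1) ** k * math.comb(n, k) * f(x + (n / 2 - k) * h)
               for k in range(n + 1)) / h ** n

phi = lambda x: math.sqrt(x)  # candidate Bernstein function phi(x) = x^{1/2}

# (-1)^n D^n phi <= 0 for the first few derivative orders, checked at x = 4
for n in (1, 2, 3):
    assert (-1) ** n * nth_derivative(phi, 4.0, n) <= 1e-8
```

The exact derivatives of x^{1/2} at x = 4 (1/4, −1/32, 3/256, ...) alternate in sign exactly as the definition requires.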
Proposition 7.2. Assume that (E, D[E]) defined by (7.5) satisfies Meyer's hypothesis (L). Let µ ∈ M 0,b and µ n be defined by (7.1). Then µ n uAF − −− → µ as n → ∞.
u n − u m 2 E = (−A(u n − u m ), u n − u m ) = (j 1/n * (−Au) − j 1/m * (−Au), u n − u m ) ≤ 2 −Au L 2 u n − u m L 2 .
lim n→∞ E β sup t≥0 |u n (X t ) − u(X t )| α ≤ c(α, β) lim n→∞ u n − u E = 0. (7.12)
1) For a given Dirichlet form (E, D[E]) one can always define the so-called extended Dirichlet space D e [E] as the set of m-measurable functions on E for which there exists an E-Cauchy sequence {u n } ⊂ D[E] convergent m-a.e. to u (the so-called approximating sequence). One can show that for u ∈ D e [E] the limit E(u, u) = lim n→∞ E(u n , u n ) exists and does not depend on the approximating sequence {u n } for u. Each element u ∈ D e [E] has a quasi-continuous version. It is known that (E, D[E]) is transient iff the pair (E, D e [E]) is a Hilbert space. In the latter case for a given measure µ ∈ S (0) 0 inequality (2.1) holds for every u ∈ D e [E].
such that {u n } is convergent m-a.e. and weakly in D e [E] to some function u ∈ D e [E]. Since u n ∈ D e [E], (5.3)
(see [9, Lemma 2.2.7]). By Theorem 4.6 and [15, Propositions 2.4, 2.11], u n ∈ D e [E] and
there exists a subsequence (still denoted by (n))
(see [19, Theorem I.2.6]). By [9, Theorems 2.2.1, 5.1.1] there exists a positive measure µ ∈ S
Acknowledgements

Research supported by National Science Centre Grant No. 2012/07/B/ST1/03508.

Let {F m } be a nest such that v n → v uniformly on F m for every m ≥ 1. For ε > 0 let us choose n(ε, m) so that |v n (x) − v(x)| ≤ ε for x ∈ F m and n ≥ n(ε, m). Then η m k,ε ≤ 1 q.e., η m k,ε = 0 q.e. on E \ C m ε,k . Hence for n ≥ n(ε, m), for q.e. x ∈ E. By [19, Theorem IV.5.28] P x (lim k,m→∞ τ C m k,ε < ζ) = 0 for q.e. x ∈ E. Hence
References

[1] S. Albeverio, G.W. Johnson, Z.-M. Ma, The Analytic Operator-Valued Feynman Integral via Additive Functionals of Brownian Motion, Acta Appl. Math. 42 (1996) 267-295.
[2] M.R. Blumenthal, R.K. Getoor, Markov Processes and Potential Theory, Dover Publications, New York, 2007.
[3] L. Boccardo, L. Orsina, Semilinear elliptic equations with singular nonlinearities, Calc. Var. Partial Differential Equations 37 (2010) 363-380.
[4] H. Brezis, A.C. Ponce, Reduced measures for obstacle problems, Adv. Differential Equations 10 (2005) 1201-1234.
[5] Ph. Briand, B. Delyon, Y. Hu, E. Pardoux, L. Stoica, L p solutions of Backward Stochastic Differential Equations, Stochastic Process. Appl. 108 (2003) 109-129.
[6] C. Dellacherie, P.A. Meyer, Probabilities and Potential, North-Holland, Amsterdam, 1978.
[7] C. Dellacherie, P.A. Meyer, Probabilities and Potential C, North-Holland, Amsterdam, 1988.
[8] L. Dupaigne, A.C. Ponce, Singularities of positive supersolutions in elliptic PDEs, Selecta Math. (N.S.) 10 (2004) 341-358.
[9] M. Fukushima, Y. Oshima, M. Takeda, Dirichlet Forms and Symmetric Markov Processes. Second revised and extended edition, Walter de Gruyter, Berlin, 2011.
[10] J. Jacod, Convergence en loi de semimartingales et variation quadratique, Lecture Notes in Math. 850 (1981) 547-560.
[11] A. Jakubowski, Convergence in Various Topologies for Stochastic Integrals Driven by Semimartingales, Ann. Probab. 24 (1996) 2141-2153.
[12] A. Jakubowski, A Non-Skorohod Topology on the Skorohod Space, Electron. J. Probab. 2 (1997) no. 4, 21 pp.
[13] A. Jakubowski, J. Mémin, G. Pagès, Convergence en loi des suites d'intégrales stochastiques sur l'espace D 1, Probab. Theory Related Fields 81 (1989) 111-137.
[14] T. Klimsiak, Semi-Dirichlet forms, Feynman-Kac functionals and the Cauchy problem for semilinear parabolic equations, J. Funct. Anal. 268 (2015) 1205-1240.
[15] T. Klimsiak, Right Markov processes and systems of semilinear equations with measure data, Potential Anal. 44 (2016) 373-399.
[16] T. Klimsiak, A. Rozkosz, Dirichlet forms and semilinear elliptic equations with measure data, J. Funct. Anal. 265 (2013) 890-925.
[17] T. Klimsiak, A. Rozkosz, Semilinear elliptic equations with measure data and quasi-regular Dirichlet forms, Colloq. Math. 145 (2016) 35-67.
[18] V.A. Liskevich, Yu.A. Semenov, Some Inequalities for Submarkovian Generators and Their Applications to the Perturbation Theory, Proc. Amer. Math. Soc. 119 (1993) 1171-1177.
[19] Z.-M. Ma, M. Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer-Verlag, Berlin, 1992.
[20] M. Marcus, A.C. Ponce, Reduced limits for nonlinear equations with measures, J. Funct. Anal. 258 (2010) 2316-2372.
[21] F. Murat, A. Porretta, Stability properties, existence and nonexistence of renormalized solutions for elliptic equations with measure data, Comm. Partial Differential Equations 27 (2002) 2267-2310.
[22] P. Protter, Stochastic Integration and Differential Equations, 2nd ed., Springer, Berlin, 2004.
[23] J. Serrin, Pathological solutions of elliptic differential equations, Ann. Scuola Norm. Sup. Pisa (3) 18 (1964) 385-387.
[24] M. Sharpe, General Theory of Markov Processes, Academic Press, New York, 1988.
[25] R.E. Showalter, Monotone Operators in Banach Space and Nonlinear Partial Differential Equations, Math. Surveys Monographs 49, Amer. Math. Soc., 1997.
Fast branching algorithm for Cluster Vertex Deletion *

Anudhyan Boral, Marek Cygan, Tomasz Kociumaka, Marcin Pilipczuk

17 Jun 2013. arXiv:1306.3877, doi:10.1007/s00224-015-9631-7, https://arxiv.org/pdf/1306.3877v1.pdf

Abstract. In the family of clustering problems, we are given a set of objects (vertices of the graph), together with some observed pairwise similarities (edges). The goal is to identify clusters of similar objects by slightly modifying the graph to obtain a cluster graph (disjoint union of cliques). Hüffner et al. [Theory Comput. Syst. 2010] initiated the parameterized study of Cluster Vertex Deletion, where the allowed modification is vertex deletion, and presented an elegant O(2^k k^9 + nm)-time fixed-parameter algorithm, parameterized by the solution size. In our work, we pick up this line of research and present an O(1.9102^k (n + m))-time branching algorithm.
Introduction
The problem of clustering objects based on their pairwise similarities has arisen from applications both in computational biology [6] and machine learning [5]. In the language of graph theory, as input we are given a graph whose vertices correspond to objects, and two objects are connected by an edge if they are observed to be similar. The goal is to transform the graph into a cluster graph (a disjoint union of cliques) using a minimum number of modifications.
The set of allowed modifications depends on the particular problem and application considered. Probably the most studied variant is the Cluster Editing problem, also known as Correlation Clustering, where we seek a minimum number of edge edits to obtain a cluster graph. The study of Cluster Editing includes [3,4,13,18,28] and, from the parameterized perspective, [7,8,9,10,11,14,15,17,20,21,22,24,25,26].
The main principle of parameterized complexity is that we seek algorithms that are efficient if the considered parameter is small. However, the distance measure in Cluster Editing, the number of edge edits, may be quite large in practical instances, and, in the light of recent lower bounds refuting the existence of subexponential FPT algorithms for Cluster Editing [17,24], it seems reasonable to look for other distance measures (see e.g. Komusiewicz's PhD thesis [24]) and/or different problem formulations.
In 2008, Hüffner et al. [23] initiated the parameterized study of the Cluster Vertex Deletion problem (ClusterVD for short). Here, the allowed modification is a vertex deletion.
Cluster Vertex Deletion (ClusterVD)
Parameter: k
Input: An undirected graph G and an integer k.
Question: Does there exist a set S of at most k vertices of G such that G \ S is a cluster graph, i.e., a disjoint union of cliques?
In terms of motivation, we want to refute as few objects as possible to make the set of observations completely consistent. As a vertex deletion also removes all its incident edges, we may expect that this new editing measure may be significantly smaller in practical applications than the edge-edit distance.
As ClusterVD can be equivalently stated as the problem of hitting, with a minimum number of vertices, all induced P 3 s (paths on 3 vertices) in the input graph, ClusterVD can be solved in O(3^k (n + m)) time by a straightforward branching algorithm [12], where n and m denote the number of vertices and edges of G, respectively. The dependency on k can be improved by considering a more elaborate case distinction in the branching algorithm, either directly [19], or via a general algorithm for 3-Hitting Set [29]. Hüffner et al. [23] provided an elegant O(2^k k^9 + nm)-time algorithm, using the iterative compression principle [27] and a reduction to the weighted maximum matching problem.
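For concreteness, the straightforward 3-way branching mentioned above can be sketched as follows. This is our own plain recursive illustration (without the linear-time bookkeeping of the cited algorithms): find an induced P 3 , and branch on deleting each of its three vertices.

```python
def find_p3(adj):
    """Return an induced P3 (u, v, w) with uv, vw edges and uw a non-edge, or None."""
    for v in adj:
        for u in adj[v]:
            for w in adj[v]:
                if u != w and w not in adj[u]:
                    return (u, v, w)
    return None

def cluster_vd(adj, k):
    """Decide whether deleting at most k vertices turns adj into a cluster graph."""
    p3 = find_p3(adj)
    if p3 is None:
        return True          # no P3 left: already a cluster graph
    if k <= 0:
        return False         # budget exhausted but a P3 remains
    for x in set(p3):        # any solution must hit this P3
        rest = {a: adj[a] - {x} for a in adj if a != x}
        if cluster_vd(rest, k - 1):
            return True
    return False

def build(edges):
    """Adjacency dict from an edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj
```

Each recursive call spawns at most three subcases with the budget reduced by one, which gives the O(3^k) bound on the search tree.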
In our work we pick up this line of research and obtain the fastest algorithm for (unweighted) ClusterVD.
Theorem 1. Cluster Vertex Deletion can be solved in O(1.9102^k (n + m)) time and polynomial space on an input (G, k) with |V (G)| = n and |E(G)| = m.
Contrary to the algorithm of [23], our algorithm is a typical branching algorithm, where a number of branches and reductions are presented, and the complexity is analysed through (sometimes long) case analysis and branching vectors. The advantage of this approach is that we obtain a linear dependency on the graph size in the running time.
The main observation in the proof of Theorem 1 is that, if, for some vertex v, we know that there exists a solution S not containing v, then in the neighbourhood of v the ClusterVD problem reduces to Vertex Cover. More precisely, define N 1 and N 2 to be the sets of vertices at distance exactly 1 and exactly 2 from v, respectively, and define the auxiliary graph H v to be the graph on N 1 ∪ N 2 having an edge for each edge of G between N 1 and N 2 and for each non-edge inside N 1 in G. In other words, two vertices are connected by an edge in H v iff, together with v, they form a P 3 in G. We observe that a solution S not containing v needs to contain a vertex cover of H v . Moreover, one can show that we may greedily take as many vertices as possible (inclusion-wise) from N 2 into the aforementioned vertex cover, as these vertices would help us resolve the remaining part of the graph.
We note that a similar observation has been already used in [23] to cope with a variant of ClusterVD where we restrict the number of clusters in the resulting graph.
Branching to find the 'correct' vertex cover of H v is a very efficient branching, with worst-case (1, 2) (i.e., golden-ratio) branching vector. However, we do not have the vertex v beforehand, and branching to obtain such a vertex may be quite costly. Thus, our approach is to get as much gain as possible from the vertex cover-style branching on the auxiliary graph H v , to be able to balance the loss from some inefficient branches used to obtain the vertex v to start with. Consequently, we employ quite involved analysis of properties and branching algorithms for the auxiliary graph H v .
The paper is organised as follows. We give some preliminary definitions and notation in Section 2. In Section 3 we analyse the auxiliary graph H v and show a branching algorithm finding all relevant vertex covers of H v . Then, in Section 4 we prove Theorem 1. Section 5 concludes the paper.
Preliminaries
We use standard graph notation. All our graphs are undirected and simple. For a graph G, by V (G) and E(G) we denote its vertex-and edge-set, respectively. For v ∈ V (G), the set N G (v) = {u|uv ∈ E(G)} is the neighbourhood of v in G and N G [v] = N G (v) ∪ {v} is the closed neighbourhood. We extend these notions to sets of vertices X ⊆ V (G) by N G [X] = v∈X N G [v] and N G (X) = N G [X] \ X. We omit the subscript if it is clear from the context. For a set X ⊆ V (G) we also define G[X] to be the subgraph induced by X and G \ X is a shorthand for
G[V (G) \ X]. A set X ⊆ V (G) is called a vertex cover of G if G \ X is edgeless. By MinVC(G)
we denote the size of the minimum vertex cover of G.
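Since MinVC is only ever needed on small auxiliary graphs later on, a brute-force computation suffices for experimentation. The following exhaustive sketch is ours, for illustration only (exponential in the vertex count):

```python
from itertools import combinations

def min_vc(vertices, edges):
    """Smallest vertex cover by exhaustive search over subsets of increasing size."""
    for r in range(len(vertices) + 1):
        for cand in map(set, combinations(sorted(vertices), r)):
            if all(u in cand or w in cand for u, w in edges):
                return cand
    return set(vertices)  # unreachable: the full vertex set always covers
```

Searching subsets in order of increasing size guarantees the first cover found is a minimum one, so MinVC(G) is simply its cardinality.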
In all further sections, we assume we are given an instance (G, k) of Cluster Vertex Deletion, where G = (V, E). That is, we use V and E to denote the vertex-and edge-set of the input instance G.
A P 3 is an ordered triple of vertices (u, v, w) such that uv, vw ∈ E and uw / ∈ E. A graph is a cluster graph iff it does not contain any P 3 ; hence, in ClusterVD we seek a set of at most k vertices that hits all P 3 s.
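The equivalence between "no induced P 3 " and "disjoint union of cliques" is easy to confirm mechanically. The sketch below (ours) checks both characterizations against each other on every graph with 4 labelled vertices:

```python
from itertools import combinations

def has_induced_p3(adj):
    """True iff some ordered triple (u, v, w) has uv, vw in E and uw not in E."""
    return any(u != w and w not in adj[u]
               for v in adj for u in adj[v] for w in adj[v])

def components_are_cliques(adj):
    """Direct definition of a cluster graph: every connected component is a clique."""
    seen = set()
    for s in adj:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:              # DFS to collect the component of s
            x = stack.pop()
            for y in adj[x]:
                if y not in comp:
                    comp.add(y)
                    stack.append(y)
        seen |= comp
        if any(b not in adj[a] for a in comp for b in comp if a != b):
            return False
    return True

def graphs_on(n):
    """Enumerate all graphs on n labelled vertices as adjacency dicts."""
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):
        adj = {v: set() for v in range(n)}
        for i, (a, b) in enumerate(pairs):
            if mask >> i & 1:
                adj[a].add(b)
                adj[b].add(a)
        yield adj
```

All 64 graphs on 4 vertices agree on the two predicates, as expected.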
If at some point a vertex v is fixed in the graph G, we define sets N 1 = N 1 (v) and N 2 = N 2 (v) as follows:
N 1 = N G (v) and N 2 = N G (N G [v]
). That is, N 1 and N 2 are the sets of vertices at distance exactly 1 and exactly 2 from v, respectively. For a fixed v ∈ V , we define an auxiliary graph
H v with V (H v ) = N 1 ∪ N 2 and E(H v ) = {uw|u, w ∈ N 1 , uw / ∈ E} ∪ {uw|u ∈ N 1 , w ∈ N 2 , uw ∈ E}.
Thus, H v consists of the vertices in N 1 and N 2 along with non-edges among vertices of N 1 and edges between N 1 and N 2 . Observe the following.
Lemma 2. For u, w ∈ N 1 ∪ N 2 , we have uw ∈ E(H v ) iff u, w and v form a P 3 in G.
Proof. For every uw ∈ E(H v ) with u, w ∈ N 1 , (u, v, w) is a P 3 in G. For uw ∈ E(H v ) with u ∈ N 1 and w ∈ N 2 , (v, u, w) forms a P 3 in G. In the other direction, for any P 3 in G of the form (u, v, w) we have u, w ∈ N 1 and uw / ∈ E, thus uw ∈ E(H v ). Finally, for any P 3 in G of the form (v, u, w) we have u ∈ N 1 , w ∈ N 2 and uw ∈ E, hence uw ∈ E(H v ).
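As a quick machine check of Lemma 2 (the example graph and helper names are ours, not from the paper), we can build H v directly from the definition and compare its edges against the P 3 s of G that involve v:

```python
from itertools import combinations

def build_hv(adj, v):
    """V(H_v) = N1 ∪ N2; E(H_v): non-edges inside N1 plus G-edges between N1 and N2."""
    n1 = set(adj[v])
    n2 = set().union(set(), *(adj[u] for u in n1)) - n1 - {v}
    edges = {frozenset((u, w)) for u, w in combinations(n1, 2) if w not in adj[u]}
    edges |= {frozenset((u, w)) for u in n1 for w in adj[u] & n2}
    return n1 | n2, edges

def spans_p3_with(adj, v, u, w):
    """Is there a P3 in G on {u, w} together with v (v as middle vertex or endpoint)?"""
    triples = [(u, v, w), (v, u, w), (v, w, u)]
    return any(b in adj[a] and c in adj[b] and c not in adj[a] for a, b, c in triples)

# Example: edges va, vb, bc, cd.  Then N1 = {a, b}, N2 = {c}; d is too far away.
ADJ = {"v": {"a", "b"}, "a": {"v"}, "b": {"v", "c"}, "c": {"b", "d"}, "d": {"c"}}
```

On this instance E(H v ) = {ab, bc}: ab because a, b are non-adjacent neighbours of v, and bc because it is a G-edge from N 1 to N 2 ; both correspond to P 3 s through v, as Lemma 2 predicts.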
We call a subset S ⊆ V a modulator when G \ S is a cluster graph, that is, a disjoint union of cliques. A modulator of minimum cardinality is called a solution.
Our algorithm is a typical branching algorithm, that is, it consists of a number of branching steps. In a step (A 1 , A 2 , . . . , A r ), A 1 , A 2 , . . . , A r ⊆ V , we independently consider r subcases. In the i-th subcase we look for a solution S containing A i : we delete A i from the graph and decrease the parameter k by |A i |. If k becomes negative, we terminate the current branch and return a negative answer from the current subcase. For brevity, we sometimes write w instead of {w} in a branching step if A i = {w} for some i.
The branching vector for a step (A 1 , A 2 , . . . , A r ) is the vector (|A 1 |, |A 2 |, . . . , |A r |). It is well known (see e.g. [16]) that the number of final subcases of a branching algorithm is bounded by O(c^k), where c is the largest positive root of the equation 1 = x^{−|A 1 |} + . . . + x^{−|A r |} among all branching steps (A 1 , A 2 , . . . , A r ) in the algorithm.
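This root can be computed numerically; the bisection sketch below is ours, and reproduces, e.g., the golden ratio for the (1, 2) branching vector mentioned in the introduction.

```python
def branching_number(vector, tol=1e-10):
    """Largest positive root c of 1 = sum(c**(-a) for a in vector), by bisection.
    f(c) = sum of c^{-a} is strictly decreasing for c > 1, so the root is unique."""
    f = lambda c: sum(c ** (-a) for a in vector)
    lo, hi = 1.0 + 1e-12, 2.0
    while f(hi) > 1.0:          # grow the bracket until f(hi) <= 1
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For the vector (1, 2) the equation 1 = 1/c + 1/c^2 gives c^2 = c + 1, i.e. the golden ratio, and the naive 3-way branch (1, 1, 1) gives c = 3, matching the O(3^k) bound above.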
The auxiliary graph H v
In this section we investigate properties of the auxiliary graph H v . Hence, we assume that a ClusterVD input (G, k) is given with G = (V, E), and a vertex v ∈ V is fixed. We start with a few basic properties and then we build on them an efficient branching algorithm for ClusterVD, if we know there exists a solution not containing v.

Basic properties

Lemma 3. Let G be a connected graph which is not a clique. Then, for every v ∈ V (G), there is a P 3 containing v.

Proof. Consider N (v). If there exist vertices u, w ∈ N (v) such that uw / ∈ E(G) then we have a P 3 (u, v, w). Otherwise, since N [v] induces a clique, we must have w ∈ N (N [v]) such that uw ∈ E(G) for some u ∈ N (v). Thus we have a P 3 , (v, u, w), involving v.

Lemma 4. Let S be a modulator such that v / ∈ S. Then S ∩ V (H v ) is a vertex cover of H v .

Proof. Observe that if S is a modulator, then G \ S does not contain a P 3 . By Lemma 2, if v / ∈ S, no edge may remain in H v \ S and the lemma follows.

Lemma 5. Let X be a vertex cover of H v . Then, in G \ X, the connected component of v is a clique.
Proof. Suppose the connected component of v in G \ X is not a clique. Then by Lemma 3, there is a P 3 involving v. Such a P 3 is also present in G. However, by Lemma 2, as X is a vertex cover of H v , X intersects such a P 3 , a contradiction.
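The case analysis in the proof of Lemma 3 is constructive and can be turned directly into a search procedure; a possible sketch (the helper name and the dict-of-sets graph representation are our own, and we assume the graph is connected):

```python
def p3_through(adj, v):
    """Find a path on three vertices (a P3) containing v, following the
    case analysis of Lemma 3. `adj` maps a vertex to its set of
    neighbours; returns None iff the (connected) component of v is a
    clique."""
    nv = adj[v]
    nv_list = sorted(nv)
    # Case 1: two neighbours of v that are themselves non-adjacent.
    for i, u in enumerate(nv_list):
        for w in nv_list[i + 1:]:
            if w not in adj[u]:
                return (u, v, w)  # P3 with middle vertex v
    # Case 2: N[v] induces a clique; any edge leaving N[v] yields a P3.
    for u in nv_list:
        for w in adj[u]:
            if w != v and w not in nv:
                return (v, u, w)
    return None
```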
Lemma 6. Let S be a modulator such that v ∉ S, and denote by X the set S ∩ V(H_v). Let Y be a vertex cover of H_v, and suppose that X ∩ N_2 ⊆ Y ∩ N_2. Then T := (S \ X) ∪ Y is also a modulator.

Proof. Since Y (and hence, T ∩ V(H_v)) is a vertex cover of H_v and v ∉ T, we know by Lemma 5 that the connected component of v in G \ T is a clique. If T is not a modulator, then there must be a P_3 contained in Z \ T, where Z = V \ ({v} ∪ N_1). But since S ∩ Z ⊆ T ∩ Z, G \ S would also contain such a P_3, a contradiction.
For vertex covers X and Y of H_v, we say that X dominates Y if |X| ≤ |Y|, X ∩ N_2 ⊇ Y ∩ N_2, and at least one of these inequalities is sharp. Two vertex covers X and Y are said to be equivalent if X ∩ N_2 = Y ∩ N_2 and |X ∩ N_1| = |Y ∩ N_1|. We note that the first relation is transitive and strongly anti-symmetric, whereas the second is an equivalence relation.
As a corollary of Lemma 6, we have:
Corollary 7. Let S be a modulator such that v ∉ S. Suppose Y is a vertex cover of H_v which either dominates or is equivalent to the vertex cover X = S ∩ V(H_v). Then T := (S \ X) ∪ Y is also a modulator with |T| ≤ |S|.
Branching algorithm
We are now ready to develop a branching algorithm that guesses the 'correct' vertex cover of H_v. Recall that we are working in the setting where we look for a solution to ClusterVD on (G, k) not containing v; by Lemma 4, such a solution contains a vertex cover of H_v. Our goal is to branch into a number of subcases, in each subcase picking a vertex cover of H_v. By Corollary 7, to be correct our branching algorithm only needs to generate at least one element from each equivalence class of the 'equivalent' relation among the vertex covers that are maximal in the 'dominate' relation.
The algorithm consists of a number of branching steps; in each subcase of each step we take a number of vertices into the constructed vertex cover of H_v and, consequently, into the constructed solution to ClusterVD on G. At any point, the first applicable rule is applied.

First, we disregard isolated vertices in H_v. Second, we take care of large-degree vertices.
Rule 1. If there is a vertex u ∈ V(H_v) with degree at least 3 in H_v, include either u or N_{H_v}(u) into the vertex cover. That is, use the branching step (u, N_{H_v}(u)).

Note that Rule 1 yields a branching vector (1, d), where d ≥ 3 is the degree of u in H_v.
Henceforth, we can assume that all vertices have degree 1 or 2 in H_v. Assume there exists u ∈ N_1 of degree 1, with uw ∈ E(H_v). Moreover, assume there exists a solution S containing u. If w ∈ S, then, by Lemma 6, S \ {u} is also a modulator, a contradiction. Otherwise, if w ∈ N_2, then (S \ {u}) ∪ {w} dominates S. Finally, if w ∈ N_1, then (S \ {u}) ∪ {w} is equivalent to S.
Hence, we infer the following greedy rule.
Rule 2. If there is a vertex u ∈ N_1 of degree 1 in H_v, include N_{H_v}(u) into the vertex cover. That is, use the branching step (N_{H_v}(u)).

Now we assume vertices in N_1 are of degree exactly 2 in H_v. Suppose we have vertices u, w ∈ N_1 with uw ∈ E(H_v). We would like to branch on u as in Rule 1, including either u or N_{H_v}(u) into the vertex cover. However, note that in the case where u is deleted, Rule 2 is triggered on w and consequently the other neighbour of w is deleted. Hence, we infer the following rule.
Rule 3. If there are vertices u, w ∈ N_1 with uw ∈ E(H_v), then include either N_{H_v}(w) or N_{H_v}(u) into the vertex cover. That is, use the branching step (N_{H_v}(w), N_{H_v}(u)).
Note that Rule 3 yields the branching vector (2, 2). We are left with the case where the maximum degree of H_v is 2, there are no edges with both endpoints in N_1, and there are no vertices of degree one in N_1. Hence H_v must be a collection of even cycles and paths (recall that N_2 is an independent set in H_v). On each such cycle C, with 2l vertices, the vertices of N_1 and N_2 alternate. Note that we must use at least l vertices in any vertex cover of C. By Lemma 6 it is optimal to greedily select the l vertices in C ∩ N_2.
Rule 4. If there is an even cycle C in H_v with every second vertex in N_2, include C ∩ N_2 into the vertex cover. That is, use the branching step (C ∩ N_2).
For an even path P of length 2l, we have two choices. If we are allowed to use l + 1 vertices in the vertex cover of P, then, by Lemma 6, we may greedily take P ∩ N_2. If we may use only l vertices, the minimum possible number, we need to choose P ∩ N_1, as it is the unique vertex cover of size l of such a path. Hence, we have an (l, l + 1) branch with our last rule.
Rule 5. Take the longest possible even path P in H_v and include either P ∩ N_1 or P ∩ N_2 into the vertex cover. That is, use the branching step (P ∩ N_1, P ∩ N_2).
In Rule 5, we pick the longest possible path to avoid the branching vector (1, 2) as long as possible; this is the worst branching vector in the algorithm of this section.
When we are forced to use the (1, 2) branch, we exploit a very specific structure of H_v. A seagull is a connected component of H_v that is isomorphic to a P_3 with its middle vertex in N_1 and its endpoints in N_2. The graph H_v is called an s-skein if it is a disjoint union of s seagulls and some isolated vertices. The following observation is straightforward from the above analysis.

Lemma 8. If the algorithm of Section 3.2 may only use a branch with the branching vector (1, 2), then H_v is an s-skein for some s ≥ 1.
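Recognizing this structure is straightforward from the definition; a possible sketch (the helper name and the pair representation of components as (vertices, edges) are ours):

```python
def skein_size(components, N1, N2):
    """Return s if the graph is an s-skein (a disjoint union of s
    seagulls plus isolated vertices), and None otherwise. Each
    component is a pair (vertices, edges); a seagull is a P3 whose
    middle vertex lies in N1 and whose endpoints lie in N2."""
    s = 0
    for verts, edges in components:
        if len(verts) == 1 and not edges:
            continue                      # isolated vertex, ignore
        if len(verts) != 3 or len(edges) != 2:
            return None                   # not a P3, hence no seagull
        deg = {u: 0 for u in verts}
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        mid = [u for u in verts if deg[u] == 2]
        ends = [u for u in verts if deg[u] == 1]
        if len(mid) != 1 or mid[0] not in N1 or any(e not in N2 for e in ends):
            return None
        s += 1
    return s
```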
We conclude this section with a note on how fast a single branching step may be executed. Note that, as H_v contains parts of the complement of G, it may have size superlinear in the size of G. However, it is easy to see that the oracle procedure of Lemma 9 suffices to find and execute the lowest-numbered applicable branching step in the graph H_v.
Algorithm
In this section we show our algorithm for ClusterVD, proving Theorem 1. The algorithm is a typical branching algorithm, where at each step we choose one branching rule and apply it. In each subcase, a number of vertices is deleted, and the parameter k drops by this number. If k becomes negative, the current subcase is terminated with a negative answer. On the other hand, if k is nonnegative and G is a cluster graph, the vertices deleted in this subcase form a modulator of size at most k.
Preprocessing
At each step, we first preprocess simple connected components of G.
Lemma 10. In linear time, we can for each connected component C of G:
1. conclude that C is a clique; or
2. conclude that C is not a clique, but identify a vertex w such that C \ {w} is a cluster graph; or
3. conclude that none of the above holds.
Proof. On each connected component C, we perform a depth-first search. At every stage, we ensure that the set of already marked vertices induces a clique.
When we enter a new vertex w, adjacent to a marked vertex v, we attempt to maintain this invariant. We check whether the number of marked vertices equals the number of neighbours of w which are marked; if so, the new vertex w is marked. Since w is then adjacent to every marked vertex, the set of marked vertices remains a clique. Otherwise, there is a marked vertex u such that uw ∉ E(G), and we may discover it by iterating once again over the edges incident to w. In this case, we have discovered a P_3 (u, v, w) and C is not a clique. At least one of u, v, w must be deleted to make C into a cluster graph. We delete each one of them in turn, and repeat the algorithm (without further recursion) to check if the remaining graph is a cluster graph. If one of the three possibilities returns a cluster graph, then (2) holds. Otherwise, (3) holds.
If we have marked all vertices in a component C while maintaining the invariant that marked vertices form a clique, then the current component C is a clique.
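Equivalently, a connected component C is a clique exactly when every vertex of C has degree |C| − 1 inside it. A compact, non-incremental version of this test (the helper name and representation are ours; the paper's procedure additionally extracts a P_3 witness) might look like:

```python
def component_is_clique(adj, comp):
    """A connected component `comp` (list of vertices) of a graph `adj`
    (dict: vertex -> set of neighbours) is a clique iff every vertex of
    the component has degree len(comp) - 1 within it; this runs in time
    linear in the number of edges of the component."""
    members = set(comp)
    return all(len(adj[v] & members) == len(comp) - 1 for v in comp)
```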
For each connected component C that is a clique, we disregard C. For each connected component C that is not a clique, but such that C \ {w} is a cluster graph for some w, we may greedily delete w from G: we need to delete at least one vertex from C, and w hits all P_3s in C. Thus, henceforth we assume that for each connected component C of G and for each v ∈ V(C), C \ {v} is not a cluster graph. In other words, we assume that we need to delete at least two vertices to solve each connected component of G.
Studying H_v
Once preprocessing is no longer possible, we fix an arbitrary vertex v in G, and let C be its connected component. Our goal is to 'resolve' the neighbourhood of v: either decide to delete v, or guess the 'correct' vertex cover of H_v. However, if we implement this in a straightforward manner, we do not get the time bound promised by Theorem 1. To achieve this bound, we carefully study the cases where H_v has a small vertex cover or a special structure, and discover some greedy decisions that can be made.
We would like to make a decision depending on the size of the minimum vertex cover of H_v. As C is not a clique, by Lemma 3 H_v contains at least one edge, thus MinVC(H_v) ≥ 1. We first note that small vertex covers of H_v can be detected in linear time.
Lemma 11. In linear time, we can determine whether H_v has a minimum vertex cover of size 1, of size 2, or of size at least 3. Moreover, in the first two cases we can find such a vertex cover in the same time bound.
Proof. We use Lemma 9 to find, in linear time, a vertex w of degree at least 3 in H_v, or to generate H_v explicitly.

In the latter case, H_v has vertices of degree at most 2, so H_v consists of paths and cycles and we can find the size of the minimum vertex cover in linear time. We use the fact that paths with l vertices require ⌊l/2⌋ vertices, and cycles with l vertices require ⌈l/2⌉ vertices in the vertex cover. If we find a vertex w of degree at least 3 in H_v, then w must be in any vertex cover of size at most 2: otherwise, N_{H_v}(w) would have to be in the vertex cover, but |N_{H_v}(w)| ≥ 3. We proceed to delete w and restart the algorithm of Lemma 9 on the remaining graph to check if it has a vertex cover of size 0 or 1. We perform at most 2 such restarts. Finally, if we do not find a vertex cover of size at most 2, it must be the case that the minimum vertex cover contains at least 3 vertices.
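The count used in this proof is easy to state as code; a small sketch under our own naming (a path on l vertices needs ⌊l/2⌋ cover vertices, a cycle on l vertices needs ⌈l/2⌉):

```python
from math import ceil

def min_vc_deg2(paths, cycles):
    """Minimum vertex cover size of a disjoint union of paths and
    cycles (the maximum-degree-2 case used in Lemma 11), given as
    lists of vertex counts."""
    return sum(l // 2 for l in paths) + sum(ceil(l / 2) for l in cycles)
```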
We now make a few important observations about H_v that will enable us to make some greedy choices later on.

Lemma 12. Suppose X is a vertex cover of H_v. Then there is a solution S such that either v ∉ S or |X \ S| ≥ 2.
Proof. Suppose S is a solution such that v ∈ S and |X \ S| ≤ 1. Consider T := (S \ {v}) ∪ X. Clearly, |T| ≤ |S|. Since T contains X, a vertex cover of H_v, by Lemma 5 the connected component of v in G \ T is a clique. Thus, there is no P_3 containing v in G \ T. Since any P_3 in G \ T which does not include v must also be contained in G \ S, contradicting the fact that S is a modulator, we obtain that T is also a modulator. Hence, T is a solution.
Corollary 13. If MinVC(H_v) = 1, then there is a solution S not containing v.

Proof. Let X be a minimum vertex cover of H_v, and let S be a solution promised by Lemma 12 for the vertex cover X. Then v ∉ S, as |X \ S| ≤ |X| = 1.

Proof (of Lemma 14). Assume the contrary. Consider a component C' of C \ {v} which is not a clique. Since v must be adjacent to each connected component of C \ {v}, C' ∩ N_1 must be non-empty. For any w ∈ C' ∩ N_1, we have w_1, w_2 ≠ w and ww_1, ww_2 ∉ E, since otherwise the result follows. If uw ∈ E with u ∈ N_2, then, as {w_1, w_2} is a vertex cover, we must have u = w_1 or u = w_2; we would then have w_1 or w_2 contained in the non-clique component C', contradicting our assumption. Hence uw ∈ E implies u ∈ N_1, and thus C' ⊆ N_1. As w_1 and w_2 are not contained in C' and they cover all edges of H_v, C' must be an independent set in H_v. In G \ {v}, therefore, C' must be a clique, a contradiction.
Lemma 15. Let v ∈ V. Suppose that H_v is an s-skein. Then there is a solution S such that v ∉ S.

Proof. Let H_v consist of the seagulls (x_1, y_1, z_1), (x_2, y_2, z_2), ..., (x_s, y_s, z_s). That is, the middle vertices y_i are in N_1, while the endpoints x_i and z_i are in N_2. If s = 1, then {y_1} is a vertex cover of H_v and Corollary 13 yields the result. Henceforth, we assume s ≥ 2. As X, consider the set N_1 with all vertices isolated in H_v removed. Clearly X is a vertex cover of H_v, thus we may use X as in Lemma 12 and obtain a solution S. If v ∉ S we are done, so let us assume |X \ S| ≥ 2. Take an arbitrary i such that y_i ∈ X \ S. As |X \ S| ≥ 2, we may pick another j ≠ i with y_j ∈ X \ S. The crucial observation from the definition of H_v is that (y_j, y_i, x_i) and (y_j, y_i, z_i) are P_3s in G. As y_i, y_j ∉ S, we have x_i, z_i ∈ S. Hence, since the choice of i was arbitrary, we infer that for each 1 ≤ i ≤ s either y_i ∈ S or x_i, z_i ∈ S, and, consequently, S contains a vertex cover of H_v. By Lemma 5, S \ {v} is also a modulator in G, a contradiction.
Branching steps
We are now ready to present the branching steps of our algorithm. We assume the preprocessing (Lemma 10) is done and a vertex v is picked. We first run the algorithm of Lemma 11 to determine whether H_v has a small minimum vertex cover. Second, we run the algorithm of Lemma 9 to check whether H_v is an s-skein for some s.
We consider the following cases.

1. MinVC(H_v) = 1, or H_v is an s-skein. By Corollary 13 or Lemma 15, respectively, there is a solution not containing v, so we invoke the algorithm of Section 3.2 on H_v.

2. MinVC(H_v) = 2 and H_v is not an s-skein; let {w_1, w_2} be a minimum vertex cover of H_v. We branch into two cases: we look for a solution containing v or not containing v. In the first case, we delete v from the graph and decrease k by one. Then we check whether the connected component containing w_1 or w_2 is not a clique; by Lemma 14, for some w ∈ {w_1, w_2}, the connected component of G \ {v} containing w is not a clique, and finding such w clearly takes linear time. We invoke the algorithm of Section 3.2 on H_w. In the second case, we invoke the algorithm of Section 3.2 on H_v.

3. MinVC(H_v) ≥ 3 and H_v is not an s-skein. We branch into two cases: we look for a solution containing v or not containing v. In the first branch, we simply delete v and decrease k by one. In the second branch, we invoke the algorithm of Section 3.2 on H_v.
Complexity analysis
In the previous discussion we have argued that invoking each branching step takes linear time. As in each branch we decrease the parameter k by at least one, the depth of the recursion is at most k. In this section we analyse the branching vectors occurring in our algorithm. To finish the proof of Theorem 1, we need to show that the largest positive root of the equation 1 = Σ_{i=1}^r x^{-a_i} among all possible branching vectors (a_1, a_2, ..., a_r) is strictly less than 1.9102.
As the number of resulting branching vectors in the analysis is rather large, we use a Python script for automated analysis (attached in the appendix). The main reason for the large number of branching vectors is that we need to analyse branchings on the graph H_v in the case when we consider v not to be included in the vertex cover. Let us now proceed with formal arguments.
In a few places, the algorithm of Section 3.2 is invoked on the graph H_v and we know that MinVC(H_v) ≥ h for some integer h. Consider the branching tree T of this algorithm. For a node x ∈ V(T), the depth of x is the number of vertices of H_v deleted on the path from x to the root. We mark some nodes of T. Each node of depth less than h is marked. Moreover, if a node x is of depth d < h and the branching step at node x has branching vector (1, 2), we infer that the graph H_v at this node is an s-skein for some s ≥ h − d, and all descendants of x in V(T) are also nodes with branching steps with vectors (1, 2). In this case, we mark all descendants of x that are within distance (in T) less than h − d. Note that in this way we may mark some descendants of x of depth equal to or larger than h.
We split the analysis of an application of the algorithm of Section 3.2 into two phases: the first one contains all branching steps performed on marked nodes, and the second on the remaining nodes. In the second phase, we simply observe that each branching step has branching vector not worse than (1,2). In the first phase, we aim to write a single branching vector summarizing the phase, so that with its help we can balance the loss from other branches when v is deleted from the graph.
The main property of the marked nodes in T is that their existence is granted by the assumption MinVC(H_v) ≥ h. That is, each leaf of T has depth at least h, and, if at some node x of depth d < h the graph H_v is an s-skein, we infer that s ≥ h − d (as the size of a minimum vertex cover of an s-skein is s) and the algorithm performs s independent branching steps with branching vectors (1, 2) in this case. Overall, no leaf of T is marked.
To analyse such branchings for h = 2 and h = 3 we employ the Python script supplied in the appendix. The procedure branch_Hv generates all possible branching vectors for the first phase, assuming the algorithm of Section 3.2 is allowed to pick branching vectors (1), (1, 3), (2, 2) or (1, 2) (the option allow_skein enables/disables the use of the (1, 2) vector in the first phase). Note that all other vectors described in Section 3.2 may be simulated by applying a number of vectors (1) after one of the aforementioned branching vectors.
Let us now move to the analysis of the algorithm of Section 4.3.
In Case 1, the algorithm of Section 3.2 performs branchings with vectors not worse than (1, 2). Consider now Case 2. If v is deleted, we apply the algorithm of Section 3.2 to H_w, yielding at least one branching step (as the connected component with w is not a clique). Hence, after this first branching step, we have either one subcase with parameter drop at least 2, or two subcases with parameter drops at least 2 and at least 3. Clearly, the second case yields the worse branching vector.

If v is not deleted, the algorithm of Section 3.2 is applied to H_v. The script invokes the procedure branch_Hv with h = 2 and allow_skein=False to obtain a list of possible branching vectors. For each such vector, we append the entries (2, 3) from the subcase when v is deleted.

Case 3 is analysed analogously. The script invokes the procedure branch_Hv with h = 3 and allow_skein=False to obtain a list of possible branching vectors. For each such vector, we append the entry (1) from the subcase when v is deleted.
We infer that the largest root of the equation 1 = Σ_{i=1}^r x^{-a_i} occurs for the branching vector (1, 3, 3, 4, 4, 5) and is less than 1.9102. This branching vector corresponds to Case 3, where the algorithm of Section 3.2, invoked on H_v, first performs a branching step with the vector (1, 3) and, in the branch with 1 deleted vertex, finds H_v to be a 2-skein and performs two independent branching steps with vectors (1, 2). This analysis concludes the proof of Theorem 1.
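The final bound is easy to double-check numerically without scipy; a self-contained sketch (bisection; the helper name is ours):

```python
def root(vector, iters=200):
    """Bisection for the unique root x > 1 of 1 = sum_i x^(-a_i);
    the right-hand side is strictly decreasing on (1, infinity)."""
    f = lambda x: sum(x ** -a for a in vector)
    lo, hi = 1.0, 4.0  # f(1) >= 1 and f(4) < 1 for this vector
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 1 else (lo, mid)
    return (lo + hi) / 2

c = root([1, 3, 3, 4, 4, 5])  # worst vector reported in the analysis
assert c < 1.9102             # the bound claimed for Theorem 1
```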
Conclusions and open problems
We have presented a new branching algorithm for Cluster Vertex Deletion. We hope our work will trigger a race for faster FPT algorithms for ClusterVD, as happened in the case of the famous Vertex Cover problem. Repeating after Hüffner et al. [23], we would like to re-pose here the question of a linear vertex kernel for ClusterVD. As ClusterVD is a special case of the 3-Hitting Set problem, it admits an O(k^2)-vertex kernel in the unweighted case and an O(k^3)-vertex kernel in the weighted one [1, 2]. However, Cluster Editing is known to admit a much smaller 2k-vertex kernel, so there is hope for a similar result for ClusterVD.
Lemma 4. Let S be a modulator such that v ∉ S. Then S contains a vertex cover of H_v.
Lemma 9. Given a designated vertex v ∈ V, one can in linear time either compute a vertex w of degree at least 3 in H_v, together with its neighbourhood in H_v, or explicitly construct the graph H_v.

Proof. First, mark the vertices of N_1 and N_2. Second, for each vertex of G compute its number of neighbours in N_1 and in N_2. This information, together with |N_1|, suffices to compute the degrees of vertices in H_v. Hence, we may identify a vertex of degree at least 3 in H_v, if one exists. For such a vertex w, computing N_{H_v}(w) takes time linear in the size of G. If no such vertex w exists, the complement of G[N_1] has size linear in |N_1| and we may construct H_v in linear time in a straightforward manner.
Lemma 14. Suppose that C \ {v} is not a cluster graph, where C is the connected component containing v. Suppose further that X = {w_1, w_2} is a minimum vertex cover of H_v. Then in G \ {v}, either the connected component containing w_1 is not a clique, or the connected component containing w_2 is not a clique.
Also available at www.mimuw.edu.pl/~malcin/research/cvd
Python script automating complexity analysis

    import scipy.optimize

    def value(vector):
        """compute the value of a branching vector"""
        def h(x):
            return sum([x**(-v) for v in vector]) - 1
        return scipy.optimize.brenth(h, 1, 100)

    def join(first, then):
        """perform 'then' in each branch after the execution of 'first'"""
        return [x + y for x in first for y in then]

    def add(a, vector):
        """add a to each element of a vector"""
        return join(
References

[1] F. N. Abu-Khzam. A kernelization algorithm for d-hitting set. J. Comput. Syst. Sci., 76(7):524-531, 2010.
[2] F. N. Abu-Khzam and H. Fernau. Kernels: Annotated, proper and induced. In H. L. Bodlaender and M. A. Langston, editors, IWPEC, volume 4169 of Lecture Notes in Computer Science, pages 264-275. Springer, 2006.
[3] N. Ailon, M. Charikar, and A. Newman. Aggregating inconsistent information: Ranking and clustering. Journal of the ACM, 55(5):23:1-23:27, 2008.
[4] N. Alon, K. Makarychev, Y. Makarychev, and A. Naor. Quadratic forms on graphs. In Proceedings of the 37th ACM Symposium on Theory of Computing (STOC 2005), pages 486-493. ACM, 2005.
[5] N. Bansal, A. Blum, and S. Chawla. Correlation clustering. Machine Learning, 56:89-113, 2004.
[6] A. Ben-Dor, R. Shamir, and Z. Yakhini. Clustering gene expression patterns. Journal of Computational Biology, 6(3/4):281-297, 1999.
[7] S. Böcker. A golden ratio parameterized algorithm for cluster editing. Journal of Discrete Algorithms, 16:79-89, 2012.
[8] S. Böcker, S. Briesemeister, Q. B. A. Bui, and A. Truß. A fixed-parameter approach for weighted cluster editing. In Proceedings of the 6th Asia-Pacific Bioinformatics Conference (APBC 2008), volume 6 of Advances in Bioinformatics and Computational Biology, pages 211-220, 2008.
[9] S. Böcker, S. Briesemeister, and G. W. Klau. Exact algorithms for cluster editing: Evaluation and experiments. Algorithmica, 60(2):316-334, 2011.
[10] S. Böcker and P. Damaschke. Even faster parameterized cluster deletion and cluster editing. Information Processing Letters, 111(14):717-721, 2011.
[11] H. L. Bodlaender, M. R. Fellows, P. Heggernes, F. Mancini, C. Papadopoulos, and F. A. Rosamond. Clustering with partial information. Theoretical Computer Science, 411(7-9):1202-1211, 2010.
[12] L. Cai. Fixed-parameter tractability of graph modification problems for hereditary properties. Inf. Process. Lett., 58(4):171-176, 1996.
[13] M. Charikar and A. Wirth. Maximizing quadratic programs: Extending Grothendieck's inequality. In Proceedings of the 45th Symposium on Foundations of Computer Science (FOCS 2004), pages 54-60. IEEE Computer Society, 2004.
[14] P. Damaschke. Fixed-parameter enumerability of cluster editing and related problems. Theory of Computing Systems, 46(2):261-283, 2010.
[15] M. R. Fellows, J. Guo, C. Komusiewicz, R. Niedermeier, and J. Uhlmann. Graph-based data clustering with overlaps. Discrete Optimization, 8(1):2-17, 2011.
[16] F. Fomin and D. Kratsch. Exact Exponential Algorithms. Texts in Theoretical Computer Science. Springer Berlin Heidelberg, 2010.
[17] F. V. Fomin, S. Kratsch, M. Pilipczuk, M. Pilipczuk, and Y. Villanger. Tight bounds for parameterized complexity of cluster editing. In N. Portier and T. Wilke, editors, STACS, volume 20 of LIPIcs, pages 32-43. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2013.
[18] I. Giotis and V. Guruswami. Correlation clustering with a fixed number of clusters. In Proceedings of the 17th Symposium on Discrete Algorithms (SODA 2006), pages 1167-1176. ACM Press, 2006.
[19] J. Gramm, J. Guo, F. Hüffner, and R. Niedermeier. Automated generation of search tree algorithms for hard graph modification problems. Algorithmica, 39(4):321-347, 2004.
[20] J. Gramm, J. Guo, F. Hüffner, and R. Niedermeier. Graph-modeled data clustering: Exact algorithms for clique generation. Theory of Computing Systems, 38(4):373-392, 2005.
[21] J. Guo, I. A. Kanj, C. Komusiewicz, and J. Uhlmann. Editing graphs into disjoint unions of dense clusters. Algorithmica, 61(4):949-970, 2011.
[22] J. Guo, C. Komusiewicz, R. Niedermeier, and J. Uhlmann. A more relaxed model for graph-based data clustering: s-plex cluster editing. SIAM Journal of Discrete Mathematics, 24(4):1662-1683, 2010.
[23] F. Hüffner, C. Komusiewicz, H. Moser, and R. Niedermeier. Fixed-parameter algorithms for cluster vertex deletion. Theory Comput. Syst., 47(1):196-217, 2010.
[24] C. Komusiewicz. Parameterized Algorithmics for Network Analysis: Clustering & Querying. PhD thesis, Technische Universität Berlin, 2011. Available at http://fpt.akt.tu-berlin.de/publications/diss-komusiewicz.pdf.
[25] C. Komusiewicz and J. Uhlmann. Alternative parameterizations for cluster editing. In Proceedings of the 37th International Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM 2011), volume 6543 of Lecture Notes in Computer Science, pages 344-355. Springer, 2011.
[26] F. Protti, M. D. da Silva, and J. L. Szwarcfiter. Applying modular decomposition to parameterized cluster editing problems. Theory of Computing Systems, 44(1):91-104, 2009.
[27] B. A. Reed, K. Smith, and A. Vetta. Finding odd cycle transversals. Oper. Res. Lett., 32(4):299-301, 2004.
[28] R. Shamir, R. Sharan, and D. Tsur. Cluster graph modification problems. Discrete Applied Mathematics, 144(1-2):173-182, 2004.
[29] M. Wahlström. Algorithms, measures, and upper bounds for satisfiability and related problems. PhD thesis, Linköping Studies in Science and Technology, 2007. Available at http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8714.
Adversarial Task Assignment

Chen Hajaj ([email protected]) and Yevgeniy Vorobeychik ([email protected])
Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN

May 2018 · DOI: 10.24963/ijcai.2018/526 · arXiv:1804.11221 (https://arxiv.org/pdf/1804.11221v2.pdf)
The problem of assigning tasks to workers is of long-standing fundamental importance. Examples of this include the classical problem of assigning computing tasks to nodes in a distributed computing environment, assigning jobs to robots, and crowdsourcing. Extensive research into this problem generally addresses important issues such as uncertainty and incentives. However, the problem of adversarial tampering with the task assignment process has not received as much attention.We are concerned with a particular adversarial setting where an attacker may target a set of workers in order to prevent the tasks assigned to these workers from being completed. When all tasks are homogeneous, we provide an efficient algorithm for computing the optimal assignment. When tasks are heterogeneous, we show that the adversarial assignment problem is NP-Hard, and present an algorithm for solving it approximately. Our theoretical results are accompanied by extensive experiments showing the effectiveness of our algorithms.
Introduction
The problem of allocating a set of tasks among a collection of workers has been a fundamental research question in a broad array of domains, including distributed computing, robotics, and, recently, crowdsourcing [2,25,17]. Despite the extensive interest in the problem, however, there is little prior work on task assignment in settings where workers may be attacked. Such adversarial task assignment problems can arise, for example, when tasks are of high economic or political consequence, such as in robotic rescue missions following terror activities, or crowdsourcing to determine which executables are malicious or benign, or which news stories constitute fake news. We investigate the adversarial task assignment problem in which a rational external attacker targets one or more workers after tasks have already been assigned. Equivalently, this can be viewed as a robust task assignment problem with unknown uncertainty about worker failures. We formalize the interaction between the attacker and requester (defender) as a Stackelberg game in which the defender first chooses an assignment, and the attacker subsequently attacks a set of workers so as to maximize the defender's losses from the attack. We seek a strong Stackelberg equilibrium (SSE) of this game and focus on computing an optimal robust assignment.
Our analysis begins with a setting in which tasks are homogeneous, that is, all tasks have the same utility for the defender (e.g., rescue soldiers from a battlefield, or label a large dataset of images). We characterize the optimal structure of a robust assignment, and use this insight to develop an algorithm that extracts this assignment in time linear in the number of tasks and targets, and quadratic in the number of workers. We show that this algorithm significantly outperforms several baselines, and obtains a good solution even when no adversary is present.
Next, we turn to heterogeneous task settings. This case, it turns out, is considerably more challenging. Specifically, we show that it may be beneficial to assign more than a single worker to a task. Moreover, even if we impose a restriction that only a single worker can be assigned to a task (optimal when tasks are homogeneous), extracting the optimal assignment is strongly NP-Hard. To overcome this issue, we propose an integer programming approach for solving the restricted problem, as well as an algorithm for finding an approximately optimal assignment in the general case. Again, our experiments show that our approach significantly outperforms several baselines.
Related Work The problem of task assignment in adversarial settings has been considered from several perspectives. One major stream of literature is about robots acting in adversarial environments. Alighanbari and How [1] consider assigning weapons to targets, somewhat analogous to our problem, but do not model the decision of the adversary; their model also has rather different semantics than ours. Robotic soccer is another common adversarial planning problem, although the focus is typically on coordination among multiple robots when two opposing teams are engaged in coordination and planning [14].
Another major literature stream which considers adversarial issues is crowdsourcing. One class of problems concerns how many workers to hire [4], the issue of individual worker incentives in truthfully responding to questions [22], or the amount of effort workers devote to a task [27,10,17], rather than adversarial reasoning per se. Another, more directly adversarial, setting considers situations where some workers simply answer questions adversarially [12,24]. However, the primary interest of that work is robust estimation when tasks are assigned randomly or exogenously, rather than task assignment itself. Similarly, prior research on machine learning when a portion of data is adversarially poisoned [5,28,11,6,18] focuses primarily on the robust estimation problem, not task assignment; in addition, it does not take advantage of structure in the data acquisition process, where workers, rather than individual data points, are attacked. Other works [13,3] focus on changes to the system after the assignment process and on the structure of the social network, rather than on the assignment process itself.
Our work has a strong connection to the literature on Stackelberg security games [8,16,26]. However, the mathematical structure of our problem is quite different. For example, we have no protection resources to allocate, and instead, the defender's decision is about assigning tasks to potentially untrusted workers.
Model
Consider an environment populated with a single requester (hereafter denoted by "defender"), a set of n workers, W , a set of m tasks, T , and an adversary. Furthermore, each worker w ∈ W is characterized by a capacity constraint c w , which is the maximum number of tasks it can be assigned, and an individual proficiency, or probability of successfully completing a task, denoted by p w . Worker proficiencies are assumed to be common knowledge to both the defender and the attacker. Such proficiencies can be learned from experience [21,9,19]; moreover, in many settings, they are provided by the task assignment (e.g., crowdsourcing) platform in the form of a reputation system [20].
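As an illustrative sketch of this outcome model (the function and its name are ours, not from the paper), each worker's task outcomes can be simulated as independent Bernoulli draws with success probability p_w:

```python
import random

def simulate_outcomes(proficiencies, num_tasks, seed=0):
    """For each worker w, draw Bernoulli(p_w) success indicators,
    one per task it would attempt."""
    rng = random.Random(seed)
    return {w: [rng.random() < p for _ in range(num_tasks)]
            for w, p in enumerate(proficiencies)}
```

A worker with proficiency 1.0 succeeds on every task, one with proficiency 0.0 on none; intermediate values succeed at roughly rate p_w.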
For exposition purposes, we index the workers by integers i in decreasing order of their proficiency, so that P = (p 1 , . . . , p n ) s.t. p i ≥ p j ∀i < j, and denote the set of k most proficient workers by W k . Thus, the capacity of worker i would be denoted by c i . Each task t ∈ T is associated with a utility u t that the defender obtains if this task is completed successfully. If the task is not completed successfully, the defender obtains zero utility from it.
We focus on the common case where the defender faces a budget constraint of making at most B ≤ m assignments; the setting with B > m necessitates different algorithmic techniques, and is left for future work. The defender's fundamental decision is the assignment of tasks to workers. Formally, an assignment s specifies a subset of tasks T ′ (s) and the set of workers, W t (s) assigned to each task t ∈ T ′ (s).
Suppose that multiple workers are assigned to a task t, and let L t (s) denote the labels returned by workers in W t (s) for t (for example, these could simply indicate whether a worker successfully completed the task). Then the defender determines the final label to assign to t (e.g., whether or not the task has been successfully completed) according to some deterministic mapping δ : L t (s) → l (e.g., the majority label), such that L ∈ {1, . . . , j t } |Wt(s)| and l ∈ {1, . . . , j t }. Naturally, whenever a single worker w is assigned to a task and returns a label l w , δ(l w ) = l w . Let ι t be the (unknown) correct label corresponding to a task t; this could be an actual label, such as the actual object in an image, or simply a constant 1 if we are only interested in successful completion of the task. The defender's expected utility when assigning a set of tasks T ′ (s) to workers and obtaining the labels is then
$$u_{\mathrm{def}}(s) = \sum_{t \in T'(s)} u_t \Pr\{\delta(L_t(s)) = \iota_t\}, \tag{1}$$
where the probability is with respect to worker proficiencies (and the resulting stochastic realizations of their outcomes). It is immediate that in our setting, if there is no adversary and no capacity constraints for the workers, all tasks should be assigned to the worker with the highest p w . Our focus, however, is how to optimally assign workers to tasks when there is an intelligent adversary who may subsequently (to the assignment) attack a set of workers. In particular, we assume that there is an adversary (attacker) with the goal of minimizing the defender's utility u def ; thus, the game is zero-sum. To this end, the attacker chooses a set of τ workers to attack, for example, by deploying a cyber attack against the corresponding computer nodes, physical attacks on search and rescue robots, or attacks against the devices on which the human workers perform their tasks. Alternatively, our goal is to be robust to τ -worker failures (e.g., N − τ robustness [7]). We encode the attacker's strategy by a vector α, where α w = 1 iff a worker w is attacked (and Σ w α w = τ since τ workers are attacked). The adversary's attack takes place after the tasks have already been assigned to workers; the attacker knows the actual assignment of tasks to workers before deploying the attack, and the consequence of an attack on a worker w is that all tasks assigned to w fail to be successfully completed.
Clearly, when an attacker is present, the policy of assigning all tasks to the most competent worker (when there are no capacity constraints) will yield zero utility for the defender, as the attacker will simply attack the worker to whom all the tasks are assigned. The challenge of how to split the tasks up among workers, trading off quality against robustness to attacks, is the subject of our inquiry. Formally, we aim to compute a strong Stackelberg equilibrium of the game between the defender (leader), who chooses a task-to-worker assignment policy, and the attacker (follower), who subsequently attacks a set of workers [23].
Homogeneous tasks
We start by considering tasks which are homogeneous, that is, u t = u t ′ for any two tasks t, t ′ . Without loss of generality, suppose that all u t = 1. Note that since all tasks share the same utility, if B < m, the defender is indifferent regarding the identity of tasks being assigned. Further, it is immediate that we never wish to waste budget, since assigning a worker always results in non-negative marginal utility. Consequently, we can simply randomly subsample B tasks from the set of all tasks, and consider the problem with m = B.
We overload the notation and use s = {s 1 , . . . , s n } to denote the number of tasks allocated to each worker. Although the space of deterministic assignments is large, we now observe several properties of optimal assignments which allow us to devise an efficient algorithm for this problem. Proposition 1. Suppose that tasks are homogeneous. For any assignment s there is a weakly utility-improving assignment s ′ for the defender which assigns each task to a single worker.
Proof. Consider an assignment s and the corresponding best response by the attacker, α, in which a worker w̄ is attacked. Let a task t̄ be assigned to a set of workers W t̄ with |W t̄ | = k ≥ 2. Then there must be another task t ′ which is unassigned. Now consider a worker w ∈ W t̄ . Since utility is additive, we can consider just the marginal utility of any worker w ′ to the defender and attacker; denote this by u w ′ . Let T w ′ be the set of tasks assigned to a worker w ′ under s.

Let $u_w = \sum_{t \in T_w} u^M_{wt}$, where $u^M_{wt} = u_t \Pr\{\delta(L_t(s)) = \iota_t\} - u_t \Pr\{\delta(L_t(s) \setminus L^w_t) = \iota_t\}$ is the marginal utility of worker w towards a task t. Clearly, u w ≤ u w̄ , since the attacker is playing a best response.

Suppose that we reassign w from t̄ to t ′ . If w = w̄, the attacker will still attack w (since the utility of w to the attacker can only increase), and the defender is indifferent. If w ≠ w̄, there are two cases: (a) the attacker still attacks w̄ after the change, and (b) the attacker now switches to attack w. Suppose the attacker still attacks w̄. The defender's net gain is p w − u M wt̄ ≥ 0. If, instead, the attacker now attacks w, the defender's net gain is u w̄ − u w ≥ 0.
Consequently, we can restrict the set of assignments to those which assign a single worker per task; we denote this restricted set of assignments by S. Given an assignment s ∈ S and the attack strategy α, the defender's expected utility is:
$$u_{\mathrm{def}}(s, \alpha) = \sum_{w \in W} s_w p_w (1 - \alpha_w) \tag{2}$$
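As a quick sketch (our own helper, for unit-utility tasks): against a best-responding attacker, Eq. (2) reduces to dropping the τ largest per-worker contributions s_w p_w and summing the rest:

```python
def defender_utility(s, p, tau):
    """Defender's utility per Eq. (2) against a best-responding attacker:
    the tau workers with the largest contributions s_w * p_w are attacked,
    and the defender keeps the remaining contributions."""
    contrib = sorted(sw * pw for sw, pw in zip(s, p))
    return sum(contrib[:len(contrib) - tau])
```

For instance, with loads (2, 2) and proficiencies (0.9, 0.8), a single-target attacker removes the 1.8 contribution and the defender keeps 1.6.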
Next, we show that there is always an optimal assignment that assigns tasks to the k most proficient workers, for some k.
Proposition 2.
In an optimal assignment s, suppose that s i > 0 for i > 1. Then there must be an optimal assignment in which s i−1 > 0.
Proof. Consider an optimal assignment s and the attacker's best response α, in which W̄ is the set of workers being attacked. Now, consider moving 1 task from i to i − 1.

We denote the updated set of workers attacked (due to this change) as W̄ ′ . Suppose that i ∈ W̄ , that is, worker i was initially attacked. If i − 1 ∈ W̄ ′ , there are two options: 1) i ∈ W̄ ′ (i.e., i is still being attacked), and hence the net gain to the defender does not change; and 2) i ∉ W̄ ′ , and hence the net gain to the defender is p i (|T i | − 1) ≥ 0. If i − 1 ∉ W̄ ′ , the net gain is p i−1 > 0. Suppose that i ∉ W̄ . If i − 1 is now attacked, the net gain is p w (|T w | − 1) ≥ 0 (where w ∈ W̄ and w ∉ W̄ ′ ). Otherwise (i.e., i − 1 ∉ W̄ ′ ), the net gain is p i−1 − p i ≥ 0.
We can now present an algorithm for computing an optimal assignment (Algorithm 1), which has complexity O(n²mτ). The intuition behind the algorithm is to consider each worker i as a potential target of an attack, and then compute the best assignment subject to the constraint that i is attacked (i.e., that p i s i ≥ p j s j for all other workers j ≠ i). Subject to this constraint, we consider all possible numbers of tasks that can be assigned to i, and then assign as many tasks as possible to the other workers in order of their proficiency (where the τ workers that contribute the most to the defender's utility are attacked). The only special case (Steps 7-10) is when assigning the last worker: in this case, it may be beneficial to alternate the last two workers' assignments to obtain a more beneficial overall assignment. Optimality follows from the fact that we exhaustively search over possible targets and allocation policies for them, and assign as many tasks as possible to the most effective workers.
Next, we turn to show that Algorithm 1 computes an optimal SSE commitment when tasks are homogeneous. For readability, we denote the worker associated with the highest utility as w max , i.e., s w max p w max ≥ s w p w ∀w ∈ W . We focus on the case where at least one worker is not attacked as otherwise all possible assignments result in u def = 0.
Algorithm 1:
1: u max ← 0
2: for i ∈ {1, . . . , n} do
3:   for s i ∈ {1, . . . , c i } do
4:     Υ i ← s i p i , B ← m − s i
5:     for j ∈ {1, . . . , n} \ i do
6:       s j ← min(⌊(p i /p j ) s i ⌋, B, c j ), B ← B − s j
7:       if j < n ∧ B + 1 ≤ min((p i /p j+1 ) s i − 1, c j+1 ) then
8:         s ′ ← s, s ′ j ← s j − 1
9:         if u def (s, α) ≤ u def (s ′ , α ′ ) + p j+1 then
10:          s j ← s j − 1, B ← B + 1
11:      Υ j ← s j p j
12:    Sort Υ in ascending order
13:    util ← Σ_{k=1}^{n−τ} Υ k
14:    if util > u max then
15:      u max ← util, s * ← s
16: return s *

Proof. Since Algorithm 1 iterates over all possible numbers of tasks assigned to this worker (Step 3), if assigning fewer (or more) tasks were profitable, the corresponding assignment would have been produced by the algorithm.
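A minimal Python sketch of this exhaustive search, under our own naming and omitting the look-ahead refinement of Steps 7-10 (so it may differ from the full algorithm on borderline instances):

```python
from math import floor

def assign_homogeneous(p, c, m, tau):
    """Sketch of Algorithm 1's search: try every candidate target worker i
    and every load s_i for it; greedily load the other workers without
    letting any of them overtake worker i's contribution.
    p: proficiencies (descending), c: capacities, m: #tasks, tau: #targets."""
    n = len(p)
    best_util, best_s = -1.0, None
    for i in range(n):
        for s_i in range(1, min(c[i], m) + 1):
            s = [0] * n
            s[i] = s_i
            budget = m - s_i
            for j in range(n):
                if j == i:
                    continue
                # largest s_j with s_j * p_j <= s_i * p_i, capped by budget/capacity
                s[j] = min(floor(s_i * p[i] / p[j]), budget, c[j])
                budget -= s[j]
            # the attacker removes the tau largest contributions s_w * p_w
            contrib = sorted(sw * pw for sw, pw in zip(s, p))
            util = sum(contrib[:n - tau])
            if util > best_util:
                best_util, best_s = util, s
    return best_util, best_s
```

For instance, with proficiencies (0.9, 0.8), ample capacities, m = 4 tasks and τ = 1, the sketch balances the loads at (2, 2), so the surviving worker still contributes 1.6.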
Proposition 4. Given an assignment s and two (arbitrary) workers j and k, such that p k < p j and α j = 0 (i.e., j is not attacked under s), the defender cannot move a task from k to j without making j a target.
Proof. Assume, for contradiction, that j can be assigned an additional task while remaining a non-target. On each iteration, Algorithm 1 assigns worker i with s i tasks (Step 3); then, each time the algorithm reaches Step 5, it assigns some other worker the maximal number of tasks such that this worker does not contribute more to the defender's utility than i does. In particular, this is also the case for j. Hence, by assigning j an additional task, it must become a target, contradicting the assumption. Theorem 1. Algorithm 1 computes an optimal SSE commitment when tasks are homogeneous.
Proof. Assume, for contradiction, that there exists some assignment s ′ ≠ s and a corresponding attack strategy, α ′ , such that u def (s ′ , α ′ ) > u def (s, α). There are 16 different ways to make a single change in an assignment (as detailed below). For each of these changes, we prove that such an assignment is either not possible or does not improve the defender's expected utility, contradicting the assumption above. For readability of the proof, we denote by w s i worker i under assignment s.
1.
Move a task from a non-target to a non-target (w s 1 / ∈ T , w s ′ 1 / ∈ T , w s 2 / ∈ T , w s ′ 2 / ∈ T ): If p 1 ≥ p 2 , this change can either leave the utility as is (if p 1 = p 2 ) or decrease it (assigning a task to a less proficient worker), hence contradicting the assumption. Otherwise, if p 1 < p 2 , then by Proposition 4 a more proficient non-target cannot be assigned an additional task and remain a non-target.
2. w s 1 / ∈ T , w s ′ 1 / ∈ T , w s 2 / ∈ T , w s ′ 2 ∈ T :
Moving a task from a non-target worker to a less proficient non-target (i.e., if p 1 ≥ p 2 ) cannot make the less proficient worker a target (w s ′ 2 ∈ T is not possible in this case). Otherwise, if p 1 < p 2 , w 2 will become the worker associated with the highest utility under s ′ following this change. Still, if this assignment resulted in a higher utility for the defender, it would have been the output of Algorithm 1. Since it is not the output, u def (s, α) ≥ u def (s ′ , α), contradicting the assumption.
3. w s 1 / ∈ T , w s ′ 1 / ∈ T , w s 2 ∈ T , w s ′ 2 / ∈ T :
A worker that is currently being attacked cannot be assigned an additional task and no longer be attacked.
4. w s 1 / ∈ T , w s ′ 1 / ∈ T , w s 2 ∈ T , w s ′ 2 ∈ T :
Moving a task from a non-target worker to a target will only reduce the utility of the defender, contradicting the assumption.
5. w s 1 / ∈ T , w s ′ 1 ∈ T , w s 2 / ∈ T , w s ′ 2 /
∈ T : A non-target worker cannot give a task and become a target.
6. w s 1 / ∈ T , w s ′ 1 ∈ T , w s 2 / ∈ T , w s ′ 2 ∈ T :
A non-target worker cannot give a task and become a target.
7. w s 1 / ∈ T , w s ′ 1 ∈ T , w s 2 ∈ T , w s ′ 2 /
∈ T : A non-target worker cannot give a task and become a target.
8. w s 1 / ∈ T , w s ′ 1 ∈ T , w s 2 ∈ T , w s ′ 2 ∈ T :
A non-target worker cannot give a task and become a target. 9. w s 1 ∈ T , w s ′ 1 / ∈ T , w s 2 / ∈ T , w s ′ 2 / ∈ T : Since w 2 is assigned an additional task and is still not attacked, it must be the least proficient assigned worker. Thus, the new target (instead of w 1 ) is some other worker that results in the highest utility for the defender. By Proposition 3, if this step were beneficial, the resulting assignment would have been produced by Algorithm 1, which considers each worker as w max .
10. w s 1 ∈ T , w s ′ 1 / ∈ T , w s 2 / ∈ T , w s ′ 2 ∈ T :
If p 2 ≥ p 1 , this change makes w 2 the worker who contributes the most. By Proposition 3, if this step were beneficial, the resulting assignment would have been produced by Algorithm 1, which considers each worker as w max . Otherwise, if p 1 > p 2 , then s 1 < s 2 . The gain from this switch is (s 1 − 1)p 1 − s 2 p 2 . This gain will be positive only if s 1 p 1 − p 1 > s 2 p 2 . Still, this is only possible if w 2 is the least proficient worker assigned (the difference of any other worker k from being a target is at most p k ). If w 2 is the least proficient assigned worker and becomes the target, this implies that w 2 was assigned the maximal number of tasks such that it is not a target under s. Hence, s 2 p 2 > s 1 p 1 − p 1 , contradicting the assumption. 11. w s 1 ∈ T , w s ′ 1 / ∈ T , w s 2 ∈ T , w s ′ 2 / ∈ T : A worker that is currently being attacked cannot be assigned an additional task and no longer be attacked.
12. w s 1 ∈ T , w s ′ 1 / ∈ T , w s 2 ∈ T , w s ′ 2 ∈ T : Since w 1 is no longer a target (and w 2 was initially a target), there exists some other worker w ′ that was not a target prior to the change but becomes one due to it (i.e., it now contributes more than w s ′ 1 ). Hence, the defender's utility decreases due to this change, contradicting the assumption. 13. w s 1 ∈ T , w s ′ 1 ∈ T , w s 2 / ∈ T , w s ′ 2 / ∈ T : Assigning another task to a non-target and keeping it a non-target is only possible if w 2 is the least proficient worker assigned (otherwise, this worker would become a target). Still, the difference between the utility of each assigned worker k (besides, perhaps, the least proficient worker) and the worker who contributes the most, w max , is at most p k . By reducing the number of tasks assigned to w 1 , its difference from w max becomes more than p k for some worker k ∈ W . Hence, there is no way that w 1 is assigned one task fewer and remains a target.
14. w s 1 ∈ T , w s ′ 1 ∈ T , w s 2 / ∈ T , w s ′ 2 ∈ T : w 2 becomes a target instead of some other worker. There are two possible cases: 1) w 2 is the least proficient worker assigned. Note that the difference between the utility of each assigned worker k (besides, perhaps, the least proficient worker) and the worker who contributes the most, w max , is at most p k . By reducing the number of tasks assigned to w 1 , its difference from w max becomes more than p k for some worker k ∈ W . Hence, there is no way that w 1 is assigned one task fewer and remains a target. 2) w 2 is some other worker. This means that w 2 = w max (under the new assignment). Still, if this assignment resulted in a higher utility for the defender, it would have been the output of Algorithm 1. Since it is not the output, u def (s, α) ≥ u def (s ′ , α), contradicting the assumption. 15. w s 1 ∈ T , w s ′ 1 ∈ T , w s 2 ∈ T , w s ′ 2 / ∈ T : A worker that is currently being attacked cannot be assigned an additional task and no longer be attacked. 16. w s 1 ∈ T , w s ′ 1 ∈ T , w s 2 ∈ T , w s ′ 2 ∈ T : Since both workers remain targets, u def (s, α) = u def (s ′ , α ′ ), contradicting the assumption.
Finally, since we have shown that no single change is profitable from the defender's point of view, and any possible change in assignments can be represented as a sequence of single changes, we conclude that Algorithm 1 computes an optimal SSE commitment when tasks are homogeneous.
Heterogeneous tasks
It turns out that the more general problem in which utilities are heterogeneous is considerably more challenging than the homogeneous case. First, we show that even if the tasks' utilities are only slightly different, it may be beneficial to assign the same task to multiple workers. Consider an environment populated with 2 workers and 2 tasks. WLOG, we order the tasks by their utility, i.e., u t1 > u t2 . Regardless of the workers' proficiencies, assigning one worker per task will result in an expected utility of min(p i u t1 , p j u t2 ). On the other hand, assigning both workers to t 1 will result in an expected utility of min(p i u t1 , p j u t1 ), which is always at least as high. Aside from the considerably greater computational challenge associated with solving problems with heterogeneous utilities suggested by this example, there is the additional challenge of incorporating (non-linear) decision rules into the optimization problem to resolve disagreement among workers, should it arise.
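The two-worker, two-task argument above can be checked with illustrative numbers (the specific values here are our own, chosen only to make the comparison strict):

```python
# Hypothetical proficiencies and utilities for the argument above.
p_i, p_j = 0.9, 0.6          # worker proficiencies
u_t1, u_t2 = 10.0, 9.0       # task utilities, with u_t1 > u_t2

# One worker per task: a single-target attacker removes the larger contribution.
one_per_task = min(p_i * u_t1, p_j * u_t2)

# Both workers on t1: whichever worker survives still attempts t1.
both_on_t1 = min(p_i * u_t1, p_j * u_t1)

# Duplication is never worse, and here it is strictly better (6.0 vs 5.4).
assert both_on_t1 >= one_per_task
```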
We begin by showing that if B ≤ m, there is an optimal assignment in which only the B tasks associated with the highest utility are included.
Proposition 5. Suppose that tasks are heterogeneous. For any assignment s there is a weakly utility-improving (i.e., results in the same or higher utility) assignment s ′ for the defender which only assigns tasks from the set of tasks with the B highest utilities.
Proof. For readability, we assume that tasks are ordered by their utility in decreasing order (i.e., u i ≥ u j ∀i ≤ j) and that a single worker is assigned per task; the generalization is straightforward. Consider an assignment s and the corresponding best response by the attacker, α, in which the set of workers W̄ is attacked. Let a task t i be s.t. i > B. Then there must be another task t j , s.t. j ≤ B, which is unassigned. Now consider a worker w ∈ W ti . Since utility is additive, we can consider just the marginal utility of any worker w ′ to the defender and attacker; denote this by u w ′ . Let T w ′ be the set of tasks assigned to a worker w ′ under s. Let u w = Σ t∈Tw u M wt , where u M wt is the marginal utility of worker w towards a task t.

Suppose that we reassign w from t i to t j . If w ∈ W̄ , the attacker will still attack w (since the utility of w to the attacker can only increase), and the defender is indifferent. If w ∉ W̄ , there are two cases: (a) the attacker still attacks W̄ after the change, and (b) the attacker now switches to attack w. Suppose the attacker still attacks W̄ . The defender's net gain is p w u j − u M wti ≥ 0. If, instead, the attacker now attacks w, the defender's net gain is u w ′ − u w ≥ 0, where w ′ is the worker that is no longer attacked.
This allows us to restrict attention to the B highest-utility tasks, and assume that m = B.
We now show that the defender's assignment problem, denoted Heterogeneous Task Assignment (HTA), is NP-hard even if we restrict the strategies to assign only a single worker per task. Proposition 6. HTA is strongly NP-hard even when we assign only one worker per task.
Proof. We prove the proposition by reducing the decision version of the Bin Packing problem (BP), which is strongly NP-complete, to the decision version of the HTA problem. In the BP problem we are given a set {o 1 , o 2 , ..., o m } of m objects of sizes {v 1 , v 2 , ..., v m } and a set of n containers {C 1 , C 2 , ..., C n }, each of size γ, and we need to decide whether all the objects can be fitted into the given containers. Our transformation maps the set of m objects to a set of m + 1 tasks T = {t 1 , t 2 , ..., t m+1 } with utilities {v 1 , v 2 , ..., v m , γ} and the set of n containers to a set of n + 1 workers W = {w 1 , w 2 , ..., w n+1 }. We consider the special case where all the workers have the same proficiency p (i.e., p w = p ∀w ∈ W ). The decision version of the HTA problem asks whether there exists an assignment of the m + 1 tasks to the n + 1 workers that achieves a utility of at least pV , where V = Σ m i=1 v i . If we started with a YES instance of the BP problem, then there exists an assignment A that fits all m objects into the n containers. Consider the following assignment of tasks to workers in the HTA problem: if A(o i ) = C j , we assign task t i to worker w j . Also, we assign task t m+1 (with utility γ) to worker w n+1 . Note that no worker can achieve an individual utility greater than pγ, which is achieved by worker w n+1 . Thus, the utility of the overall task assignment is Σ m i=1 pv i + pγ − pγ = pV , meaning that our transformation produced a YES instance of the HTA problem. Now suppose that we ended up with a YES instance of the HTA problem. Then there exists a task assignment B such that the sum of utilities (V * ) minus the adversarial harm (γ * ) is at least pV (i.e., V * − γ * ≥ pV ). Note that V * = Σ m i=1 pv i + pγ = pV + pγ (each task is assigned to some worker). This implies pV + pγ − γ * ≥ pV and γ * /p ≤ γ. Thus the utility sum (before the proficiency p is applied) of the tasks assigned to any single worker cannot exceed γ.
This could only happen if task t m+1 (with utility γ) was the only task assigned to the corresponding worker. WLOG, let that worker be w n+1 . All other tasks must have been assigned to workers {w 1 , w 2 , ..., w n }. It is easy to see that this implies a feasible assignment of objects to containers in the BP problem: if B(t j ) = w i , for 1 ≤ j ≤ m, then we place object o j in container C i . Thus the transformation must have started with a YES instance of the BP problem.
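Despite this hardness, small instances of the restricted (one-worker-per-task) problem can still be solved exactly by exhaustive search. A brute-force sketch (our own helper, assuming a τ-worker best-responding attacker):

```python
from itertools import product

def restricted_hta_bruteforce(p, u, c, tau):
    """Exhaustive search for the restricted HTA problem (one worker per
    task): enumerate every task-to-worker map, then let the attacker
    remove the tau workers with the largest assigned expected utility."""
    n, m = len(p), len(u)
    best_util, best_assign = -1.0, None
    for assign in product(range(n), repeat=m):    # assign[t] = worker of task t
        if any(assign.count(w) > c[w] for w in range(n)):
            continue                              # capacity violated
        totals = [0.0] * n
        for t, w in enumerate(assign):
            totals[w] += p[w] * u[t]
        util = sum(sorted(totals)[:n - tau])      # attacker removes top tau
        if util > best_util:
            best_util, best_assign = util, assign
    return best_util, best_assign
```

The enumeration is O(n^m) and thus only usable on toy instances, which is consistent with the problem being strongly NP-hard.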
We now propose an algorithm which computes an approximately optimal assignment. We begin by supposing that only one worker can be assigned per task (we relax this shortly). In this case, the optimal attack can be computed using the following linear integer program:
$$\begin{aligned}
\max_{\alpha}\quad & \sum_{w \in W} \alpha_w \sum_{t \in T} s_{wt} u_t p_w && (3a)\\
\text{s.t.}\quad & \sum_{w \in W} \alpha_w = \tau && (3b)\\
& \alpha_w \in \{0, 1\}. && (3c)
\end{aligned}$$
The objective (3a) aims to maximize the effect of the attack (i.e., the utility of the targets). Constraint (3b) ensures that the adversary attacks exactly τ workers. First, note that the extreme points of the constraint set are integral, which means we can relax the integrality constraint to α w ∈ [0, 1]. In order to plug this optimization into the defender's optimal assignment problem, we convert this relaxed program to its dual form:
$$\begin{aligned}
\min_{\lambda, \beta}\quad & \lambda \tau + \sum_{w} \beta_w && (4a)\\
\text{s.t.}\quad & \lambda + \beta_w \ge p_w \sum_{t \in T} s_{wt} u_t \quad \forall w && (4b)\\
& \beta \ge 0. && (4c)
\end{aligned}$$
Thus, the optimal assignment can be computed using the following linear integer program:
$$\begin{aligned}
\max_{s, \gamma, \lambda, \beta}\quad & \sum_{w \in W} p_w \sum_{t \in T} s_{wt} u_t - \gamma && (5a)\\
\text{s.t.}\quad & \gamma \ge \lambda \tau + \sum_{w} \beta_w && (5b)\\
& \lambda + \beta_w \ge \sum_{t \in T} s_{wt} u_t p_w \quad \forall w \in W && (5c)\\
& \sum_{w \in W} \sum_{t \in T} s_{wt} = m && (5d)\\
& \sum_{w} s_{wt} = 1 \quad \forall t \in T && (5e)\\
& \sum_{t} s_{wt} \le c_w \quad \forall w \in W && (5f)\\
& s_{wt} \in \{0, 1\}. && (5g)
\end{aligned}$$
The objective (5a) maximizes the defender's expected utility given the adversary's attack (second term). Constraints (5b) and (5c) ensure that the adversary's targets are the workers who contribute the most to the defender's expected utility, and Constraint (5d) ensures that the allocation assigns all the available tasks among the different workers. Finally, Constraint (5e) ensures that only one worker is assigned to each task, and Constraint (5f) ensures that no worker is assigned more tasks than it can perform. Next, we propose a greedy algorithm that attempts to incrementally improve utility by shifting workers among tasks, now allowing multiple workers to be assigned to a task. Whenever more than one worker is assigned to a given task, the defender has to choose a deterministic mapping δ to determine the outcome. We consider a very broad class of weighted majority functions for this purpose (natural if successful completion of a task means that the worker returned the correct label). In this mapping, each worker w is assigned a weight θ w , and the final label is set according to the weighted majority rule, i.e., δ(L t ) = sgn(Σ w∈Wt(s) θ w l w ).
In order to approximate the defender's expected utility, we use the sample average approximation (SAA) [15] for solving stochastic optimization problems by using Monte-Carlo simulation. Using this approach, the defender's utility can be approximated by:
$$u_{\mathrm{def}}(C^K, W') = \sum_{t \in T} u_t \frac{\sum_{k=1}^{K} \mathbb{I}\left\{\operatorname{sgn}\left(\sum_{w \in W'} s_{wt} \theta_w C_{wtk}\right)\right\}}{K} \tag{6}$$
where C K is a set of K matrices, each of size n × m. Each cell C wtk is a random sample based on p w that represents whether or not worker w successfully completed the task: C wtk = 1 if worker w successfully completed task t, and C wtk = 0 otherwise. Similarly, s wt = 1 if worker w is assigned to task t, and s wt = 0 otherwise.
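A sketch of the SAA estimate in the spirit of Eq. (6), under a weighted-majority rule; the ±1 vote encoding and all names here are our own assumptions, not the paper's exact formulation:

```python
import random

def saa_utility(assignment, p, u, theta, K=1000, seed=0):
    """Sample-average approximation of the defender's expected utility:
    each assigned worker votes +1 (correct) with probability p_w and -1
    otherwise; a task counts as completed when the theta-weighted vote
    is positive. assignment[t] is the list of workers on task t."""
    rng = random.Random(seed)
    total = 0.0
    for t, workers in enumerate(assignment):
        hits = 0
        for _ in range(K):
            vote = sum(theta[w] * (1 if rng.random() < p[w] else -1)
                       for w in workers)
            if vote > 0:
                hits += 1
        total += u[t] * hits / K
    return total
```

As K grows, the empirical completion frequency for each task converges to its true probability under the weighted-majority rule.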
Algorithm 2 formally describes the computation of this assignment. Given an optimal assignment extracted using the mixed-integer linear program in Equation (5), we iteratively alternate over all tasks in ascending order of their utility. For each task, we reassign the worker associated with this task to the most beneficial task. If this reassignment improves the defender's utility, we label it as beneficial (Steps 9 and 10). Finally, we commit to the reassignment that maximizes the defender's utility (Step 12).
9: if u def (C K , α) > util then
10:   t̄ ← t ′ , util ← u def (C K , α)
11: s wt ′ ← 0, s wt ← 1
12: s wt ← 0, s wt̄ ← 1
13: return s
Experiments
We now experimentally demonstrate the effectiveness of our proposed approaches. Workers' proficiencies are sampled using two distributions: a uniform distribution over the [0.5, 1] interval and an exponential distribution with µ = 0.25, where proficiencies are truncated to this interval for the latter. We compare our adversarial assignment algorithms to three natural baselines: Split-k and two versions of Monte-Carlo (both involving random assignment of tasks to workers). Specifically, for the Split-k method, we divide the tasks equally among the top k workers. 1 For the Monte-Carlo approach, we consider a simple variant that randomly distributes tasks among all the workers, denoted Monte-Carlo, and a variant that randomly distributes the tasks among the top ⌈ n 2 ⌉ workers, denoted Top Monte-Carlo. In both cases, the assigned worker for each task is picked uniformly at random.
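For concreteness, the Split-k baseline (including the footnote's remainder rule) might look like the following; names are ours and the details are our reading of the text:

```python
def split_k(m, proficiencies, k):
    """Split m tasks equally among the k most proficient workers."""
    # indices of the top-k workers, most proficient first
    top = sorted(range(len(proficiencies)),
                 key=lambda w: proficiencies[w], reverse=True)[:k]
    counts = {w: m // k for w in top}
    # leftover tasks: one each, starting from the least proficient of the k
    for w in top[::-1][: m % k]:
        counts[w] += 1
    return counts
```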
Homogeneous Tasks
We begin by considering homogeneous tasks. For each experiment, we take an average over 5,000 sample runs. Figure 1 presents the results comparing our algorithm to the baselines for 50 workers and tasks. As the figure shows, our algorithm outperforms the baselines, and the gap becomes particularly pronounced as the number of targets increases. Moreover, there does not appear to be a qualitative difference between the uniform and the exponential distribution in this regard. It is natural that we must trade off robustness against the performance of robust algorithms in non-adversarial settings. We therefore conclude the homogeneous analysis by analyzing the loss incurred by allowing for robustness, compared to a solution which is optimal in non-adversarial settings. We vary the number of workers from 2 to 50, and fix the number of tasks at 100 and the number of targets optimized against at τ = 1. Table 1 shows the expected loss of using adversarial task assignment in non-adversarial settings. With only 5 workers, we pay a steep price (just under 25%), but as the number of workers increases, the loss shrinks; with 50 workers, we only lose 4.6% compared to the optimal non-robust assignment.
Heterogeneous Tasks

We used CPLEX version 12.51 to solve the integer linear program above.
First, we analyze how the heterogeneous assignment given in the mixed-integer linear program (MILP) (5) performs compared to the baselines when task utilities are sampled from U [0, 1] and worker proficiencies are sampled from U [0.5, 1]. We use baseline methods similar to the ones used in studying homogeneous task assignment. Figure 2 depicts the expected utility for the defender when using each of the methods in an environment populated with 15 tasks and 10 workers, where the number of targets the adversary attacks varies between 1 and 5, averaged over 3,000 runs. As is evident from the figure, even the single-worker assignment computed by the MILP significantly outperforms the baselines, with the difference growing as we increase the number of workers attacked. Next, we evaluate how much more we gain by using Algorithm 2 after computing an initial assignment using MILP (5). In these experiments we use a natural weighted majority decision rule with θ w = p w (i.e., workers' proficiencies), and set K = 2500. We consider two uniform distributions for this study: U [0, 1] and U [0, 100]. Each marginal improvement is averaged over 3,000 runs.
The results are shown in Tables 2 and 3. We can see that there are cases where assigning multiple workers per task offers a significant benefit. However, as the problem size increases, this benefit attenuates significantly, and it may suffice to rely on the assignment obtained from the MILP.
Conclusion
We consider the problem of assigning tasks to workers when workers can be attacked, and their ability to successfully complete assigned tasks compromised. We show that the optimal assignment problem (in the sense of Stackelberg equilibrium commitment), when the attack takes place after the tasks have been assigned to workers, can be solved in pseudo-polynomial time. Furthermore, when tasks are heterogeneous, the problem is more challenging, as it can be optimal to assign multiple workers to the same task. Even if we constrain the assignment so that only one worker is assigned per task, extracting the optimal assignment is strongly NP-hard (we exhibit an integer linear program for the latter problem). Finally, we provide an algorithm for converting this constrained assignment into one that allows multiple workers per task (and hence approximates the optimal allocation).
Proposition 3. Given an assignment s resulting from Algorithm 1, changing (i.e., decreasing or increasing) the number of tasks allocated to the worker with the highest utility (w max ) is guaranteed to result in the same or lower utility for the defender.
input: The set of workers W, and their proficiencies P
return: The optimal policy s*
1: u max ← 0
2: for i ∈ {1, . . . , n} do
3:
Algorithm 2 Heterogeneous assignment
input: The set of workers W, and their proficiencies P
return: The heuristic deterministic allocation
1: Extract the optimal 1-worker allocation using Equation (5)
2: util ← u def (C K , α)
3: for t ∈ {1, . . . , m} do
4: for w ∈ {1, . . .
7: for t′ ∈ {m, . . . , t + 1} do
8: s wt′ = 1, s wt = 0, update α
9:
Figure 1: Homogeneous tasks: comparison to baseline methods.

Figure 2: Heterogeneous tasks: comparison to baseline methods.
Table 1: Expected loss of using adversarial assignment in non-adversarial settings.

Workers   | 5     | 10    | 15     | 20    | 25    | 30   | 35   | 40   | 45   | 50
Exp. loss | 24.9% | 17.4% | 15.27% | 13.2% | 11.6% | 8.6% | 5.8% | 5.8% | 6.5% | 4.6%
Table 2: Average improvement using Algorithm 2; τ = 1.

Table 3: Average improvement using Algorithm 2; τ = 2.
The remainder is assigned in an iterative way from the least proficient worker to the most proficient one.
Acknowledgments

This research was partially supported by the National Science Foundation (CNS-1640624, IIS-1526860, IIS-1649972), Office of Naval Research (N00014-15-1-2621), Army Research Office (W911NF-16-1-0069), and National Institutes of Health (UH2 CA203708-01, R01HG006844-05).
References

[1] Mehdi Alighanbari and Jonathan P. How. Cooperative task assignment of unmanned aerial vehicles in adversarial environments. In American Control Conference, pages 4661–4666. IEEE, 2005.
[2] Dan Alistarh, Michael A. Bender, Seth Gilbert, and Rachid Guerraoui. How to allocate tasks asynchronously. In IEEE Annual Symposium on Foundations of Computer Science (FOCS), pages 331–340. IEEE, 2012.
[3] Noga Alon, Michal Feldman, Omer Lev, and Moshe Tennenholtz. How robust is the wisdom of the crowds? In International Joint Conference on Artificial Intelligence (IJCAI), pages 2055–2061, 2015.
[4] Arthur Carvalho, Stanko Dimitrov, and Kate Larson. How many crowdsourced workers should a requester hire? Annals of Mathematics and Artificial Intelligence, 78(1):45–72, 2016.
[5] Yudong Chen, Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust matrix completion and corrupted columns. In International Conference on Machine Learning (ICML), pages 873–880, 2011.
[6] Yudong Chen, Constantine Caramanis, and Shie Mannor. Robust sparse regression under adversarial corruption. In International Conference on Machine Learning (ICML), pages 774–782, 2013.
[7] Richard Li-Yang Chen, Amy Cohn, Neng Fan, and Ali Pinar. Contingency-risk informed power system design. IEEE Transactions on Power Systems, 29(5):2087–2096, 2014.
[8] Vincent Conitzer and Tuomas Sandholm. Computing the optimal strategy to commit to. In ACM Conference on Electronic Commerce (EC), pages 82–90. ACM, 2006.
[9] Peng Dai, Daniel S. Weld, et al. Artificial intelligence for artificial artificial intelligence. In AAAI Conference on Artificial Intelligence (AAAI), pages 1153–1159, 2011.
[10] Avshalom Elmalech, David Sarne, Esther David, and Chen Hajaj. Extending workers' attention span through dummy events. In AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2016.
[11] Jiashi Feng, Huan Xu, Shie Mannor, and Shuicheng Yan. Robust logistic regression and classification. In Neural Information Processing Systems (NIPS), pages 253–261, 2014.
[12] Arpita Ghosh, Satyen Kale, and Preston McAfee. Who moderates the moderators? Crowdsourcing abuse detection in user-generated content. In ACM Conference on Electronic Commerce (EC), pages 167–176, 2011.
[13] Dazhang Gu, Frank Drews, and Lonnie Welch. Robust task allocation for dynamic distributed real-time systems subject to multiple environmental parameters. In 25th IEEE International Conference on Distributed Computing Systems (ICDCS), pages 675–684. IEEE, 2005.
[14] Edward Gil Jones, Brett Browning, M. Bernardine Dias, Brenna Argall, Manuela Veloso, and Anthony Stentz. Dynamically formed heterogeneous robot teams performing tightly-coordinated tasks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 570–575. IEEE, 2006.
[15] Anton J. Kleywegt, Alexander Shapiro, and Tito Homem-de-Mello. The sample average approximation method for stochastic discrete optimization. SIAM Journal on Optimization, 12(2):479–502, 2002.
[16] Dmytro Korzhyk, Vincent Conitzer, and Ronald Parr. Complexity of computing optimal Stackelberg strategies in security resource allocation games. In AAAI Conference on Artificial Intelligence (AAAI), pages 805–810, 2010.
[17] Yang Liu and Yiling Chen. Sequential peer prediction: Learning to elicit effort using posted prices. In AAAI Conference on Artificial Intelligence (AAAI), pages 607–613, 2017.
[18] Chang Liu, Bo Li, Yevgeniy Vorobeychik, and Alina Oprea. Robust linear regression against training data poisoning. In ACM Workshop on Artificial Intelligence and Security, 2017.
[19] Edoardo Manino, Long Tran-Thanh, and Nicholas R. Jennings. Efficiency of active learning for the allocation of workers on crowdsourced classification tasks. arXiv preprint arXiv:1610.06106, 2016.
[20] Winter Mason and Siddharth Suri. Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods, 44(1):1–23, 2012.
[21] Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 614–622. ACM, 2008.
[22] Adish Singla and Andreas Krause. Truthful incentives in crowdsourcing tasks using regret minimization mechanisms. In Proceedings of the International Conference on World Wide Web, pages 1167–1178. ACM, 2013.
[23] Heinrich von Stackelberg. Theory of the Market Economy. 1952.
[24] Jacob Steinhardt, Gregory Valiant, and Moses Charikar. Avoiding imposters and delinquents: Adversarial crowdsourcing and peer prediction. In Neural Information Processing Systems (NIPS), pages 4439–4447, 2016.
[25] Peter Stone and Manuela Veloso. Task decomposition, dynamic role assignment, and low-bandwidth communication for real-time strategic teamwork. Artificial Intelligence, 110(2):241–273, 1999.
[26] Milind Tambe. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, 2011.
[27] Long Tran-Thanh, Trung Dong Huynh, Avi Rosenfeld, Sarvapali D. Ramchurn, and Nicholas R. Jennings. BudgetFix: Budget limited crowdsourcing for interdependent task allocation with quality guarantees. In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 477–484, 2014.
[28] Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust PCA via outlier pursuit. In Neural Information Processing Systems (NIPS), pages 2496–2504, 2010.
arXiv: cond-mat/0610671 (https://arxiv.org/pdf/cond-mat/0610671v2.pdf)
Origin and Effects of Anomalous Dynamics on Unbiased Polymer Translocation

25 Jul 2007

Debabrata Panja
Institute for Theoretical Physics, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam, The Netherlands

Gerard T. Barkema
Institute for Theoretical Physics, Universiteit Utrecht, Minnaertgebouw, Leuvenlaan 4, Postbus 80.195, 3508 TD Utrecht, The Netherlands

Robin C. Ball
Department of Physics, University of Warwick, Coventry CV4 7AL, UK

In this paper, we investigate the microscopic dynamics of a polymer of length N translocating through a narrow pore. Characterization of its purportedly anomalous dynamics has so far remained incomplete. We show that the polymer dynamics is anomalous until the Rouse time τ R ∼ N 1+2ν , with a mean square displacement through the pore consistent with t (1+ν)/(1+2ν) , with ν ≈ 0.588 the Flory exponent. This is shown to be directly related to a decay in time of the excess monomer density near the pore as t −(1+ν)/(1+2ν) exp(−t/τ R ). Beyond the Rouse time translocation becomes diffusive. In consequence of this, the dwell time τ d , the time a translocating polymer typically spends within the pore, scales as N 2+ν , in contrast to previous claims.
I. INTRODUCTION
Transport of molecules across cell membranes is an essential mechanism for life processes. These molecules are often long and flexible, and the pores in the membranes are too narrow to allow them to pass through as a single unit. In such circumstances, the passage of a molecule through the pore -i.e. its translocation -proceeds through a random process in which polymer segments sequentially move through the pore. DNA, RNA and proteins are naturally occurring long molecules (1)(2)(3)(4)(5) subject to translocation in a variety of biological processes. Translocation is used in gene therapy (7; 8), in delivery of drug molecules to their activation sites (9), and as an efficient means of single molecule sequencing of DNA and RNA (6). Understandably, the process of translocation has been an active topic of current research: both because it is an essential ingredient in many biological processes and for its relevance in practical applications.
Translocation is a complicated process in living organisms -its dynamics may be strongly influenced by various factors, such as the presence of chaperon molecules, pH value, chemical potential gradients, and assisting molecular motors. It has been studied empirically in great variety in the biological literature (10; 11). Studies of translocation as a biophysical process are more recent. In these, the polymer is simplified to a sequentially connected string of N monomers. Quantities of interest are the typical time scale for the polymer to leave a confining cell or vesicle, the "escape time" (12), and the typical time scale the polymer spends in the pore or "dwell time", (13) as a function of chain length N and other parameters like membrane thickness, membrane adsorption, electrochemical potential gradient, etc. (14). These have been measured directly in numerous experiments (16). Experimentally, the most studied quantity is the dwell time τ d , i.e., the pore blockade time for a translocation event. For theoretical descriptions of τ d , during the last decade a number of mean-field type theories (12)(13)(14) have been proposed, in which translocation is described by a Fokker-Planck equation for first-passage over an entropic barrier in terms of a single "reaction coordinate" s. Here s is the number of the monomer threaded at the pore (s = 1, . . . , N ). These theories apply under the assumption that translocation is slower than the equilibration time-scale of the entire polymer, which is likely for high pore friction. In Ref. (17), this assumption was questioned, and the authors found that for a self-avoiding polymer performing Rouse dynamics, τ d ≥ τ R , the Rouse time. Using simulation data in 2D, they suggested that the inequality may actually be an equality, i.e., τ d ∼ τ R ∼ N 1+2ν ≃ N 2.18 . This suggestion was numerically confirmed in 2D in Ref. (18). 
However, in a publication due to two of us, τ d in 3D was numerically found to scale as ∼ N 2.40±0.05 (15). Additionally, in a recent publication (21) τ d was numerically found to scale as N 2.52±0.04 in three dimensions [a discussion on the theory of Ref. (21) appears at the end of Sec. IV].
Amid all the above results on τ d mutually differing by ∼ O(N 0.2 ), the only consensus that survives is that τ d ≥ τ R (15; 17). Simulation results alone cannot determine the scaling of τ d : different groups use different polymer models with widely different criteria for convergence of scaling results, and as a consequence, settling differences of ∼ O(N 0.2 ) in O(τ R ) is extremely delicate.
An alternative approach that can potentially settle the issue of τ d scaling with N is to analyze the dynamics of translocation at a microscopic level. Indeed, the lower limit τ R for τ d implies that the dynamics of translocation is anomalous (17). We know of only two published studies on the anomalous dynamics of translocation, both using a fractional Fokker-Planck equation (FFPE) (20; 21). However, whether the assumptions underlying a FFPE apply for polymer translocation are not clear. Additionally, none of the studies used FFPE for the purpose of determining the scaling of τ d . In view of the above, such a potential clearly has not been thoroughly exploited.
The purpose of this paper is to report the characteristics of the anomalous dynamics of translocation, derived from the microscopic dynamics of the polymer, and the scaling of τ d obtained therefrom. Translocation proceeds via the exchange of monomers through the pore: imagine a situation in which a monomer from the left of the membrane translocates to the right. This process increases the monomer density in the right neighbourhood of the pore, and simultaneously reduces the monomer density in the left neighbourhood of the pore. The local enhancement in the monomer density on the right of the pore takes a finite time to dissipate away from the membrane along the backbone of the polymer (and similarly for replenishing the monomer density in the left neighbourhood of the pore). The imbalance in the monomer densities between the two local neighbourhoods of the pore during this time implies that there is an enhanced chance for the translocated monomer to return to the left of the membrane, thereby giving rise to memory effects and, consequently, rendering the translocation dynamics subdiffusive. More quantitatively, the excess monomer density (or the lack of it) in the vicinity of the pore manifests itself in reduced (or increased) chain tension around the pore, creating an imbalance of chain tension across the pore (we note here that the chain tension at the pore acts as a monomeric chemical potential, and from now on we use both terms interchangeably). We use well-known laws of polymer physics to show that in time the imbalance in the chain tension across the pore relaxes as t −(1+ν)/(1+2ν) exp(−t/τ R ) (22). This results in the translocation dynamics being subdiffusive for t < τ R , with the mean-square displacement ∆s 2 (t) of the reaction coordinate s(t) increasing as t (1+ν)/(1+2ν) , and diffusive for t > τ R . With ∆s 2 (τ d ) ∼ N 2 , this leads to τ d ∼ N 2+ν . This paper is divided into four sections. In Sec.
II we detail the dynamics of our polymer model and outline its implications for physical processes, including the equilibration of phantom and self-avoiding polymers. In Sec. III we elaborate on a novel way of measuring the dwell time that allows us to obtain better statistics for large values of N . In Sec. IV we describe and characterize the anomalous dynamics of translocation, obtain the scaling of τ d with N , and compare our results with those of Ref. (21). In Sec. V we end this paper with a discussion.

FIG. 2 Illustration of the two-dimensional version of the lattice polymer model. In the upper polymer, interior monomers 2, 4, 6, 9, 10 and 11 can either diffuse along the polymer backbone, or move sideways; monomer 7 can join either 6 or 8; the end monomers 1 and 12 can move to any vacant nearest-neighbor site. In the lower polymer, interior monomers 3, 5, 6, 10 and 11 can either diffuse along the tube, or move sideways; monomer 1 can move to any vacant nearest-neighbor site, and monomer 12 can join its neighbor 11. All other monomers are not mobile.
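The exponent bookkeeping behind the claim τ d ∼ N 2+ν can be restated numerically; the following is our own arithmetic check of the argument in the Introduction, not code from the paper:

```python
# Subdiffusion ds2(t) ~ t**alpha with alpha = (1+nu)/(1+2nu) holds up to
# the Rouse time tau_R ~ N**(1+2nu); beyond tau_R the motion is diffusive
# until ds2(tau_d) ~ N**2, which yields tau_d ~ N**(2+nu).
nu = 0.588                       # Flory exponent in 3D
alpha = (1 + nu) / (1 + 2 * nu)  # anomalous-diffusion exponent, ~0.73
tau_R_exp = 1 + 2 * nu           # tau_R ~ N**2.176
ds2_at_tauR_exp = alpha * tau_R_exp          # ds2(tau_R) ~ N**(1+nu)
# diffusion constant D ~ ds2(tau_R)/tau_R ~ N**(ds2_at_tauR_exp - tau_R_exp),
# so reaching ds2 ~ N**2 takes tau_d ~ N**(2 - exponent of D)
tau_d_exp = 2 - (ds2_at_tauR_exp - tau_R_exp)
print(alpha, tau_d_exp)
```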
II. OUR POLYMER MODEL
Over the last years, we have developed a highly efficient simulation approach to polymer dynamics. This approach is made possible by a lattice polymer model based on Rubinstein's repton model (30); an efficient implementation, and a study of some of its properties and applications, can be found in Refs. (23; 24). In this model, polymers consist of a sequential chain of monomers, living on an FCC lattice. Monomers adjacent in the string are located either in the same or in neighboring lattice sites. The polymers are self-avoiding: multiple occupation of lattice sites is not allowed, except for a set of adjacent monomers. The polymers move through a sequence of random single-monomer hops to neighboring lattice sites. These hops can be along the contour of the polymer, thus explicitly providing reptation dynamics. They can also change the contour "sideways", providing Rouse dynamics. Time in this polymer model is measured in terms of the number of attempted reptation moves. A two-dimensional version of our three-dimensional model is illustrated in Fig. 2.
A. Influence of the accelerated reptation moves on polymer dynamics
From our experience with the model we already know that the dynamical properties are rather insensitive to the ratio of the rates for Rouse vs. reptation moves (i.e., moves that alter the polymer contour vs. moves that only redistribute the stored length along the backbone). Since the computational efficiency of the latter kind of moves is at least an order of magnitude higher, we exploit this relative insensitivity by attempting reptation moves q times more often than Rouse moves; typical values are q = 1, 5 or 10, which correspond to a comparable amount of computational effort invested in both kinds of moves. Certainly, the interplay between the two kinds of moves is rather intricate (25). Recent work by Drzewinski and van Leeuwen on a related lattice polymer model (26) provides evidence that the dynamics is governed by N q −1/2 , supporting our experience that, provided the polymers are sizable, one can boost one mechanism over the other quite a bit (even up to q ∼ N 2 ) before the polymer dynamics changes qualitatively. In order to further check the trustworthiness of this model, we use it to study the equilibration properties of polymers with one end tethered to a fixed infinite wall (this problem relates rather directly to that of a translocating polymer: for a given monomer threaded into the pore, the two segments of the polymer on two sides of the membrane behave simply as two independent polymer chains; see Fig. 1). This particular problem, wherein polymer chains (of length N ) undergo pure Rouse dynamics (i.e., no additional reptation moves) is a well-studied one: the equilibration time is known to scale as N 1+2ν for self-avoiding polymers and as N 2 for phantom polymers. To reproduce these results with our model we denote the vector distance of the free end of the polymer w.r.t. the tethered end at time t by e(t), and define the correlation coefficient for the end-to-end vector as
$$c(t) \;=\; \frac{\langle \mathbf{e}(t)\cdot\mathbf{e}(0)\rangle \;-\; \langle \mathbf{e}(t)\rangle\cdot\langle \mathbf{e}(0)\rangle}{\sqrt{\big[\langle e^{2}(t)\rangle-\langle \mathbf{e}(t)\rangle^{2}\big]\big[\langle e^{2}(0)\rangle-\langle \mathbf{e}(0)\rangle^{2}\big]}}. \qquad (1)$$
The angular brackets in Eq. (1) denote averaging in equilibrium. The c(t) quantities appearing in Fig. 3 have been obtained by the following procedure: we first obtain the time correlation coefficients c(t) for 32 independent polymers.
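As an illustration of Eq. (1), c(t) can be estimated from a series of end-to-end vectors; the sketch below uses our own naming, with time averages over an equilibrium series standing in for the ensemble averages:

```python
def correlation_coefficient(e, t):
    """Estimate c(t) of Eq. (1) from end-to-end vectors e[0], e[1], ..."""
    pairs = [(e[i], e[i + t]) for i in range(len(e) - t)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n, d = len(pairs), len(e[0])
    m0 = [sum(a[k] for a, _ in pairs) / n for k in range(d)]   # <e(0)>
    mt = [sum(b[k] for _, b in pairs) / n for k in range(d)]   # <e(t)>
    cov = sum(dot(a, b) for a, b in pairs) / n - dot(m0, mt)
    var0 = sum(dot(a, a) for a, _ in pairs) / n - dot(m0, m0)
    vart = sum(dot(b, b) for _, b in pairs) / n - dot(mt, mt)
    return cov / (var0 * vart) ** 0.5
```

By construction c(0) = 1, and the Cauchy-Schwarz inequality keeps the estimate within [-1, 1].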
III. TRANSLOCATION, DWELL AND UNTHREADING TIMES
Our translocation simulations are carried out only for self-avoiding polymers. For long polymers, full translocation simulations, i.e., having started in one of the cells (Fig. 1), finding the pore and distorting their shapes to enter the pore to finally translocate to the other side of the membrane are usually very slow. An uncoiled state, which the polymer has to get into in order to pass through the pore is entropically unfavourable, and therefore, obtaining good translocation statistics for translocation events is a time-consuming process. To overcome this difficulty, in Ref. (15), we introduced three different time scales associated with translocation events: translocation time, dwell time and unthreading time. For the rest of this section, we refer the reader to Fig. 1.
A. Translocation and dwell times
In states A and B (Fig. 1), the entire polymer is located in cell A, resp. B. States M and M ′ are defined as the states in which the middle monomer is located exactly halfway between both cells. Finally, states T and T ′ are the complementary to the previous states: the polymer is threaded, but the middle monomer is not in the middle of the pore. The finer distinction between states M and T, resp. M ′ and T ′ is that in the first case, the polymer is on its way from state A to B or vice versa, while in the second case it originates in state A or B and returns to the same state. The translocation process in our simulations can then be characterized by the sequence of these states in time (Fig. 4). In this formulation, the dwell time τ d is the time that the polymer spends in states T, while the translocation time τ t is the time starting at the first instant the polymer reaches state A after leaving state B, until it reaches state B again. As found in Ref. (15), τ d and τ t are related to each other by the relation
τ_t = V τ_d N^{1+γ−2γ_1} ,    (2)
where γ = 1.1601, γ_1 = 0.68 and V is the volume of cell A or B (see Fig. 1).
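Eq. (2) is a pure scaling relation; as a quick numerical illustration (a sketch with all prefactors set to one, not the paper's code), one can evaluate the exponent and see how τ_t grows with N at fixed τ_d and V:

```python
# Evaluating the exponent of Eq. (2), tau_t = V * tau_d * N**(1 + gamma - 2*gamma_1),
# with the quoted values gamma = 1.1601 and gamma_1 = 0.68 (prefactors set to 1).
gamma, gamma_1 = 1.1601, 0.68
exponent = 1 + gamma - 2 * gamma_1   # = 0.8001

def translocation_time(tau_d, N, V=1.0):
    """Translocation time from the dwell time via Eq. (2)."""
    return V * tau_d * N ** exponent

# Doubling N at fixed tau_d multiplies tau_t by 2**0.8001, i.e. roughly 1.74.
ratio = translocation_time(1.0, 200) / translocation_time(1.0, 100)
```

Since the exponent 1 + γ − 2γ_1 ≈ 0.8 is positive, τ_t grows much faster with N than τ_d does, which is why direct translocation simulations become slow for long polymers.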
B. Unthreading time and its relation to dwell time
The unthreading time τ_u is the average time that either state A or B is reached from state M (not excluding possible recurrences of state M). Notice that in the absence of a driving field, on average, the time to unthread to state A equals the time to unthread to state B, due to symmetry. The advantage of introducing the unthreading time is that when one measures unthreading times, the polymer is in the state of lowest entropy at the start of the simulation; therefore the simulations are fast and one can obtain good statistics on unthreading times fairly quickly. Additionally, the dwell and unthreading times are related to each other, as outlined below, and using this relation one is able to reach large values of N for obtaining the scaling of the dwell time.
The main point to note is that the dwell time can be decomposed into three parts as
τ_d = τ_{A→M} + τ_{MM} + τ_{M→B} ,    (3)
whereas the mean unthreading time can be decomposed into two parts as
τ_u = τ_{MM} + τ_{M→B} .    (4)
Since on average τ_{A→M} = τ_{M→B} due to symmetry, and all quantities on the r.h.s. of Eqs. (3) and (4) are strictly positive, we arrive at the inequality
τ_u < τ_d < 2τ_u .    (5)
Since Eq. (5) is independent of the polymer length, on average the dwell time scales with N in the same way as the unthreading time (27).
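The step from Eqs. (3) and (4) to the inequality (5) can be checked mechanically; the following sketch (toy random times, not simulation data) asserts it for arbitrary positive τ_{MM} and τ_{M→B} with τ_{A→M} = τ_{M→B}:

```python
# Check that Eqs. (3)-(4), together with tau_AM = tau_MB (symmetry) and
# strictly positive terms, imply tau_u < tau_d < 2*tau_u [Eq. (5)].
import random

random.seed(1)
ok = True
for _ in range(1000):
    tau_MM = random.uniform(1e-6, 10.0)   # strictly positive toy values
    tau_MB = random.uniform(1e-6, 10.0)
    tau_AM = tau_MB                        # symmetry on average
    tau_d = tau_AM + tau_MM + tau_MB       # Eq. (3)
    tau_u = tau_MM + tau_MB                # Eq. (4)
    ok = ok and (tau_u < tau_d < 2 * tau_u)
```

The lower bound follows because τ_d adds the positive term τ_{A→M} to τ_u; the upper bound because τ_{A→M} = τ_{M→B} is itself smaller than τ_u = τ_{MM} + τ_{M→B}.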
IV. CHARACTERIZATION OF THE ANOMALOUS DYNAMICS OF TRANSLOCATION AND ITS RELATION TO τ_d
The reaction coordinate [the monomer number s(t′) occupying the pore at time t′] is a convenient choice for the description of the microscopic movements of the translocating polymer, since the important time scales for translocation can be obtained from the time evolution of s(t′), as shown below. To delve deeper into its temporal behaviour, we determine P_{N,r}(s_1, s_2, t), the probability distribution that a polymer of length N, which at time t′ is in a configuration with monomer s_1 threaded into the pore, evolves at time t′ + t into a configuration in which monomer s_2 is threaded into the pore. The subscript r denotes our parametrization of the polymer movement, as it determines the frequency of attempted reptation moves relative to the sideways (or Rouse) moves of the polymer. See Sec. II.A for details.
To maintain consistency, all simulation results reported in this section are for q = 10, so we drop q from the notation from here on. This value of q is used in view of our experience with the polymer code: q = 10 yields the fastest runtime of our code, and it is the same value used in our earlier work (15). As discussed in Sec. II.A, this value of q does not change any physics.
First, we investigate the shape of these probability distributions for various values of s_1, s_2 and t, for different sets of N and r values. We find that as long as the t values are such that neither s_1 nor s_2 is too close to the end of the polymer, P_N(s_1, s_2, t) depends only on (s_2 − s_1), although the centre of the distribution is slightly shifted w.r.t. the starting position s_1 by a distance that depends on s_1, N and q. This is illustrated in Fig. 5, where we plot (on top of each other) P_N(s_1, s_2, t) for s_1 = N/4, N/2 and 3N/4 at t = 100 time units, for N = 400, as well as a Gaussian fit. Notice in Fig. 5 that the distribution P_N(s_1, s_2, t) differs slightly from a Gaussian (the parameters of the Gaussian are calculated by least-squares optimization). We find that this difference decreases with increasing values of t (not shown in this paper).
We now define the mean [s_2 − s_1](s_1, t) and the variance ∆s²(s_1, t) of the distribution P_N(s_1, s_2, t) as

[s_2 − s_1](s_1, t) = ∫ ds_2 P_N(s_1, s_2, t) (s_2 − s_1) ;

∆s²(s_1, t) = ∫ ds_2 P_N(s_1, s_2, t) {(s_2 − s_1)² − [⟨s_2 − s_1⟩(s_1, t)]²} ,    (6)
where the quantities within parentheses on the l.h.s. of Eq. (6) denote the functional dependencies of the mean and the variance of (s_2 − s_1). We also note here that we have checked the skewness of P_N(s_1, s_2, t), which we found to be zero within our numerical abilities, indicating that P_N(s_1, s_2, t) is symmetric in (s_2 − s_1).
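On a discretized distribution, the definitions in Eq. (6) amount to simple weighted sums; the sketch below applies them to an illustrative (assumed) Gaussian rather than the measured P_N(s_1, s_2, t):

```python
# Mean and variance of (s2 - s1) per Eq. (6), on a discretized example
# distribution (an assumed Gaussian, not the measured P_N).
import math

s1 = 200                    # starting reaction coordinate (illustrative)
shift, width = 1.5, 12.0    # assumed drift and spread of the example
s2_values = list(range(s1 - 100, s1 + 101))
P = [math.exp(-((s2 - s1 - shift) ** 2) / (2 * width ** 2)) for s2 in s2_values]
total = sum(P)
P = [p / total for p in P]  # normalize

mean = sum(p * (s2 - s1) for p, s2 in zip(P, s2_values))
var = sum(p * (s2 - s1) ** 2 for p, s2 in zip(P, s2_values)) - mean ** 2
```

For this example the recovered mean and variance are the assumed drift (1.5) and the square of the assumed width (144), as they should be for a symmetric distribution.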
In principle, both the mean and the variance of (s_2 − s_1) can be used to obtain the scaling of τ_d with N, but the advantage of using ∆s²(s_1, t) for this purpose is that, as shown in Fig. 5, it is independent of s_1; from now on, we therefore drop s_1 from its argument. Since the unthreading process starts at s_1 = N/2, the scaling of τ_d with N is easily obtained by using the relation
∆s²(τ_d) ∼ N² ,    (7)
in combination with the fact that τ_d and τ_u scale with N in the same way [inequality (5)]. Note here that Eq. (7) uses the fact that in an unthreading process the polymer only has to travel a length N/2 along its contour in order to leave the pore.
A. The origin of anomalous dynamics and the relaxation of excess monomer density near the pore during translocation
The key step in quantitatively formulating the anomalous dynamics of translocation is the following observation: a translocating polymer comprises two polymer segments tethered at opposite ends of the pore that are able to exchange monomers between them through the pore; so each acts as a reservoir of monomers for the other. The velocity of translocation v(t) = ṡ(t), representing the monomer current, responds to φ(t), the imbalance in the monomeric chemical potential across the pore, which acts as a "voltage". Simultaneously, φ(t) also adjusts in response to v(t). In the presence of memory effects, they are related to each other by φ(t) = ∫_0^t dt′ µ(t − t′) v(t′) via the memory kernel µ(t), which can be thought of as the (time-dependent) 'impedance' of the system. Supposing a zero-current equilibrium condition at time 0, this relation can be inverted to obtain v(t) = ∫_0^t dt′ a(t − t′) φ(t′),
where a(t) can be thought of as the 'admittance'. In the Laplace transform language, µ̃(k) = ã^{−1}(k), where k is the Laplace variable representing inverse time. Via the fluctuation-dissipation theorem, they are related to the respective autocorrelation functions as µ(t − t′) = ⟨φ(t)φ(t′)⟩_{v=0} and a(t − t′) = ⟨v(t)v(t′)⟩_{φ=0}. The behaviour of µ(t) may be obtained by considering the polymer segment on one side of the membrane only, say the right, with a sudden introduction of p extra monomers at the pore, corresponding to an impulse current v(t) = pδ(t). We then ask for the time evolution of the mean response ⟨δΦ^{(r)}(t)⟩, where δΦ^{(r)}(t) is the shift in chemical potential for the right segment of the polymer at the pore. This means that for the translocation problem (with both right and left segments), we would have φ(t) = δΦ^{(r)}(t) − δΦ^{(l)}(t), where δΦ^{(l)}(t) is the shift in chemical potential for the left segment at the pore due to an opposite input current to it.
We now argue that this mean response, and hence µ(t), takes the form µ(t) ∼ t^{−α} exp(−t/τ_R). The terminal exponential decay exp(−t/τ_R) is expected from the relaxation dynamics of the entire right segment of the polymer with one end tethered at the pore [see Fig. 3(b)]. To understand the physics behind the exponent α, we use the well-established result that the relaxation time t_n of n self-avoiding Rouse monomers scales as t_n ∼ n^{1+2ν}. Based on the expression for t_n, we anticipate that by time t the extra monomers will be well equilibrated across the inner part of the chain up to n_t ∼ t^{1/(1+2ν)} monomers from the pore, but not significantly further. This internally equilibrated section of n_t + p monomers extends only over r(n_t) ∼ n_t^ν, less than its equilibrated value (n_t + p)^ν, because the larger-scale conformation has yet to adjust; the corresponding compressive force from these n_t + p monomers is expected by standard polymer scaling (29) to follow f/(k_B T) ∼ δr(n_t)/r²(n_t) ∼ νp/[n_t r(n_t)] ∼ t^{−(1+ν)/(1+2ν)}. This force f must be transmitted to the membrane, through a combination of decreased tension at the pore and increased incidence of other membrane contacts. The fraction borne by reducing the chain tension at the pore leads us to the inequality α ≥ (1 + ν)/(1 + 2ν), which is significantly different from (but compatible with) the value α_1 = 2/(1 + 2ν) required to obtain τ_d ∼ τ_R. It seems unlikely that the adjustment at the membrane should be disproportionately distributed between the chain tension at the pore and the other membrane contacts, leading to the expectation that the inequality above is actually an equality.
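The exponent bookkeeping in this argument can be made explicit; the fragment below (a consistency check in exact rational arithmetic, not new physics) verifies that n_t ∼ t^{1/(1+2ν)} and f ∼ 1/[n_t r(n_t)] with r(n_t) ∼ n_t^ν indeed give f ∼ t^{−(1+ν)/(1+2ν)}:

```python
# Exponent check: n_t ~ t**(1/(1+2*nu)), r(n_t) ~ n_t**nu,
# f ~ 1/(n_t * r(n_t)) => f ~ t**(-(1+nu)/(1+2*nu)).
from fractions import Fraction

nu = Fraction(588, 1000)          # Flory exponent, nu ~ 0.588 in 3D

n_t_exp = 1 / (1 + 2 * nu)        # n_t ~ t**n_t_exp
f_exp = -(1 + nu) * n_t_exp       # f ~ n_t**(-(1+nu)) ~ t**f_exp
alpha = (1 + nu) / (1 + 2 * nu)   # claimed decay exponent of mu(t)
```

With ν ≈ 0.588 this gives α ≈ 0.73, the slope plotted against the simulation data in Fig. 6.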
We have confirmed this picture by measuring the impedance response through simulations. In Ref. (28), two of us have shown that the centre-of-mass of the first few monomers is an excellent proxy for the chain tension at the pore, and we assume here that it further serves as a proxy for δΦ. Based on this idea, we track ⟨δΦ^{(r)}(t)⟩ by measuring the distance of the average centre-of-mass of the first 5 monomers from the membrane, ⟨z^{(5)}(t)⟩, in response to the injection of extra monomers near the pore at time 0. Specifically, we consider the equilibrated right segment of the polymer, of length N/2 − 10 (with one end tethered at the pore), and add p = 10 extra monomers at the tethered end at time 0, bringing its length up to N/2. Using the proxy ⟨z^{(5)}(t)⟩ we then track ⟨δΦ^{(r)}(t)⟩. The clear agreement between the exponent obtained from the simulation results and the theoretical prediction α = (1 + ν)/(1 + 2ν) can be seen in Fig. 6. We have checked that the sharp deviation of the data from the power law t^{−(1+ν)/(1+2ν)} at long times is due to the asymptotic exponential decay exp(−t/τ_R), although this is not shown in the figure.
Having thus shown that µ(t) ∼ t^{−(1+ν)/(1+2ν)} exp(−t/τ_R), we can expect the translocation dynamics to be anomalous for t < τ_R, in the sense that the mean-square displacement of the monomers through the pore obeys ∆s²(t) ∼ t^β for some β < 1 at times t < τ_R, whilst beyond the Rouse time it becomes simply diffusive. The value β = α = (1 + ν)/(1 + 2ν) follows trivially by expressing ∆s²(t) in terms of (translocative) velocity correlations ⟨v(t)v(t′)⟩, which (by the fluctuation-dissipation theorem) are given in terms of the time-dependent admittance a(t − t′), and hence inversely in terms of the corresponding impedance. Indeed, as shown in Fig. 7, a double-logarithmic plot of ∆s²(t) vs. t is consistent with ∆s²(t) ∼ t^{(1+ν)/(1+2ν)}. The behaviour of ∆s²(t) at short times is an artifact of our model: at short times reptation moves dominate, leading to a transport mechanism for "stored lengths" (30) along the polymer's contour in which individual units of stored length cannot pass each other. As a result, the dynamics of s(t), governed by the movement of stored length units across the pore, is equivalent to a process known as "single-file diffusion" on a line, characterized by the scaling ∆s²(t) ∼ t^{1/2} (not shown here). At long times the polymer tails will relax, leading to ∆s²(t) ∼ t for t > τ_R. The presence of two crossovers, the first from ∆s²(t) ∼ t^{1/2} to ∆s²(t) ∼ t^{(1+ν)/(1+2ν)} and the second from ∆s²(t) ∼ t^{(1+ν)/(1+2ν)} to ∆s²(t) ∼ t at t ≈ τ_R, complicates the precise numerical verification of the exponent (1 + ν)/(1 + 2ν). However, as shown in Fig. 7, there is an extended regime in time over which the quantity t^{−(1+ν)/(1+2ν)} ∆s²(t) is nearly constant.
The subdiffusive behaviour ∆s²(t) ∼ t^{(1+ν)/(1+2ν)} for t < τ_R, combined with the diffusive behaviour for t ≥ τ_R, leads to the dwell time scaling as τ_d ∼ N^{2+ν}, based on the criterion that ∆s²(τ_d) ∼ N². The dwell time exponent 2 + ν ≃ 2.59 is in acceptable agreement with the two numerical results on τ_d in 3D mentioned in the introduction of this letter, and in Table I below we present new high-precision simulation data in support of τ_d ∼ N^{2+ν}, in terms of the median unthreading time.
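The two-regime argument can be written out explicitly; the sketch below (all scaling amplitudes set to one, an illustration rather than a fit to data) pieces together the subdiffusive and diffusive regimes and recovers τ_d ∼ N^{2+ν}:

```python
# Combine ds2(t) ~ t**beta for t < tau_R = N**(1+2*nu), beta = (1+nu)/(1+2*nu),
# with ds2(t) ~ t beyond tau_R, and solve ds2(tau_d) = N**2.
import math

nu = 0.588
beta = (1 + nu) / (1 + 2 * nu)

def dwell_time(N):
    tau_R = N ** (1 + 2 * nu)          # Rouse time
    ds2_at_tau_R = tau_R ** beta       # equals N**(1+nu)
    # diffusive regime: ds2(t) = ds2_at_tau_R * (t / tau_R) for t > tau_R
    return tau_R * N ** 2 / ds2_at_tau_R   # solves ds2(tau_d) = N**2

# effective exponent from a log-log slope; equals 2 + nu exactly here
slope = (math.log(dwell_time(400)) - math.log(dwell_time(100))) / math.log(4)
```

At the crossover ∆s²(τ_R) ∼ N^{1+ν}, so the remaining diffusive stretch to ∆s² ∼ N² multiplies τ_R ∼ N^{1+2ν} by N^{1−ν}, giving N^{2+ν}.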
The unthreading time τ_u is defined as the time for the polymer to leave the pore, with s(t = 0) = N/2 and the two polymer segments equilibrated at t = 0. Both τ_u and τ_d scale the same way, since τ_u < τ_d < 2τ_u [see Eq. (5)]. Our results have two main significant implications:
(i) Even in the limit of small (or negligible) pore friction, the equilibration time scale of a polymer is smaller than its dwell time scale. Yet, a quasi-equilibrium condition cannot be assumed as the starting point for analyzing the dynamics of unbiased translocation, as has been done in the mean-field theories.
(ii) Since α = (1 + ν)/(1 + 2ν) < 1, the diffusion in reaction-coordinate space is anomalous. This means that the dynamics of the translocating polymer in terms of its reaction coordinate cannot be captured by a Fokker-Planck equation in the limit of small (or negligible) pore friction.
In view of our results, a Fokker-Planck type equation [such as a fractional Fokker-Planck equation (FFPE)] describing the anomalous dynamics of a translocating polymer would definitely need input from the physics of polymer translocation. It therefore remains to be checked that the assumptions underlying an FFPE do not violate the basic physics of a translocating polymer.
B. Comparison of our results with the theory of Ref. (21)

We now reflect on the theory presented in Ref. (21).
We have defined τ_d as the pore-blockade time in experiments; i.e., if we define the state of the polymer with s(t) = 0 as '0' (polymer just detached from the pore on one side), and that with s(t) = N as 'N', then τ_d is the first passage time required to travel from state 0 to state N without possible reoccurrences of state 0. In Ref. (21), the authors attach a bead at the s = 0 end of the polymer, preventing it from leaving the pore; and their translocation time (τ_v hereafter) is defined as the first passage time required to travel from state 0 to state N with reoccurrences of state 0. This leads them to express τ_v in terms of the free energy barrier that the polymer encounters on its way from state 0 to s = N/2, where the polymer's configurational entropy is lowest. Below we settle the differences between τ_v of Ref. (21) and our τ_d.
Consider the case where we attach a bead at s = 0 and another at s = N, preventing the polymer from leaving the pore. Its dynamics is then given by a sequence of states, e.g.,

. . . N x m x 0 x′ 0 x′ m′ x′ m′ x′ 0 x′ 0 x m x m x m x N x′ N . . .

where the corresponding times are as follows: τ_v is the time elapsed from the first state 0 (following the initial N) until the next state N is reached, while τ_d is the time elapsed from the last state 0 until that same state N. At states x and x′ the polymer can have all values of s except 0, N/2 and N; and at states m and m′, s = N/2. The notational distinction between primed and unprimed states is that a primed state can occur only between two consecutive states 0, or between two consecutive states N, while an unprimed state occurs only between a state 0 and a state N. A probability argument then leads us to
τ_v / τ_d = 1/(p_x + p_m) = f_x (1 + f_m) / [(p_m + p_{m′}) f_m (1 + f_x)] ,    (8)
where p_m, p_{m′} and p_x are the probabilities of the corresponding states, f_m = p_m/p_{m′} and f_x = p_m/p_x. Since the partition sum of a polymer of length n with one end tethered on a membrane is given by Z_n ∼ λ^n n^{γ_1−1}, with λ a non-universal constant and γ_1 = 0.68 (31), we have p_m + p_{m′} = Z²_{N/2} / Σ_{s=0}^{N} Z_s Z_{N−s} ∼ 1/N. Similarly, f_x ∼ 1/N (15). Finally, f_m ≈ 1 (32) yields τ_v ∼ τ_d.
We have thus shown that the free energy barrier does not play a role for τ_v, implying that the theoretical expression for τ_v in Ref. (21) cannot be correct. The numerical result τ_v ∼ N^{2.52±0.04} of Ref. (21), however, confirms our theoretical expression τ_d ∼ N^{2+ν}.
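Two steps of the argument above lend themselves to a quick numerical check (a sketch with arbitrary positive toy probabilities and λ = 1, which cancels anyway): the algebraic identity 1/(p_x + p_m) = f_x(1 + f_m)/[(p_m + p_{m′}) f_m (1 + f_x)] behind Eq. (8), and the scaling p_m + p_{m′} ∼ 1/N from the tethered-chain partition sums:

```python
# (i) Check the identity 1/(p_x + p_m) = f_x*(1+f_m) / ((p_m + p_mp)*f_m*(1+f_x))
#     with f_m = p_m/p_mp and f_x = p_m/p_x, as used in Eq. (8).
import random

random.seed(2)
identity_ok = True
for _ in range(100):
    p_m, p_mp, p_x = (random.uniform(0.01, 1.0) for _ in range(3))
    f_m, f_x = p_m / p_mp, p_m / p_x
    lhs = 1.0 / (p_x + p_m)
    rhs = f_x * (1 + f_m) / ((p_m + p_mp) * f_m * (1 + f_x))
    identity_ok = identity_ok and abs(lhs - rhs) < 1e-10 * lhs

# (ii) Check p_m + p_mp = Z_{N/2}**2 / sum_s Z_s Z_{N-s} ~ 1/N for
#      Z_n ~ n**(gamma_1 - 1); the non-universal factor lambda**N cancels.
gamma_1 = 0.68

def Z(n):
    return n ** (gamma_1 - 1.0) if n > 0 else 1.0   # convention Z_0 = 1

def p_middle(N):
    return Z(N // 2) ** 2 / sum(Z(s) * Z(N - s) for s in range(N + 1))

ratio = p_middle(1000) / p_middle(2000)   # approaches 2 if p_middle ~ 1/N
```

Doubling N halves p_middle to within a few percent at these lengths, consistent with the 1/N scaling quoted in the text.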
V. CONCLUSION
To conclude, we have shown that for the swollen Rouse chain, translocation is subdiffusive up to the configurational relaxation time (Rouse time) of the molecule, after which it has a further Fickian regime before the longer dwell time is exhausted: the mean-square displacement along the chain is described by ∆s²(t) ∼ t^{(1+ν)/(1+2ν)} up to t ∼ N^{1+2ν}, after which ∆s²(t) ∼ t. Consequently, the mean dwell time scales as τ_d ∼ N^{2+ν}.
In future work, we will study the role of hydrodynamics. Rouse friction may be an appropriate model for the dynamics of long biopolymers in the environment within living cells, if it is sufficiently gel-like to support screened hydrodynamics on the timescale of their configurational relaxation. However, we should also ask what is expected in the other extreme of unscreened (Zimm) hydrodynamics. For our theoretical discussion the key difference is that, instead of the Rouse time τ_R, in the Zimm case the configurational relaxation time scales with N as τ_Zimm ∼ N^{3ν} in 3D, which upon substitution into our earlier argument would give the lower-bound value α = (1 + ν)/(3ν) for the time exponent of the impedance, leading to τ_d ∼ N^{1+2ν} (whose resemblance to the Rouse time is a coincidence; note that with hydrodynamics the Rouse time loses all relevance). These results, however, do need to be verified by simulations incorporating hydrodynamics.
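The Zimm counterpart of the exponent bookkeeping is equally short; this check (scaling amplitudes suppressed, verifying the arithmetic of the claim rather than the hydrodynamics itself) confirms that τ_Zimm ∼ N^{3ν} with α = (1 + ν)/(3ν) leads to τ_d ∼ N^{1+2ν}:

```python
# Zimm case: relaxation time ~ N**(3*nu), subdiffusive exponent
# beta = (1+nu)/(3*nu); solving ds2(tau_d) = N**2 gives tau_d ~ N**(1+2*nu).
nu = 0.588
beta_zimm = (1 + nu) / (3 * nu)

relax_exp = 3 * nu                       # tau_Zimm ~ N**relax_exp
ds2_exp = relax_exp * beta_zimm          # ds2(tau_Zimm) ~ N**ds2_exp = N**(1+nu)
dwell_exp = relax_exp + 2 - ds2_exp      # tau_d ~ N**dwell_exp
```

As in the Rouse case, ∆s² reaches N^{1+ν} at the relaxation time, and the remaining diffusive stretch to N² brings the dwell-time exponent to 3ν + 2 − (1 + ν) = 1 + 2ν.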
FIG. 1 Our system to study translocation in this paper. It consists of two cells A and B connected by a pore of diameter unity in a membrane of thickness unity. Both cells have the same volume V (large compared to the polymer's typical size). The polymer repeatedly moves back and forth from one cell to the other through the pore. At any time, exactly one monomer can be within the pore. The Kuhn length of the polymer and the lattice spacing are also set to unity. Polymers can be in three different states: (i) state A: all monomers are in cell A; (ii) state T (threaded): some monomers are in cell A and the rest in cell B; (iii) state B: all monomers are in cell B. The dwell time τ_d is defined as the pore-blockade time in experiments, i.e., as how long the polymer spends in state T during a translocation event.
FIG. 3 Collapse of c(t) for different values of N, showing that the equilibration times for phantom and self-avoiding polymers scale as N² and N^{1+2ν} respectively. Here N is the polymer length. (a) Data for phantom polymers; simulations were run with the same definition of time for all values of N, and to achieve the data collapse the times were then scaled by a factor N². (b) Data for self-avoiding polymers; simulations were run with the same definition of time for all values of N, and to achieve the data collapse the times were scaled by a factor N^{1+2ν} (N^{2.2} to be precise). Note that the units of time are arbitrary and clearly not important for the scaling of polymer equilibration times.
pendent polymers, and c(t) is a further arithmetic mean of the corresponding 32 different time correlation coefficients. For different values of N we measure c(t) for both self-avoiding and phantom polymers. When we scale the units of time by factors of N² and N^{2.2} respectively for phantom and self-avoiding polymers, the c(t) vs. t curves collapse on top of each other. This is shown in Fig. 3. Note here that 1 + 2ν = 2.175, which is sufficiently close to 2.2, and in simulations of self-avoiding polymers [Fig. 2(b)] we cannot differentiate between 1 + 2ν and 2.2.

FIG. 4 Sequence of states in time during the translocation process of the polymer; in our system the polymers move repeatedly back and forth between cells A and B. See Fig. 1 and text for the definitions of the states A, B, T, M, T′ and M′.
FIG. 5 P_N(s_1, s_2, t) for N = 400, at t = 100. Note the data collapse when plotted as a function of (s_2 − s_1). The distribution differs slightly from a Gaussian.
FIG. 6 (colour online) Simulation results for the average chain tension component perpendicular to the membrane, proxied by ⟨z^{(5)}(∞)⟩ − ⟨z^{(5)}(t)⟩, following monomer injection at the pore corresponding to v(t) = pδ(t), with p = 10. See text for details. Red circles: N/2 = 50; blue circles: N/2 = 100; green circles: N/2 = 150; solid black line: t^{−(1+ν)/(1+2ν)} with ν = 0.588 for self-avoiding polymers. The steeper drop at large times corresponds to the exponential decay exp(−t/τ_R) (we have checked this, but it is not shown in this letter).
FIG. 7 (colour online) Double-logarithmic plot of the mean squared displacement of the reaction coordinate, ∆s²(t), as a function of time t, for N = 100 (orange), 200 (red) and 500 (blue). The thick black line indicates the theoretically expected slope corresponding to ∆s²(t) ∼ t^{(1+ν)/(1+2ν)}. The dashed black line corresponds to ∆s²(t) ∼ t^{2/(1+2ν)}, which would have been the slope of the ∆s²(t) vs. t curve in a double-logarithmic plot if τ_d were to scale as τ_R ∼ N^{1+2ν}.
τ_{A→M}, τ_{MM} and τ_{M→B} respectively are the mean first passage time to reach state M from state A, the mean time between the first occurrence of state M and the last occurrence of state M (with possible reoccurrences of state M), and the mean first passage time to reach state B from state M without any reoccurrence of state M. Since on average τ_{A→M} = τ_{M→B} due to symmetry, and all quantities on the r.h.s. of Eqs. (3) and (4) are strictly positive, the inequality (5) follows.
TABLE I: Median unthreading time τ_u over 1,024 runs for each N.

N      τ_u        τ_u/N^{2+ν}
100      65136    0.434
150     183423    0.428
200     393245    0.436
250     714619    0.445
300    1133948    0.440
400    2369379    0.437
500    4160669    0.431
(1) B. Dreiseikelmann, Microbiol. Rev. 58, 293 (1994).
(2) J. P. Henry et al., J. Membr. Biol. 112, 139 (1989).
(3) J. Akimaru et al., PNAS USA 88, 6545 (1991).
(4) D. Goerlich and T. A. Rappaport, Cell 75, 615 (1993).
(5) G. Schatz and B. Dobberstein, Science 271, 1519 (1996).
(6) J. J. Nakane, M. Akeson, A. Marziali, J. Phys.: Cond. Mat. 15, R1365 (2003).
(7) I. Szabò et al., J. Biol. Chem. 272, 25275 (1997).
(8) B. Hanss et al., PNAS USA 95, 1921 (1998).
(9) Yun-Long Tseng et al., Molecular Pharm. 62, 864 (2002).
(10) W. T. Wickner and H. F. Lodisch, Science 230, 400 (1995).
(11) S. M. Simon and G. Blobel, Cell 65, 1 (1991); D. Goerlich and I. W. Mattaj, Science 271, 1513 (1996); K. Verner and G. Schatz, Science 241, 1307 (1988).
(12) P. J. Sung and W. Park, Phys. Rev. E 57, 730 (1998); M. Muthukumar, Phys. Rev. Lett. 86, 3188 (2001).
(13) P. J. Sung and W. Park, Phys. Rev. Lett. 77, 783 (1996); M. Muthukumar, J. Chem. Phys. 111, 10371 (1999).
(14) D. K. Lubensky and D. R. Nelson, Biophys. J. 77, 1824 (1999); P. J. Park and W. Sung, J. Chem. Phys. 108, 3013 (1998); E. Slonkina and A. B. Kolomeisky, J. Chem. Phys. 118, 7112 (2003).
(15) J. K. Wolterink, G. T. Barkema and D. Panja, Phys. Rev. Lett. 96, 208301 (2006).
(16) J. Kasianowicz et al., PNAS USA 93, 13770 (1996); E. Henrickson et al., Phys. Rev. Lett. 85, 3057 (2000); A. Meller et al., Phys. Rev. Lett. 86, 3435 (2001); M. Akeson et al., Biophys. J. 77, 3227 (1999); A. Meller et al., PNAS USA 97, 1079 (2000); A. J. Storm et al., Nanoletters 5, 1193 (2005).
(17) J. Chuang et al., Phys. Rev. E 65, 011802 (2002); Y. Kantor and M. Kardar, Phys. Rev. E 69, 021806 (2004).
(18) I. Huopaniemi, K. Luo, T. Ala-Nissila, S.-C. Ying, J. Chem. Phys. 125, 124901 (2006).
(19) The exponent ν is also known as the Flory exponent and sometimes also referred to as the "swelling exponent".
(20) R. Metzler and J. Klafter, Biophys. J. 85, 2776 (2003).
(21) J. L. A. Dubbeldam, A. Milchev, V. G. Rostianshvili and T. A. Vilgis, e-print archive cond-mat/070166.
(22) Strictly speaking, τ_R in this expression should be replaced by the characteristic equilibration time of a tethered polymer with length of O(N); since both scale as N^{1+2ν}, we use τ_R here, favouring notational simplicity.
(23) A. van Heukelum and G. T. Barkema, J. Chem. Phys. 119, 8197 (2003).
(24) A. van Heukelum et al., Macromol. 36, 6662 (2003); J. Klein Wolterink et al., Macromol. 38, 2009 (2005).
(25) J. Klein Wolterink and G. T. Barkema, Mol. Phys. 103, 3083 (2005).
(26) A. Drzewinski and J. M. J. van Leeuwen, e-print archive cond-mat/0609281 (2006).
(27) In Ref. (15) we overlooked the τ_{MM} term of Eq. (3) to (mistakenly) deduce τ_d = 2τ_u. Nevertheless, due to the independence of Eq. (5) on N, our results for the scaling of τ_d in Ref. (15) remain unaffected.
(28) D. Panja and G. T. Barkema, cond-mat/0706.3969.
(29) P.-G. de Gennes, Scaling concepts in polymer physics (Cornell University Press, New York, 1979).
(30) M. Rubinstein, Phys. Rev. Lett. 59, 1946 (1987); T. A. J. Duke, Phys. Rev. Lett. 62, 2877 (1989).
(31) H. W. Diehl and M. Shpot, Nucl. Phys. B 528, 595 (1998).
(32) f_m is a number slightly smaller than 1: if the polymer reaches s = N/2 from state 0, due to the memory effects, it will have a slightly higher chance to go back to state 0 rather than to proceed to state N (15).
| [] |
[
"Structure of Abell 1995 from optical and X-ray data: a galaxy cluster with an elongated radio halo",
"Structure of Abell 1995 from optical and X-ray data: a galaxy cluster with an elongated radio halo"
] | [
"W Boschin \nFundación Galileo Galilei -INAF (Telescopio Nazionale Galileo)\nRambla José Ana Fernández Perez 7, Canary IslandsE-38712Breña Baja (La Palma)Spain\n\nDipartimento di Fisica dell'Università degli Studi di Trieste -Sezione di Astronomia\nvia Tiepolo 11I-34143TriesteItaly\n",
"M Girardi \nDipartimento di Fisica dell'Università degli Studi di Trieste -Sezione di Astronomia\nvia Tiepolo 11I-34143TriesteItaly\n\nINAF -Osservatorio Astronomico di Trieste\nvia Tiepolo 11I-34143TriesteItaly\n",
"R Barrena \nInstituto de Astrofísica de Canarias\nC/Vía Láctea s/n, Canary IslandsE-38205La Laguna (Tenerife)Spain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna\nAv. del Astrofísico Francisco Sánchez s/n, Canary IslandsE-38205La Laguna (Tenerife)Spain\n"
] | [
"Fundación Galileo Galilei -INAF (Telescopio Nazionale Galileo)\nRambla José Ana Fernández Perez 7, Canary IslandsE-38712Breña Baja (La Palma)Spain",
"Dipartimento di Fisica dell'Università degli Studi di Trieste -Sezione di Astronomia\nvia Tiepolo 11I-34143TriesteItaly",
"Dipartimento di Fisica dell'Università degli Studi di Trieste -Sezione di Astronomia\nvia Tiepolo 11I-34143TriesteItaly",
"INAF -Osservatorio Astronomico di Trieste\nvia Tiepolo 11I-34143TriesteItaly",
"Instituto de Astrofísica de Canarias\nC/Vía Láctea s/n, Canary IslandsE-38205La Laguna (Tenerife)Spain",
"Departamento de Astrofísica\nUniversidad de La Laguna\nAv. del Astrofísico Francisco Sánchez s/n, Canary IslandsE-38205La Laguna (Tenerife)Spain"
Structure of Abell 1995 from optical and X-ray data: a galaxy cluster with an elongated radio halo

W. Boschin (Fundación Galileo Galilei - INAF (Telescopio Nazionale Galileo), Rambla José Ana Fernández Perez 7, E-38712 Breña Baja (La Palma), Canary Islands, Spain; Dipartimento di Fisica dell'Università degli Studi di Trieste - Sezione di Astronomia, via Tiepolo 11, I-34143 Trieste, Italy)
M. Girardi (Dipartimento di Fisica dell'Università degli Studi di Trieste - Sezione di Astronomia, via Tiepolo 11, I-34143 Trieste, Italy; INAF - Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34143 Trieste, Italy)
R. Barrena (Instituto de Astrofísica de Canarias, C/Vía Láctea s/n, E-38205 La Laguna (Tenerife), Canary Islands, Spain; Departamento de Astrofísica, Universidad de La Laguna, Av. del Astrofísico Francisco Sánchez s/n, E-38205 La Laguna (Tenerife), Canary Islands, Spain)

Astronomy & Astrophysics manuscript no. 19508, © ESO 2014 (May 3, 2014); Received / Accepted. arXiv:1210.5927 (22 Oct 2012); DOI: 10.1051/0004-6361/201219508
Keywords: Galaxies: clusters: individual: Abell 1995 - Galaxies: clusters: general - X-rays: galaxies: clusters

Abstract

Context. Abell 1995 is a puzzling galaxy cluster hosting a powerful radio halo, but it has not yet been recognized as an obvious cluster merger, as usually expected for clusters with diffuse radio emission.
Aims. We aim at an exhaustive analysis of the internal structure of Abell 1995 to verify whether this cluster is really dynamically relaxed, as reported in previous studies.
Methods. We base our analysis on new and archival spectroscopic and photometric data for 126 galaxies in the field of Abell 1995. The study of the hot intracluster medium was performed on X-ray archival data.
Results. Based on 87 fiducial cluster members, we have computed the average cluster redshift z = 0.322 and the global radial velocity dispersion σ V ∼ 1300 km s −1 . We detect two main optical subclusters separated by 1.5 ′ that cause the known NE-SW elongation of the galaxy distribution and a significant velocity gradient in the same direction. As for the X-ray analysis, we confirm that the intracluster medium is mildly elongated, but we also detect three X-ray peaks. Two X-ray peaks are offset with respect to the two galaxy peaks and lie between them, thus suggesting a bimodal merger caught in a phase of post core-core passage. The third X-ray peak lies between the NE galaxy peak and a third, minor galaxy peak, suggesting a more complex merger. The difficulty of separating the two main systems leads to a large uncertainty on the line-of-sight (LOS) velocity separation and the system mass: ∆V rf,LOS = 600-2000 km s −1 and M sys = 2-5 ×10 15 h −1 70 M ⊙ , respectively. Simple analytical arguments suggest a merging scenario for Abell 1995, where two main subsystems are seen just after the collision with an intermediate projection angle.
Conclusions. The high mass of Abell 1995 and the evidence of merging suggest it is not atypical among clusters with known radio halos. Interestingly, our findings reinforce the previous evidence for the peculiar dichotomy between the dark matter and galaxy distributions observed in this cluster.
Introduction
In the past decades, multiwavelength observations from ground and from space have dramatically shown the complexity of the physical phenomena occurring in galaxy clusters. An intriguing aspect of these observations is the discovery of a growing number of clusters exhibiting diffuse radio emission (on Mpc scale), i.e. large-scale areas of radio emission without any obvious galaxy counterpart (Giovannini & Feretti 2002;Ferrari et al. 2008;Venturi 2011). Particularly prominent are the radio features known as radio halos, which usually pervade the central cluster regions in a similar way to the intracluster medium (hereafter ICM). Instead, radio emission areas found at the edges of clusters are known as radio relics.
Send offprint requests to: W. Boschin, e-mail: [email protected]

The cause of radio halos and relics is still under investigation. They are likely to result from synchrotron nonthermal radiation originating from relativistic electrons of the ICM moving in large-scale cluster magnetic fields. From a theoretical point of view, cluster mergers have been proposed as the key process for shedding light on the origin of these diffuse radio sources. In fact, the huge energy of these events could reaccelerate mildly relativistic electrons to relativistic energies and amplify the cluster magnetic fields (e.g., Feretti 1999). In particular, radio relics seem to be connected with large-scale shock waves occurring during mergers (e.g., Ensslin et al. 1998; Hoeft et al. 2004). Instead, radio halos are probably related to the turbulent motions of the ICM following a merger (e.g., Cassano et al. 2006; Brunetti et al. 2009), but the precise scenario is still being debated.
X-ray observations have been crucial to deriving the dynamical state of clusters hosting diffuse radio emission. Several statistical studies (see, e.g., Buote 2002; Cassano et al. 2010; Rossetti et al. 2011) have discovered interesting correlations between the properties of radio halos and relics and the ICM X-ray luminosity and temperature (Giovannini & Feretti 2002 and refs. therein). This is also true when comparing point-to-point the X-ray and radio surface brightnesses (Govoni et al. 2001). Nevertheless, in a pilot study using the Sunyaev-Zel'dovich effect, Basu (2012) finds a lack of bimodality in the radio power - integrated SZ effect measurement diagram, contrary to what is found in the radio power - X-ray luminosity diagram (Brunetti et al. 2007). This study shows the need to adopt more investigation techniques in addition to the X-ray data analysis.
Optical observations can be very helpful when checking for mergers in clusters with diffuse radio emission and studying their internal dynamics, too (e.g., Girardi & Biviano 2002). In particular, combined X-ray/optical studies can be very effective at revealing and quantifying the level of substructure, when checking for premerging subsystems and/or merger remnants. The power of this approach comes from the fact that mergers affect the ICM and the galaxy distributions in different ways, as shown by numerical simulations (e.g., Roettiger et al. 1997). Thus, X-ray and optical observations complement each other to provide a more complete picture of merger events.
It is with this scientific rationale in mind that we have begun a long-term optical observational program to investigate the properties of clusters hosting radio halos and/or relics: the DARC ("dynamical analysis of radio clusters") program (see . Among the dozens of clusters with known diffuse radio sources, we decided to perform an optical and Xray investigation of the interesting cluster Abell 1995 (hereafter A1995).
In the X-ray band, A1995 appears as a luminous and hot cluster: L X (0.1-2.4 keV)=13.42×10 44 h −2 50 erg s −1 (Böhringer et al. 2000); kT X = 7 − 9 keV (from Chandra data, see e.g. Baldi et al. 2007, Bonamente et al. 2008, and Ehlert & Ulmert 2009).
At optical wavelengths, A1995 is a rich cluster (Abell richness class = 1; Abell et al. 1989). Its light distribution is quite elongated in the NE-SW direction, but the mass shows a more circular distribution and is more concentrated than galaxies, as shown by the weak gravitational lensing reconstruction by Dahle et al. (2002). Holhjem et al. (2009) confirm this interesting discrepancy between light and mass distribution (see discussion in Sect. 5).
As for previous redshift data, Patel et al. (2000) obtained spectra for 15 member galaxies and estimate a cluster redshift of z = 0.322±0.001 and a radial velocity dispersion σ V = 1282 +153 −120 km s −1 . Irgens et al. (2002) confirm these measurements on the basis of 20 (unpublished) redshifts, six of them in common with those of Patel et al. (2000), with z = 0.3207 ± 0.0001 and σ V = 1130 +150 −110 km s −1 . Moreover, they find good agreement between the galaxy velocity dispersion and the dark matter velocity dispersion obtained from the weak gravitational lensing analysis (σ DM = 1240 ± 80 km s −1 ).
The X-ray ROSAT-HRI emission is peaked on a central bright galaxy and shows modest elongation in the NE-SW direction, which is not clearly separated from the emission of two very bright pointlike sources (see Fig. 1 of Patel et al. 2000, see also Ota & Mitsuda 2004). Ota & Mitsuda (2004) classify A1995 as a regular cluster from the stability of the X-ray centroid position and the good fit with a single β-model profile. However, despite its regular appearance, A1995 has a very large cooling time (t cool = 10.7 Gyr; see Fig. 12 of Ota & Mitsuda 2004) and thus no evidence of a cool core.
Summarizing previous results, there is some hint but no clear evidence of substructure in A1995, and indeed, on the basis of optical and X-ray appearance, Pedersen & Dahle (2007) include this cluster in the sample of relaxed systems.
As for the radio wavelengths, Owen et al. (1999) first reported a possible detection of a diffuse radio source in A1995. Giovannini et al. (2009) analyzed new VLA data and discovered an evident radio halo in this cluster, with a size of ∼ 0.8 h −1 70 Mpc and a radio power of P 1.4 GHz = 1.3 × 10 24 h −2 70 W Hz −1 . The radio halo appears somewhat elongated in the NE-SW direction (Giovannini et al. 2009; see also Fig. 1).
In the context of our DARC program, we proposed new spectroscopic observations of A1995 with the Telescopio Nazionale Galileo (TNG). We also performed new photometric observations at the Isaac Newton Telescope (INT) and used archival data of the Sloan Digital Sky Survey (SDSS). As for the analysis in the X-ray band, we used archival data downloaded from the Chandra Archive.
In this paper, Sect. 2 presents the new spectroscopic and photometric data. Section 3 describes the analysis of the optical data, while Sect. 4 presents the analysis of the Chandra archival data. Finally, in Sect. 5, we discuss our results and propose a scenario for the dynamical status of A1995.
Throughout this paper, we adopt a flat cosmology with H 0 = 70 km s −1 Mpc −1 (h 70 = H 0 /70 km s −1 Mpc −1 ), Ω 0 = 0.3 and Ω Λ = 0.7. With this choice of the cosmological parameters, the scale is ∼ 280 h −1 70 kpc/arcmin at the redshift of A1995. Errors are given at the 68% confidence level (hereafter c.l.), unless otherwise stated.
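The quoted scale (∼ 280 h −1 70 kpc/arcmin at the redshift of A1995) follows from the angular-diameter distance in the adopted flat ΛCDM cosmology. A minimal numerical sketch (not part of the original analysis; plain trapezoidal integration of the Hubble function) is:

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def kpc_per_arcmin(z, H0=70.0, Om0=0.3, OL0=0.7, n=10000):
    """Proper scale [kpc/arcmin] in a flat LCDM cosmology: comoving
    distance from trapezoidal integration of 1/E(z'), then the
    angular-diameter distance D_A = D_C / (1 + z)."""
    E = lambda zz: math.sqrt(Om0 * (1.0 + zz) ** 3 + OL0)
    dz = z / n
    integral = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, n):
        integral += 1.0 / E(i * dz)
    integral *= dz
    d_c = (C_KMS / H0) * integral          # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)                  # angular-diameter distance [Mpc]
    arcmin = math.pi / (180.0 * 60.0)      # 1 arcmin in radians
    return d_a * arcmin * 1000.0           # [kpc/arcmin]

print(round(kpc_per_arcmin(0.322), 1))  # → 280.5, i.e. ~280 kpc/arcmin
```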
Galaxy data and catalog
Spectroscopic observations
We performed spectroscopic observations of A1995 in May 2009. As usual for the clusters in our DARC sample, we used the instrument DOLORES@TNG 1 in MOS mode with the LR-B grism, which covers the wavelength range 3000-8430 Å. In summary, we obtained 143 spectra from four observed masks. For each mask, the total exposure time was 5400 s.
Reduction of spectra and radial velocities computation with the cross-correlation technique (CC; Tonry & Davis 1979) were performed using standard IRAF 2 tasks, as for other clusters included in our DARC sample (for a detailed description see, e.g., Boschin et al. 2012, hereafter B12). In eight cases (IDs. 02,03,08,14,16,29,52, and 91; see Table 1) the redshift was estimated with the EMSAO package (based on the wavelength location of emission lines in the spectra). Our spectroscopic catalog lists a total of 126 galaxies.
After comparing the velocity measurements for galaxies observed with multiple masks (see discussion in, e.g., Boschin et al. 2004), we corrected the velocity errors provided by the CC technique by multiplying them by a factor of ∼2. After taking the above correction into account, the median value of the cz errors is 74 km s −1 .
We also compared our data with those of Patel et al. (2000), finding 13 out of 15 galaxies in common. The two redshift measurements agree with a one-to-one relation, but the χ 2 of the fit reaches a reasonable value only by multiplying their errors by at least a factor of 1.5. This leads to their typical errors being three times larger than our typical errors. Considering these large errors, we preferred not to add this additional information and to study our (homogeneous) sample.

Fig. 1. Optical/X-ray/radio view of A1995 (direct sky view with north up). A smoothed Chandra 0.3-7 keV image (orange and yellow colors) of A1995 is superimposed on an R H -band image taken with the INT. Thin contours are the radio contour levels from Giovannini et al. (2009; VLA data at 1.4 GHz with discrete sources subtracted, courtesy of F. Govoni). Thick contours are the mass distribution contours from Holhjem et al. (2009). Numbers highlight notable galaxies mentioned in the text.
Photometric observations
We performed photometric observations of A1995 in January 2008 by using the Wide Field Camera (WFC 3 ) mounted on the 2.5m INT telescope. The sky conditions were photometric and the seeing was ∼1.4 ′′ . In particular, we observed with the Harris B H (15 exposures of 600 s) and R H (18 exposures of 300 s) filters. This means a total exposure time of 9000 s and 5400 s in each band, respectively.

3 see http://www.ing.iac.es/Astronomy/instruments/wfc/
We reduced the data and produced our photometric galaxy catalog by using standard procedures (see, e.g., B12 for details on the reduction of the WFC images). After transformation of B H and R H magnitudes into the B and R Johnson-Cousins magnitudes (Johnson & Morgan 1953; Cousins 1976) and magnitude correction for the galactic extinction (with A B ∼ 0.06 and A R ∼ 0.04, respectively; Schlegel et al. 1998), we estimated that the completeness of the photometric sample is 90% for R ≤ 21.2 and B ≤ 22.7.

Fig. 2 caption (fragment): ...Table 1). Numbers highlight the IDs of notable galaxies as in Fig. 1.
Galaxy catalog and notable galaxies
Table 1 collects all the spectroscopic and photometric information for the 126 galaxies with a measured redshift (see also Fig. 2): ID and IDm (Cols. 1 and 2) are the identification numbers of each galaxy and of member galaxies, respectively; Col. 3 reports the equatorial coordinates, α and δ (J2000); Cols. 4 and 5 list the B and R magnitudes; Col. 6 lists the radial (heliocentrically corrected) velocities, v = cz ⊙ , with their errors, ∆v (Col. 7).
A1995 hosts a dominant galaxy (ID. 61, R = 17.78, hereafter BCG) ∼ 0.8 mag brighter than the second brightest member galaxy (ID. 76). The X-ray ROSAT-HRI emission peaks on the BCG (e.g., Patel et al. 2000).
There are several X-ray and radio emitting galaxies in the field of A1995. Among them, our ID. 89 (a background galaxy) is an evident pointlike source in Chandra archival data (see Fig. 1). Cooray et al. (1998) highlight two bright radio pointlike sources in A1995: 1453+5803 and 1452+5801. Taking a look at the radio map provided by Giovannini et al. (2009, see their Fig. 8, right panel), we identify 1453+5803 with our ID. 71, which is a cluster member. Instead, 1452+5801 is likely a background source.
Analysis of the optical data
Member selection
As for other DARC clusters, we selected cluster members by running two statistical tests. First, we used the 1D-DEDICA method (Pisani 1993 and 1996) on the 126 galaxies with redshifts. The method detects A1995 as a significant peak (at >99% c.l.) in the velocity distribution located at z ∼ 0.322. The peak includes 94 (provisional) member galaxies (see Fig. 3).
Then, we used a second tool for member selection, which uses both the spatial and velocity information: the "shifting gapper" method (see also, e.g., Girardi et al. 2011 for details on the application of this technique). Here, we only point out that the method needs the definition of a cluster center. For A1995, we chose the location of the BCG (see Table 1). The application of the "shifting gapper" rejected another seven galaxies in the outskirts of the cluster (see Fig. 4).

Fig. 4. The ordinate is the rest-frame velocity, the abscissa the projected clustercentric distance. Galaxies rejected by the "shifting gapper" procedure are shown with (red) crosses. For the cluster center we adopt the location of the BCG (see text). Big green circles and small points show the differential and integral profiles of the mean velocity (in the middle panel) and the radial velocity dispersion (in the bottom panel). In the bottom panel, two horizontal lines mark the range of possible values for the ICM temperature (7-9 keV) with their respective errors transformed to σ V (see Sect. 5 for details).
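The "shifting gapper" member selection described above can be sketched as follows. This is only a simplified illustration: the bin size and the fixed velocity gap are hypothetical parameters, not the values used by the authors (implementations usually adapt both to the local sampling).

```python
def shifting_gapper(r, v, bin_size=15, gap=1000.0):
    """Simplified 'shifting gapper': galaxies are ordered by clustercentric
    distance r, grouped into successive radial bins, and within each bin
    the galaxies separated by more than `gap` km/s from the main velocity
    body (grown outward from the median) are rejected."""
    order = sorted(range(len(r)), key=lambda i: r[i])
    keep = set()
    for start in range(0, len(order), bin_size):
        idx = order[start:start + bin_size]
        vs = sorted(idx, key=lambda i: v[i])   # bin members ordered in velocity
        lo = hi = len(vs) // 2                 # start from the median galaxy
        while lo > 0 and v[vs[lo]] - v[vs[lo - 1]] <= gap:
            lo -= 1
        while hi < len(vs) - 1 and v[vs[hi + 1]] - v[vs[hi]] <= gap:
            hi += 1
        keep.update(vs[lo:hi + 1])
    return sorted(keep)
```

Applied to a toy sample with two high-velocity interlopers at large radius, the procedure keeps the main body and rejects the outliers.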
Global cluster properties
The first and second moments of a distribution can be efficiently computed by using the biweight estimators of location and scale (Beers et al. 1990). Their application to the velocity distribution of our 87 cluster members provided a measurement of the mean cluster redshift and of the global radial velocity dispersion, where we found z = 0.3217± 0.0005 (i.e. v = 96 437±140 km s −1 ) and σ V = 1302 +107 −71 km s −1 , respectively. Analysis of the velocity dispersion profile (Fig. 4) suggests that the value computed for σ V is quite robust. In fact, the integral profile is asymptotically flat within the errors. Instead, the decrease in the differential profile agrees with the fact that the cluster is free of interlopers, as also seen in the velocity vs. clustercentric distance diagram (Fig. 4 -top panel). As for the mean velocity profile, the modestly higher values of the mean velocity in the external region ( Fig. 4 -middle panel) probably come from the slightly larger sampling in the NE external regions, where galaxies have higher velocities (see Sect. 3.5).
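The biweight estimators of location and scale (Beers et al. 1990) used above for z and σ V can be written compactly. The sketch below is a standard, single-pass form with the conventional tuning constants c = 6 (location) and c = 9 (scale); it is not the authors' actual code.

```python
import math
import statistics

def biweight_location(x, c=6.0, eps=1e-8):
    """Biweight estimator of location (C_BI of Beers et al. 1990)."""
    m = statistics.median(x)
    mad = statistics.median([abs(xi - m) for xi in x]) or eps
    num = den = 0.0
    for xi in x:
        u = (xi - m) / (c * mad)
        if abs(u) < 1.0:
            w = (1.0 - u * u) ** 2
            num += (xi - m) * w
            den += w
    return m + num / den

def biweight_scale(x, c=9.0, eps=1e-8):
    """Biweight estimator of scale (S_BI of Beers et al. 1990)."""
    m = statistics.median(x)
    mad = statistics.median([abs(xi - m) for xi in x]) or eps
    num = den = 0.0
    for xi in x:
        u = (xi - m) / (c * mad)
        if abs(u) < 1.0:
            num += (xi - m) ** 2 * (1.0 - u * u) ** 4
            den += (1.0 - u * u) * (1.0 - 5.0 * u * u)
    return math.sqrt(len(x) * num) / abs(den)
```

For Gaussian data the two estimators recover the mean and standard deviation while remaining robust against outliers in the velocity distribution.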
Analysis of the velocity distribution
Deviations of the velocity distribution from Gaussianity can provide signs of disturbed dynamics. This can be checked by applying classical shape estimators (Bird & Beers 1993). We did not find significant evidence of departures from Gaussianity by using the skewness, the kurtosis, and the STI estimators.
We also searched for significant gaps in the velocity distribution. In particular, we performed the weighted gap analysis (Beers et al. 1991 and 1992). We detected one significant gap that divides the cluster into two groups (see Fig. 5): GV1 (with 42 galaxies and lower velocities) and GV2 (with 45 galaxies and higher velocities). The BCG resides in the GV1 group, but it lies on the border with GV2.
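In the weighted gap analysis, the gaps g_i between adjacent sorted velocities are weighted by w_i = i(N − i), and normalized weighted gaps above ∼2.25 are conventionally considered significant. A sketch follows; the normalization by the midmean (mean of the central 50%) of the weighted gaps is an assumption of this illustration:

```python
import math

def weighted_gaps(v):
    """Normalized weighted gaps of a sorted sample: values above ~2.25
    flag statistically notable gaps in the velocity distribution."""
    x = sorted(v)
    n = len(x)
    wg = [math.sqrt(i * (n - i) * (x[i] - x[i - 1])) for i in range(1, n)]
    # normalize by the midmean (mean of the central 50%) of the weighted gaps
    s = sorted(wg)
    q = len(s) // 4
    central = s[q:len(s) - q] or s
    mm = sum(central) / len(central)
    return [g / mm for g in wg]
```

On a toy bimodal sample the single large gap stands out well above the 2.25 threshold, at the index separating the two groups.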
When considering the spatial distributions of the galaxies of GV1 and GV2, we found that they are different at the 99.94% c.l. according to the 2D KS-test (Fasano 1987; see our Fig. 6). A statistical test useful to search for possible subsets in the velocity distribution is the 1D-Kaye's mixture model test (1D-KMM; Ashman et al. 1994; see also, e.g., B12 for a description of the method).
We applied this technique by assessing whether a two-Gaussian partition (according to the detection of the two groups GV1 and GV2) can provide a significantly better fit to the velocity distribution than a sole Gaussian. The result of the 1D-KMM test is negative. However, the best-fit result of the 1D-KMM method gives two groups (KMM1D-1 and KMM1D-2) of 40 and 47 galaxies, very similar to the groups GV1 and GV2 (but note that the BCG now belongs to KMM1D-2, the high-velocity group). The spatial distributions of the galaxies of KMM1D-1 and KMM1D-2 are different, too (at the 99.75% c.l.). In Fig. 5 and Table 2 we present the results for the two Gaussians, with the velocity dispersions computed by the 1D-KMM procedure from the membership probabilities of the galaxies rather than from the galaxy group populations after the assignment. In this way we could bypass the artificial truncation of the velocity distribution tails of KMM1D-1 and KMM1D-2, thus minimizing the danger of heavily underestimating the velocity dispersions of the two subclusters.
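The KMM test fits a Gaussian mixture and assigns membership probabilities. A bare-bones EM fit of a two-component 1D Gaussian mixture (in the spirit of KMM, but without the likelihood-ratio significance test that KMM performs) can be sketched as:

```python
import math
import statistics

def em_two_gaussians(x, mu1, mu2, n_iter=200):
    """Minimal EM fit of a two-component 1D Gaussian mixture.
    mu1, mu2 are initial guesses for the component means; returns
    (mean, dispersion) per component and the first component's weight."""
    n = len(x)
    s1 = s2 = statistics.pstdev(x) or 1.0
    p1 = 0.5
    for _ in range(n_iter):
        # E-step: membership probabilities (normalization constants cancel)
        r = []
        for xi in x:
            g1 = p1 * math.exp(-0.5 * ((xi - mu1) / s1) ** 2) / s1
            g2 = (1 - p1) * math.exp(-0.5 * ((xi - mu2) / s2) ** 2) / s2
            r.append(g1 / (g1 + g2))
        # M-step: update weight, means and dispersions
        w1 = sum(r); w2 = n - w1
        p1 = w1 / n
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / w1
        mu2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / w2
        s1 = math.sqrt(sum(ri * (xi - mu1) ** 2 for ri, xi in zip(r, x)) / w1)
        s2 = math.sqrt(sum((1 - ri) * (xi - mu2) ** 2 for ri, xi in zip(r, x)) / w2)
    return (mu1, s1), (mu2, s2), p1
```

Because the dispersions are computed from the membership probabilities, the tails of each component are not artificially truncated — the same point made in the text for the 1D-KMM velocity dispersions.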
Analysis of the galaxy spatial distribution
We applied the 2D-DEDICA method to the sky positions of A1995 member galaxies to search for possible subsets in the galaxy spatial distribution. We found a NE-SW elongated structure with two significant peaks, one of them (the NE one) centered on the BCG (Fig. 7). However, this finding could be affected by the magnitude incompleteness of our spectroscopic sample. To test the robustness of the 2D-DEDICA results, we resorted to the photometric catalogs.
We used the color-magnitude relation (hereafter CMR) of early-type galaxies (the dominant galaxy population in the densest, internal cluster regions; e.g. Dressler 1980) to select likely cluster members from our photometric sample (see B12 for details on the technique used for the determination of the CMR and the selection of member galaxies). We found B-R = 4.164 − 0.081 × R (see Fig. 8). Figure 9 illustrates the contour map for the likely cluster members (513 with R <21 in the whole INT field): we confirm that A1995 is well described by an elongated structure. The two most significant peaks have similar densities, one centered on the BCG galaxy and one ∼ 1.5 ′ to the SW. Table 3 lists information for these two main peaks (2D-NE and 2D-SW clumps), including N S , the number of members (Col. 2), the peak densities, ρ S , relative to the densest peak (Col. 4), and the significance of the peaks estimated through the value of χ 2 (Col. 5). We also detect a minor peak (2D-NENE in Table 3, the third in galaxy density) whose statistical significance, although nominally acceptable, is much lower than that of the two main peaks. We also considered the SDSS photometric catalogs (already corrected for Galactic absorption). In this case, we selected likely member galaxies by considering the CMRs in the (r ′ -i ′ vs. r ′ ) and (g ′ -r ′ vs. r ′ ) color-r ′ mag diagrams (see Goto et al. 2002 and B12). The two CMRs are r ′ -i ′ =1.009-0.022× r ′ and g ′ -r ′ =2.878-0.062× r ′ , respectively. The result for r ′ < 21.5 (the limit for the SDSS star/galaxy classification) confirms the results reported above.

Fig. 8. B-R vs. R diagram for the 126 galaxies with spectroscopic information. Small black squares are cluster members, while big red squares represent field galaxies. The solid line shows the best-fit CMR as computed from cluster members; the dashed lines mark the region where likely cluster members were selected from the photometric catalogs.

Fig. 9. Results of the 2D-DEDICA method (blue isodensity contour lines) applied to likely A1995 members (see text) with R <21. The origin of the coordinates is the location of the BCG (the big cross). Small circles mark the positions of likely cluster members relative to the BCG.
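The CMR-based member selection reduces, in essence, to a cut in color at fixed magnitude. A sketch using the B-R relation quoted above (the ±0.25 mag tolerance is an assumed width for illustration, not the authors' value):

```python
def cmr_members(R, BR, slope=-0.081, intercept=4.164, width=0.25):
    """Indices of likely red-sequence members: galaxies within +/- width
    mag of the best-fit CMR B-R = 4.164 - 0.081 R."""
    return [i for i in range(len(R))
            if abs(BR[i] - (intercept + slope * R[i])) < width]
```

Galaxies far from the red sequence (e.g. blue field galaxies) are excluded by the color cut.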
3D analysis: combining spatial and velocity information
The 3D tests searching for correlations between positions and velocities of member galaxies are powerful tools for revealing real substructures in clusters. First, we checked for a velocity gradient in the set of the 87 fiducial cluster members (see, e.g., den Hartog & Katgert 1996 and Girardi et al. 1996). We found a significant (at the 96% c.l.) velocity gradient with PA = 48 +33 −36 degrees (counterclockwise from north). This means that the NE region of the cluster is populated by higher velocity galaxies (see Fig. 6).
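A velocity gradient is usually measured by fitting a plane v = a + b·x + c·y to the galaxy positions and velocities and converting the gradient direction into a position angle. A self-contained least-squares sketch (sky convention assumed here: x offset toward east, y toward north, PA from north toward east):

```python
import math

def velocity_gradient(x, y, v):
    """Least-squares fit of v = a + b*x + c*y; returns the position angle
    (degrees from north toward east) of the velocity gradient (b, c)."""
    n = len(v)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    syy = sum(yi * yi for yi in y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    A = [[float(n), sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]
    B = [sum(v),
         sum(xi * vi for xi, vi in zip(x, v)),
         sum(yi * vi for yi, vi in zip(y, v))]
    # Gaussian elimination without pivoting (fine for well-conditioned input)
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            B[j] -= f * B[i]
    c = B[2] / A[2][2]
    b = (B[1] - A[1][2] * c) / A[1][1]
    return math.degrees(math.atan2(b, c)) % 360.0
```

For a toy field with velocities increasing toward the NE the recovered PA is 45 degrees, analogous to the NE gradient found for A1995.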
A classical test to detect the presence of substructures is the ∆-statistics (Dressler & Shectman 1988; hereafter DS-test). We applied this test by defining, for each galaxy, the δ parameter (determined considering the N nn =10 neighbors of each galaxy; see, e.g., B12 for a description of the method) and computing the departure ∆ of the local kinematic quantities from the global parameters. We also applied the ǫ-test (Bird 1993) and the α-test (West & Bothun 1990; see also, e.g., Pinkney et al. 1996 and Ferrari et al. 2003 for details of these 3D tests). Both the DS-test and the α-test detected the presence of substructures (at the 98.5% and 95% c.l., respectively).
A better interpretation of the results of the DS-test can be reached by splitting the contributions of the local mean velocity (estimator δ V ) and dispersion (estimator δ s ) to the classical δ parameter (see B12 for details; see also Girardi et al. 1997;. Moreover, we investigated the results of the DS-test obtained by changing the number of neighbors.
The two panels of Fig. 10 illustrate the significant results for the two kinematical indicators. This figure shows the spatial distribution of the 87 cluster members. Each galaxy is indicated by a bubble, whose size is related to the value of δ, the deviation of the local kinematic estimator from the global cluster value. As for the δ V estimator, the substructure significance increases up to > 99.9% c.l. when N nn =40. Figure 10 (upper panel) shows that there are two regions of different local mean velocities at NE and SW, in agreement with the velocity gradient discussed above, and likely corresponding to low- and high-velocity galaxy populations. When considering the δ s estimator, the substructure significance is higher at smaller N nn . Figure 10 (lower panel) shows a NE region of low local velocity dispersions (N nn =10, significance at the 96% c.l.). Probably, in that region, there is a minor mixing of the low- and high-velocity populations.
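The δ parameter of the DS-test combines the deviations of the local mean velocity and local velocity dispersion from the global values. A direct implementation of the classical estimator follows (the significance would then be calibrated by Monte Carlo reshuffling of the velocities, which is not shown):

```python
import math

def ds_statistic(x, y, v, n_nn=10):
    """Dressler & Shectman Delta statistic: for each galaxy, compare the
    mean velocity and dispersion of its n_nn nearest neighbours (plus the
    galaxy itself) with the global values; Delta = sum of the delta_i."""
    n = len(v)
    vbar = sum(v) / n
    sig = math.sqrt(sum((vi - vbar) ** 2 for vi in v) / (n - 1))
    delta_sum = 0.0
    for i in range(n):
        order = sorted(range(n),
                       key=lambda j: (x[j] - x[i]) ** 2 + (y[j] - y[i]) ** 2)
        grp = order[:n_nn + 1]            # the galaxy plus its n_nn neighbours
        m = len(grp)
        vloc = sum(v[j] for j in grp) / m
        sloc = math.sqrt(sum((v[j] - vloc) ** 2 for j in grp) / (m - 1))
        delta2 = (m / sig ** 2) * ((vloc - vbar) ** 2 + (sloc - sig) ** 2)
        delta_sum += math.sqrt(delta2)
    return delta_sum
```

On a toy field with two spatially separated clumps at different mean velocities, Δ is much larger than for the same positions with kinematically mixed velocities, which is exactly what the test is designed to detect.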
We also applied the full 3D-KMM method. Considering GV1 and GV2 as a guess for an initial two-group partition, we find that the 3D galaxy distribution is well described by a partition of 49 and 38 galaxies (KMM3D-1 and KMM3D-2 groups). The improvement over the sole 3D Gaussian is significant at the 98% c.l. Figure 11 illustrates the distribution of KMM3D-1 and KMM3D-2 on the plane of the sky and the two Gaussians in the velocity distribution. Table 2 lists the main properties of these two groups. With this 3D test, the values of the velocity dispersions of KMM3D-1 and KMM3D-2 are much higher than those of the corresponding groups obtained with the 1D methods previously described.
We finally applied the Htree method (Serna & Gerbal 1996). This method performs a hierarchical clustering analysis and returns the relationship between cluster members based on their relative binding energies (see also, e.g., B12 and Girardi et al. 2011). Figure 12 illustrates the results of the Htree method. In this dendrogram the binding energy is reported in abscissa (in arbitrary units) and galaxy pairs and subgroups lie to the left (at lower energy levels). Going down from the top of the dendrogram, we note the secondary system HT2 and the main system HT1. HT2 is a group at high velocity. HT1 is the main system and shows a low-velocity substructure (HT12), while its core (HT11) is dominated by the BCG. The spatial distributions of galaxies of HT1, HT2, and HT12 are plotted in Fig. 13. When using the results of the Htree method as seeds for the 3D-KMM method, we do not find any two- or three-group partition significantly better than the sole 3D-Gaussian.

Fig. 10. Positions on the plane of the sky of 87 cluster members (marked by bubbles). Top panel: DS-test bubble plot for the deviation δ V,i and N nn = 40 (see text for an explanation). Bottom panel: as above, but for the deviation δ s,i and N nn = 10. In both panels, thin/blue (thick/red) bubbles show regions with a local value lower (higher) than the global one.

Fig. 11. 2D distribution of 87 cluster members. Galaxies with smaller (larger) symbols have higher (lower) radial velocities. Big blue circles and red crosses indicate galaxies of KMM3D-1 and KMM3D-2. The insert plot shows the same velocity distribution of Fig. 5 and the two Gaussians corresponding to the mean velocities and velocity dispersions of KMM3D-1 and KMM3D-2 (blue thin line and red thick line, respectively).
X-ray morphological analysis
We studied the morphological properties of the ICM by using archival X-ray data taken with Chandra ACIS-S (exposure ID #906, total exposure time 58 ks). We reduced the data using the package CIAO 4 (ver. 4.2) on the chip S3 in a standard way (see, e.g., Boschin et al. 2004).
The reduced image (see Fig. 14) reveals that the ICM exhibits a regular morphology. From a quantitative point of view, this result is supported by the power-ratio (Buote & Tsai 1996) analysis performed by Hart (2008). Moreover, we used the task CIAO/Wavdetect on chip S3 to perform a wavelet multiscale analysis and found no evidence of multiple clumps in the ICM.
We also used the CIAO package Sherpa to fit an elliptical β-model profile (e.g. Boschin et al. 2004) to the X-ray photon distribution (after removing of the pointlike sources found with CIAO/Wavdetect). The best-fit model is characterized by a centroid position located ∼4 ′′ east of the BCG and a core radius r 0 = 47.3±1.2 arcsec (221±6 h −1 70 kpc). The best-fit values for other parameters are: ǫ = 0.213±0.007 (ellipticity), α = 1.74±0.04 (power law index), and θ=149.8±1.1 degrees (angle of ellipticity).
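The elliptical β-model fitted with Sherpa can be parameterized in several equivalent ways; the sketch below shows one common form of the surface-brightness profile, evaluated with the best-fit values from the text as example inputs (the exact Sherpa beta2d convention may differ in detail, so this is an illustration rather than a reproduction of the fit):

```python
import math

def elliptical_beta_model(x, y, x0, y0, r0, eps, alpha, theta_deg, s0=1.0):
    """Elliptical beta-model surface brightness S = s0 (1 + (r/r0)^2)^(-alpha),
    with r an elliptical radius of ellipticity eps (< 1) and position
    angle theta_deg; (x0, y0) is the centroid."""
    t = math.radians(theta_deg)
    dx, dy = x - x0, y - y0
    # rotate into the frame aligned with the ellipse axes
    xm = dx * math.cos(t) + dy * math.sin(t)
    ym = -dx * math.sin(t) + dy * math.cos(t)
    r2 = (xm ** 2 * (1.0 - eps) ** 2 + ym ** 2) / ((1.0 - eps) ** 2 * r0 ** 2)
    return s0 * (1.0 + r2) ** (-alpha)
```

At the centroid the profile equals s0, and one core radius away (along the major axis, for eps = 0) it has dropped by a factor 2^alpha, which gives a quick sanity check of the parameterization.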
The reduced CSTAT statistic (Cash 1979) of the fit is 1.22. Thus, the elliptical beta model is an adequate description of the data. However, we checked for possible deviations of the X-ray photon distribution from the above model by computing the model residuals. We find a deficiency of photons in a dumbbell-shaped region extending along the SSE-NNW direction in proximity of the X-ray centroid position (see Fig. 15). Regions with positive residuals elongated in the NE-SW direction are also found all around the cluster center.

4 see http://asc.harvard.edu/ciao/

Fig. 12. Results obtained with the Htree method (see text). In this dendrogram, the horizontal axis reports the binding energy while the vertical axis shows the IDms of the member galaxies (as in Table 1).
Discussion and conclusions
Our estimate of the velocity dispersion (σ V = 1302 +107 −71 km s −1 ) agrees with those of Patel et al. (2000) and Irgens et al. (2002) computed on a much smaller galaxy sample. This high value of σ V predicts kT X = 10.3 +1.4 −1.1 keV (assuming energy equipartition between galaxies and gas energy per unit mass), and thus agrees with the high measured X-ray temperature kT X = 7 − 9 keV (from Chandra data, see Baldi et al. 2007, Bonamente et al. 2008, and Ehlert & Ulmert 2009). Both σ V and T X suggest that A1995 is a massive galaxy cluster.
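The prediction kT X = 10.3 keV from σ V assumes energy equipartition between galaxies and gas energy per unit mass, i.e. kT = μ m p σ V 2 . Taking a mean molecular weight μ ≃ 0.58 (an assumed standard value, not quoted in the text), the quoted central value is reproduced:

```python
def kT_from_sigma(sigma_v, mu=0.58):
    """Predicted ICM temperature [keV] from the radial velocity dispersion
    [km/s], assuming energy equipartition: kT = mu * m_p * sigma_v^2."""
    m_p_keV = 938272.0      # proton rest energy [keV]
    c_kms = 299792.458      # speed of light [km/s]
    return mu * m_p_keV * (sigma_v / c_kms) ** 2

print(round(kT_from_sigma(1302.0), 1))  # → 10.3 keV, matching the text
```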
In the hypothesis of dynamical equilibrium (but see below) and under typical assumptions (cluster sphericity, galaxies and mass following identical distributions), we computed the virial global quantities. Following Girardi & Mezzetti (2001; see also B12 for details), we obtained a measurement of the mass within the virial radius R vir : M(< R vir = 2.7 h −1 70 Mpc) = 3.0 +0.9 −0.8 ×10 15 h −1 70 M ⊙ .
Cluster structure and mass
Substructure is detected using several analyses of the cluster galaxy population. Our optical analyses indicate the presence of two main subclusters aligned in the NE-SW direction, causing the velocity gradient towards the NE direction and separated by ∼ 1.5 ′ , i.e. a projected linear distance D ∼ 0.4 h −1 70 Mpc. The two subclusters, as determined through the 1D- or 3D-KMM methods, have comparable velocity dispersions within the errors, and in both cases they are likely two massive systems. The strong uncertainty in the subcluster membership propagates into the recomputation of the system mass obtained by summing the masses of the two subclumps, i.e., M sys ∼ 2-5 ×10 15 h −1 70 M ⊙ , where the low (high) value is computed for the 1D (3D) case with a rest-frame velocity separation of ∆V rf,LOS ∼ 2000 km s −1 (∼ 600 km s −1 ; see Table 2). Both the 1D and 3D methods have their drawbacks. In the 1D case, the two peaks are not clearly separated in the velocity distribution. In the 3D case, our spectroscopic sample is not spatially complete and is less extended than the estimated R 200 . We thus suggest that intermediate values are closer to the real one. This leads to a mass value not far from the virial mass previously computed from the global velocity dispersion.
Moreover, we also have indications of a more complex structure: the 2D-DEDICA analysis detects a third, minor central galaxy peak aligned along the NE-SW direction. Finally, the Htree method does not detect a clear bimodal structure.
For the X-ray data analysis, we confirm that the isophotes are elongated. More quantitatively, we find they are elongated with a position angle PA = θ-90 • =59.8 • (where PA is measured in a clockwise direction from the north; see Sect. 4). This PA agrees with the direction of the velocity gradient and is slightly larger than the PA suggested by the direction defined by the two main optical subclumps. This scenario is somewhat suggestive of a cluster merger. Moreover, while previous studies did not find any direct evidence of substructure in the X-ray emission (Ota & Mitsuda 2004; Pedersen & Dahle 2007), our detailed analysis of Chandra data suggests the presence of multiple clumps. The elliptical β-model residuals resemble the residuals computed from the fit of the X-ray photon distribution of the cluster Abell 2294 with a simple β-model (see their Fig. 12). For Abell 2294, the residual image is explained by the presence of two very closely projected subsystems. Our Fig. 15 suggests a similar scenario for A1995, with an excess of X-ray emission (positive residuals) in the NE-SW direction. The main central X-ray peak is close to the BCG. This peak and a SW one are offset with respect to the two main optical peaks and located between them, thus resembling the usual situation of a bimodal merger just after the core-core passage, with the collisional X-ray components slowed down with respect to the collisionless galaxy clumps. The NE X-ray peak lies between the main optical peak and the third (minor) optical peak, thus suggesting that a third subcluster intervenes in the merger. A fourth X-ray peak (excess of positive residuals) at the NW is less significant and is not considered. Summarizing, our results confirm the active dynamical state of A1995 from the thermodynamic point of view.
Merger kinematics and diffuse radio sources
Considering only the two main subclusters, the merger can be followed through the analytic method of the bimodal model (Beers et al. 1982; see also Boschin et al. 2010). For the values of the relevant parameters (D; M sys ; ∆V rf,LOS ), we used the values reported above, taking into account both the cases corresponding to the 1D- and 3D-KMM analyses (hereafter case-1D and case-3D). We assumed t = 0.2 Gyr for the time elapsed from the core crossing (a typical value for clusters hosting radio halos; e.g., Markevitch et al. 2002; Girardi et al. 2008). Figure 16 illustrates the solutions of the bimodal model, where the mass of the system M sys is plotted against α, the projection angle between the direction defined by the subclumps and the plane of the sky. In both case-1D and case-3D we find a bound outgoing solution (BO, see Fig. 16) compatible with α ∼ 50 •. This value agrees with the strong evidence of substructure in the 3D analyses, since 3D substructure is more easily detected at intermediate angles (Pinkney et al. 1996). Note that the assumption of significantly longer times t would lead to larger, unlikely angles of view (e.g., > 70 • for t > 1 Gyr).
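As a minimal numerical counterpart of the bimodal-model discussion, the sketch below (the input values are illustrative placeholders, not the measured D and ∆V rf,LOS of A1995) evaluates the standard Newtonian criterion for gravitational binding, V_r^2 R_p <= 2 G M_sys sin^2(α) cos(α), which gives the minimum system mass required for a bound solution at each projection angle α.

```python
import numpy as np

G = 4.301e-9  # gravitational constant in Mpc Msun^-1 (km/s)^2

def newtonian_bound_mass(v_r, d_proj, alpha_deg):
    """Minimum system mass (Msun) for the two clumps to be bound,
    from the Newtonian criterion V_r^2 R_p <= 2 G M sin^2(a) cos(a).

    v_r: line-of-sight relative velocity (km/s);
    d_proj: projected separation (Mpc); alpha_deg: projection angle (deg).
    """
    a = np.deg2rad(alpha_deg)
    return v_r**2 * d_proj / (2.0 * G * np.sin(a)**2 * np.cos(a))

# illustrative values only
v_r, d_proj = 1300.0, 0.5  # km/s, Mpc
for alpha in (30.0, 50.0, 70.0):
    print(f"alpha = {alpha:4.1f} deg -> M_bound = {newtonian_bound_mass(v_r, d_proj, alpha):.2e} Msun")
```

The required mass is minimized near alpha = 54.7 deg (where sin^2(α) cos(α) peaks), so intermediate viewing angles make bound solutions easiest to accommodate.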
As a comparison with the results in the radio band, Giovannini et al. (2009) show that, as usual, the radio halo of A1995 and the X-ray emitting ICM occupy the cluster volume in the same manner (see their Fig. 8 and our Fig. 1). We find that the elongation of the radio halo roughly agrees with the direction of the merger, as detected in several clusters, e.g. the Bullet Cluster (1E0657-56; Markevitch et al. 2002) and Abell 520 (Girardi et al. 2008); but see Abell 523 (Giovannini et al. 2011).
In conclusion, A1995, for its high mass and cluster merging evidence, is not atypical among clusters hosting radio halos. The remaining puzzling point concerns the dark matter distribution, which is quite circularly symmetric with respect to the galaxy distribution (Holhjem et al. 2009), the latter indicating a NE-SW preferential direction. Owing to the assumed collisionless nature of both galaxies and dark matter particles, this is surprising evidence. Moreover, it disagrees with the results found for other extensively studied clusters, such as the Bullet Cluster (e.g., Markevitch et al. 2002) or the clusters CL 0152-1357 and MS 1054-0321 (Jee et al. 2005a, 2005b), where a coincidence is found between the galaxy and dark matter distributions. Nevertheless, the peculiarity of A1995 is not as extreme as in other clusters, such as CL 0024+17, where evidence was found for a dark matter ring (Jee et al. 2007) without an obvious galaxy counterpart (Qin et al. 2008; but see also Ponente & Diego 2011, who suggest that dark matter ring-like structures could be due to systematics in the lensing reconstruction). In any case, the observed dichotomy between the dark matter and galaxy distributions makes A1995 an appealing target for future studies.

Fig. 16. System mass - projection angle diagram of the analytic bimodal model applied to the two subclusters. Thin/red (thick/blue) lines refer to the case-1D (case-3D; see text). In particular, bound incoming (collapsing; BI a and BI b ) and bound outgoing (expanding; BO) solutions are shown as solid curves, while unbound outgoing (UO) solutions are shown as dashed curves. The horizontal lines give the computed values of the system mass for the two cases. The dotted curves separate bound and unbound regions (above and below the dotted curves, respectively).
Fig. 1. Optical/X-ray/radio view of A1995 (direct sky view with north up). A smoothed Chandra 0.3-7 keV image (orange and yellow colors) of A1995 is superimposed on an R_H-band image taken with the INT. Thin contours are the radio contour levels from Giovannini et al. (2009; VLA data at 1.4 GHz with discrete sources subtracted, courtesy of F. Govoni). Thick contours are the mass distribution contours from Holhjem et al. (2009). Numbers highlight notable galaxies mentioned in the text.

Fig. 2. Field of the cluster A1995 (direct sky view with north up) observed with the INT in the R_H-band. Cluster members are indicated by circles, while nonmember galaxies are shown by squares (see Fig. 4, top panel).

Fig. 3. Histogram of 126 galaxy redshifts. The 94 (provisional) cluster members are highlighted with the solid line (see text).

Fig. 4. The 94 (provisional) cluster members (see also Fig. 3) in the phase space (top panel).

Fig. 5. The 87 galaxies recognized as cluster members. Top panel: radial velocity distribution, with the arrow indicating the velocity of the BCG. Red and blue Gaussians are the best two-group partition fits according to the 1D-KMM test (see text). Bottom panel: stripe density plot. The position of the significant gap is marked by an arrow.

Fig. 6. 2D distribution of the 87 cluster members. Galaxies with smaller (larger) symbols have higher (lower) radial velocities. Blue circles and red crosses identify galaxies of GV1 and GV2. The origin of the coordinates is the location of the BCG. The solid and dashed lines indicate the position angle of the cluster velocity gradient (see text) and relative errors, respectively.

Fig. 7. Isodensity contours of the cluster members' spatial distribution computed with 2D-DEDICA. The origin of the coordinates is the location of the BCG (the big cross). Small circles mark the positions of cluster members relative to the BCG.

Fig. 13. 2D distribution of the 87 cluster members. Big blue circles and red crosses indicate galaxies of HT1 (main system) and HT2 (high-velocity subcluster). The insert plot shows the same velocity distribution of Fig. 5 (dashed line) and that of HT1 (blue thin line) and HT2 (red thick line). The subcluster HT12 of HT1 is indicated by small blue crosses in the main plot, and this causes the low-velocity tail in the insert plot.

Fig. 14. Chandra image of A1995 (smoothed X-ray emission in the energy band 0.3-7 keV, direct sky view with north up). The field of view is 8.5 ′ × 8.5 ′.

Fig. 15. The smoothed X-ray emission of A1995 (direct sky view with north up). Black (white) contours show the positive (negative) elliptical β-model residuals. Big crosses indicate the two main optical clumps detected with the 2D-DEDICA method (2D-NE and 2D-SW in Table 3). The small cross indicates the third densest peak we detect (2D-NENE).
Table 1. Catalog of 126 galaxies in the field of A1995 with measured radial velocities († highlights the ID of the BCG).

ID  IDm  α, δ (J2000)                B      R      v ( km s −1 )  ∆v ( km s −1 )
01  −    14 52 18.12, +58 01 33.3   19.18  18.16   18112         101
02  −    14 52 24.17, +58 00 27.5   22.31  21.18   74182          82
03  −    14 52 26.31, +58 02 34.0   20.50  18.75   70357         170
04  −    14 52 30.82, +58 02 58.2   19.45  18.40   49107         129
05  1    14 52 33.02, +58 02 27.6   22.62  20.09   95649          48
06  2    14 52 33.15, +58 00 13.9   22.75  20.38   97643          72
07  −    14 52 33.19, +57 59 33.2   22.93  20.35  121114         208
08  3    14 52 34.39, +57 59 54.6   22.77  21.41   95562         100
09  −    14 52 34.92, +58 00 40.5   22.54  19.91  128719         141
10  −    14 52 36.19, +58 00 40.6   21.77  20.36  103177         105
11  4    14 52 36.53, +58 00 44.9   22.42  20.11   95879          77
12  −    14 52 37.80, +58 01 26.9   21.99  19.96  138505          73
13  −    14 52 38.18, +57 59 15.2   22.67  20.78  134283         128
14  −    14 52 39.12, +58 00 54.0   22.82  21.22  143292         100
15  5    14 52 39.87, +58 02 28.6   23.42  20.93   94671          73
16  −    14 52 40.85, +57 59 19.3   23.29  21.42  103596         100
17  −    14 52 40.89, +58 00 14.5   22.36  19.50  128750          72
18  6    14 52 41.04, +58 03 38.5   22.07  19.58   95191          41
19  7    14 52 41.47, +58 04 02.7   22.62  20.15   96695          97
20  8    14 52 42.26, +57 59 29.3   23.54  20.83   96151          76
21  9    14 52 42.72, +58 00 41.2   22.95  20.43   94533          85
22  −    14 52 43.56, +58 03 04.4   21.82  19.85  142517         117
23  10   14 52 43.78, +58 01 29.5   21.29  18.81   98437          65
24  11   14 52 45.07, +58 03 13.3   22.27  19.72   93027         120
25  12   14 52 46.39, +58 01 12.4   21.82  19.50   95659          88
26  13   14 52 46.53, +58 01 36.7   23.31  20.62   97678         116
27  14   14 52 47.26, +58 01 57.2   22.04  19.45   95027          49
28  15   14 52 47.45, +58 01 37.0   22.52  19.76   98072         108
29  −    14 52 47.86, +58 01 10.0   22.66  21.46  134784         100
30  16   14 52 47.86, +58 02 01.0   21.73  19.37   93980          45
31  −    14 52 48.86, +58 04 39.9   20.17  18.99   20008          68
32  17   14 52 48.98, +58 01 48.2   23.16  20.54   96159          85
33  18   14 52 49.20, +58 02 18.7   22.50  19.87   95919          85
34  19   14 52 49.92, +58 00 26.3   20.80  19.18   99092          87
35  20   14 52 50.11, +58 01 26.9   21.37  18.88   94063          57
36  21   14 52 50.33, +58 01 50.7   23.05  20.32   98429          77
37  −    14 52 50.67, +58 04 06.1   23.84  20.76  157123          60
38  22   14 52 51.17, +58 02 04.6   22.37  19.68   97314          81
39  23   14 52 51.38, +58 02 36.5   21.99  19.84   94876          57
40  24   14 52 51.94, +58 02 33.4   22.00  19.21   95569          49
41  25   14 52 51.96, +58 02 10.1   21.93  19.29   97901          48
42  −    14 52 52.03, +58 00 45.9   23.46  21.36  184420          68
Table 1. Continued.

ID  IDm  α, δ (J2000)                B      R      v ( km s −1 )  ∆v ( km s −1 )
43  −    14 52 52.44, +58 00 49.1   22.82  21.37  214244          61
44  26   14 52 52.54, +58 05 01.7   22.33  19.81   97005          61
Table 1. Continued.

ID  IDm  α, δ (J2000)                B      R      v ( km s −1 )  ∆v ( km s −1 )
85  63   14 53 04.01, +58 04 04.1   21.37  18.86   98245          56
Table 2. Kinematical properties of several subsystems.

System        N_g   <v> ( km s −1 )   σ_V ( km s −1 )   M (10^15 h_70^-1 M ⊙)   Notes
Whole system   87   96437 ± 140       1302 +107 −71     3.0
KMM1D−1        40   95130 (a)          954 (a)          1.2                     1D low velocity subcluster
KMM1D−2        47   97472 (a)          805 (a)          0.7                     1D high velocity subcluster
KMM3D−1        49   96055 ± 200       1389 +122 −91     3.6                     3D low velocity subcluster
KMM3D−2        38   96850 ± 178       1080 +166 −113    1.7                     3D high velocity subcluster
HT1            51   95829 ± 134        950 +99 −107     1.2                     main system
HT2            16   98200 ± 90         342 +76 −44      0.05                    secondary (high velocity) system

(a) Here we consider <v> and σ_V as given by the KMM software, where galaxies are opportunely weighted according to their membership probability (see Sect. 3.3).
Table 3. Substructure from the analysis of the INT photometric sample.

Subclump                 N_S   α, δ (J2000) (h:m:s, •:′:′′)   ρ_S    χ²_S
2D−SW (INT R < 21)       21    14 52 48.9, +58 01 48          1.00   20
2D−NE (INT R < 21)       23    14 52 57.6, +58 03 03          0.99   18
2D−NENE (INT R < 21)     13    14 53 04.1, +58 03 50          0.70   11
1 See http://www.tng.iac.es/instruments/lrs
2 See http://iraf.net
Acknowledgements. M.G. acknowledges financial support from ASI-INAF (I/088/06/0 grant) and PRININAF2010. This work has been supported by the Programa Nacional de Astronomía y Astrofísica of the Spanish Ministry of Science and Innovation under grants AYA2010-21322-C03-02, AYA2007-67965-C03-01, and AYA2010-21887-C04-04.This publication is based on observations made with the Telescopio Nazionale Galileo (TNG) and the Isaac Newton Telescope (INT
References

Abell, G. O., Corwin, H. G. Jr., & Olowin, R. P. 1989, ApJS, 70, 1
Ashman, K. M., Bird, C. M., & Zepf, S. E. 1994, AJ, 108, 2348
Baldi, A., Ettori, S., Mazzotta, P., Tozzi, P., & Borgani, S. 2007, ApJ, 666, 835
Basu, K. 2012, MNRAS, 421, L112
Beers, T. C., Flynn, K., & Gebhardt, K. 1990, AJ, 100, 32
Beers, T. C., Forman, W., Huchra, J. P., Jones, C., & Gebhardt, K. 1991, AJ, 102, 1581
Beers, T. C., Gebhardt, K., Huchra, J. P., et al. 1992, ApJ, 400, 410
Beers, T. C., Geller, M. J., & Huchra, J. P. 1982, ApJ, 257, 23
Bird, C. M., & Beers, T. C. 1993, AJ, 105, 1596
Böhringer, H., Voges, W., Huchra, J. P., et al. 2000, ApJS, 129, 435
Bonamente, M., Joy, M., LaRoque, S. J., et al. 2008, ApJ, 675, 106
Boschin, W., Barrena, R., & Girardi, M. 2010, A&A, 521, A78
Boschin, W., Girardi, M., Barrena, R., et al. 2004, A&A, 416, 839
Boschin, W., Girardi, M., Barrena, R., & Nonino, M. 2012, A&A, 540, A43 (B12)
Brunetti, G., Cassano, R., Dolag, K., & Setti, G. 2009, A&A, 507, 661
Brunetti, G., Venturi, T., Dallacasa, D., et al. 2007, ApJ, 670, L5
Buote, D. A. 2002, in "Merging Processes in Galaxy Clusters", eds. L. Feretti, I. M. Gioia, & G. Giovannini (The Netherlands, Kluwer Ac. Pub.): Optical Analysis of Cluster Mergers
Buote, D. A., & Tsai, J. C. 1996, ApJ, 458, 27
Cash, W. 1979, ApJ, 228, 939
Cassano, R., Brunetti, G., & Setti, G. 2006, MNRAS, 369, 1577
Cassano, R., Ettori, S., Giacintucci, S., et al. 2010, ApJ, 721, 82
Cooray, A. R., Grego, L., Holzapfel, W. L., et al. 1998, AJ, 115, 1388
Cousins, A. W. J. 1976, Mem. R. Astr. Soc., 81, 25
Dahle, H., Kaiser, N., Irgens, R. J., Lilje, P. B., & Maddox, S. J. 2002, ApJS, 139, 313
den Hartog, R., & Katgert, P. 1996, MNRAS, 279, 349
Dressler, A. 1980, ApJ, 236, 351
Dressler, A., & Shectman, S. A. 1988, AJ, 95, 985
Ehlert, S., & Ulmer, M. P. 2009, A&A, 503, 35
Ensslin, T. A., Biermann, P. L., Klein, U., & Kohle, S. 1998, A&A, 332, 395
Fadda, D., Girardi, M., Giuricin, G., Mardirossian, F., & Mezzetti, M. 1996, ApJ, 473, 670
Fasano, G., & Franceschini, A. 1987, MNRAS, 225, 155
Feretti, L. 1999, MPE Report No. 271
Ferrari, C., Govoni, F., Schindler, S., Bykov, A. M., & Rephaeli, Y. 2008, Space Sci. Rev., 134, 93
Ferrari, C., Maurogordato, S., Cappi, A., & Benoist, C. 2003, A&A, 399, 813
Giovannini, G., Bonafede, A., Feretti, L., et al. 2009, A&A, 507, 1257
Giovannini, G., & Feretti, L. 2002, in "Merging Processes in Galaxy Clusters", eds. L. Feretti, I. M. Gioia, & G. Giovannini (The Netherlands, Kluwer Ac. Pub.): Diffuse Radio Sources and Cluster Mergers
Giovannini, G., Feretti, L., Girardi, M., et al. 2011, A&A, 530, L5
Girardi, M., Bardelli, S., Barrena, R., et al. 2011, A&A, 536, A89
Girardi, M., Barrena, R., & Boschin, W. 2010, contribution to the conference "Galaxy clusters: observations, physics and cosmology", held in Garching (Germany), July 26-30 2010; published online at http://www.mpa-garching.mpg.de/~clust10/
Girardi, M., Barrena, R., Boschin, W., & Ellingson, E. 2008, A&A, 491, 379
Girardi, M., & Biviano, A. 2002, in "Merging Processes in Galaxy Clusters", eds. L. Feretti, I. M. Gioia, & G. Giovannini (The Netherlands, Kluwer Ac. Pub.): Optical Analysis of Cluster Mergers
Girardi, M., Boschin, W., & Barrena, R. 2010, A&A, 517, A65
Girardi, M., Escalera, E., Fadda, D., et al. 1997, ApJ, 482, 11
Girardi, M., Fadda, D., Giuricin, G., et al. 1996, ApJ, 457, 61
Girardi, M., & Mezzetti, M. 2001, ApJ, 548, 79
Goto, T., Sekiguchi, M., Nichol, R. C., et al. 2002, AJ, 123, 1807
Govoni, F., Ensslin, T. A., Feretti, L., & Giovannini, G. 2001, A&A, 369, 441
Hart, B. 2008, PhD Thesis, preprint arXiv:0801.4093
Hoeft, M., Brüggen, M., & Yepes, G. 2004, MNRAS, 347, 389
Holhjem, K., Schirmer, M., & Dahle, H. 2009, A&A, 504, 1
Irgens, R. J., Lilje, P. B., Dahle, H., & Maddox, S. J. 2002, ApJ, 579, 227
Jee, M. J., Ford, H. C., Illingworth, G. D., et al. 2007, ApJ, 661, 728
Jee, M. J., White, R. L., Benítez, N., et al. 2005a, ApJ, 618, 46
Jee, M. J., White, R. L., Ford, H. C., et al. 2005b, ApJ, 634, 813
Johnson, H. L., & Morgan, W. W. 1953, ApJ, 117, 313
Markevitch, M., Gonzalez, A. H., David, L., et al. 2002, ApJ, 567, L27
Ota, N., & Mitsuda, K. 2004, A&A, 428, 757
Owen, F., Morrison, G., & Voges, W. 1999, in proceedings of the workshop "Diffuse Thermal and Relativistic Plasma in Galaxy Clusters", eds. H. Böhringer, L. Feretti, & P. Schuecker, MPE Report 271, pp. 9-11
Patel, S. K., Joy, M., Carlstrom, J. E., et al. 2000, ApJ, 541, 37
Pedersen, K., & Dahle, H. 2007, ApJ, 667, 26
Pinkney, J., Roettiger, K., Burns, J. O., & Bird, C. M. 1996, ApJS, 104, 1
Pisani, A. 1993, MNRAS, 265, 706
Pisani, A. 1996, MNRAS, 278, 697
Ponente, P. P., & Diego, J. M. 2011, A&A, 535, A119
Qin, B., Shan, H.-Y., & Tilquin, A. 2008, ApJ, 679, L81
Roettiger, K., Loken, C., & Burns, J. O. 1997, ApJS, 109, 307
Rossetti, M., Eckert, D., Cavalleri, B. M., et al. 2011, A&A, 532, A123
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
Serna, A., & Gerbal, D. 1996, A&A, 309, 65
Tonry, J., & Davis, M. 1979, AJ, 84, 1511
Venturi, T. 2011, Mem. SAIt, 82, 499
West, M. J., & Bothun, G. D. 1990, ApJ, 350, 36
On the use of total state decompositions for the study of reduced dynamics

Andrea Smirne (1,2), Nina Megier (1,2,3), and Bassano Vacchini (1,2)

(1) Dipartimento di Fisica "Aldo Pontremoli", Università degli Studi di Milano, via Celoria 16, 20133 Milan, Italy
(2) Istituto Nazionale di Fisica Nucleare, Sezione di Milano, via Celoria 16, 20133 Milan, Italy
(3) International Centre for Theory of Quantum Technologies (ICTQT), University of Gdańsk, 80-308 Gdańsk, Poland

Abstract. The description of the dynamics of an open quantum system in the presence of initial correlations with the environment needs different mathematical tools than the standard approach to reduced dynamics, which is based on the use of a time-dependent completely positive trace preserving (CPTP) map. Here, we take into account an approach that is based on a decomposition of any possibly correlated bipartite state as a conical combination involving statistical operators on the environment and general linear operators on the system, which allows one to fix the reduced-system evolution via a finite set of time-dependent CPTP maps. In particular, we show that such a decomposition always exists, also for infinite dimensional Hilbert spaces, and that the number of resulting CPTP maps is bounded by the Schmidt rank of the initial global state. We further investigate the case where the CPTP maps are semigroups with generators in the Gorini-Kossakowski-Lindblad-Sudarshan form; for two simple qubit models, we identify the positivity domain defined by the initial states that are mapped into proper states at any time of the evolution fixed by the CPTP semigroups.

DOI: 10.1142/s1230161222500081; arXiv: 2209.02288
system-environment correlations, resorting to the identification of a restricted domain of allowed initial states, and possibly preserving the notion of complete positivity in such a scenario [13][14][15][16][17][18][19].
Very recently [20], a different approach has been introduced, which, relying on the frame decomposition of the bipartite initial state, leads to the definition of a set of completely positive trace preserving (CPTP) time-dependent maps that describe the evolution of a precisely identified set of initial states. More specifically, the exploited decomposition relies on the introduction of a positive frame on the set of the open-system Hilbert-Schmidt operators and, provided that such a positive frame exists, it allows one to deal with fully general initial conditions. The number of involved CPTP maps depends on the chosen initial state and it is anyway bounded by the square of the dimensionality of the open system. This approach has been used to devise an efficient control of general open-system evolutions [21] and to characterize multipartite photonic systems [22]; furthermore, it has been combined with perturbative projection-operator techniques [23] to derive an uncoupled system of homogeneous master equations that can be applied under general initial conditions [24].
In this paper, we introduce two different positive decompositions, one starting from the generalized Pauli matrices for a generic finite dimension, and one directly built on any orthonormal basis of a possibly infinite dimensional Hilbert space. The latter, in particular, explicitly shows that the description of the open-system evolution in the presence of initial correlations via a set of time-dependent CPTP maps can be defined in full generality, for any initial global state and for any dimension of the open-system Hilbert space. Moreover, in the finite dimensional case, we prove that the number of needed CPTP maps always coincides with the Schmidt rank of the initial global state, both for pure and mixed states. This clarifies in a quantitative way the enhanced complexity needed to describe open-system dynamics when moving from an initial product state, where one CPTP map is enough, to initially correlated states. In the latter case, CPTP maps can still be used, but at the price of increasing their number according to the Schmidt rank of the initial global state, which will be indeed strictly larger than one in the presence of correlations.
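For pure bipartite states, the Schmidt rank mentioned above is simply the rank of the coefficient matrix obtained by reshaping the state vector, and is computable via a singular value decomposition. The following NumPy sketch (a toy illustration for the pure-state case only; the mixed-state extension discussed in the paper is not covered here) returns rank 1 for a product state and rank 2 for a maximally entangled one.

```python
import numpy as np

def schmidt_rank(psi, dS, dE, tol=1e-12):
    """Schmidt rank of a pure bipartite state psi in H_S (x) H_E:
    the number of nonzero singular values of the dS x dE coefficient matrix."""
    svals = np.linalg.svd(np.asarray(psi, complex).reshape(dS, dE), compute_uv=False)
    return int(np.sum(svals > tol))

product = np.kron([1.0, 0.0], [1.0, 0.0])             # |0>|0>, uncorrelated
bell = np.array([1, 0, 0, 1], complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(schmidt_rank(product, 2, 2), schmidt_rank(bell, 2, 2))
```

Rank 1 corresponds to the single-CPTP-map description, while rank > 1 signals that several maps are needed.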
Finally, in the last part of the paper, we have considered the case in which the CPTP maps are semigroups with generators in the Gorini-Kossakowski-Lindblad-Sudarshan (GKLS) form. For the case of qubit Pauli channels, we have derived and investigated an explicit condition that defines the set of initial open-system states that are mapped into positive states at any subsequent time of the evolution, when each of the semigroups is applied to a different term of the conical decomposition yielding the initial reduced state.
The rest of the paper is organized as follows. In Sec. II, we briefly recall the framework of open-system dynamics in the presence of initial system-environment correlations and how it can be described through a family of time-dependent CPTP maps via the approach introduced in [20], and we further recall the elements of the theory of frames that we will exploit in the following. In Sec. III, we show that this approach can be extended to infinite-dimensional Hilbert spaces of the open system, and we prove that in the finite dimensional case the number of required CPTP maps can be linked in full generality to the Schmidt rank of the initial global state. In Sec. IV, we specialize our analysis to the case where the CPTP maps are semigroups and we give a detailed characterization of the initial reduced states compatible with such a choice, in the case of qubit Pauli channels. Finally, in Sec. V, the conclusions of our work are presented.
II. ONE-SIDED POSITIVE DECOMPOSITION
A. Open quantum system dynamics via completely positive maps
We consider the standard framework to describe the dynamics of an open quantum system [23]. We have a bipartite global system, consisting of the open system S, which is associated with the Hilbert space H_S, and the environment E, associated with H_E, so that the global system is associated with the tensor product Hilbert space H_S ⊗ H_E. Moreover, the global system is assumed to evolve unitarily, i.e., the joint system-environment evolution is fixed by the unitary operators U(t) (here and in the following, t_0 = 0 is the initial time). Let S(H) denote the set of statistical operators, i.e., positive linear operators with unit trace, on H. The reduced open-system state ρ_S(t) can be written at any time t as a map from S(H_S ⊗ H_E) to S(H_S),

    ρ_S(t) = Tr_E[U(t) ρ_SE U†(t)],    (1)

where Tr_E is the partial trace over the environmental degrees of freedom. The map defined in Eq. (1) is CPTP, but its domain involves the whole S(H_S ⊗ H_E), while, when dealing with the evolution of an open quantum system, one would like to describe the dynamics via maps defined on S(H_S) only.
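Eq. (1) is straightforward to implement numerically. The following self-contained NumPy sketch (an illustration of ours, with both S and E taken as qubits) computes ρ_S(t) = Tr_E[U(t) ρ_SE U†(t)] via an index-reshaping partial trace; for a maximally entangled initial state and trivial evolution it returns the maximally mixed qubit state.

```python
import numpy as np

def reduced_state(rho_SE, U, dS, dE):
    """rho_S(t) = Tr_E[ U rho_SE U^dagger ] for a dS x dE bipartition."""
    rho_t = U @ rho_SE @ U.conj().T
    # reshape to indices [i, a, j, b] and trace over the environment (a, b)
    return np.trace(rho_t.reshape(dS, dE, dS, dE), axis1=1, axis2=3)

# example: Bell state, trivial evolution -> maximally mixed reduced state
bell = np.zeros(4, complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_SE = np.outer(bell, bell.conj())
rho_S = reduced_state(rho_SE, np.eye(4), 2, 2)
print(np.allclose(rho_S, np.eye(2) / 2))
```

The reshape convention assumes the composite index is ordered system-first (row index i*dE + a), matching `numpy.kron(system, environment)`.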
This goal is achieved in [20], relying on the decomposition of the bipartite statistical operator ρ_SE ∈ S(H_S ⊗ H_E) as

    ρ_SE = Σ_{α=1}^N ω_α D_α ⊗ ρ_α,    (2)
where ρ_α ∈ S(H_E) and ω_α ≥ 0, while the D_α are operators within the set L_2(H_S) of Hilbert-Schmidt operators on H_S, i.e., operators for which the trace of the square of their absolute value is finite; note that these operators are not necessarily positive. We will call the representation of ρ_SE in Eq. (2) a one-sided positive decomposition (OPD), to stress that positive operators are needed on the environmental side; we will further call cost of the OPD the minimum number N of terms with which the sum in Eq. (2) can be expressed. If the D_α are positive operators, ρ_SE in Eq. (2) is a separable state [25]; if, in addition, the D_α or the ρ_α or both are given by a family of orthogonal projections, ρ_SE is a zero-discord state [8, 26, 27]. On the other hand, let us stress once more that general bipartite states, including any kind of classical or quantum correlations, can be decomposed via (infinitely many) OPDs.
From the point of view of the description of open-system dynamics, the key advantage of the OPD is that, starting from it, the reduced state at time t can always be obtained by means of a family of time-dependent CPTP maps defined on operators acting on H S only. Replacing Eq. (2) into Eq.(1), we have in fact
ρ S (t) = Σ_{α=1}^{N} ω α Φ α (t)[D α ],   (3)

where

Φ α (t) : L 2 (H S ) → L 2 (H S ), A ↦ Φ α (t)[A] = Tr E [U (t)(A ⊗ ρ α )U (t) † ].   (4)
The initial reduced state
ρ S = Σ_{α=1}^{N} ω α D α   (5)
is mapped into the reduced state at time t by the N CPTP maps {Φ α (t)} 1,...,N on L 2 (H S ). An initial product state corresponds to the case N = 1, which directly reduces to the usual description in terms of a single CPTP map [23,28]. The presence of initial correlations implies that generally N > 1 CPTP maps are needed; on the other hand, the same set {Φ α (t)} 1,...,N of maps can be used for all the states connected by any local operation on S [20].
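As a concrete check of this representation, the short numerical sketch below (not from the paper; the two-qubit state, Hamiltonian and evolution time are illustrative choices) verifies that Eq. (3), with the maps of Eq. (4) applied to the dual-frame elements, reproduces the exact reduced state of Eq. (1) for a correlated initial state:

```python
import numpy as np

# Pauli matrices and two-qubit partial traces (illustrative d = 2 example).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

# A positive qubit frame and its dual (the d = 2 case of the Pauli frame of Sec. III A).
F = [s0/np.sqrt(2), (s0 + sz)/np.sqrt(2), (s0 + sx)/np.sqrt(2), (s0 + sy)/np.sqrt(2)]
D = [(s0 - sx - sy - sz)/np.sqrt(2), sz/np.sqrt(2), sx/np.sqrt(2), sy/np.sqrt(2)]

tr_S = lambda X: np.einsum('iaib->ab', X.reshape(2, 2, 2, 2))  # trace over S
tr_E = lambda X: np.einsum('iaja->ij', X.reshape(2, 2, 2, 2))  # trace over E

# Correlated initial global state: mixture of a Bell state and a product state.
bell = np.zeros(4, dtype=complex); bell[0] = bell[3] = 1/np.sqrt(2)
rho_SE = 0.6*np.outer(bell, bell.conj()) + 0.4*np.diag([0, 1.0, 0, 0]).astype(complex)

# Environmental operators of the OPD: omega_alpha * rho_alpha = Tr_S[(F_alpha (x) 1) rho_SE].
wrho = [tr_S(np.kron(Fa.conj().T, s0) @ rho_SE) for Fa in F]

# Joint unitary U(t) = exp(-i H t) for an (arbitrary) interacting Hamiltonian.
H = np.kron(sx, sx) + 0.5*np.kron(sz, s0) + 0.3*np.kron(s0, sz)
w, V = np.linalg.eigh(H); t = 1.3
U = V @ np.diag(np.exp(-1j*w*t)) @ V.conj().T

# Exact reduced state, Eq. (1), versus the OPD representation, Eq. (3).
rho_exact = tr_E(U @ rho_SE @ U.conj().T)
rho_opd = sum(tr_E(U @ np.kron(Da, wr) @ U.conj().T) for Da, wr in zip(D, wrho))
print(np.allclose(rho_exact, rho_opd))  # True
```

The agreement is exact (up to floating-point error) because the frame duality makes the decomposition of ρ SE an identity, so linearity of the global evolution and of the partial trace does the rest.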
B. Theory of frames: reconstruction formula and positive frames
Here, we briefly present the main features of the theory of frames that will be relevant for our purposes, directly referring to the set L 2 (H S ); we will further restrict to frames consisting of a countable set of operators, but, indeed, extensions to uncountable sets are possible. For a more general treatment of frame theory, the interested reader is referred to [29,30].
Consider the Hilbert space of Hilbert-Schmidt operators L 2 (H S ) equipped with the scalar product

(A, B) = Tr S [A † B], A, B ∈ L 2 (H S ),   (6)
where Tr S denotes the trace over H S , and a family of operators {F α } α∈I , where each F α ∈ L 2 (H S ) and the index α takes values in the countable set I, such that
Σ_{α∈I} |(F α , A)| 2 < ∞   (7)
for every A ∈ L 2 (H S ). The associated map on L 2 (H S )
Ξ : L 2 (H S ) → L 2 (H S ), A ↦ Ξ[A] = Σ_{α∈I} (F α , A) F α   (8)
is called frame map; the operators F α are indeed not necessarily orthogonal, nor linearly independent. Now, one says that the family of operators {F α } α∈I is a frame of L 2 (H S ) whenever Eq. (7) holds and the corresponding frame map Ξ satisfies the following lower and upper bound:
m‖A‖ 2 ≤ (A, Ξ[A]) ≤ M‖A‖ 2 , ∀A ∈ L 2 (H S ),   (9)

for some 0 < m ≤ M < ∞, where ‖A‖ 2 = Tr S [A † A] is the squared Hilbert-Schmidt norm induced by the scalar product in Eq. (6); in particular, the lower bound with m strictly larger than zero implies that {F α } α∈I spans L 2 (H S ). On the whole, the definition of frame is equivalent to the requirement that Ξ is bounded and invertible, with bounded inverse Ξ −1 . Using these properties, we can write

A = Ξ −1 [Ξ[A]] = Σ_{α∈I} (F α , A) Ξ −1 [F α ],
we decompose any Hilbert-Schmidt operator according to
A = Σ_{α∈I} (F α , A) F̃ α , A ∈ L 2 (H S ),   (10)
where we defined

F̃ α = Ξ −1 [F α ].   (11)
Importantly, Eq.(10) allows us to reconstruct any operator in L 2 (H S ) in terms of the elements of the chosen frame. Indeed, this is a generalization of the reconstruction formula of any vector of a Hilbert space by means of an orthonormal basis. In fact, orthonormal bases are a special case of frames, for which Ξ corresponds to the identity map and thus Eq.(9) holds with m = M = 1 [31].
It is easy to see that the family {F̃ α } α∈I is itself a frame, whose frame map coincides with Ξ −1 , so that the corresponding reconstruction formula reads

A = Σ_{α∈I} (F̃ α , A) F α , A ∈ L 2 (H S );

{F̃ α } α∈I is called the canonical dual frame of {F α } α∈I . More generally, a dual frame of {F α } α∈I
is a frame {D α } α∈I , such that for any operator A the following reconstruction formula holds:
A = Σ_{α∈I} (F α , A) D α = Σ_{α∈I} (D α , A) F α , A ∈ L 2 (H S );   (12)
as seen, every frame has at least one dual frame, which is the canonical dual frame.
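The reconstruction machinery can be checked numerically; the sketch below (an illustrative construction, not taken from the paper) builds the frame map of an overcomplete, linearly dependent qubit frame as a matrix on vectorized operators, computes the canonical dual via Eq. (11), and verifies Eq. (10):

```python
import numpy as np

# Overcomplete frame in L^2(C^2): five Hermitian operators (illustrative choice).
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
frame = [s0, sx, sy, sz, (sx + sz)/np.sqrt(2)]           # spans L^2, linearly dependent

vec = lambda A: A.reshape(-1)                            # (A, B) = vec(A)^dagger vec(B)

# Frame map Xi, Eq. (8), as a matrix acting on vectorized operators.
Xi = sum(np.outer(vec(Fa), vec(Fa).conj()) for Fa in frame)
ev = np.linalg.eigvalsh(Xi)
print(ev.min() > 0)                                      # lower frame bound m > 0, Eq. (9)

# Canonical dual frame, Eq. (11): tilde-F_alpha = Xi^{-1}[F_alpha].
Xi_inv = np.linalg.inv(Xi)
dual = [(Xi_inv @ vec(Fa)).reshape(2, 2) for Fa in frame]

# Reconstruction formula, Eqs. (10) and (12), on a random Hermitian operator.
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
A = X + X.conj().T
A_rec = sum(np.trace(Fa.conj().T @ A) * Ft for Fa, Ft in zip(frame, dual))
print(np.allclose(A_rec, A))                             # True
```

Note that the frame is redundant (five elements in a four-dimensional operator space), which is exactly the situation where frames generalize orthonormal bases.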
We can now move back to the bipartite setting that is of interest for us, so that, besides the open system S, we also consider the environment E and we deal with Hilbert-Schmidt operators on the tensor product H S ⊗ H E , i.e., O SE ∈ L 2 (H S ⊗ H E ). Given a frame of operators referred to the open system, {F α } α∈I , and a dual frame {D α } α∈I , with F α , D α ∈ L 2 (H S ), together with a frame of environmental operators, {X β } β∈J , along with a dual frame {Y β } β∈J , with X β , Y β ∈ L 2 (H E ), one can readily see that {F α ⊗ X β } α∈I,β∈J and {D α ⊗ Y β } α∈I,β∈J provide us with a frame of L 2 (H S ⊗ H E ) and its dual. We can thus apply the reconstruction formula to any system-environment Hilbert-Schmidt operator,
O SE = Σ_{α∈I,β∈J} (F α ⊗ X β , O SE ) D α ⊗ Y β = Σ_{α∈I,β∈J} Tr SE [(F † α ⊗ X † β )O SE ] D α ⊗ Y β = Σ_{α∈I} D α ⊗ { Σ_{β∈J} Tr E [ Tr S [(F † α ⊗ 1 E )O SE ] X † β ] Y β },
where in the second identity we used the definition of the Hilbert-Schmidt scalar product of L 2 (H S ⊗ H E ), as in Eq. (6) but with the trace now taken over all the global S − E degrees of freedom, while in the third identity we used that Tr SE [(F † α ⊗ X † β )O SE ] = Tr E [ Tr S [(F † α ⊗ 1 E )O SE ] X † β ], with 1 E the identity operator on H E . Hence, noting that the expression in the curly brackets in the last line is simply the reconstruction formula for the environmental operator Tr S [(F † α ⊗ 1 E )O SE ] on the frame {X β } β∈J and its dual {Y β } β∈J , we conclude that any global S − E Hilbert-Schmidt operator O SE ∈ L 2 (H S ⊗ H E ) can be decomposed as

O SE = Σ_{α∈I} D α ⊗ Tr S [(F † α ⊗ 1 E )O SE ];   (13)
in other terms, a frame of open-system operators naturally induces a decomposition formula for the global operators.
Comparing Eq. (13) with Eq. (2), we see that in order to define an OPD we have to guarantee that, when we focus on system-environment states ρ SE ∈ S(H S ⊗ H E ) ⊂ L 2 (H S ⊗ H E ), the decomposition can be expressed in terms of environmental statistical operators and positive coefficients. Indeed, this is the case if we have a positive frame, i.e., a frame {F α } α∈I made up of positive operators, F α ≥ 0 for any α, which implies that

Tr S [(F † α ⊗ 1 E )ρ SE ] = Tr S [(F α ⊗ 1 E )ρ SE ] ≥ 0.
In fact, introducing the trace of these operators
ω α = Tr E [ Tr S [(F † α ⊗ 1 E )ρ SE ] ] ≥ 0   (14)
and defining the environmental statistical operators ρ α ∈ S(H E ) via the assignment
ω α ρ α = Tr S [(F † α ⊗ 1 E )ρ SE ]   (15)
(so that ρ α is arbitrary when Tr S (F † α ⊗ 1 E )ρ SE = 0), we end up with an OPD. Furthermore, we note that the coefficients ω α only depend on the reduced system state ρ S = Tr E [ρ SE ], since they are equivalently expressed by
ω α = Tr S [F † α ρ S ].   (16)
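A minimal sketch of Eqs. (14)-(16) (illustrative state, not taken from the paper): for a positive qubit frame, the coefficients ω α computed from the global state and from the reduced state coincide, they are non-negative, and the associated ρ α are proper statistical operators:

```python
import numpy as np

# Positive qubit frame (the d = 2 instance of the Pauli frame of Sec. III A).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
F = [s0/np.sqrt(2), (s0 + sz)/np.sqrt(2), (s0 + sx)/np.sqrt(2), (s0 + sy)/np.sqrt(2)]

tr_S = lambda X: np.einsum('iaib->ab', X.reshape(2, 2, 2, 2))

# Correlated two-qubit state: Bell state mixed with a classically correlated one.
bell = np.zeros(4, dtype=complex); bell[0] = bell[3] = 1/np.sqrt(2)
rho_SE = 0.7*np.outer(bell, bell.conj()) + 0.3*np.diag([0.5, 0, 0, 0.5]).astype(complex)
rho_S = np.einsum('iaja->ij', rho_SE.reshape(2, 2, 2, 2))    # reduced state

for Fa in F:
    wr = tr_S(np.kron(Fa, s0) @ rho_SE)          # = omega_alpha * rho_alpha, Eq. (15)
    w_global = np.trace(wr).real                 # Eq. (14)
    w_reduced = np.trace(Fa @ rho_S).real        # Eq. (16): depends on rho_S only
    assert np.isclose(w_global, w_reduced) and w_global >= 0
    if w_global > 0:                             # rho_alpha is a genuine state
        rho_a = wr / w_global
        assert np.linalg.eigvalsh(rho_a).min() >= -1e-12
print("all OPD coefficients are positive and all rho_alpha are valid states")
```

The positivity of the environmental operators follows from F α ≥ 0 and ρ SE ≥ 0, exactly as stated above.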
In the next section, we present two different explicit choices of positive operators allowing us to express every initial global state with an OPD, but before that let us note the following. Once we have assigned a certain initial global state ρ SE , there is a (slightly) weaker sufficient condition than using a positive frame to define an OPD. We could allow some elements of the frame not to be positive, as long as the corresponding environmental operator Tr S [(F † α ⊗ 1 E )ρ SE ] is equal to the null operator, and then also the corresponding ω α as in Eq. (14) is equal to zero. More generally, it might be useful to consider an OPD where the frame is chosen according to the specific initial state (or set of initial states) at hand, and possibly even according to the global unitary evolution, so as to simplify the dynamical representation in Eq. (3) as much as possible. We leave the study of how to optimize the choice of the frame in the sense now indicated for future investigation.
III. EXISTENCE AND COST OF THE DECOMPOSITION
A. Positive decomposition in finite and infinite-dimensional systems
We introduce now two different constructions of an OPD valid for any global state ρ SE ; the first one is referred to a finite dimensional Hilbert space H S and relies on the definition of a positive frame, which generalizes straightforwardly the frame for qubit systems introduced in [20]. The second OPD, instead, includes fully general Hilbert spaces H S and is defined in terms of a family of positive operators that, in the infinite dimensional case, do not constitute a frame but still allow us to exploit a reconstruction formula as in Eq. (13).
Let us then first consider a d-dimensional H S , with d < ∞, and the d 2 generalized Pauli matrices (also known as generalized Gell-Mann matrices) [32] f (d)
kj = 1 √ 2 |k j| + |j k| ; for k > j −i |j k| − |k j| ; for k < j h (d) k = 1 √ d 1 d ; for k = 1 h (d−1) k ⊕ 0; for k = 2, . . . , d − 1 1 √ d(d−1) 1 (d−1) ⊕ (1 − d) ; for k = d ,(17)F (d) 11 = h (d) 1 ; F (d) kk = √ d k − 1 k h (d) 1 + h (d) k ; for k = 2, ..., d F (d) kj = 1 √ 2 1 d + f (d) kj ; for k = j = 1, ..., d.(18)
The positivity of the frame elements in Eq. (18) directly follows from the fact that the minimum eigenvalue of h^(d)_k is −√((k − 1)/k), while √d h^(d)_1 is equal to the identity matrix, and the minimum eigenvalue of f^(d)_kj is −1/√2.
The inverse of the frame map provides us with the canonical dual frame, see Eq. (11),

D^(d)_11 = h^(d)_1 − √d Σ_{k=2}^{d} √((k − 1)/k) h^(d)_k − √(d/2) Σ_{j≠k} f^(d)_kj ;   D^(d)_kk = h^(d)_k , for k = 2, . . . , d;   D^(d)_kj = f^(d)_kj , for k ≠ j = 1, . . . , d.   (19)
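The construction of Eqs. (17)-(19) can be checked for generic d; the following sketch (our own implementation of the stated definitions, with the index conventions as reconstructed here) builds the generalized Gell-Mann matrices, the positive frame and its dual, and verifies positivity and the duality relation:

```python
import numpy as np

def gellmann_frame(d):
    """Positive frame of Eq. (18) and dual of Eq. (19) on a d-dimensional space."""
    E = lambda k, j: np.eye(d, dtype=complex)[:, [k]] @ np.eye(d)[[j], :]  # |k><j|
    # Diagonal elements h_k of Eq. (17): h_k = diag(1,..,1, 1-k, 0,..)/sqrt(k(k-1)).
    h = {1: np.eye(d, dtype=complex)/np.sqrt(d)}
    for k in range(2, d + 1):
        h[k] = np.diag([1.0]*(k-1) + [1.0-k] + [0.0]*(d-k)).astype(complex)/np.sqrt(k*(k-1))
    # Off-diagonal elements f_kj of Eq. (17).
    f = {}
    for k in range(1, d + 1):
        for j in range(1, d + 1):
            if k > j:  f[k, j] = (E(k-1, j-1) + E(j-1, k-1))/np.sqrt(2)
            if k < j:  f[k, j] = -1j*(E(j-1, k-1) - E(k-1, j-1))/np.sqrt(2)
    F = {(1, 1): h[1]}
    D = {(1, 1): h[1] - np.sqrt(d)*sum(np.sqrt((k-1)/k)*h[k] for k in range(2, d+1))
                 - np.sqrt(d/2)*sum(f.values())}
    for k in range(2, d + 1):
        F[k, k], D[k, k] = np.sqrt(d*(k-1)/k)*h[1] + h[k], h[k]
    for (k, j), fkj in f.items():
        F[k, j], D[k, j] = np.eye(d)/np.sqrt(2) + fkj, fkj
    return F, D

d = 3
F, D = gellmann_frame(d)
# Positivity of the frame, as argued after Eq. (18).
assert all(np.linalg.eigvalsh(Fa).min() > -1e-12 for Fa in F.values())
# Duality: A = sum_alpha Tr[F_alpha A] D_alpha for a random Hermitian A.
rng = np.random.default_rng(1)
X = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
A = X + X.conj().T
A_rec = sum(np.trace(F[a].conj().T @ A)*D[a] for a in F)
assert np.allclose(A_rec, A)
print("Pauli frame positive and dual to D for d =", d)
```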
We will then call Pauli-OPD the decomposition in Eq. (2) fixed by the positive frame in Eq. (18) and its dual in Eq. (19), see also Eqs. (14) and (15). Importantly, we note that since D^(d)_11 is the only element of the dual frame with non-vanishing trace, Eq. (2) implies that ρ E = Tr S [ρ SE ] coincides with the first environmental statistical operator appearing in the Pauli-OPD, ρ E = ρ 1 . Let us now consider a possibly infinite dimensional Hilbert space H S and an orthonormal basis {|k⟩} k=1,...,d; k∈N (where the first set of values of k refers to a finite d-dimensional Hilbert space and the second to an infinite dimensional one; to simplify the notation, from now on the range of values of k will be implied). We start by defining
b kj = (1/√2)(|k⟩⟨j| + |j⟩⟨k|), for k > j;   b kj = (−i/√2)(|j⟩⟨k| − |k⟩⟨j|), for k < j;   b kk = |k⟩⟨k|,   (20)

that is, an orthonormal basis of Hermitian operators in L 2 (H S ); note that in d dimensions this basis differs from the basis of generalized Pauli operators only in its diagonal elements. Using a basis of L 2 (H S ⊗ H E ) made of tensor products between the elements of, respectively, the basis in Eq. (20) and a basis of L 2 (H E ) (analogously to what has been done to derive Eq. (13)), we can decompose any O SE ∈ L 2 (H S ⊗ H E ) as

O SE = Σ_{kj} b kj ⊗ Tr S [(b kj ⊗ 1 E )O SE ].   (21)
If we now consider two families of operators P kj = Σ_{k′j′} M^{k′j′}_{kj} b k′j′ and Q kj = Σ_{k′j′} N^{k′j′}_{kj} b k′j′ such that the corresponding real coefficients satisfy

Σ_{kj} M^{k′j′}_{kj} N^{k″j″}_{kj} = δ k′k″ δ j′j″ ,   (22)
Eq. (21) is equivalent to
O SE = Σ_{kj} Q kj ⊗ Tr S [(P kj ⊗ 1 E )O SE ];   (23)
indeed, if the operators P kj are positive, with the same reasoning as the one at the end of Sec.II B, we can conclude that the previous relation provides any ρ SE ∈ L 2 (H S ⊗ H E ) with a valid OPD.
Our choice for the positive operators, aimed at reproducing the non-diagonal elements of Eq. (18) and using simple diagonal operators, is the following:
P kk = |k⟩⟨k| ;   P kj = (1/√2) 1 + b kj , for k ≠ j.   (24)
Using Eq. (22), we can thus complete the decomposition in Eq.(23) with the family of operators
Q kk = |k⟩⟨k| − (1/√2) Σ_{k′≠j′} b k′j′ ;   Q kj = b kj , for k ≠ j.   (25)
We will call the OPD fixed by Eqs. (23)-(25) basis-induced OPD, to stress its connection with the initial choice of the basis {|k⟩} of H S . Such an OPD can be used both for finite and infinite dimensional systems, but we stress that only in the former case the family of operators defined in Eq. (24) is actually a frame. In the infinite dimensional case, in fact, the upper bound in Eq. (9) does not hold, as, e.g., if we consider the operator A = b kk = |k⟩⟨k| we find the divergent series
Σ_{k′j′} | Tr S [b kk P k′j′ ] | 2 = 1 + Σ_{k′≠j′} 1/2 .
Thus, the basis-induced OPD shows that, strictly speaking, frames are not necessary to define a proper OPD, the key point rather being the definition of a reconstruction formula as in Eq.(13), or, equivalently, Eq.(23) via a family of positive open-system operators (F α or P kj ).
B. Cost of the decomposition and Schmidt rank of the initial global state
Moving back to a finite-dimensional Hilbert space H S with dimension d, both the Pauli and the basis-induced OPD allow us to express any global state by means of an OPD with d 2 terms.
On the other hand, assigned a certain state ρ SE , it might well be that it is possible to write it equivalently via an OPD with a lower number of terms. Recall that we denote with N the minimal number of terms appearing in the OPD of a given state ρ SE , that is the cost of the OPD of ρ SE ; moreover, since we deal now with finite-dimensional Hilbert spaces, we will consider OPDs induced by positive frames. Here, we show that the cost N of the OPD of a given state ρ SE is equal to the Schmidt rank of ρ SE .
As a first step, we show that given a bipartite statistical operator ρ SE , the cost N of its OPD equals the number I of linearly independent operators in the set of environmental states ρ α associated with non-zero coefficients ω α . Importantly, this number does not depend on the specific frame chosen to perform the decomposition and therefore the cost is a property of the operator itself. Hence, consider any positive frame {F α } α=1,...,D of L 2 (H S ), with D elements (D ≥ d 2 , as the frame spans L 2 (H S )), along with a dual frame {D α } α=1,...,D . As shown in Sec. II B, given any bipartite statistical operator ρ SE , we can write its OPD with respect to this frame as

ρ SE = Σ_{α=1}^{N} ω α D α ⊗ ρ α ,

where the ρ α are environmental statistical operators defined via Eq. (15) and we have ordered the frame elements such that the first N ≤ D coefficients ω α defined as in Eq. (14) are strictly positive, while the last D − N are equal to zero. Now, let us denote as I the number of linearly independent ρ α in the set {ρ α } α=1,...,N , which indeed coincides with the number of linearly independent operators in the set {ω α ρ α } α=1,...,D ; if N > I, we can write ρ N = Σ_{α=1}^{N−1} c α ρ α for some coefficients c α ∈ R, from which it follows that ρ SE is equivalently represented by

ρ SE = Σ_{α=1}^{N−1} ω α D̃ α ⊗ ρ α ,
where we introduced the new frame

D̃ α = D α + (ω N /ω α ) c α D N , for 1 ≤ α ≤ N − 1;   D̃ α = D α , for N ≤ α ≤ D.   (26)
Correspondingly, the duality relation is preserved if we also define

F̃ α = F α , for 1 ≤ α ≤ N − 1;   F̃ N = F N − Σ_{α=1}^{N−1} (ω N /ω α ) c α F α ;   F̃ α = F α , for N + 1 ≤ α ≤ D.   (27)
This frame is not necessarily positive, since F̃ N may have negative eigenvalues, but this does not affect the resulting OPD, as

Tr S [(F̃ N ⊗ 1 E )ρ SE ] = 0;   (28)

compare with the discussion at the end of Sec. II B. Indeed, this procedure can be repeated if also {ρ α } α=1,...,N−1 are linearly dependent, i.e., N − 1 > I, and so on: if there are I linearly independent operators in {ρ α } α=1,...,N , it is always possible to write an OPD with only I terms. Crucially, no further reduction is possible, as can be shown by reductio ad absurdum. Suppose, in fact, that one can construct a different OPD with I′ < I terms, ρ SE = Σ_{β=1}^{I′} λ β Q β ⊗ η β , starting from a possibly different frame of operators {Q β } β=1,...,D , dual to a positive frame {P β } β=1,...,D . For what was seen above, the set of statistical operators {η β } β=1,...,I′ can be taken linearly independent and λ β = Tr SE [(P β ⊗ 1 E )ρ SE ] = 0 for β > I′, see Eq. (28), without loss of generality. But then, using the decomposition of F α on the frame {P β } β=1,...,D , F α = Σ_{β=1}^{D} q βα P β , we obtain

ω α ρ α = Tr S [(F α ⊗ 1 E )ρ SE ] = Σ_{β=1}^{D} q βα λ β η β = Σ_{β=1}^{I′} q βα λ β η β .   (29)
This equation states that the set of linearly independent vectors {ω α ρ α } α=1,...,I is generated by a family {λ β η β } β=1,...,I′ with I′ < I, which is a contradiction. We can thus conclude that, starting from any OPD, it is always possible to reduce the number of terms appearing in it to the number I of linearly independent operators in the set {ω α ρ α } α=1,...,D , but not further, i.e., N = I; note that the reasoning used in the reductio ad absurdum also implies that I does not depend on the specific frame used to define the OPD of ρ SE .
The connection between the cost N of an OPD and the number I of linearly independent environmental states appearing in the OPD directly leads us to link N with the Schmidt rank.
Any bipartite state ρ SE ∈ S(H S ⊗ H E ) can be associated with a Schmidt decomposition, which reads:
ρ SE = Σ_{k=1}^{R} λ k G S k ⊗ G E k ,   (30)
with λ k > 0 and {G S,E k } k=1,...,R orthonormal Hermitian operators on H S and H E , respectively; R is the Schmidt rank of ρ SE and R ≤ d 2 ; importantly, the Schmidt rank is directly related to the amount of correlations in ρ SE [33,34]. Now, from Eq. (30) it follows that

λ k G E k = Tr S [(G S k ⊗ 1 E )ρ SE ]   (31)

and that the operators {λ k G E k } k=1,...,R are linearly independent. For an OPD, see Eq. (2), we have the similar relation Eq. (15), and we can always take into account a minimal OPD of ρ SE such that the N operators {ω α ρ α } α=1,...,N are linearly independent, as discussed above. In addition, we consider, respectively, an orthonormal basis {G S k } k=1,...,d 2 of L 2 (H S ) whose first R elements coincide with the open-system Schmidt operators in Eq. (30), also implying Tr S [(G S k ⊗ 1 E )ρ SE ] = 0 for k = R + 1, . . . , d 2 , and a complete frame {F α } α=1,...,D whose first N elements coincide with those defining the minimal OPD, while the others satisfy Tr S [(F α ⊗ 1 E )ρ SE ] = 0 for α = N + 1, . . . , D, see Eq. (28). Exploiting the decompositions G S k = Σ_{α=1}^{D} g kα F α and F α = Σ_{k=1}^{d 2} f αk G S k , we thus obtain from Eqs. (15) and (31)
λ k G E k = Σ_{α=1}^{N} g kα ω α ρ α ;   ω α ρ α = Σ_{k=1}^{R} f αk λ k G E k .   (32)
Hence, since the linearly independent sets {λ k G E k } k=1,...,R and {ω α ρ α } α=1,...,N generate each other, they have the same number of elements, i.e., N = R. We thus conclude that the cost N of the OPD coincides with the Schmidt rank of the global statistical operator ρ SE , in this way generalizing to mixed states the link between Schmidt rank and OPD discussed in [20] for pure global states.
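The operator Schmidt rank appearing in Eq. (30), and hence the cost N, can be computed by reshuffling the density matrix and counting its nonzero singular values; a minimal sketch (the example states are illustrative choices, not from the paper):

```python
import numpy as np

def schmidt_rank(rho, dS, dE, tol=1e-10):
    """Operator Schmidt rank of a bipartite operator, Eq. (30)."""
    # Reshuffle (s e, s' e') -> rows (s, s'), columns (e, e') and count singular values.
    M = rho.reshape(dS, dE, dS, dE).transpose(0, 2, 1, 3).reshape(dS*dS, dE*dE)
    return int((np.linalg.svd(M, compute_uv=False) > tol).sum())

bell = np.zeros(4, dtype=complex); bell[0] = bell[3] = 1/np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())                       # maximally entangled
rho_prod = np.kron(np.diag([0.8, 0.2]), np.diag([0.5, 0.5])).astype(complex)
rho_cc = 0.5*np.diag([1, 0, 0, 1]).astype(complex)           # classically correlated

print(schmidt_rank(rho_prod, 2, 2),   # 1 -> a single CPTP map suffices (N = 1)
      schmidt_rank(rho_cc, 2, 2),     # 2
      schmidt_rank(rho_bell, 2, 2))   # 4 -> at most d^2 maps are ever needed
```

The three outputs (1, 2 and 4) illustrate the main statement of this section: the number of CPTP maps needed in Eq. (3) grows with the amount of system-environment correlations, and is capped by d 2 .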
IV. CASE STUDY: COMBINATION OF SEMIGROUP MAPS
We now provide an explicit example of an open-system evolution fixed via a family of CPTP maps Φ α (t) according to Eq.(3). In Ref. [24] we exploited the possible relevance of OPD in order to obtain a perturbative expansion of a microscopically motivated open system dynamics in the presence of initial correlations. We will here take a complementary phenomenological approach, exploring whether convenient choices of the CPTP maps Φ α (t) can lead to a well defined description of the system evolution. In particular, we take the maps Φ α (t) to be CPTP semigroups, i.e.,
Φ α (t) = e^{L α t} ,   (33)
where L α are generators in the GKLS form [35,36]
L α [ρ] = Σ_{j=1}^{d 2 −1} γ α,j ( L α,j ρ L † α,j − (1/2){L † α,j L α,j , ρ} ),   (34)

with γ α,j ≥ 0 and L α,j linearly independent linear operators on H S , and we recall that d is the finite dimension of the open-system Hilbert space H S .
The idea of combining a family of semigroup maps to go beyond the description of Markovian dynamics has been used by Chruściński and Kossakowski in several pioneering works [37][38][39], aiming at the identification of well-defined integro-differential master equations accounting for non-Markovian dynamics. In [40] a system of GKSL evolutions was used to take into account statistical correlations between system and environment leading to a non-Markovian dynamics within a projection operator approach. Here, by using Eq.(3) to connect the initial reduced state with its value at a generic time, we explore whether a family of semigroups can be exploited in the context of open-system dynamics in the presence of initial correlations with the environment.
Indeed, while every e^{L α t} is CPTP, so that also a convex combination of these maps would be CPTP, we are now applying each e^{L α t} to a distinct element of the dual frame D α , which need not be positive. As a consequence, it is a priori not guaranteed that, even though we start from an initial positive state ρ S in Eq. (5), the state at time t fixed by Eq. (3) will be positive, too. In the following, we show that it is actually possible to identify a proper set of initial states that are mapped into well-defined states at any time t, for a relevant class of two-level system dynamics. Let us stress that, as is common in phenomenological approaches, the fact that we can introduce a well-defined evolution on open-system states does not mean by itself that the evolution can be derived from a full microscopic model. Within the formalism exploited here, the existence of a microscopic model would consist in the presence of initial global states and a global unitary evolution such that the action of the maps Φ α (t) could be expressed as in Eq. (4); whether this is the case is a challenging question, which we leave for future investigation.
A. Pauli channels
We consider a two-level open quantum system and we further exploit the positive frame induced by the Pauli basis of operators, introduced in Sec.III A. In particular, the dual-frame operators D α are as in Eq. (19), which for the case d = 2 we are dealing with simply read
D 0 = (1/√2)( σ 0 − Σ_{α=1}^{3} σ α ),   D α = (1/√2) σ α , for α = 1, 2, 3,   (35)
where σ 0 is the identity, the σ α , α = 1, 2, 3 are the usual Pauli matrices, and we reordered the frame indices as α = 0, . . . , 3. Given a reduced operator of the form (compare with Eq. (5))
ρ S = (1/√2) Σ_{α=0}^{3} υ α D α ,   (36)
where for the sake of convenience we have set υ α = √ 2ω α with ω α as in Eq. (5), it has trace one if and only if
υ 0 = 1,(37)
while its positivity is ensured by the validity of
(1 − υ 1 )² + (1 − υ 2 )² + (1 − υ 3 )² ≤ 1.   (38)
Hence, the initial positivity domain, that is the set of points {v 1 , v 2 , v 3 } for which ρ S as in Eq. (36) can be taken as a proper initial state, is a ball with radius 1 and origin in the point {1, 1, 1}.
Our goal is then to determine whether there is a subset of the initial positivity domain such that the state ρ S (t) obtained via Eq.(3) is positive at any time t. In particular, we take into account the maps corresponding to the so-called Pauli channels [41,42], whose generators are given by
L α [ρ] = Σ_{j=1}^{3} γ α,j (t) (σ j ρ σ j − ρ).   (39)
When the rates γ α,j (t) are non-negative and time independent, each of these generators corresponds to a GKSL master equation; otherwise, the dynamics can be highly non-Markovian even for a single L α [43]. Here, as said, we restrict to semigroups; note that while the coefficients γ α,j can be different for the different L α 's, the Lindblad operators are always the same, i.e., the Pauli matrices. The corresponding CPTP maps take the form
φ α (t)[ρ] = Σ_{j=0}^{3} p α,j (t) σ j ρ σ j ,   (40)
where the coefficients p α,j (t) are readily expressed by introducing the vectors p α (t), whose components are ( p α (t)) j = p α,j (t), so that one has
p α (t) = (1/4) H λ α (t), with H = [[1, 1, 1, 1], [1, 1, −1, −1], [1, −1, 1, −1], [1, −1, −1, 1]],   (41)
where ( λ α (t)) j = λ α,j (t) with
λ α,0 = 1,   λ α,j = e^{−2(γ α,k + γ α,m )t} , with j ≠ k ≠ m ∈ {1, 2, 3},   (42)
i.e., the exponential decays characterizing semigroup dynamics. Note that the {λ α,j (t)} j=0,...,3 are eigenvalues of the map φ α (t)
φ α (t)[σ j ] = λ α,j (t) σ j ,   (43)
while the coefficients {p α,j (t)} j=0,...,3 form a probability distribution, i.e., p α,j (t) ≥ 0 and Σ_{j=0}^{3} p α,j (t) = 1.
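A short sketch of Eqs. (40)-(43) (the rates are chosen arbitrarily for illustration): the probabilities obtained from the λ's via Eq. (41) are non-negative and normalized, and the Pauli matrices are eigenoperators of the resulting channel:

```python
import numpy as np

# Pauli matrices sigma_0, ..., sigma_3.
s = [np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0]).astype(complex)]
# The 4x4 matrix appearing in Eq. (41); it satisfies H @ H = 4 * identity.
H4 = np.array([[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1]], float)

g = np.array([0.3, 0.5, 0.2]); t = 0.7                      # rates gamma_1,2,3 and time
lam = np.array([1.0, np.exp(-2*(g[1] + g[2])*t),            # Eq. (42)
                np.exp(-2*(g[0] + g[2])*t), np.exp(-2*(g[0] + g[1])*t)])
p = H4 @ lam / 4                                            # Eq. (41)
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)          # probability distribution

phi = lambda rho: sum(pj*sj @ rho @ sj for pj, sj in zip(p, s))   # Eq. (40)
for j in range(4):                                          # Eq. (43): eigenoperators
    assert np.allclose(phi(s[j]), lam[j]*s[j])
print("Pauli channel diagonal on the Pauli basis, eigenvalues", np.round(lam, 3))
```

The eigenvalue relation follows from sigma_j sigma_k sigma_j = ±sigma_k, whose sign table is exactly the matrix in Eq. (41).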
For the sake of simplicity, we now restrict to two different semigroup maps: one for the dual-frame element D 0 , which we denote as φ(t) := φ 0 (t), and another for the other elements, which we denote as φ̃(t) := φ 1 (t) = φ 2 (t) = φ 3 (t); accordingly, all the parameters referred to the first (second) map will be indicated without (with) a tilde. The state at time t obtained via Eq. (3) then reads

ρ S (t) = (1/2)[ σ 0 + (λ̃ 1 (t)v 1 − λ 1 (t)) σ 1 + (λ̃ 2 (t)v 2 − λ 2 (t)) σ 2 + (λ̃ 3 (t)v 3 − λ 3 (t)) σ 3 ],   (44)
so that the positivity domain at a generic time is specified by the condition

(λ 1 (t) − λ̃ 1 (t)v 1 )² + (λ 2 (t) − λ̃ 2 (t)v 2 )² + (λ 3 (t) − λ̃ 3 (t)v 3 )² ≤ 1,   (45)

which defines the interior of an axis-aligned ellipsoid with semi-axes lengths 1/λ̃ 1 (t), 1/λ̃ 2 (t) and 1/λ̃ 3 (t), centered at the point {λ 1 (t)/λ̃ 1 (t), λ 2 (t)/λ̃ 2 (t), λ 3 (t)/λ̃ 3 (t)}.
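The equivalence between Eq. (45) and the positivity of ρ S (t) can be verified directly, since for a qubit the eigenvalues of Eq. (44) are (1 ± |b|)/2, with b the Bloch vector; a sketch (the decay parameters, time and sampling are illustrative choices):

```python
import numpy as np

# Pauli matrices sigma_0, ..., sigma_3.
s = [np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0]).astype(complex)]

# Illustrative exponential decays for lambda(t) and tilde-lambda(t).
gamma, t = 1.0, 0.4
lam = np.array([np.exp(-4*gamma*t), np.exp(-2*gamma*t), np.exp(-2*gamma*t)])
lamt = np.array([np.exp(-2*gamma*t), np.exp(-gamma*t), np.exp(-gamma*t)])

rng = np.random.default_rng(2)
for _ in range(200):
    v = 1 + rng.uniform(-1, 1, size=3)                 # points around the center {1, 1, 1}
    b = lamt*v - lam                                   # Bloch vector of rho_S(t), Eq. (44)
    rho_t = 0.5*(s[0] + b[0]*s[1] + b[1]*s[2] + b[2]*s[3])
    min_eig = np.linalg.eigvalsh(rho_t).min()
    # Qubit eigenvalues are (1 +- |b|)/2, so Eq. (45), |b| <= 1, is exactly positivity.
    assert np.isclose(min_eig, (1 - np.linalg.norm(b))/2)
print("Eq. (45) is exactly the positivity condition for rho_S(t)")
```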
B. Results
Summarizing the previous discussion, our primary goal is thus to find the values {v 1 , v 2 , v 3 } that satisfy both Eqs. (38) and (45), where the former inequality defines the initial positivity domain, while the latter depends on the dynamical parameters λ α (t) and λ̃ α (t).
To this aim, it is advantageous to decouple the quantities referred to, respectively, φ(t) and φ̃(t). Note that we can interpret Eq. (45) as the equation of a ball in the axes rescaled by the λ̃'s, i.e., setting x̃(t) = λ̃ 1 (t)v 1 , ỹ(t) = λ̃ 2 (t)v 2 and z̃(t) = λ̃ 3 (t)v 3 :

(λ 1 (t) − x̃(t))² + (λ 2 (t) − ỹ(t))² + (λ 3 (t) − z̃(t))² ≤ 1.   (46)

We call the set of points {x̃(t), ỹ(t), z̃(t)} satisfying this inequality an evolved positivity ball, as for t = 0 it gives the initial positivity domain as in Eq. (38). On the other hand, Eq. (38) after such rescaling takes the form

(1/λ̃ 1 ²(t)) (λ̃ 1 (t) − x̃(t))² + (1/λ̃ 2 ²(t)) (λ̃ 2 (t) − ỹ(t))² + (1/λ̃ 3 ²(t)) (λ̃ 3 (t) − z̃(t))² ≤ 1,   (47)
and the set of all evolved initial points is an axis-aligned ellipsoid centered at {λ̃ 1 (t), λ̃ 2 (t), λ̃ 3 (t)} with semi-axes lengths λ̃ 1 (t), λ̃ 2 (t) and λ̃ 3 (t). Accordingly, the intersection of this ellipsoid with the evolved positivity ball with radius 1 and origin in {λ 1 (t), λ 2 (t), λ 3 (t)} gives the wanted set of v's. If the whole ellipsoid is contained within the evolved positivity ball at a given time t, all initial density operators ρ S stay positive at this point of time. By virtue of the rescaling introduced above, the form and location of the ellipsoid is only governed by the λ̃'s, while the positivity ball evolves according to the λ's. In Figs. 1 and 2 we report the ellipsoid of evolved initial points defined by Eq. (47) (in red) and the evolved positivity ball defined by Eq. (46) (in blue), for two different choices of the parameters defining the semigroup maps.

Example I: In a generic case all the λ's and λ̃'s describe exponential decay, i.e., according to Eq. (42) at most one of the γ's and γ̃'s is zero. In this case the centers of both the evolved positivity ball and the evolved ellipsoid converge to {0, 0, 0}. Additionally, the ellipsoid asymptotically shrinks to a point, while the size of the evolved positivity ball does not change. Hence, for large enough times the ellipsoid will be completely contained in the positivity ball. For intermediate times, a portion of the ellipsoid can leave the evolved positivity ball, as seen in Fig. 1 for the following choice of the parameters: γ 1 = γ̃ 1 = 0, γ 2 = γ 3 = 2γ̃ 2 = 2γ̃ 3 = γ, which correspond to
λ 1 (t) = e^{−4γt} , λ 2 (t) = e^{−2γt} , λ 3 (t) = e^{−2γt} ,   λ̃ 1 (t) = e^{−2γt} , λ̃ 2 (t) = e^{−γt} , λ̃ 3 (t) = e^{−γt} .   (48)
The larger values of the γ's with respect to the γ̃'s result in a quicker motion of the positivity ball than of the ellipsoid. Accordingly, some of the states in the initial positivity domain will anyhow lead to a temporarily negative evolved matrix ρ S (t).

Example II: If, on the other hand, two of the γ̃'s equal zero, one can see a qualitatively different behaviour, as some of the choices of initial points lead to an eternally negative evolved matrix ρ S (t). For example, for γ 1 = γ̃ 1 = γ̃ 2 = 0 and γ 2 = γ 3 = γ̃ 3 = γ, which correspond to

λ 1 (t) = e^{−4γt} , λ 2 (t) = e^{−2γt} , λ 3 (t) = e^{−2γt} ,   λ̃ 1 (t) = e^{−2γt} , λ̃ 2 (t) = e^{−2γt} , λ̃ 3 (t) = 1,

the z-coordinate of the evolved initial points is constant: {e^{−2γt} v 1 , e^{−2γt} v 2 , v 3 }. Accordingly, the semi-axis length of the ellipsoid in this direction does not change in time and its center remains on the same
x-y plane, see Fig. 2. All of the choices of points with v 3 > 1 will at some point of the evolution leave the evolved positivity ball forever.
Figure 1. Example I: evolved initial points (red) and the evolved positivity ball (blue). Left: γt = 1/10, Middle: γt = 1/2, Right: γt = 2.

Figure 2. Example II: evolved initial points (red) and the evolved positivity ball (blue). Left: γt = 1/10, Middle: γt = 1/2, Right: γt = 2.

V. CONCLUSION

In this paper, we have considered an approach to describe the dynamics of open quantum systems that allows one to take into account fully general initially correlated system-environment states. The starting point is the OPD of the initial global state, which involves positive and normalized operators on the environmental side and fixes the evolution of the open system through a set of time-dependent CPTP maps. Such a decomposition can be defined via a positive frame on the open quantum system and it is in general highly non-unique. In particular, we introduced two OPDs, one based on the generalized Pauli matrices and one directly on the basis elements of the open-system Hilbert space; notably, the latter can be defined also for infinite-dimensional Hilbert spaces, thus showing that the OPD-based approach to open-system dynamics is not restricted to the finite-dimensional case. In addition, we have demonstrated that the cost of the decomposition, that is, the number of terms involved in it and thus the number of resulting CPTP maps fixing the reduced dynamics, is always bounded by the Schmidt rank of the initial global state. This is a particularly useful feature of the approach, since it implies that the open-system evolution can be fixed by a number of equations, for example as in [24], that is a direct expression of the amount of correlations between the open system and the environment and is ultimately bounded by the dimensionality of the open system. Finally, we have studied the case where the CPTP maps defining the reduced dynamics along with the OPD are semigroups fixed by the Gorini-Kossakowski-Lindblad-Sudarshan form. In particular, in the case of qubit Pauli channels, we have derived explicit conditions identifying the positivity domain defined by the initial reduced states that are mapped to proper states at any time, and we have distinguished two qualitatively different regimes: one where all the states compatible with a given initial decomposition are eventually mapped into proper states after a transient time interval, and one where some states lead to eternally negative evolved matrices.
ACKNOWLEDGMENTS

All authors acknowledge support from UniMi, via Transition Grant H2020, PSR-2 2020 and PSR-2 2021. NM was funded by the Alexander von Humboldt Foundation in the form of a Feodor Lynen Fellowship and by project ApresSF, supported by the National Science Centre under the QuantERA programme, funded by the European Union's Horizon 2020 research and innovation programme.
H. Grabert, P. Schramm, and G.-L. Ingold, Quantum Brownian motion: The functional integral approach, Phys. Rep. 168, 115 (1988).
P. Pechukas, Reduced dynamics need not be completely positive, Phys. Rev. Lett. 73, 1060 (1994).
R. Alicki, Comment on "Reduced dynamics need not be completely positive", Phys. Rev. Lett. 75, 3020 (1995).
G. Lindblad, On the existence of quantum subdynamics, J. Phys. A: Math. Gen. 29, 4197 (1996).
P. Štelmachovič and V. Bužek, Dynamics of open quantum systems initially entangled with environment: Beyond the Kraus representation, Phys. Rev. A 64, 062106 (2001).
P. Myöhänen, A. Stan, G. Stefanucci, and R. van Leeuwen, A many-body approach to quantum transport dynamics: Initial correlations and memory effects, EPL (Europhys. Lett.) 84, 67001 (2008).
A. Pomyalov, C. Meier, and D. Tannor, The importance of initial correlations in rate dynamics: A consistent non-Markovian master equation approach, Chem. Phys. 370, 98 (2010).
K. Modi, A. Brodutch, H. Cable, T. Paterek, and V. Vedral, The classical-quantum boundary for correlations: Discord and related measures, Rev. Mod. Phys. 84, 1655 (2012).
B. Vacchini and G. Amato, Reduced dynamical maps in the presence of initial correlations, Sci. Rep. 6, 37328 (2016).
D. Schmid, K. Ried, and R. W. Spekkens, Why initial system-environment correlations do not imply the failure of complete positivity: A causal perspective, Phys. Rev. A 100, 022112 (2019).
M. Majeed and A. Z. Chaudhry, Effect of initial system-environment correlations with spin environments, Eur. Phys. J. D 73, 16 (2019).
M. Merkli, Correlation decay and Markovianity in open systems, arXiv:2107.02515 (2021).
T. F. Jordan, A. Shaji, and E. C. G. Sudarshan, Dynamics of initially entangled open quantum systems, Phys. Rev. A 70, 052110 (2004).
T. F. Jordan, A. Shaji, and E. C. G. Sudarshan, Mapping the Schrödinger picture of open quantum dynamics, Phys. Rev. A 73, 012106 (2006).
C. A. Rodríguez-Rosario, K. Modi, A.-M. Kuah, A. Shaji, and E. C. G. Sudarshan, Completely positive maps and classical correlations, J. Phys. A: Math. Theor. 41, 205301 (2008).
A. Shabani and D. A. Lidar, Vanishing quantum discord is necessary and sufficient for completely positive maps, Phys. Rev. Lett. 102, 100402 (2009).
A. Brodutch, A. Datta, K. Modi, A. Rivas, and C. A. Rodríguez-Rosario, Vanishing quantum discord is not necessary for completely positive maps, Phys. Rev. A 87, 042301 (2013).
F. Buscemi, Complete positivity, Markovianity, and the quantum data-processing inequality, in the presence of initial system-environment correlations, Phys. Rev. Lett. 113, 140502 (2014).
J. M. Dominy and D. A. Lidar, Beyond complete positivity, Quant. Inf. Proc. 15, 1349 (2016).
G. A. Paz-Silva, M. J. W. Hall, and H. M. Wiseman, Dynamics of initially correlated open quantum systems: Theory and applications, Phys. Rev. A 100, 042120 (2019).
T. Chalermpusitarak, B. Tonekaboni, Y. Wang, L. M. Norris, L. Viola, and G. A. Paz-Silva, Frame-based filter-function formalism for quantum characterization and control, PRX Quantum 2, 030315 (2021).
S. Hamedani Raja, K. P. Athulya, A. Shaji, and J. Piilo, Photonic dephasing dynamics and the role of initial correlations, Phys. Rev. A 101, 042127 (2020).
H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2002).
A. Trevisan, A. Smirne, N. Megier, and B. Vacchini, Adapted projection operator technique for the treatment of initial correlations, Phys. Rev. A 104, 052215 (2021).
I. Bengtsson and K. Życzkowski, Geometry of Quantum States: An Introduction to Quantum Entanglement (Cambridge University Press, Cambridge, 2006).
H. Ollivier and W. H. Zurek, Quantum discord: A measure of the quantumness of correlations, Phys. Rev. Lett. 88, 017901 (2001).
L. Henderson and V. Vedral, Classical, quantum and total correlations, J. Phys. A: Math. Gen. 34, 6899 (2001).
Á. Rivas, S. F. Huelga, and M. B. Plenio, Quantum non-Markovianity: characterization, quantification and detection, Rep. Prog. Phys. 77, 094001 (2014).
S. T. Ali, J.-P. Antoine, and J.-P. Gazeau, Coherent States, Wavelets, and Their Generalizations, Theoretical and Mathematical Physics (Springer, New York, 2000).
J. M. Renes, R. Blume-Kohout, A. J. Scott, and C. M. Caves, Symmetric informationally complete quantum measurements, J. Math. Phys. 45, 2171 (2004).
Whenever Eq. (9) holds with m = M = 1, the frame {F_α}_{α∈I} is called a Parseval frame, as Eq. (10) reduces to A = Σ_{α∈I} (F_α, A) F_α; interestingly, there are Parseval frames that are not orthonormal bases.
G. Kimura, The Bloch vector for N-level systems, Phys. Lett. A 314, 339 (2003).
D. Cariello, Separability for weakly irreducible matrices, Quantum Inf. Comput. 14, 1308 (2014).
D. Cariello, Does symmetry imply PPT property?, Quantum Inf. Comput. 15, 812 (2015).
G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys. 48, 119 (1976).
V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, Completely positive dynamical semigroups of N-level systems, J. Math. Phys. 17, 821 (1976).
D. Chruściński and A. Kossakowski, Non-Markovian quantum dynamics: Local versus nonlocal, Phys. Rev. Lett. 104, 070406 (2010).
D. Chruściński and A. Kossakowski, Local approach to the non-Markovian evolution of quantum systems, Int. J. Quant. Inf. 09, 129 (2011).
D. Chruściński and A. Kossakowski, From Markovian semigroup to non-Markovian quantum evolution, EPL (Europhys. Lett.) 97, 20005 (2012).
H.-P. Breuer, Non-Markovian generalization of the Lindblad theory of open quantum systems, Phys. Rev. A 75, 022103 (2007).
B. Vacchini, A classical appraisal of quantum definitions of non-Markovian dynamics, J. Phys. B: At. Mol. Opt. Phys. 45, 154007 (2012).
D. Chruściński and F. A. Wudarski, Non-Markovian random unitary qubit dynamics, Phys. Lett. A 377, 1425 (2013).
N. Megier, D. Chruściński, J. Piilo, and W. T. Strunz, Eternal non-Markovianity: from random unitary to Markov chain realisations, Sci. Rep. 7, 6379 (2017).
| [] |
[
"Focussing Quantum States",
"Focussing Quantum States"
] | [
"M Sentef \nInstitute of Physics\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\n86135AugsburgGermany\n",
"A P Kampf \nInstitute of Physics\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\n86135AugsburgGermany\n",
"S Hembacher \nInstitute of Physics\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\n86135AugsburgGermany\n",
"J Mannhart \nInstitute of Physics\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\n86135AugsburgGermany\n"
] | [
"Institute of Physics\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\n86135AugsburgGermany",
"Institute of Physics\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\n86135AugsburgGermany",
"Institute of Physics\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\n86135AugsburgGermany",
"Institute of Physics\nCenter for Electronic Correlations and Magnetism\nUniversity of Augsburg\n86135AugsburgGermany"
] | [] | Does the size of atoms present a lower limit to the size of electronic structures that can be fabricated in solids? This limit can be overcome by using devices that exploit quantum mechanical scattering of electron waves at atoms arranged in focussing geometries on selected surfaces. Calculations reveal that features smaller than a hydrogen atom can be obtained. These structures are potentially useful for device applications and offer a route to the fabrication of ultrafine and well defined tips for scanning tunneling microscopy.PACS numbers: 73.20.-r, 73.63.-b, 85.35.-p | 10.1103/physrevb.74.153407 | [
"https://arxiv.org/pdf/cond-mat/0512056v1.pdf"
] | 119,435,580 | cond-mat/0512056 | fb651415720ae9f2dd802f58cc09ad8d5879eb1c |
Focussing Quantum States
2 Dec 2005
M Sentef
Institute of Physics
Center for Electronic Correlations and Magnetism
University of Augsburg
86135AugsburgGermany
A P Kampf
Institute of Physics
Center for Electronic Correlations and Magnetism
University of Augsburg
86135AugsburgGermany
S Hembacher
Institute of Physics
Center for Electronic Correlations and Magnetism
University of Augsburg
86135AugsburgGermany
J Mannhart
Institute of Physics
Center for Electronic Correlations and Magnetism
University of Augsburg
86135AugsburgGermany
Does the size of atoms present a lower limit to the size of electronic structures that can be fabricated in solids? This limit can be overcome by using devices that exploit quantum mechanical scattering of electron waves at atoms arranged in focussing geometries on selected surfaces. Calculations reveal that features smaller than a hydrogen atom can be obtained. These structures are potentially useful for device applications and offer a route to the fabrication of ultrafine and well defined tips for scanning tunneling microscopy.PACS numbers: 73.20.-r, 73.63.-b, 85.35.-p
The manufacture of ever smaller objects is an ongoing pursuit of science and technology, which at the end of the 20th century led to the fabrication of nanometer-sized structures. A seminal highlight was accomplished in 1993 with the manipulation of single atoms [1], which were even assembled into crystallites [2]. It obviously seems prohibited to construct even smaller structures. How could this be done?
Here, we explore the possibility to design ultrasmall electronic structures by manipulating electronic surface states of metals. We will present examples revealing that electron density peaks as small as 1Å can be achieved. The width of the electronic peak is hereby limited only on the scale of the shortest wavelength of the surface band states. By shrinking the size of interference peaks of electronic surface states, new options for device applications arise. Electron density peaks of Å-width may for example be exploited as ultrafine and well defined quantum states, to be used as tips in scanning tunneling microscopy (STM).
The approach discussed below builds on experimental investigations of electronic surface states. Electrons in Shockley surface states of metals can be scattered by surface steps and by individual atoms placed on the surface [1,3,4]. Complex interference patterns have been generated in artificially manufactured corrals of circular or elliptical shape [5,6]. Even quantum mirage phenomena have been induced in such corrals [7,8,9]. In quantum corrals, electrons are focussed on well defined areas on the surface, thereby creating locations with an enhanced local density of states and therefore an enhanced electron density with typical sizes of 1-2 nm. This work has opened a route for manipulating quantum states almost on the atomic level and raises the question whether it is possible to design arrangements of atoms with optimized focussing properties for quantum waves. Can quantum structures on sub-Å lengthscales be realized?
Fundamental as well as practical problems are encountered on the road to sharply focussed quantum states. First, one may ask whether Heisenberg's uncertainty principle [11] ultimately sets a limit for the spatial extent of fine structure in a quantum mechanical wavefunction. On the practical side, the rules of optics cannot be applied to design the focussing structures for quantum waves. This is because electronic waves with short wavelengths are needed to finely focus the electrons, but scattering of such high-energy particles involves anisotropic non-s-wave channels. Since the higher angular momentum scattering channels have no counterpart in classical wave mechanics, the design rules of conventional optical instruments cannot be used to devise instruments for focussing quantum mechanical waves with short wavelengths.
Using model calculations of surface wave scattering from hard spheres, we consider here focussing arrangements built from scattering centers (see Fig. 1), designed to achieve ultra-narrow peak widths. Complex interference patterns are obtained and analyzed for parabolic and semi-elliptic geometries. It is shown that in this way locally enhanced electron densities with sub-Å lateral size can be realized.
The guiding idea for our approach is to design quantum mechanical (electronic) states Ψ(r, p) with effective widths Δr and Δp in real space and in momentum space, respectively, such that |Ψ|² forms a spike of width Δr*. Heisenberg's uncertainty relation requires that ΔrΔp ≥ ℏ/2, where ℏ is the reduced Planck constant. While this fundamental principle of quantum mechanics inevitably controls any measurement process, it is important that the uncertainty relation does not preclude the possibility to structure the electronic wavefunction on a lengthscale Δr* much smaller than Δr. Therefore, the principles of quantum mechanics do not set a lower limit for generating ultrasmall electronic structures, although these will possibly have a small local probability density in the spike volume Δr*. Rather, in a superposition of quantum mechanical waves, Δr* is often limited by the largest available momentum, which thereby imposes an upper limit on Δp. For the purpose of focussing electronic waves in a crystalline solid this suggests to use high-energy waves preferentially in band states with a large effective mass.
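For a rough sense of what "high-energy waves" means here, one can use the free-electron estimate E = h²/(2mλ²). This is not a number from the paper (surface-state band masses differ from the free-electron mass), only an orientation:

```python
import math

# physical constants (SI)
H = 6.62607015e-34        # Planck constant, J*s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # J per eV

def free_electron_energy_ev(wavelength_m):
    """Kinetic energy E = p^2/(2m) of a free electron with de Broglie
    wavelength lambda, i.e. p = h/lambda; returned in eV."""
    p = H / wavelength_m
    return p ** 2 / (2 * M_E) / EV

# the two wavelengths used in the model calculations below
print(free_electron_energy_ev(12e-10))   # lambda = 12 A
print(free_electron_energy_ev(3.2e-10))  # lambda = 3.2 A
```

The shorter Be-like wavelength of 3.2Å corresponds to an electron energy more than an order of magnitude above that of the 12Å wave, illustrating why finer focussing requires high-energy band states.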
To explore the size of the smallest area into which the electrons can be focussed with practical experimental setups, we performed model calculations in two space dimensions. Scattering centers of radius r_0 are arranged in open focussing geometries with either parabolic or semi-elliptic shape (see Fig. 1). An electronic surface wave, generated for example by a tunnel junction, is considered to enter the focussing arrangement as a plane wave with wavevector k. The wave propagates along the symmetry axis of a regular arrangement of hard disks, with which we model individual atoms placed on a metallic surface with a spacing d ∼ 10 r_0, as is typical for Fe adatom corrals [1,5]. For long wavelengths λ ≫ r_0, realized for surface-state electrons on copper (111) surfaces, only isotropic s-wave scattering is significant. In this case, multiple scattering events and absorption from the scattering centers can be straightforwardly considered [5]. For shorter wavelengths, the established scattering analysis has to be extended to include higher angular momentum scattering channels.
In the absence of multiple scattering the scattering state has the asymptotic form (for kr ≫ 1)
$$\psi(\mathbf{r}) \simeq e^{i\mathbf{k}\cdot\mathbf{r}} + \sum_{\nu} f(\vartheta_\nu)\, e^{i\mathbf{k}\cdot\mathbf{R}_\nu}\, \frac{e^{ikr_\nu}}{\sqrt{kr_\nu}}, \tag{1}$$
where R_ν denotes the position of the ν-th scattering center, and r_ν = r − R_ν measures with the polar coordinates r_ν and ϑ_ν the relative position to the disk at R_ν. Introducing partial wave phase shifts, the scattering amplitude follows as
$$f(\vartheta) = \sqrt{\frac{2i}{\pi}} \left[ e^{i\delta_0}\sin\delta_0 + \sum_{m=1}^{\infty} 2\, e^{i\delta_m}\sin\delta_m \cos(m\vartheta) \right]. \tag{2}$$

The parameter m counts the scattering channel; the corresponding phase shifts are determined by tan δ_m = J_m(kr_0)/N_m(kr_0), where J_m and N_m denote the Bessel functions of the first and second kind, respectively.
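For the hard-disk boundary condition these phase shifts are easy to evaluate numerically. The sketch below is an illustrative, stdlib-only implementation (not code from the paper); it uses the standard integral representations of J_m and Y_m and reproduces the values δ_0 ≈ 69°, δ_1 ≈ −41°, δ_2 ≈ −8° quoted later in the text for kr_0 = 1.24:

```python
import math

def _simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def bessel_j(m, x):
    # J_m(x) = (1/pi) * Int_0^pi cos(m t - x sin t) dt
    return _simpson(lambda t: math.cos(m * t - x * math.sin(t)), 0.0, math.pi) / math.pi

def bessel_n(m, x):
    # Y_m(x) = (1/pi) Int_0^pi sin(x sin t - m t) dt
    #        - (1/pi) Int_0^inf [e^{mt} + (-1)^m e^{-mt}] e^{-x sinh t} dt
    osc = _simpson(lambda t: math.sin(x * math.sin(t) - m * t), 0.0, math.pi)
    tail = _simpson(lambda t: (math.exp(m * t) + (-1) ** m * math.exp(-m * t))
                    * math.exp(-x * math.sinh(t)), 0.0, 8.0)
    return (osc - tail) / math.pi

def phase_shift_deg(m, kr0):
    # tan(delta_m) = J_m(k r0) / N_m(k r0)
    return math.degrees(math.atan(bessel_j(m, kr0) / bessel_n(m, kr0)))

for m in range(3):
    print(m, round(phase_shift_deg(m, 1.24), 1))
```

The truncation of the tail integral at t = 8 is safe here because the integrand decays like exp(−x sinh t); for larger m or smaller x the cutoff would need rechecking.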
In the restriction to s-wave scattering, repeated scattering events are included by extending Eq. (1) to
$$\psi(\mathbf{r}) \simeq e^{i\mathbf{k}\cdot\mathbf{r}} + \mathbf{b}^{T} \cdot \left[\,\mathbb{1} + A + A^{2} + A^{3} + \dots\right] \cdot \mathbf{a}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} + \mathbf{b}^{T} \cdot \left[\,\mathbb{1} - A\,\right]^{-1} \cdot \mathbf{a}(\mathbf{r}). \tag{3}$$
Here, b = (b_1, ..., b_N) for N scattering centers, with b_ν = e^{ik·R_ν} accounting for the phase factors related to the individual disk positions. The amplitudes for the waves scattered from the disk at R_ν to the disk at R_µ (ν ≠ µ) form an N × N matrix with
$$A_{\nu\mu} = f_0\, \frac{e^{ikr_{\nu\mu}}}{\sqrt{kr_{\nu\mu}}}, \tag{4}$$
where r_{νµ} = |R_ν − R_µ|. Similarly, the amplitude of the wave scattered from R_µ to r is
$$a_\mu(\mathbf{r}) = f_0\, \frac{e^{ikr_\mu}}{\sqrt{kr_\mu}}. \tag{5}$$
The scattering amplitude f_0 is related to the s-wave phase shift δ_0 by
$$f_0 = \sqrt{\frac{2i}{\pi}}\, e^{i\delta_0}\sin\delta_0 = \frac{1}{\sqrt{2\pi i}}\left(e^{2i\delta_0} - 1\right). \tag{6}$$
The possible partial absorption of the incident electronic surface wave by inelastic scattering and scattering into bulk states is incorporated by allowing the phase shifts to become complex [5], corresponding to the replacement e^{2iδ_0} → α_0 e^{2iδ_0} in Eq. (6). Henceforth, δ_0 is a real number; the absorption coefficient α_0 is 1 for non-absorbing adatoms and vanishes for complete attenuation. For wavelengths which become almost comparable to the size of an atom, higher angular momentum scattering channels are important. To give an example, for kr_0 = 2πr_0/λ = 1.24 (see below) the scattering phase shifts in the s-, p-, and d-channels are δ_0 = 69°, δ_1 = −41°, and δ_2 = −8°. With the restriction to double scattering from each disk, the ansatz for the asymptotic scattering state is extended to
$$\psi(\mathbf{r}) \simeq e^{i\mathbf{k}\cdot\mathbf{r}} + \sum_{\nu=1}^{N} e^{i\mathbf{k}\cdot\mathbf{R}_\nu} f(\vartheta_\nu)\, \frac{e^{ikr_\nu}}{\sqrt{kr_\nu}} + \sum_{\substack{\mu,\nu=1 \\ \mu\neq\nu}}^{N} e^{i\mathbf{k}\cdot\mathbf{R}_\nu} f(\vartheta_{\nu\mu})\, \frac{e^{ikr_{\nu\mu}}}{\sqrt{kr_{\nu\mu}}}\, f(\vartheta_\mu - \vartheta_{\nu\mu})\, \frac{e^{ikr_\mu}}{\sqrt{kr_\mu}}, \tag{7}$$
where ϑ_{νµ} is the angle for the position of the scattering disk µ in the polar coordinate system attached to disk ν. Without absorption, and neglecting only the still small contribution of the d-wave scattering channel, the s- and p-wave contributions (m = 0 and m = 1) are included in the angular-dependent scattering amplitude given in Eq. (2). In a first attempt, the focussing properties of a device consisting of two parabolic "quantum mirrors", arranged like a reflector telescope, have been calculated. The substrate was assumed to be the Cu (111) surface, and 29 hard disks with radius r_0 = 0.63Å were chosen to represent Co³⁺ ions as scatterers. The focal distance of the parabola is f = 4.9Å, and the average disk spacing is 8Å. The wavelength of the incoming wave was taken to be λ = 12Å. At this wavelength λ ≫ r_0, so that only s-wave scattering has to be considered. In Fig. 2 we show the resulting absolute square |ψ(r)|² of the scattering state. Guided by the successful quantitative analysis
of the current-voltage characteristics at the center of a circular quantum corral of iron atoms on a copper surface [5], the "black dot" attenuation limit α_0 = 0 was adopted. The image shown in Fig. 2 is the pattern that would be observed in a standard STM local density of states measurement.
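The s-wave machinery of Eqs. (3)-(6) amounts to building the inter-disk matrix A and performing one linear solve. The sketch below is an illustrative, stdlib-only implementation (not the authors' code); the disk positions and the field point are arbitrary, and the default α_0 = 0 corresponds to the "black dot" attenuation limit used for Fig. 2:

```python
import cmath
import math

def s_wave_scattering_state(r, k, centers, delta0=0.0, alpha0=0.0):
    """|psi(r)|^2 for s-wave multiple scattering from hard disks at `centers`,
    following Eqs. (3)-(6); alpha0 = 0 is the "black dot" absorption limit."""
    # Eq. (6) with absorption: f0 = (alpha0 e^{2i delta0} - 1) / sqrt(2 pi i)
    f0 = (alpha0 * cmath.exp(2j * delta0) - 1) / cmath.sqrt(2j * math.pi)
    n = len(centers)

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # b_nu = e^{i k . R_nu} for a plane wave travelling along +x
    b = [cmath.exp(1j * k * R[0]) for R in centers]
    # Eq. (4): A_{nu,mu} = f0 e^{i k r_numu} / sqrt(k r_numu), zero on the diagonal
    A = [[0j if i == j else f0 * cmath.exp(1j * k * dist(centers[i], centers[j]))
          / cmath.sqrt(k * dist(centers[i], centers[j])) for j in range(n)]
         for i in range(n)]
    # Eq. (3): psi = e^{ikx} + b^T (1 - A)^{-1} a(r); solve (1 - A)^T c = b
    # with a tiny Gaussian elimination on the augmented matrix M
    M = [[(1.0 if i == j else 0.0) - A[j][i] for j in range(n)] + [b[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(M[row][col]))
        M[col], M[piv] = M[piv], M[col]
        for row in range(col + 1, n):
            fac = M[row][col] / M[col][col]
            M[row] = [x - fac * y for x, y in zip(M[row], M[col])]
    c = [0j] * n
    for i in reversed(range(n)):
        c[i] = (M[i][n] - sum(M[i][j] * c[j] for j in range(i + 1, n))) / M[i][i]
    # Eq. (5): superpose the outgoing wave a_mu(r) from every disk
    psi = cmath.exp(1j * k * r[0])
    for cmu, R in zip(c, centers):
        rmu = dist(r, R)
        psi += cmu * f0 * cmath.exp(1j * k * rmu) / cmath.sqrt(k * rmu)
    return abs(psi) ** 2
```

A convenient consistency check: for a single disk the matrix A vanishes and Eq. (3) collapses to the single-scattering form of Eq. (1) with f(ϑ) = f_0.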
Near the tip of the parabola intense interference peaks with a full width at half maximum (FWHM) ∼ 4.2Å are produced (see for example peak A in Fig. 2). Due to the 1/√r decay of the amplitude for the scattered waves, the peak heights are larger the closer the peaks are to the scattering atom [13]. Resulting from the focussing of the second, smaller "quantum mirror", additional peaks emerge near its focal point (see for example peak B in Fig. 2). The width of peak B, ∼ 3.5Å at FWHM, is just a fraction of the incoming wavelength. The peak, however, has a small intensity.
There are obvious routes to further improve the focussing. First, materials capable of sustaining surface waves with considerably smaller wavelengths may be used. The goal to achieve interference peaks with subatomic widths precludes the use of surface eigenstates of noble metal surfaces, whose typical wavelengths are ∼ 15Å [12]. The recently observed Friedel oscillations on beryllium (0001) surfaces with wavelengths as short as 3.2Å [10] suggest Be as a candidate material. Other options for tuning the electronic density distribution include using non-monochromatic waves and optimizing the arrangement of the surface adatom scatterers and the geometry of the quantum mirror. The development of a mathematical algorithm to select a focussing arrangement is quite a non-trivial task, and we have therefore explicitly tried several device geometries. Of the ones explored, particularly sharp peaks were obtained by using a semi-ellipse, as we will demonstrate in the following.
As shown in Fig. 3, 17 hard disks were placed on the contour line of a semi-ellipse with eccentricity e = 0.5 and an average disk spacing of 6Å. In this calculation a wavelength of the incoming wave of 3.2Å and a disk radius of again r_0 = 0.63Å were chosen. Fig. 3 shows the resulting contour plot of |ψ(r)|² for the scattering state calculated from Eq. (8). The complex structures in this interference pattern originate in part from the angular-dependent p-wave scattering channels, which have no counterpart in classical geometrical optics. Fig. 4 shows a larger magnification of the area marked by the white dashed square in Fig. 3. This area contains the most prominent constructive interference peak in this semi-elliptic focussing quantum mirror geometry. The peak has an anisotropic shape with an almost elliptic cross section; along the horizontal direction the FWHM of this peak is merely 0.92Å. This is less than 2 Bohr radii. The peak width is therefore smaller than the nominal size of the 1s orbital of hydrogen.
So far we have not made an attempt to uniquely determine the optimum disk arrangement, which leads to the sharpest interference structure. Alternative focussing geometries with different selected positions of the scattering centers may very well lead to even sharper interference peaks. Initial ideas of "wave function engineering" by a special-purpose design of quantum corral geometries have recently been formulated in the attempt to generate special predefined mirage phenomena [14]. It is likely that similar strategies can be followed to identify arrangements of scattering centers with optimized focussing properties. If surface waves with wavelengths of just a fewÅ are considered, such optimization strategies will necessarily have to include also non-s-wave scattering channels.
Our calculations reveal in a proof-of-principle that special arrangements of individual atoms on surfaces allow to create electron states with diameters comparable to the size of a hydrogen atom. These states may be coupled to bulk states and be used in devices such as highly focussed sources of tunneling electrons, as for example required for STM tips. The focussing of spin polarized surface states may furthermore allow to image magnetic structures on the atomic or subatomic scale. The controlled design and device applications of electronic structures on the sub-Å scale may therefore emerge as a real possibility.
We thank F. Giessibl, D. Vollhardt, G. Schön, Ø. Fischer, M. Sekania, C. W. Schneider, T. Kopp, R. Claessen, and D. Pohl for thoughtful discussions. This work was supported by the BMBF (EKM-project 13N6918) and the Deutsche Forschungsgemeinschaft (SFB 484).
FIG. 1: (Color online) Schematic view of the focussing geometry. An electronic surface wave is generated with a tunnel junction and propagates towards an arrangement of scattering centers (red semi-spheres). Multiple interference peaks emerge from the superposition of scattered waves.
FIG. 2: (Color online) Focussing of an electron wave with wavelength 12Å by scattering from two quantum mirrors. The arrangement consists of a large parabolic mirror formed by 29 scattering centers (white pillars) and a small mirror consisting of 3 additional scatterers in a reflector telescope geometry. The plot shows the distribution |ψ(r)|² of the electronic scattering state. Only s-wave scattering is included. "A" marks the most prominent peak near the tip, while the peak "B" emerges near the focus point as a result of the quantum mirror geometry.
FIG. 3: (Color online) Distribution of an electron scattering state |ψ(r)|² achieved by scattering from a semi-elliptic arrangement. The wavelength of the incoming wave is λ = 3.2Å, the wavelength of Friedel oscillations on Be (0001) surfaces. s- and p-wave scattering channels are included.
FIG. 4: (Color online) Magnification of the area marked by the white, dashed square in Fig. 3. The width of this peak of the electron density is 0.92Å (FWHM).
M. F. Crommie, C. P. Lutz, and D. M. Eigler, Nature (London) 363, 524 (1993); Science 262, 218 (1993).
See the IBM homepage at http://www.almaden.ibm.com/vis/stm/hexagone.html.
Y. Hasegawa and P. Avouris, Phys. Rev. Lett. 71, 1071 (1993).
L. Bürgi, O. Jeandupeux, H. Brune, and K. Kern, Phys. Rev. Lett. 82, 4516 (1999).
E. J. Heller, M. F. Crommie, C. P. Lutz, and D. M. Eigler, Nature (London) 369, 464 (1994).
J. Kliewer, R. Berndt, E. V. Chulkov, V. M. Silkin, P. M. Echenique, and S. Crampin, Science 288, 1899 (2000).
H. C. Manoharan, C. P. Lutz, and D. M. Eigler, Nature (London) 403, 512 (2000).
The theoretical work on quantum corrals and mirages was recently reviewed by G. A. Fiete and E. J. Heller, Rev. Mod. Phys. 75, 933 (2003).
See also A. A. Aligia and A. M. Lobos, J. Phys.: Condens. Matter 17, S1095 (2005), and references therein.
P. T. Sprunger, L. Petersen, E. W. Plummer, E. Laegsgaard, and F. Besenbacher, Surface Science 275, 1765 (1997).
W. Heisenberg, Z. Phys. 43, 172 (1927).
S. G. Davison and M. Steslicka, Basic Theory of Surface States (Oxford, New York, 1996).
Because we use the asymptotic form for the scattered waves, which is only very accurate on distances larger than 5 atomic radii, the amplitude for ψ(r) is overestimated in the very near vicinity of each scattering disk.
A. A. Correa, F. A. Reboredo, and C. A. Balseiro, Phys. Rev. B 71, 035418 (2005).
| [] |
[
"A New Channel Subspace Characterization for Channel Estimation in RIS-Aided Communications",
"A New Channel Subspace Characterization for Channel Estimation in RIS-Aided Communications"
] | [
"Mehdi Haghshenas \nDepartment of Electronics, Information and Bioengineering\nPolitecnico di Milano\n20133MilanItaly\n",
"Parisa Ramezani \nDepartment of Computer Science\nKTH Royal Institute of Technology\nSE-100 44StockholmSweden\n",
"Maurizio Magarini [email protected] \nDepartment of Electronics, Information and Bioengineering\nPolitecnico di Milano\n20133MilanItaly\n",
"Emil Björnson \nDepartment of Computer Science\nKTH Royal Institute of Technology\nSE-100 44StockholmSweden\n"
] | [
"Department of Electronics, Information and Bioengineering\nPolitecnico di Milano\n20133MilanItaly",
"Department of Computer Science\nKTH Royal Institute of Technology\nSE-100 44StockholmSweden",
"Department of Electronics, Information and Bioengineering\nPolitecnico di Milano\n20133MilanItaly",
"Department of Computer Science\nKTH Royal Institute of Technology\nSE-100 44StockholmSweden"
] | [] | A reconfigurable intelligent surface (RIS) is a holographic MIMO surface composed of a large number of passive elements that can induce adjustable phase shifts to the impinging waves. By creating virtual line-of-sight (LOS) paths between the transmitter and the receiver, RIS can be a game changer for millimeter-wave (mmWave) communication systems that typically suffer from severe signal attenuation. Reaping the benefits of RIS, however, relies on the accuracy of the channel estimation, which is a challenging task due to the large number of RIS elements. Specifically, conventional channel estimators require a pilot overhead equal to the number of RIS elements, which is impractical. Herein, we propose a novel way to approximately represent the RIS channels in a lower-dimensional subspace and derive the basis vectors for the identified subspace. We use this channel structure to only send pilots in this subspace, thereby vastly saving on the pilot overhead. Numerical results demonstrate that when the RIS has an element spacing of a quarter of the wavelength, our method reduces the pilot overhead by 80% with retained or even improved performance. | null | [
"https://export.arxiv.org/pdf/2304.02087v1.pdf"
] | 257,952,272 | 2304.02087 | ce0229fa4fc1fd854c386802b61ceacd9e1c6b23 |
A New Channel Subspace Characterization for Channel Estimation in RIS-Aided Communications
4 Apr 2023
Mehdi Haghshenas
Department of Electronics, Information and Bioengineering
Politecnico di Milano
20133MilanItaly
Parisa Ramezani
Department of Computer Science
KTH Royal Institute of Technology
SE-100 44StockholmSweden
Maurizio Magarini [email protected]
Department of Electronics, Information and Bioengineering
Politecnico di Milano
20133MilanItaly
Emil Björnson
Department of Computer Science
KTH Royal Institute of Technology
SE-100 44StockholmSweden
A New Channel Subspace Characterization for Channel Estimation in RIS-Aided Communications
4 Apr 2023. Index Terms: Holographic MIMO, reconfigurable intelligent surface, channel estimation, channel subspace characterization
A reconfigurable intelligent surface (RIS) is a holographic MIMO surface composed of a large number of passive elements that can induce adjustable phase shifts to the impinging waves. By creating virtual line-of-sight (LOS) paths between the transmitter and the receiver, RIS can be a game changer for millimeter-wave (mmWave) communication systems that typically suffer from severe signal attenuation. Reaping the benefits of RIS, however, relies on the accuracy of the channel estimation, which is a challenging task due to the large number of RIS elements. Specifically, conventional channel estimators require a pilot overhead equal to the number of RIS elements, which is impractical. Herein, we propose a novel way to approximately represent the RIS channels in a lower-dimensional subspace and derive the basis vectors for the identified subspace. We use this channel structure to only send pilots in this subspace, thereby vastly saving on the pilot overhead. Numerical results demonstrate that when the RIS has an element spacing of a quarter of the wavelength, our method reduces the pilot overhead by 80% with retained or even improved performance.
I. INTRODUCTION
Holographic MIMO (HMIMO) surfaces are software-controlled metasurfaces made of sub-wavelength elements that can collectively control the electromagnetic (EM) response of the surface and manipulate the attributes of the EM waves [1]. Thanks to the recent advancements in micro-electromechanical systems, the elements of HMIMO surfaces can be reconfigured dynamically and in real time, thus catering to immediate changes in wireless networks. Among different types of these surfaces, the passive HMIMO surface, also known as reconfigurable intelligent surface (RIS), has recently attracted significant interest in academia and industry due to its low-cost and low-complexity design. Being able to co-phase the multi-path signals and reflect them towards the desired direction, RIS has been extensively studied for combating the blockage problem in wireless networks [2].
Millimeter wave (mmWave) systems are a specific example where the communication is impaired by blockage, resulting in a low signal-to-noise ratio (SNR) and unreliable connections [3]. RIS can therefore play an important role in enhancing the performance of mmWave communication systems by providing strong virtual line-of-sight (LOS) paths between the transmitter and the receiver. (This work was supported by the FFL18-0277 grant from the Swedish Foundation for Strategic Research.)
To attain the promised functionality, accurate channel state information (CSI) of the cascaded channel from the transmitter to the receiver via the RIS is essential. Since the cascaded channel's pathloss is high per element, an RIS must compensate by having a massive number of elements and capitalizing on the quadratic beamforming gain [2]. Hence, the main challenge is how to efficiently estimate a large number of channel coefficients. Compressed sensing and least squares (LS) are two widely adopted signal processing techniques for channel estimation in RIS-aided systems [4]. Specifically, several works exploit the spatial structure and sparsity of the channel in the mmWave band to reduce the pilot overhead [5]- [7]. They convert channel estimation into sparse signal recovery problems and employ compressed sensing methods to solve them. These methods have high computational complexity for practically large RIS sizes and require strong assumptions on the angular separability between paths that might not be satisfied in practice. Moreover, compressed-sensing methods require high SNRs, which might not be available in mmWave bands and with the high pathloss of the cascaded channels. The conventional LS estimator is also impractical due to its prohibitive pilot overhead as it requires one pilot per RIS element [8]. Another channel estimation approach has been recently investigated in [9] that takes advantage of the existing spatial channel correlation in RIS-aided communications to reduce the pilot overhead. Specifically, the authors evaluate the correlation matrix of the BS-RIS-user cascaded channel and identify a subspace of reduced dimension where all the cascaded channels approximately lie.
In this paper, by exploiting the fact that the locations of the BS and the RIS are fixed, we propose a new method to further reduce the pilot length compared to [9] that does not require the knowledge of spatial correlation matrix of the channel. We consider an RIS-aided mmWave communication system where the direct path between the base station (BS) and user is blocked. Since the BS and RIS are in fixed positions, the corresponding channel component is known in advance. We show that for an RIS with M elements, the RIS-related channels approximately reside in a subspace of dimension η with η ≪ M . We first derive an orthogonal basis for this subspace and then develop a channel estimation framework where pilots are only transmitted in the derived subspace, while still enabling estimation of the entire cascaded channel and efficient RIS configuration for SNR maximization. Numerical results show that, despite its lower pilot overhead and complexity, our method either outperforms LS or attains comparable performance depending on the transmit power.
II. SYSTEM MODEL
We consider a single-cell scenario where the BS serves single-antenna users through virtual LOS paths provided by the RIS. Accordingly, the RIS is deployed to have LOS paths toward the BS and all prospective user locations. The BS is equipped with a uniform linear array (ULA) of N antennas, while the RIS is configured as a uniform planar array (UPA) with a total of M elements. The reflecting elements are arranged in the yz-plane with M_H and M_V elements along the y and z axes, respectively, such that M = M_H M_V. Since users are randomly distributed in the coverage area, their channels are unknown to the BS. Therefore, when a user wishes to connect to the BS, channel estimation must be performed at the BS prior to the data transmission. In the channel estimation phase, the UE transmits a pilot signal x_t ∈ C at time instance t. Assuming no direct path between the UE and BS, the received signal y_t ∈ C^N at the BS can be modeled as [10]
y t = Hdiag(φ t )hx t + n t = Vφ t x t + n t ,(1)
where n_t ∼ N_C(0, σ² I_N) is the additive independent complex Gaussian noise and the diagonal matrix diag(φ_t) contains the RIS phase shift configuration φ_t = [e^{jφ_{1,t}}, . . . , e^{jφ_{M,t}}]^T used at time t. Additionally, H ∈ C^{N×M} is the channel between the BS and RIS, while h ∈ C^M is the channel between the RIS and UE. We define the cascaded BS-RIS-UE channel as V = H diag(h) ∈ C^{N×M}. We consider a mmWave band, so the channels are modeled according to the geometric Saleh-Valenzuela channel model [11], [12]:
H = Σ_{ℓ=1}^{L} β_ℓ a_BS(ϕ^ℓ_BS) a_RIS(ϕ^ℓ_AOD, θ^ℓ_AOD)^H, (2)
h = Σ_{ℓ=1}^{L'} ζ_ℓ a_RIS(ϕ^ℓ_AOA, θ^ℓ_AOA), (3)
where the superscript ℓ corresponds to the ℓ-th path in the multipath scenario, L and L' are the numbers of paths, β and ζ are the complex channel gain coefficients, and a_BS is the BS's array response vector defined in (4).
In the RIS array response a_RIS(ϕ, θ) in (5), the azimuth angle ϕ and elevation angle θ seen from the RIS can be the angle of departure (AOD) towards the BS in (2), or the AOAs from the user in (3). Moreover, i(m) = mod(m − 1, M_H) and j(m) = ⌊(m − 1)/M_H⌋ are the horizontal and vertical indices of the m-th RIS element, and mod is the modulo operator. Similarly, d_H and d_V are the horizontal and vertical element spacings normalized by the carrier wavelength.
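As a quick illustration of the indexing above, the UPA response in (5) can be sketched in NumPy (the function name and the sign convention of the phase are our own choices, not from the paper):

```python
import numpy as np

def a_ris(phi, theta, M_H, M_V, d_H=0.25, d_V=0.25):
    """UPA array response following (5): element m (0-based here) has
    horizontal index i = m mod M_H and vertical index j = m // M_H."""
    m = np.arange(M_H * M_V)
    i, j = m % M_H, m // M_H
    phase = 2 * np.pi * (i * d_H * np.cos(theta) * np.sin(phi)
                         + j * d_V * np.sin(theta))
    return np.exp(1j * phase)
```

Every entry has unit modulus, so ‖a_RIS(ϕ, θ)‖ = √M for any angle pair.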
We consider transmission of P pilot symbols, where P will be specified later. Assuming the channels to be fixed during the estimation phase, Φ = [φ 1 , . . . , φ P ] ∈ C M×P collects the P RIS configurations. Accordingly, we can write the collective received pilot symbols Y = [y 1 , . . . , y P ] ∈ C N ×P as
Y = VΦX + N,(6)
where X = diag(x 1 , . . . , x P ) and N = [n 1 , . . . , n P ] ∈ C N ×P are the collective transmitted pilot symbols and noise matrix, respectively.
III. SPANNED CHANNEL SUBSPACE BY THE RIS
The channel coefficients are similar for adjacent RIS elements, and determined by the number of clusters and their angular distribution over the propagation environment. Even in an isotropic scattering environment, the spatial correlation between two RIS elements is a sinc function of the inter-element distance divided by λ/2 [13]. Hence, for a typical RIS with a planar structure and sub-λ/2 element spacing, there will be substantial spatial correlation under isotropic scattering. The spatial correlation is even stronger in the mmWave band, which is highly non-isotropic with one dominant LOS path and a few additional paths. One important implication of spatial correlation is that any M-dimensional RIS channel h of the type in (3) belongs to a subspace with a dimension substantially lower than M [13]. In this section, we demonstrate how the basis vectors for this subspace can be generated for a UPA.
The channel in (3) is a linear combination of array response vectors and we will identify a set of orthogonal array response vectors that can be used to represent any such channel. There are multiple ways this can be done, but we assume a_RIS(π/2, 0) is one of the vectors and will identify the remaining ones. Using the expression of the array response for UPAs in (5), we can write the inner product of two array response vectors obtained with the angle pairs (ϕ_1, θ_1) and (ϕ_2, θ_2) as [14, Ch. 7]
a(ϕ_2, θ_2)^H a(ϕ_1, θ_1) = Σ_{m=1}^{M} e^{j2π(d_V j(m)Ω + d_H i(m)Ψ)} = ( Σ_{k=0}^{M_V−1} e^{j2πd_V kΩ} ) ( Σ_{l=0}^{M_H−1} e^{j2πd_H lΨ} ) = M_V S(Ω) · M_H T(Ψ), (7)
where
Ω = sin(θ_2) − sin(θ_1), (8)
Ψ = cos(θ_2) sin(ϕ_2) − cos(θ_1) sin(ϕ_1). (9)
Using standard techniques [14, Ch. 7], we can express S(Ω) and T(Ψ) as
S(Ω) = sin(πM_V d_V Ω) / (M_V sin(πd_V Ω)), (10)
T(Ψ) = sin(πM_H d_H Ψ) / (M_H sin(πd_H Ψ)). (11)
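The zero pattern of these kernels is easy to check numerically. The sketch below (the helper name and argument layout are our own) evaluates the normalized kernel of (10), resolving the removable singularity by L'Hopital's rule:

```python
import numpy as np

def S(Omega, n, d):
    """sin(pi*n*d*Omega) / (n*sin(pi*d*Omega)) as in (10)-(11); the removable
    singularity where sin(pi*d*Omega) = 0 is handled by L'Hopital's rule."""
    x = d * Omega
    if np.isclose(np.sin(np.pi * x), 0.0):
        return np.cos(np.pi * n * x) / np.cos(np.pi * x)
    return np.sin(np.pi * n * x) / (n * np.sin(np.pi * x))
```

T(Ψ) is the same function evaluated with (M_H, d_H). For M_V = 8 and d_V = 1/4, S vanishes at Ω = k/(M_V d_V) for k = ±1, . . . , ±7 and returns to unit magnitude at the grating-lobe point Ω = 1/d_V.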
We want to identify a set of angle pairs that result in mutually orthogonal array responses. The inner product in (7) is zero if either S(Ω) or T(Ψ) is zero. From (10) we notice that S(k/(M_V d_V)) = 0 for k = ±1, . . . , ±(M_V − 1), and S(Ω) is periodic with period 1/d_V. Since we assumed that (π/2, 0) is one angle pair, we consider θ_1 = 0 as a starting point and identify other elevation angles by solving S(Ω) = 0. Since the function is periodic, for k = ±1, . . . , ±⌊M_V d_V⌋ (so that |sin(θ_2)| ≤ 1) we have S(Ω) = 0 if
Ω = sin(θ_2) = k/(M_V d_V), (12)
and solving for θ_2, we obtain a set of elevation angles whose array responses are mutually orthogonal. After collecting all the elevation angles from the previous step in the set Θ = {θ_1, . . . , θ_n}, for each θ ∈ Θ we also need to find the azimuth angles that result in T(Ψ) = 0. This is required to preserve the orthogonality among all the array response vectors with the same elevation angle. From (11), we observe that T(l/(M_H d_H)) = 0 for l = ±1, . . . , ±(M_H − 1), and it is periodic with period 1/d_H. Similarly, we consider ϕ_1 = π/2 as the reference point for each elevation angle θ_i ∈ Θ to identify other azimuth angles that satisfy T(Ψ) = 0. Therefore, for l = ±1, . . . , ±(M_H − 1), we construct the set
Φ(θ_i) = { ϕ_2 : Ψ = cos(θ_i)(sin(ϕ_2) − 1) = l/(M_H d_H) } (13)
to collect all the azimuth angles that correspond to orthogonal beams associated with the elevation angle θ_i. In summary, we have determined a set of elevation angles that generates orthogonal channel directions. This step establishes the solid blue lines in the elevation-azimuth plane in Fig. 1 for an 8 × 8 RIS with d_H = d_V = 1/4. In this figure, every two points belonging to different blue lines are mutually orthogonal. In the second step, for each blue line, we determine a collection of azimuth angles where any two choices satisfy the orthogonality condition (13). In Fig. 1, these angles are shown with red crosses. We observe that for higher values of |θ|, we have fewer points. This is because the beamwidth is inversely proportional to cos(θ): diverging from boresight (θ = 0), the beamwidth increases [15]. For any (ϕ, θ) in between the crosses in Fig. 1, the corresponding array response vector can be expressed as a linear combination of the selected orthogonal array response vectors.
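The two-step construction just described can be sketched as follows (a NumPy sketch under our own naming; the bound ⌊M_V d_V⌋ on k enforces |sin θ| ≤ 1, and the azimuths are generated from the reference ϕ = π/2 via (13)):

```python
import numpy as np

def a_ris(phi, theta, M_H, M_V, d_H=0.25, d_V=0.25):
    m = np.arange(M_H * M_V)
    i, j = m % M_H, m // M_H
    return np.exp(2j * np.pi * (i * d_H * np.cos(theta) * np.sin(phi)
                                + j * d_V * np.sin(theta)))

def basis_angles(M_H, M_V, d_H=0.25, d_V=0.25):
    """Angle pairs whose UPA responses are mutually orthogonal, built from
    the elevation grid (12) and, per elevation, the azimuth set (13)."""
    pairs, K = [], int(np.floor(M_V * d_V))
    for k in range(-K, K + 1):
        theta = np.arcsin(k / (M_V * d_V))       # step 1: eq. (12)
        c = np.cos(theta)
        if np.isclose(c, 0.0):                   # pole: a single vector
            pairs.append((np.pi / 2, theta))
            continue
        l = 0                                    # step 2: eq. (13)
        while abs(1 + l / (M_H * d_H * c)) <= 1 + 1e-12:
            s = np.clip(1 + l / (M_H * d_H * c), -1.0, 1.0)
            pairs.append((np.arcsin(s), theta))
            l -= 1
    return pairs
```

For an 8×8 RIS with d_H = d_V = 1/4 this returns η = 15 pairs, and the Gram matrix of the stacked responses equals M·I, confirming the orthogonality.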
Following the above-mentioned approach, the set A = {(ϕ_1, θ_1), . . . , (ϕ_η, θ_η)} collects all the azimuth and elevation pairs (red crosses in Fig. 1) that are generated in this way. Accordingly, the set B = {a_RIS(ϕ_i, θ_i) : (ϕ_i, θ_i) ∈ A} collects all the orthogonal array response vectors that form a basis of the subspace in C^M where the RIS channel resides. Algorithm 1 summarizes the procedure to generate the set B of basis vectors. The cardinality |A| = |B| = η defines the dimension of the subspace. This value is also known as the spatial degrees-of-freedom (DOF) and for a large planar aperture it can be asymptotically approximated as [16], [17]
η ≈ πM H d H M V d V .(14)
We notice that η increases linearly with the area M H d H M V d V of the aperture normalized by the wavelength.
To show under what conditions the approximate/asymptotic formula in (14) is a good prediction of the number of basis vectors generated by Algorithm 1, Fig. 2 shows η/M for the proposed algorithm and the approximate formula. The dashed lines show the approximate DOF in (14), while the solid lines show the values obtained using the proposed algorithm. As can be seen, the approximation in (14) is tight for large arrays and the tightness appears earlier for arrays with small element spacings. Another observation from Fig. 2 is that η < M and the ratio becomes particularly small when the element spacing is smaller than λ/2, which is the intended range for RIS. The implication of this is that the channel h ∈ C^M in (3) can be expressed as a linear combination of the η orthogonal basis vectors a_RIS(ϕ_i, θ_i) ∈ B, such that
h = Σ_{i=1}^{η} c_i a_RIS(ϕ_i, θ_i), (15)
where c_i is the channel coefficient associated with the basis vector a_RIS(ϕ_i, θ_i). In Sec. IV, we will utilize this basis representation when performing channel estimation to reduce the required pilot length from M to η. However, it should be noticed that (15) is, strictly speaking, an approximate representation and there exists a tiny fraction of the signal power outside its span. To clarify further, we investigate the eigenvalues of the spatial channel correlation matrix R = E{a_RIS(ϕ, θ) a_RIS(ϕ, θ)^H} for a 32 × 32 RIS. Fig. 3 illustrates the eigenvalues of R in descending order where ϕ ∼ U[−π/3, π/3], θ ∼ U[−π/2, π/2], and element spacings of d_H = d_V ∈ {1/4, 1/8} are considered. The value of η obtained by our algorithm is denoted by a star marker to specify the channel dimension that our proposed method considers. We observe that the number of large eigenvalues equals the number of basis functions in the representation (15), while the remaining eigenvalues decrease rapidly, meaning that 98.7% of the channel power lies in the identified η dimensions. The remaining channel power can be neglected in practice, but causes a performance saturation at very large SNRs, which will be discussed further in Sec. V.
IV. CHANNEL ESTIMATION AND RIS CONFIGURATION
In this section, we first explain the intuition behind the RIS phase configuration and the corresponding power gain at the BS considering perfect CSI. Then, we propose a novel method to perform channel estimation and RIS phase configuration by exploiting the channel parametrization in (15).
A. RIS Configuration with Perfect CSI and General Channels
Suppose the signal x is transmitted with power ρ. To recover it from the received signal vector y in (1), the BS applies the unit-norm receive combiner w:
y = w^H H diag(φ) h x + w^H n = w^H V φ x + ñ, (16)
where w should be configured to maximize the SNR and ñ = w^H n ∼ N_C(0, σ²). The achievable rate is log_2(1 + SNR), where the SNR is
SNR = ρ |w^H V φ|² / σ². (17)
The RIS phase configuration φ and the receive combiner w should be jointly selected to maximize the SNR. If the BS has perfect knowledge of the cascaded channel V, it can compute a local optimum using the alternating optimization
φ_opt = e^{j·arg(V^H w)}, (18)
w_opt = V φ_opt / ‖V φ_opt‖, (19)
and the resulting configuration is communicated to the RIS via a backhaul connection [2]. Recall that the cascaded channel can be expressed as V = H diag(h). Since w^H V φ = w^H H diag(φ) h, the optimized RIS configuration can be divided into two parts:
φ = φ tx ⊙ φ rx ,(20)
where ⊙ denotes the Hadamard product. The part φ tx acts as a transmit precoder towards the BS and φ rx acts as receive combiner from the user. In the next subsection, we explain how these two parts should be designed to maximize the SNR in (17) for LOS-dominant channels.
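The alternating updates in (18)-(19) are easy to sketch. This toy example on a random cascaded channel (dimensions and seed are our own choices) tracks the objective |w^H V φ|² across iterations:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 16, 32
V = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w /= np.linalg.norm(w)                         # random unit-norm initialization

gains = []
for _ in range(10):
    phi = np.exp(1j * np.angle(V.conj().T @ w))   # eq. (18): unit-modulus phases
    w = V @ phi / np.linalg.norm(V @ phi)         # eq. (19): matched combiner
    gains.append(float(np.abs(w.conj() @ V @ phi) ** 2))
```

Each half-step maximizes the objective with the other variable fixed, so the sequence of gains is non-decreasing and converges to a local optimum.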
B. RIS Configuration with Perfect CSI and LOS Channels
We will now impose a practical structure on the channel. The BS and RIS are fixed in position and the channel is LOS-dominant in a mmWave band. Accordingly, we adopt the static LOS channel model to simplify the channel matrix in (2) to
H = β br a BS (ϕ BS )a RIS (ϕ AOD , θ AOD ) H ,(21)
where β br is the complex channel gain between RIS and BS. Utilizing the fact that the channel matrix in (21) has rank one, we can express it as H = u 1 λ 1 v H 1 , where
u_1 = (1/√N) a_BS(ϕ_BS), λ_1 = √(MN) β_br, and v_1 = (1/√M) a_RIS(ϕ_AOD, θ_AOD).
We can then rewrite (16) as
y = w^H u_1 λ_1 v_1^H diag(h) φ x + ñ = w^H u_1 ṽ_1^H φ x + ñ, (22)
where ṽ_1 = λ_1 diag(h*) v_1. Accordingly, the SNR in (17) can be simplified as
SNR = ρ |w^H u_1|² |ṽ_1^H φ|² / σ². (23)
This expression enables separate optimization of the receive combining w and the RIS configuration φ. The maximum SNR is achieved when w = u_1 [14] and the unit-modulus entries of φ have the same phase as the corresponding entries in ṽ_1. The latter corresponds to maximizing |ṽ_1^H φ|² = |λ_1|² |v_1^H diag(h) φ|² = |λ_1|² |v_1^H diag(φ_tx) diag(φ_rx) h|², which is achieved by φ_tx = √M v_1 and φ_rx = e^{−j·arg(h)}. In the case of a LOS channel h = ζ a_RIS(ϕ_AOA, θ_AOA), then |w^H u_1|² = 1 and |ṽ_1^H φ|² = M |λ_1 ζ|² = M² N |β_br ζ|². Hence, under perfect CSI, the SNR is proportional to N and to M². This is the behavior to strive for with imperfect CSI.
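This scaling is straightforward to verify numerically. The sketch below uses arbitrary illustrative angles and gains (all numeric values are our own) and checks that the configuration φ = φ_tx ⊙ φ_rx attains |w^H H diag(φ) h|² = M²N |β_br ζ|²:

```python
import numpy as np

N, M_H, M_V, d = 16, 8, 8, 0.25
M = M_H * M_V

def a_bs(phi):
    return np.exp(2j * np.pi * 0.5 * np.arange(N) * np.sin(phi))

def a_ris(phi, theta):
    m = np.arange(M)
    i, j = m % M_H, m // M_H
    return np.exp(2j * np.pi * d * (i * np.cos(theta) * np.sin(phi)
                                    + j * np.sin(theta)))

beta = 0.3 * np.exp(1j * 0.7)                  # illustrative RIS-BS gain
zeta = 0.1 * np.exp(-1j * 1.2)                 # illustrative RIS-UE gain
H = beta * np.outer(a_bs(0.0), a_ris(np.pi / 6, 0.0).conj())  # rank-1 LOS (21)
h = zeta * a_ris(-0.4, -0.3)                   # LOS user channel

w = a_bs(0.0) / np.sqrt(N)                     # receive combiner u_1
phi_tx = a_ris(np.pi / 6, 0.0)                 # precoder part toward the BS
phi_rx = np.exp(-1j * np.angle(h))             # combiner part toward the user
phi = phi_tx * phi_rx                          # eq. (20)
gain = np.abs(w.conj() @ H @ np.diag(phi) @ h) ** 2
```

The phase-aligned configuration makes every RIS element add up coherently, which is exactly the quadratic beamforming gain in M.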
C. Joint Channel Estimation and Phase Shift Optimization
Upon the network deployment, the angles ϕ_BS, ϕ_AOD, and θ_AOD in (21) are fixed and known to the BS. Following the discussion in Sec. IV-B, the vectors w and φ_tx can then be selected optimally as
w = u_1 = (1/√N) a_BS(ϕ_BS) and φ_tx = √M v_1 = a_RIS(ϕ_AOD, θ_AOD).
The key practical challenge is to select φ rx = e −j arg(h) when the UE-RIS channel h is unknown a priori. To estimate the unknown part of the cascaded channel, the UE transmits a constant pilot signal x = √ ρ with power ρ at η time instances. We configure the RIS phase shift vector at each time instance as
φ t = φ tx ⊙ a RIS (ϕ t , θ t ),(24)
by considering the set of orthogonal basis vectors a RIS (ϕ t , θ t ) ∈ B derived in Sec. III. By adding time indices on the received signal, RIS configuration, and noise, we can rewrite (22) for t = 1, . . . , η as
y t = √ ρw H u 1ṽ H 1 φ t +ñ t = √ ρw H Vφ t +ñ t ,(25)
where the scalar w H Vφ t is the projection of w H V onto φ t . Since w = u 1 , it follows that w H u 1 = 1 and y t therefore corresponds to the projection ofṽ 1 on φ t . By collecting the received signals in (25) in the vector y ∈ C η and the RIS configurations in Φ = [φ 1 , · · · , φ η ] ∈ C M×η , we obtain
y T = √ ρṽ H 1 Φ +ñ T .(26)
We can compute the reduced-subspace LS (RS-LS) estimate of ṽ_1 in the subspace spanned by the columns of Φ as
v̂_1 = (1/(M√ρ)) Φ y* = (1/M) Φ Φ^H ṽ_1 + (1/(M√ρ)) Φ ñ*, (27)
where (1/M) Φ Φ^H is the projection matrix onto the subspace of dimension η that we derived in Sec. III. Since we know that ṽ_1 approximately falls into this subspace, the projection matrix (1/M) Φ Φ^H basically acts as an identity matrix. However, the noise term (1/(M√ρ)) Φ ñ* has the covariance matrix (σ²/(ρM²)) Φ Φ^H with the trace (σ²/ρ)(η/M); thus all the noise outside the signal subspace is removed, which improves the estimation quality. The resulting estimate of the entire cascaded channel is
V̂ = u_1 v̂_1^H = w v̂_1^H. (28)
In line with Sec. IV-B, we select the phase configuration as
φ̂ = e^{j·arg(v̂_1)}. (29)
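The estimator (27) and its noise-rejection property can be sketched with any orthogonal, unit-modulus configuration matrix. Below we use DFT columns as a stand-in for the UPA basis of Sec. III (dimensions, ρ, and σ² are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
M, eta, rho, sigma2 = 64, 13, 1.0, 0.1

# Stand-in orthogonal, unit-modulus RIS configurations: eta DFT columns,
# satisfying Phi^H Phi = M * I_eta
m = np.arange(M)
Phi = np.exp(2j * np.pi * np.outer(m, np.arange(eta)) / M)

# A channel that lies in the spanned subspace (arbitrary coefficients)
c = rng.standard_normal(eta) + 1j * rng.standard_normal(eta)
v = Phi @ c / np.sqrt(M)

# Pilot reception as in (26) and the RS-LS estimate of (27)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(eta) + 1j * rng.standard_normal(eta))
y = np.sqrt(rho) * Phi.T @ v.conj() + n
v_hat = Phi @ y.conj() / (M * np.sqrt(rho))
```

Because Φ^H Φ = M·I_η, a channel in the subspace is recovered exactly in the noiseless case, while the pilot noise is projected down to a fraction η/M of its power, matching the trace computation following (27).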
V. NUMERICAL EVALUATION
We consider a LOS simulation scenario where the user can be anywhere in the RIS coverage area specified by ϕ ∈ [−π/3, π/3], θ ∈ [−π/2, 0], and distance d ∈ [30, 50] meters with uniform probability. The carrier frequency is 28 GHz, the noise power is σ² = −96 dBm, and the bandwidth is 20 MHz. The BS is equipped with a ULA of 128 antennas with antenna spacing d_BS = 1/2. The RIS is equipped with a UPA with M_H = M_V = 128 elements, and the element spacing is d_H = d_V = 1/4. We will vary the transmit power ρ. The LOS propagation channel between the RIS and BS is generated according to (21), where β_br = β_0 e^{−j(2π/λ)d_br} with β_0 = −81.4 dB corresponding to the distance d_br = 10 m [18]. For simplicity, we assume the RIS is deployed at the same elevation as the BS such that the BS sees the RIS at boresight; that is, ϕ_BS = θ_AOD = 0 and ϕ_AOD = π/6.
We adopt a correlated Rician fading model to generate the channel between the RIS and UE. The Rician factor (the ratio of the power of the LOS component to all the NLOS components) is evaluated based on K = 13 − 0.03 · (d/1 m) [dB], where d is the distance between the RIS and UE [19]. Initially, the LOS complex coefficient is evaluated as β_0 e^{−j(2π/λ)d}, where β_0 = −61.4 − 20 log_10(d/1 m) dB [18]. Accordingly, for the NLOS paths ℓ > 1, we generate the complex channel coefficient as ζ_ℓ ∼ N_C(0, γ_ℓ), where the associated power γ_ℓ is evaluated based on the parameters reported in [18].
We will now evaluate the performance of the proposed approach that we described in Sec. IV, where we estimated the channel using (28) and configured φ according to (29). We use the conventional LS estimator from [8] as the benchmark. The receive combining and RIS configuration are then computed through the alternating optimization in (18) and (19) by treating the LS estimate as being perfect. For an estimated cascaded channel V̂, we define the normalized mean-squared error (NMSE) as
NMSE = (1/K) Σ_{i=1}^{K} ‖V_i − V̂_i‖²_F / ‖V_i‖²_F, (30)
where K is the number of realizations in the Monte Carlo simulation, and ‖·‖_F is the Frobenius norm. Fig. 4 shows the NMSE as a function of the pilot transmit power ρ, averaged over K = 500 user locations. According to (27), our designed estimator reduces the total noise power by a factor η/M compared to the LS estimator, without reducing the total received signal power. In this setup, η/M ≈ 0.198. The figure shows that our proposed scheme vastly outperforms LS in terms of NMSE although the pilot length in our method is only 19.8% of that with the LS estimator. Moreover, we notice that the NMSE with our estimator approaches an error floor at very high transmit power. We anticipated such behavior in Sec. III since our proposed algorithm discards channel dimensions containing very low power and only accounts for the significant dimensions. Nevertheless, the attained accuracy is adequate to configure the RIS phase shift very well.
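For reference, the NMSE metric in (30) can be computed with a small helper (the function name is ours):

```python
import numpy as np

def nmse(V_list, V_hat_list):
    """Empirical NMSE of eq. (30): normalized squared Frobenius error,
    averaged over the Monte Carlo realizations."""
    return np.mean([np.linalg.norm(V - Vh, 'fro') ** 2
                    / np.linalg.norm(V, 'fro') ** 2
                    for V, Vh in zip(V_list, V_hat_list)])
```

A perfect estimate yields NMSE = 0, while an all-zero estimate yields NMSE = 1.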
VI. CONCLUSION
In this paper, we proposed a resource-efficient joint channel estimation and RIS phase configuration method that exploits the fact that the channels involving the RIS approximately reside in a low-dimensional subspace. We first characterized this subspace by deriving a set of basis vectors that spans it and used these bases to obtain a set of RIS configurations to consider during channel estimation. We demonstrated that our method outperforms the conventional LS estimator in terms of channel estimation accuracy while requiring a shorter pilot length. For example, when the element spacing is a quarter of the wavelength, the pilot overhead can be reduced by 80%.
a_BS(ϕ_BS) = [1, e^{j2πd_BS sin(ϕ_BS)}, . . . , e^{j2π(N−1)d_BS sin(ϕ_BS)}]^H, (4)
where ϕ_BS is the azimuth angle of arrival (AOA) of the RIS seen from the BS, d_BS is the antenna spacing normalized by the carrier wavelength, and a_RIS(ϕ, θ) is the array response of the RIS defined as
a_RIS(ϕ, θ) = [1, . . . , e^{j2π[i(m)d_H cos(θ) sin(ϕ) + j(m)d_V sin(θ)]}, . . . , e^{j2π[(M_H−1)d_H cos(θ) sin(ϕ) + (M_V−1)d_V sin(θ)]}]^H, (5)
Fig. 1: The points in the elevation-azimuth plane that give mutually orthogonal array response vectors when using an 8×8 RIS with d_H = d_V = 1/4.
Algorithm 1 The proposed algorithm to generate a set of basis vectors that spans the subspace of the RIS channel.
1: Initialize A = ∅, B = ∅
2: Θ = {θ = arcsin(k/(M_V d_V)) for k = 0, ±1, . . . , ±⌊M_V d_V⌋}
3: for θ_i ∈ Θ do
4:   for l = 0, ±1, . . . , ±(M_H − 1) do
5:     ϕ ← arcsin(1 + l/(M_H d_H cos(θ_i))), if a valid solution of (13) exists
6:     A ← A ∪ {(ϕ, θ_i)}
7:   end for
8: end for
9: for (ϕ_i, θ_i) ∈ A do
10:   B ← B ∪ {a_RIS(ϕ_i, θ_i)}
11: end for
Fig. 2: The ratio η/M for different array sizes and element spacings.
Fig. 3: Eigenvalues in descending order for an RIS with 32×32 elements.
Fig. 4: NMSE versus pilot transmit power.
Fig. 5: Average SNR with respect to the transmit power.
C. Huang, S. Hu, G. C. Alexandropoulos, A. Zappone, C. Yuen, R. Zhang, M. Di Renzo, and M. Debbah, "Holographic MIMO surfaces for 6G wireless networks: Opportunities, challenges, and trends," IEEE Wireless Communications, vol. 27, no. 5, pp. 118-125, 2020.
Q. Wu, S. Zhang, B. Zheng, C. You, and R. Zhang, "Intelligent reflecting surface-aided wireless communications: A tutorial," IEEE Transactions on Communications, vol. 69, no. 5, pp. 3313-3351, 2021.
X. Wang, L. Kong, F. Kong, F. Qiu, M. Xia, S. Arnon, and G. Chen, "Millimeter wave communication: A comprehensive survey," IEEE Communications Surveys & Tutorials, vol. 20, pp. 1616-1653, 2018.
B. Zheng, C. You, W. Mei, and R. Zhang, "A survey on channel estimation and practical passive beamforming design for intelligent reflecting surface aided wireless communications," IEEE Communications Surveys & Tutorials, vol. 24, no. 2, pp. 1035-1071, 2022.
P. Wang, J. Fang, H. Duan, and H. Li, "Compressed channel estimation for intelligent reflecting surface-assisted millimeter wave systems," IEEE Signal Processing Letters, vol. 27, pp. 905-909, 2020.
H. Liu, J. Zhang, Q. Wu, H. Xiao, and B. Ai, "ADMM based channel estimation for RISs aided millimeter wave communications," IEEE Communications Letters, vol. 25, no. 9, pp. 2894-2898, 2021.
K. Ardah, S. Gherekhloo, A. L. F. de Almeida, and M. Haardt, "TRICE: A channel estimation framework for RIS-aided millimeter-wave MIMO systems," IEEE Signal Processing Letters, vol. 28, pp. 513-517, 2021.
T. L. Jensen and E. De Carvalho, "An optimal channel estimation scheme for intelligent reflecting surfaces based on a minimum variance unbiased estimator," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 5000-5004.
Ö. T. Demir, E. Björnson, and L. Sanguinetti, "Exploiting array geometry for reduced-subspace channel estimation in RIS-aided communications," in IEEE 12th Sensor Array and Multichannel Signal Processing Workshop (SAM), 2022, pp. 455-459.
C. Pan, G. Zhou, K. Zhi, S. Hong, T. Wu, Y. Pan, H. Ren, M. Di Renzo, A. L. Swindlehurst, R. Zhang, and A. Y. Zhang, "An overview of signal processing techniques for RIS/IRS-aided wireless systems," IEEE Journal of Selected Topics in Signal Processing, pp. 1-35, 2022.
A. Meijerink and A. F. Molisch, "On the physical interpretation of the Saleh-Valenzuela model and the definition of its power delay profiles," IEEE Transactions on Antennas and Propagation, vol. 62, no. 9, pp. 4780-4793, 2014.
A. Saleh and R. Valenzuela, "A statistical model for indoor multipath propagation," IEEE Journal on Selected Areas in Communications, vol. 5, no. 2, pp. 128-137, 1987.
E. Björnson and L. Sanguinetti, "Rayleigh fading modeling and channel hardening for reconfigurable intelligent surfaces," IEEE Wireless Communications Letters, vol. 10, no. 4, pp. 830-834, 2021.
E. Björnson, J. Hoydis, and L. Sanguinetti, Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency, 2017.
C. A. Balanis, Antenna Theory: Analysis and Design. Wiley, 2016.
A. Pizzo, T. L. Marzetta, and L. Sanguinetti, "Degrees of freedom of holographic MIMO channels," in IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2020, pp. 1-5.
S. Hu, F. Rusek, and O. Edfors, "Beyond massive MIMO: The potential of data transmission with large intelligent surfaces," IEEE Transactions on Signal Processing, vol. 66, no. 10, pp. 2746-2758, 2018.
Millimeter wave channel modeling and cellular capacity evaluation. M R Akdeniz, Y Liu, M K Samimi, S Sun, S Rangan, T S Rappaport, E Erkip, IEEE Journal on Selected Areas in Communications. 326M. R. Akdeniz, Y. Liu, M. K. Samimi, S. Sun, S. Rangan, T. S. Rappaport, and E. Erkip, "Millimeter wave channel modeling and cellular capacity evaluation," IEEE Journal on Selected Areas in Com- munications, vol. 32, no. 6, pp. 1164-1179, 2014.
Spatial channel model for Multiple Input Multiple Output (MIMO) simulations. TR 25.9963GPP. 3GPP, "Spatial channel model for Multiple Input Multiple Output (MIMO) simulations," TR 25.996, Jul 2020.
| [] |
[
"Deep Neural Networks for Query Expansion using Word Embeddings",
"Deep Neural Networks for Query Expansion using Word Embeddings"
] | [
"Ayyoob Imani [email protected] \nTehran University\nTehranIran\n",
"Amir Vakili [email protected] \nTehran University\nTehranIran\n",
"Ali Montazer [email protected] \nUniversity of Massachusetts Amherst\nAmherstUSA\n",
"Azadeh Shakery [email protected] \nTehran University\nTehranIran\n"
] | [
"Tehran University\nTehranIran",
"Tehran University\nTehranIran",
"University of Massachusetts Amherst\nAmherstUSA",
"Tehran University\nTehranIran"
] | [] | Query expansion is a method for alleviating the vocabulary mismatch problem present in information retrieval tasks. Previous works have shown that terms selected for query expansion by traditional methods such as pseudo-relevance feedback are not always helpful to the retrieval process. In this paper, we show that this is also true for more recently proposed embedding-based query expansion methods. We then introduce an artificial neural network classifier to predict the usefulness of query expansion terms. This classifier uses term word embeddings as inputs. We perform experiments on four TREC newswire and web collections, which show that using terms selected by the classifier for expansion significantly improves retrieval performance when compared to competitive baselines. The results are also shown to be more robust than the baselines. | 10.1007/978-3-030-15719-7_26 | [
"https://arxiv.org/pdf/1811.03514v1.pdf"
] | 53,206,970 | 1811.03514 | fcb8f65122772692e81baff3ca8ca5af679eb965 |
Deep Neural Networks for Query Expansion using Word Embeddings
Ayyoob Imani [email protected]
Tehran University
TehranIran
Amir Vakili [email protected]
Tehran University
TehranIran
Ali Montazer [email protected]
University of Massachusetts Amherst
AmherstUSA
Azadeh Shakery [email protected]
Tehran University
TehranIran
Query Expansion · Word Embeddings · Siamese Network
Query expansion is a method for alleviating the vocabulary mismatch problem present in information retrieval tasks. Previous works have shown that terms selected for query expansion by traditional methods such as pseudo-relevance feedback are not always helpful to the retrieval process. In this paper, we show that this is also true for more recently proposed embedding-based query expansion methods. We then introduce an artificial neural network classifier to predict the usefulness of query expansion terms. This classifier uses term word embeddings as inputs. We perform experiments on four TREC newswire and web collections, which show that using terms selected by the classifier for expansion significantly improves retrieval performance when compared to competitive baselines. The results are also shown to be more robust than the baselines.
Introduction
Query expansion is a method for alleviating the vocabulary mismatch problem present in information retrieval tasks. This is a fundamental problem where users and authors often use different terms describing the same concepts, which in turn harms retrieval performance. In this paper, we aim to distinguish terms helpful to the query expansion process through the use of an artificial neural network classifier.
Various methods for selecting expansion words exist; however, pseudo-relevance feedback (PRF) is the most well-known. PRF assumes that the top retrieved documents for a query will contain terms relevant to the query, which can help distinguish other relevant documents from non-relevant documents. However, [2] showed that not all terms extracted using PRF methods help the retrieval process, and many of these terms even have a negative effect. [2] divided terms into three categories: good, bad and neutral. When used as expansion terms for a query, good terms increase and bad terms decrease retrieval performance. Neutral terms have no effect if added. An ideal query expansion method would add only good terms to the query. Therefore, [2] proposed a supervised learning method for the classification of such terms.
[2] uses a feature vector to train a classifier for separating good expansion terms from bad ones. This feature vector includes the term's distribution in the feedback documents and in the whole collection, its co-occurrence with the original query terms, and its proximity to the query terms.
Another approach for improving the selection of query expansion terms is considering the semantic similarity of the candidate term with the query terms [13,10,20,21,1]. As an example, [13] proposed a semantic similarity constraint for PRF methods and showed that adhering to this constraint improves retrieval performance.
Another method that takes semantic similarity into consideration is using word embeddings to expand queries. [10] aimed to expand queries with terms semantically related to query terms. To this end, they trained word embeddings on document corpora using the Word2Vec continuous bag-of-words approach [12]. This technique learns low-dimensional vectors for words in a vocabulary based on their co-occurrence with other words within a specified window size. These vectors are both semantic and syntactic representations of their corresponding words. The learning is unsupervised; therefore, these vectors are query-independent. [10] then uses these word embeddings to expand queries, either by adding terms closest to the centroid of the query word embedding vectors (referred to in the following sections as Average Word Embedding or AWE) or by combining terms closest to individual query terms in the word embedding space. [20] takes a similar approach to query expansion. It first defines a more discriminative similarity metric than cosine similarity. Next, it introduces two embedding-based query expansion models with different simplification assumptions. One model assumes that query terms are independent of each other (referred to in the following sections as Embedding Query Expansion 1 or EQE1), and the other assumes that the semantic similarity between two given terms is independent of the query.
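As a concrete illustration of the AWE idea above (rank candidate terms by cosine similarity to the centroid of the query term embeddings), here is a minimal sketch. The tiny random embedding table is invented for demonstration; any pre-trained Word2Vec or GloVe vectors could be plugged in.

```python
import numpy as np

def awe_expansion(query_terms, embeddings, k=5):
    # Centroid of the query term embeddings (the AWE query representation).
    centroid = np.mean([embeddings[t] for t in query_terms], axis=0)
    centroid /= np.linalg.norm(centroid)
    # Score every non-query term by cosine similarity to the centroid.
    scores = {}
    for term, vec in embeddings.items():
        if term in query_terms:
            continue
        scores[term] = float(vec @ centroid / np.linalg.norm(vec))
    # Highest-scoring terms become the expansion candidates.
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

# Toy embedding table (illustrative only).
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["car", "auto", "vehicle", "banana", "engine"]}
print(awe_expansion(["car", "engine"], emb, k=2))
```

In the paper's setting, the cosine scores produced this way are the δ(q, x) values that the classifier later re-weights.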
In related work, [5] showed that local training of word embeddings on retrieved documents improves query expansion effectiveness. [23] proposed using supervised training and word embeddings to learn term weights to be used in retrieval models such as BM25.
In this paper, we aim to use pre-trained word embeddings for selecting suitable expansion terms. Furthermore, we propose an artificial neural network (ANN) model for this classification task. We use a siamese neural network architecture inspired by [9] in order to lessen the impact of limited training data. Siamese network architectures have been gaining popularity in recent years in the information retrieval community [19,8,6,16,18] and have achieved impressive performance in various tasks. Using the distributed representation of terms, this network learns whether a term is semantically suitable for expanding a query. Our neural network approach intends to go beyond simple vector similarity and learns the latent features present within word embeddings responsible for term effectiveness or ineffectiveness when used for query expansion. In short, the proposed approach's main advantages are that it requires no manual feature design and no longer relies on simplistic similarity functions, as it uses a trainable classifier.
We evaluate the effectiveness of our approach on four TREC collections. We compare results with traditional approaches and more recent methods. Results show that integrating term classifications using our novel ANN model significantly improves retrieval performance. We also show that our proposed method is more robust compared to the baselines.
The rest of this paper is organized as follows: in section 2 we discuss [2]'s method for labeling expansion terms in greater detail, in section 3 we introduce our classifier model and explain its integration with the retrieval process, in section 4 we present the experiments performed, and in section 5 we discuss the results obtained from the experiments.
Good, bad, and neutral expansion terms
In order to identify terms helpful to query expansion, we follow [2] and divide candidate terms into three classes: good, bad and neutral. For a particular query, a good term will increase retrieval performance and a bad term will decrease it. Neutral terms have no effect. In order to train our classifier, we require a dataset containing query and candidate expansion terms which have been labeled as one of the three classes mentioned. For creating this dataset, we take each query and first average the embedding vectors of its query terms; this approach was proposed in [12] and used in other works such as [10,20]. Then, we use cosine similarity to find the top 1000 terms that are closest to the averaged vector. Finally, to identify the class of a term, we use the method proposed in [2]. Briefly, we add the expansion term to the query and perform retrieval using the query likelihood method; if the mean average precision increases, the term is a good expansion term, but if it decreases, the term is a bad expansion term. If the change is negligible, the term is neutral.
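The labeling rule just described can be sketched as follows. `retrieve_map` is a hypothetical stand-in for running query-likelihood retrieval and computing mean average precision; the toy scores below are fabricated to exercise all three labels.

```python
def label_term(query, term, retrieve_map, eps=1e-4):
    # Compare retrieval quality (MAP) with and without the candidate term.
    base = retrieve_map(query)
    expanded = retrieve_map(query + [term])
    if expanded > base + eps:
        return "good"
    if expanded < base - eps:
        return "bad"
    return "neutral"

# Fabricated MAP scores standing in for actual retrieval runs.
fake_map = {
    ("car",): 0.20,
    ("car", "auto"): 0.25,    # helps
    ("car", "banana"): 0.15,  # hurts
    ("car", "blue"): 0.20,    # no effect
}
score = lambda q: fake_map[tuple(q)]
print(label_term(["car"], "auto", score))    # good
print(label_term(["car"], "banana", score))  # bad
print(label_term(["car"], "blue", score))    # neutral
```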
To examine the impact of selecting good expansion terms, we explore the performance of an oracle retrieval model that expands queries with only good expansion terms. Table 1 shows the ratio of the classes of terms along with the performance of the oracle retrieval model. The first set of columns depicts statistics for the top 1000 closest terms to the average of query word embeddings, and the second set is for the terms that appear in the first 10 pseudo-relevant documents retrieved using the query likelihood method. The last two columns show the performance of the original query likelihood method and of the query expanded by the mixture model. The performance is measured using mean average precision for the top 1000 results. We can see that only a small percentage of expansion terms are good expansion terms in all collections, which may explain the slight improvements achieved by query expansion methods using word embeddings compared to the improvements by pseudo-relevance methods.

Fig. 1. Architecture of the proposed siamese network. The architecture consists of two identical models projecting two separate inputs (query and expansion term pair) into a common embedding space and then comparing the two projections to get a final similarity score. This score tells us whether the two candidate expansion terms belong to the same class (good, bad or neutral).
Expansion Term Classification
In this section, we first present the model used for the classification task, then we explain how the classifier results are integrated into the retrieval process.
Problem Formulation
Suppose we have a dataset D = {(q_i, x_i, l_i)}_{i=1}^{N}, where q_i = {q_{i,1}, ..., q_{i,k}} represents a query and x_i is a candidate expansion term. l_i ∈ {Good, Neutral, Bad} is the label denoting the expansion term's effectiveness for the query. Our goal is to learn a model g(·, ·) from D. For any query and candidate expansion term pair (q, x), g(q, x) classifies the expansion term as either good, neutral or bad.
Model Overview
We propose the deep expansion classifier (DEC) to model g(·, ·). The architecture is depicted in Figure 1. A major roadblock when using ANN approaches in information retrieval tasks is the lack of training data. To overcome this, various methods such as using weak supervision for training ranking networks [4] have been proposed. In this paper, we use the learning technique proposed in [9]. The siamese network proposed there was designed to overcome the lack of training data by learning whether two samples are of the same class or not, rather than directly predicting which class a sample belongs to. Using this technique we can increase the number of available training samples from n to n(n − 1). In order to determine which class a query and expansion term pair (q, x) belongs to, we can use the trained model to compare it to a set of previously classified pairs (q', x'), half of which are classified as good. We calculate the probability of a candidate expansion term being good based on the number of times it is put in the same class as a good pair or in a different class than a bad pair.
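The pairing trick can be sketched directly: n labeled (query, term, label) samples become n(n − 1) ordered pairs, each labeled by whether its two members share a class.

```python
from itertools import permutations

def make_pairs(samples):
    # samples: list of (query_terms, candidate_term, label) triples.
    # Each ordered pair gets target 1 if the labels match, else 0.
    pairs = []
    for a, b in permutations(samples, 2):
        pairs.append(((a[0], a[1]), (b[0], b[1]), int(a[2] == b[2])))
    return pairs

data = [
    (["car"], "auto", "good"),
    (["car"], "banana", "bad"),
    (["car"], "vehicle", "good"),
]
pairs = make_pairs(data)
print(len(pairs))  # 3 * 2 = 6
```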
Modeling the Query and Candidate Expansion Term Relation
Given a query q and a candidate expansion term x, the model connects them as a sequence, then maps the words to their embeddings through a shared look-up table. A bidirectional long short-term memory (LSTM) network is then used to construct new contextual embeddings for each of the terms in the sequence. LSTMs [7] are a variant of recurrent neural networks (RNN) designed to overcome the vanishing/exploding gradient problem present in regular RNNs through the use of memory cells. These cells store information over long histories. The flow of information into and out of these memory cells is controlled by three gates (input, output and forget). Given a sequence of word embeddings including k query terms and one expansion term, (e_{q,1}, ..., e_{q,k}, e_x), an LSTM will output a new sequence where each element captures the contextual information seen before it.
In bidirectional LSTMs the sequence is propagated both forwards and backwards and the two forwards and backwards contextual embeddings are concatenated. The outputs of the bidirectional LSTM layer is fed to a fully connected layer with a softmax activation function. This final layer represents the relation between query terms and the candidate expansion term.
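A minimal PyTorch sketch of one branch of the siamese network, following this description: embed the (query terms + candidate) token sequence, run a bidirectional LSTM, and project through a softmax layer. The hidden size (200) and representation size (400) match the values stated later in the paper; the vocabulary size and the random input below are illustrative.

```python
import torch
import torch.nn as nn

class PairEncoder(nn.Module):
    """One branch of the siamese network: BiLSTM over the token
    sequence, last-step output projected through a softmax layer."""
    def __init__(self, vocab_size, emb_dim=200, hidden=200, rep=400):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(2 * hidden, rep)

    def forward(self, ids):                # ids: (batch, seq_len)
        out, _ = self.lstm(self.emb(ids))  # (batch, seq_len, 2*hidden)
        # Use the last time step as the pair representation.
        return torch.softmax(self.proj(out[:, -1]), dim=-1)

enc = PairEncoder(vocab_size=1000)
rep = enc(torch.randint(0, 1000, (4, 6)))  # 4 pairs: 5 query terms + 1 candidate
print(rep.shape)  # torch.Size([4, 400])
```

The second branch shares these weights; the element-wise subtraction of the two branch outputs feeds the final same-class classifier described in the next subsection of the paper.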
Expansion term classification and Term re-weighting
The representation built must then be compared to the representation of another query-expansion term pair in order to determine whether the two pairs belong to the same class or not. An element-wise subtraction is performed on the two representations, and the result is then fed into a fully connected layer with a softmax activation function which outputs the final score depicting whether the expansion terms belong to the same class.
In order to determine whether an expansion term candidate is good for a query, we feed the pair along with a set of query-expansion term pairs whose classes have been previously determined. We calculate the probability of a candidate expansion term being good as

P(l = Good | (q, x)) = (N_g + N_nb) / N,

where N is the total number of pairs we compare to and N_g + N_nb is the number of times the pair being tested was determined to be in the same class as a good pair or in a different class than a bad pair.
Finally, we reweight the expansion term weights obtained using the AWE method using P(l = Good | (q, x)). The AWE method weights the candidate expansion terms by the cosine similarity of the expansion term embedding to the centroid of the query term embeddings. The final weight for an expansion term is therefore (1 + α · P(l = Good)) · δ(q, x), where δ is the cosine similarity function and α is a hyper-parameter.

The parameters were updated by stochastic gradient descent with the Adam algorithm. The hidden size of the bidirectional LSTM is set to 200, and the query and expansion term representation vector to 400. Our initial learning rate is 0.001 with a mini-batch size of 32. We train models on a single machine. We use k-fold cross-validation (k depends on the number of queries available for each dataset) and average the evaluation metrics.
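The probability estimate and the final term weight described above reduce to a few lines. Here the same-class verdicts are hard-coded; in practice they would come from the trained siamese network.

```python
def good_probability(same_class, ref_labels):
    # P(l = Good) = (N_g + N_nb) / N over comparisons against
    # previously classified reference pairs.
    n_g = sum(1 for s, l in zip(same_class, ref_labels) if s and l == "good")
    n_nb = sum(1 for s, l in zip(same_class, ref_labels) if not s and l == "bad")
    return (n_g + n_nb) / len(ref_labels)

def final_weight(cos_sim, p_good, alpha=1.0):
    # Re-weighted AWE score: (1 + alpha * P(Good)) * cosine similarity.
    return (1 + alpha * p_good) * cos_sim

verdicts = [True, True, False, False]        # same class as each reference pair?
labels = ["good", "good", "bad", "neutral"]  # reference pair labels
p = good_probability(verdicts, labels)
print(p)  # (2 + 1) / 4 = 0.75
print(final_weight(0.8, p))
```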
Experiment
Comparison Approaches
For evaluation, we only compare to methods that select expansion term candidates based on word embeddings and not other information sources such as the top retrieved documents for each query (PRF). As we use general purpose word embeddings, we also do not compare to methods that train embeddings specifically for query expansion such as [17,22].
For evaluating our proposed method we consider three baselines: (1) the standard query likelihood model using maximum likelihood estimation, (2) AWE where expansion terms closest to the centroid of query term embeddings are selected [10], and (3) EQE1 where expansion terms are scored by their multiplicative similarity to the query terms [20].
Evaluation Metrics
We use mean average precision (MAP) as our main evaluation metric. We also compare precision for the top 10 retrieved documents (P@10). Statistical significance tests are performed using a two-tailed paired t-test at a 95% confidence level. In order to evaluate the robustness of the proposed methods, we use the robustness index (RI) [3], defined as (N_+ − N_−)/|Q|, where |Q| is the number of queries and N_+/N_− are the numbers of queries whose performance improved/decreased compared to the baseline.

Results and Discussion

Table 2 presents the performance of the baselines and our proposed method. These methods expand queries with semantically related terms. The results show that the query expansion classifier DEC outperforms all baselines in terms of MAP. For all three embedding-based methods, the performance gains are more pronounced in the two newswire collections. This may be because these collections are more homogeneous than the web corpora; web corpora will generally be noisier, which in turn affects classifier performance. Another reason could be that our word embeddings (GloVe) were pre-trained on Wikipedia and newswire articles, which would make them more suitable for newswire collections. Using word embeddings pre-trained on Common Crawl data may yield better performance on web corpora.

To summarize, the proposed method outperforms other state-of-the-art methods utilizing word embeddings for query expansion. The results are also more robust than those of previous approaches. This indicates that, for selecting candidate expansion terms, simple similarity functions such as cosine similarity can be improved upon using ANN classifiers.
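The robustness index used in this comparison is straightforward to compute from per-query average precision (the numbers below are toy values):

```python
def robustness_index(baseline_ap, method_ap, eps=1e-6):
    # RI = (N+ - N-) / |Q|: queries improved minus queries hurt,
    # normalized by the number of queries.
    n_pos = sum(1 for b, m in zip(baseline_ap, method_ap) if m > b + eps)
    n_neg = sum(1 for b, m in zip(baseline_ap, method_ap) if m < b - eps)
    return (n_pos - n_neg) / len(baseline_ap)

base = [0.20, 0.35, 0.10, 0.50]  # per-query AP, baseline
ours = [0.25, 0.33, 0.10, 0.60]  # per-query AP, proposed method
print(robustness_index(base, ours))  # (2 - 1) / 4 = 0.25
```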
Conclusion and Future Work
In this paper, we proposed a neural network architecture for classifying terms based on their effectiveness in query expansion. The neural network uses only pre-trained word embeddings and no manual feature selection or initial retrieval using the query is necessary. We evaluated the proposed methods using four TREC collections. The results showed that the proposed method significantly outperforms other word embedding based approaches and traditional pseudo-relevance feedback. The method is also shown to be more robust than the baselines.
For future work, one possible method is integrating topic vectors trained using methods such as Latent Dirichlet Allocation into the classification process. Another is using word embeddings trained using methods other than Word2Vec and GloVe (such as Lda2Vec [14] or Paragraph2Vec [11]) or training on domain-specific corpora.
For evaluating the proposed methods, we use four standard TREC collections: AP (Associated Press 1988-1989), Robust (TREC Robust Track 2004), WT2G and WT10G (TREC Web Track 1999, 2000-2001). The first two collections contain news articles and the second two are general web crawls. We used the titles of topics as queries. Words were stopped using the INQUERY stop-word list and no stemming was used. We used pre-trained word embeddings with a dimension of 200, extracted using the GloVe [15] method from a 6-billion-token collection (Wikipedia 2014 dump plus Gigaword 5).
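The pre-trained GloVe vectors mentioned above ship as plain text, one word per line followed by its vector components; a minimal parser (the two demo lines are invented):

```python
def load_glove(lines):
    # Parse GloVe's plain-text format into {word: vector}.
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

demo = ["the 0.1 0.2 0.3", "cat 0.4 0.5 0.6"]
vecs = load_glove(demo)
print(len(vecs), len(vecs["cat"]))  # 2 3
```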
Table 1. Query expansion term statistics for the collections used.

            Embedding-based                       Pseudo-relevance docs
         Good%  Neutral%  Bad%   Oracle MAP   Good%  Neutral%  Bad%   Oracle MAP   QLM MAP   QLM+MM MAP
AP        3.8     55.5    40.5     0.2981     16.2     53.6    30.1     0.3983      0.2206     0.2749
Robust    5.6     62.4    31.9     0.3122     21.9     55.4    22.6     0.4021      0.2176     0.2658
WT2g      6.2     37.8    55.8     0.3442     15.7     61.3    22.9     0.4383      0.2404     0.2593
WT10g     3.4     75.6    20.9     0.2410     14.1     63.6    22.2     0.2871      0.1837     0.1902
Table 2. Evaluation results on the four datasets. The superscripts 1/2/3 denote that the MAP improvements over QLM/AWE/EQE1 are statistically significant. The highest value in each column is marked in bold.

            AP                      Robust                  WT2g                    WT10g
        MAP         P@10   RI    MAP         P@10   RI    MAP        P@10   RI    MAP        P@10   RI
QLM     0.2206      0.3432  -    0.2176      0.3856  -    0.2404     0.4167  -    0.1837     0.2420  -
AWE     0.2312^1    0.3392 0.12  0.2230^1    0.3899 0.10  0.2456^1   0.4169 0.08  0.1849     0.2399 0.10
EQE1    0.2344^12   0.3442 0.29  0.2278^12   0.4016 0.25  0.2463^1   0.4188 0.17  0.1867     0.2432 0.18
DEC     0.2403^123  0.3434 0.31  0.2358^123  0.4057 0.31  0.2489^1   0.4213 0.16  0.1891^12  0.2434 0.20
References

1. Almasri, M., Berrut, C., Chevallet, J.P.: A comparison of deep learning based query expansion with pseudo-relevance feedback and mutual information. In: European Conference on Information Retrieval. pp. 709-715. Springer (2016)
2. Cao, G., Nie, J.Y., Gao, J., Robertson, S.: Selecting good expansion terms for pseudo-relevance feedback. In: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 243-250. ACM (2008)
3. Collins-Thompson, K.: Reducing the risk of query expansion via robust constrained optimization. In: Proceedings of the 18th ACM Conference on Information and Knowledge Management. pp. 837-846. ACM (2009)
4. Dehghani, M., Zamani, H., Severyn, A., Kamps, J., Croft, W.B.: Neural ranking models with weak supervision. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 65-74. ACM (2017)
5. Diaz, F., Mitra, B., Craswell, N.: Query expansion with locally-trained word embeddings. arXiv preprint arXiv:1605.07891 (2016)
6. He, H., Lin, J.: Pairwise word interaction modeling with deep neural networks for semantic similarity measurement. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 937-948 (2016)
7. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735-1780 (1997)
8. Huang, P.S., He, X., Gao, J., Deng, L., Acero, A., Heck, L.: Learning deep structured semantic models for web search using clickthrough data. In: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management. pp. 2333-2338. ACM (2013)
9. Koch, G., Zemel, R., Salakhutdinov, R.: Siamese neural networks for one-shot image recognition. In: ICML Deep Learning Workshop. vol. 2 (2015)
10. Kuzi, S., Shtok, A., Kurland, O.: Query expansion using word embeddings. In: Proceedings of the 25th ACM International Conference on Information and Knowledge Management. pp. 1929-1932. ACM (2016)
11. Le, Q., Mikolov, T.: Distributed representations of sentences and documents. In: International Conference on Machine Learning. pp. 1188-1196 (2014)
12. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
13. Montazeralghaem, A., Zamani, H., Shakery, A.: Axiomatic analysis for improving the log-logistic feedback model. In: Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 765-768. ACM (2016)
14. Moody, C.E.: Mixing Dirichlet topic models and word embeddings to make lda2vec. arXiv preprint arXiv:1605.02019 (2016)
15. Pennington, J., Socher, R., Manning, C.: GloVe: Global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pp. 1532-1543 (2014)
16. Severyn, A., Moschitti, A.: Learning to rank short text pairs with convolutional deep neural networks. In: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 373-382. ACM (2015)
17. Sordoni, A., Bengio, Y., Nie, J.Y.: Learning concept embeddings for query expansion by quantum entropy minimization. In: AAAI. vol. 14, pp. 1586-1592 (2014)
18. Wang, S., Jiang, J.: A compare-aggregate model for matching text sequences. arXiv preprint arXiv:1611.01747 (2016)
19. Yang, L., Zamani, H., Zhang, Y., Guo, J., Croft, W.B.: Neural matching models for question retrieval and next question prediction in conversation. arXiv preprint arXiv:1707.05409 (2017)
20. Zamani, H., Croft, W.B.: Embedding-based query language models. In: Proceedings of the 2016 ACM International Conference on the Theory of Information Retrieval. pp. 147-156. ACM (2016)
21. Zamani, H., Croft, W.B.: Estimating embedding vectors for queries. In: Proceedings of the 2016 ACM International Conference on the Theory of Information Retrieval. pp. 123-132. ACM (2016)
22. Zamani, H., Croft, W.B.: Relevance-based word embedding. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 505-514. ACM (2017)
23. Zheng, G., Callan, J.: Learning to reweight terms with distributed representations. In: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 575-584. ACM (2015)
| [] |
[
"TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes",
"TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes"
] | [
"Jingwei Huang \nStanford University\n\n",
"Haotian Zhang \nStanford University\n\n",
"Li Yi \nStanford University\n\n",
"Thomas Funkhouser \nPrinceton University\n\n",
"Matthias Nießner \nTechnical University of Munich\n\n",
"Leonidas Guibas \nStanford University\n\n"
] | [
"Stanford University\n",
"Stanford University\n",
"Stanford University\n",
"Princeton University\n",
"Technical University of Munich\n",
"Stanford University\n"
] | [] | Figure 1: TextureNet takes as input a 3D textured mesh. The mesh is parameterized with a consistent 4-way rotationally symmetric (4-RoSy) field, which is used to extract oriented patches from the texture at a set of sample points. Networks of 4-RoSy convolutional operators extract features from the patches and used for 3D semantic segmentation.AbstractWe introduce, TextureNet, a neural network architecture designed to extract features from high-resolution signals associated with 3D surface meshes (e.g., color texture maps). The key idea is to utilize a 4-rotational symmetric (4-RoSy) field to define a domain for convolution on a surface. Though 4-RoSy fields have several properties favorable for convolution on surfaces (low distortion, few singularities, consistent parameterization, etc.), orientations are ambiguous up to 4-fold rotation at any sample point. So, we introduce a new convolutional operator invariant to the 4-RoSy ambiguity and use it in a network to extract features from high-resolution signals on geodesic neighborhoods of a surface. In comparison to alternatives, such as PointNetbased methods which lack a notion of orientation, the coherent structure given by these neighborhoods results in significantly stronger features. As an example application, we demonstrate the benefits of our architecture for 3D semantic segmentation of textured 3D meshes. The results show that our method outperforms all existing methods on the basis of mean IoU by a significant margin in both geometry-only (6.4%) and RGB+Geometry (6.9-8.2%) settings. | 10.1109/cvpr.2019.00457 | [
"https://arxiv.org/pdf/1812.00020v2.pdf"
] | 54,435,421 | 1812.00020 | 4ea595f34d005cbe0f6b5ffe283f54cbd744115b |
TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes
Jingwei Huang
Stanford University
Haotian Zhang
Stanford University
Li Yi
Stanford University
Thomas Funkhouser
Princeton University
Matthias Nießner
Technical University of Munich
Leonidas Guibas
Stanford University
TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes
Figure 1: TextureNet takes as input a 3D textured mesh. The mesh is parameterized with a consistent 4-way rotationally symmetric (4-RoSy) field, which is used to extract oriented patches from the texture at a set of sample points. Networks of 4-RoSy convolutional operators extract features from the patches, which are used for 3D semantic segmentation.

Abstract: We introduce TextureNet, a neural network architecture designed to extract features from high-resolution signals associated with 3D surface meshes (e.g., color texture maps). The key idea is to utilize a 4-rotational symmetric (4-RoSy) field to define a domain for convolution on a surface. Though 4-RoSy fields have several properties favorable for convolution on surfaces (low distortion, few singularities, consistent parameterization, etc.), orientations are ambiguous up to 4-fold rotation at any sample point. So, we introduce a new convolutional operator invariant to the 4-RoSy ambiguity and use it in a network to extract features from high-resolution signals on geodesic neighborhoods of a surface. In comparison to alternatives, such as PointNet-based methods which lack a notion of orientation, the coherent structure given by these neighborhoods results in significantly stronger features. As an example application, we demonstrate the benefits of our architecture for 3D semantic segmentation of textured 3D meshes. The results show that our method outperforms all existing methods on the basis of mean IoU by a significant margin in both geometry-only (6.4%) and RGB+Geometry (6.9-8.2%) settings. | 10.1109/cvpr.2019.00457 | [
Introduction
In recent years, there has been tremendous progress in RGB-D scanning methods that allow reliable tracking and reconstruction of 3D surfaces using hand-held, consumer-grade devices [8,19,29,30,44,22,11]. Though these methods are now able to reconstruct high-resolution textured 3D meshes suitable for visualization, understanding the 3D semantics of the scanned scenes is still a relatively open research problem.
There has been a lot of recent work on semantic segmentation of 3D data using convolutional neural networks (CNNs). Typically, features extracted from the scanned inputs (e.g., positions, normals, height above ground, colors, etc.) are projected onto a coarse sampling of 3D locations, and then a network of 3D convolutional filters is trained to extract features for semantic classification -e.g., using convolutions over voxels [45,27,32,38,9,13], octrees [35], point clouds [31,33], or mesh vertices [26]. The advantage of this approach over 2D image-based methods is that convolutions operate directly on 3D data, and thus are relatively unaffected by view-dependent effects of images, such as perspective, occlusion, lighting, and background clutter. However, the resolution of current 3D representations is generally quite low (2cm is typical), and so the ability of 3D CNNs to discriminate fine-scale semantic patterns is usually far below their color image counterparts [25,16].
To address this issue, we propose a new convolutional neural network, TextureNet, that extracts features directly from high-resolution signals associated with 3D surface meshes. Given a map that associates high-resolution signals with a 3D mesh surface (e.g., RGB photographic texture), we define convolutional filters that operate on those signals within domains defined by geodesic surface neighborhoods. This approach combines the advantages of feature extraction from high-resolution signals (as in [10]) with the advantages of view-independent convolution on 3D surface domains (as in [41]). This combination is important, for example, in labeling the chair in Figure 1, whose surface fabric is easily recognizable in a color texture map.
During our investigation of this approach, we had to address several research issues, the most significant of which is how to define convolutions on geodesic neighborhoods of a mesh. One approach could be to compute a global UV parameterization for the entire surface and then define convolutional operators directly in UV space; however, that approach would induce significant deformation due to flattening, not always follow surface features, and/or produce seams at surface cuts. Another approach could be to compute UV parameterizations for local neighborhoods independently; however, then adjacent neighborhoods might not be oriented consistently, reducing the ability of a network to learn orientation-dependent features. Instead, we compute a 4-RoSy (four-fold rotationally symmetric) field on the surface using QuadriFlow [18] and define a new 4-RoSy convolutional operator that explicitly accounts for the 4-fold rotational ambiguity of the cross-field parameterization. Here, a 4-RoSy field is a set of tangent directions associated with vertices, where neighboring directions are parallel to each other after rotating one of them around the surface normal by 360K/4 degrees (K ∈ Z). Since the 4-RoSy field from QuadriFlow has no seams, aligns to shape features, induces relatively little distortion, has few singularities, and consistently orients adjacent neighborhoods (up to 4-way rotation), it provides a favorable trade-off between distortion and orientation invariance.
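The 4-fold rotational equivalence described above can be sketched in code. Here `rosy4_align` is a hypothetical helper (not part of the QuadriFlow API) that, given a tangent direction `d` and a surface normal, picks the member of `d`'s 4-RoSy class (its four 90-degree rotations about the normal) best aligned with a reference direction:

```python
import numpy as np

def rotate_about_axis(v, axis, angle):
    """Rodrigues' rotation of vector v about a unit axis by the given angle."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1 - np.cos(angle)))

def rosy4_align(d_ref, d, normal):
    """Return the member of d's 4-RoSy class {R(normal, k*90deg) d, k=0..3}
    with the largest dot product against the reference direction d_ref."""
    candidates = [rotate_about_axis(d, normal, k * np.pi / 2) for k in range(4)]
    return max(candidates, key=lambda c: float(np.dot(c, d_ref)))
```

On a flat patch with normal z, the cross directions {±x, ±y} all belong to one 4-RoSy class, so any of them aligns exactly with a reference x-axis.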
Results on 3D semantic segmentation benchmarks show an improvement of 4-RoSy convolution on surfaces over alternative geometry-only approaches (by 6.4%), plus significant further improvement when applied to high-resolution color signals (by 6.9-8.2%). With ablation studies, we verify the importance of the consistent orientation of the 4-RoSy field and demonstrate that our sampling and convolution operator works better than the alternatives.
Overall, our core research contributions are:
• a novel learning-based method for extracting features from high-resolution signals living on surfaces embedded in 3D, based on consistent local parameterizations,
• a new 4-RoSy convolutional operator designed for cross fields on general surfaces in 3D,
• a new deep network architecture, TextureNet, composed of 4-RoSy convolutional operators,
• an extensive experimental investigation of alternative convolutional operators for semantic segmentation of surfaces in 3D.
Related Work
3D Deep Learning. With the availability of 3D shape databases [45,7,38] and real-world labeled 3D scanning data [37,1,9,6], there is significant interest in deep learning on three-dimensional data. Early work developed CNNs operating on 3D volumetric grids [45,27]. They have been used for 3D shape classification [32,35], semantic segmentation [9,13], object completion [12], and scene completion [13]. More recently, researchers have developed methods that can take a 3D point cloud as input to a neural network and predict object classes or semantic point labels [31,33,41,39,2]. AtlasNet [14] learns to generate surfaces of the 3D shape. In our work, we utilize a sparse point-sampled data representation; however, we exploit high-resolution signals on geometric surface structures with a new 4-RoSy surface convolution kernel.
Convolutions on Meshes. Several researchers have proposed methods for applying convolutional neural networks intrinsically on manifold meshes. FeaStNet [42] proposes a graph operator that establishes correspondences between filter weights. Jiang et al. [21] apply differential operators on unstructured spherical grids. GCNN [26] proposes using discrete patch operators on tangent planes parameterized by radius and angle. However, the orientation of their selected geodesic patches is arbitrary, and the parameterization is highly distorted or inconsistent at regions with high Gaussian curvature. ACNN [3] observes this limitation and introduces anisotropic heat kernels derived from principal curvatures. MoNet [28] further generalizes the architecture with learnable Gaussian kernels for convolutions. The principal-curvature-based frame selection method is adopted by Xu et al. [46] for segmentation of nonrigid surfaces, by Tatarchenko et al. [41] for semantic segmentation of point clouds, and by ADD [4] for shape correspondence in the spectral domain. It naturally removes the orientation ambiguity but fails to consider the frame inconsistency problem, which is critical when performing feature aggregation. Its problems are particularly pronounced in indoor scenes (which often have many planar regions where principal curvature is undetermined) and in real-world scans (which often have noisy and uneven sampling where consistent principal curvatures are difficult to predict). In contrast, we define a 4-RoSy field that provides consistent orientations for neighboring convolution domains.

Figure 2: TextureNet architecture. We propose a UNet [36] architecture for hierarchical feature extraction. The key innovation in the architecture is the texture convolution layer. We efficiently query the local geodesic patch for each surface point and associate each neighborhood with a local, orientation-consistent texture coordinate. This allows us to extract local 3D surface features as well as high-resolution signals such as the associated RGB input.
Convolutions on coarse 3D representations cannot take full advantage of the high-frequency patterns in such signals. An alternative approach is to combine features extracted from RGB images in a multi-view CNN [40]. This approach has been used for 3D semantic segmentation in 3DMV [10], where features are extracted from 2D RGB images and then back-projected into a 3D voxel grid, where they are merged and further processed with 3D voxel convolutions. Like our approach, 3DMV processes high-resolution RGB signals; however, it convolves them in a 2D image plane, where occlusions and background clutter are confounding. In contrast, our method directly convolves high-resolution signals intrinsically on the 3D surface, which is view-independent.
Approach
Our approach performs geodesic convolutions on high-resolution signals directly on 3D surface meshes. The input is a 3D mesh associated with a high-resolution surface signal (e.g., a color texture map), and the outputs are learned features for a dense set of sample points that can be used for semantic segmentation and other tasks.
Our main contribution is defining a smooth, consistently oriented domain for surface convolutions based on four-way rotationally symmetric (4-RoSy) fields. We observe that 3D surfaces can be mapped with low distortion to two-dimensional parameterizations anchored at dense sample points with locally consistent orientations and few singularities if we allow for a four-way ambiguity in the orientation at the sample points. We leverage that observation in TextureNet by computing a 4-RoSy field and point sampling using QuadriFlow [18] and then building a network using new 4-RoSy convolutional filters (TextureConv) that are invariant to the four-way rotational ambiguity.
We utilize this network design to learn and extract features from high-resolution signals on surfaces by extracting surface patches with high-resolution signals oriented by the 4-RoSy field at each sample point. The surface patches are convolved by a few TextureConv layers, pooled at sample points, and then convolved further with TextureConv layers in a UNet [36] architecture, as shown in figure 2. For down-sampling and up-sampling, we use the furthest point sampling and three-nearest neighbor interpolation method proposed by PointNet++ [33]. The output of the network is a set of features associated with point samples that can be used for classification and other tasks. The following sections describe the main components of the network in detail.
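The furthest point sampling used for downsampling between network levels can be sketched as a minimal NumPy routine (a greedy CPU version for illustration, not the PointNet++ CUDA implementation; the seed index 0 is an arbitrary choice):

```python
import numpy as np

def furthest_point_sampling(points, k):
    """Greedy FPS: repeatedly pick the point furthest from the chosen set.

    points: (n, 3) array; returns k indices into points."""
    n = len(points)
    chosen = [0]                      # arbitrary seed point
    dist = np.full(n, np.inf)         # distance of each point to the chosen set
    for _ in range(k - 1):
        # update distances with the most recently chosen point
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return chosen
```

For points spread along a line, FPS first jumps to the far end and then to the middle, which is the even-coverage behavior the hierarchy relies on.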
High-Resolution Signal Representation
Our network takes as input a high-resolution signal associated with a 3D surface mesh. In the first steps of processing, it generates a set of sample points on the mesh and defines a parameterized high-resolution patch for each sample (Section 3.2) as follows: For each sample point p_i, we first compute its geodesic neighborhood Ω_ρ(p_i) (Eq. 1) with radius ρ. Then, we sample an NxN point cloud {q_xy | −N/2 ≤ x, y < N/2}, where the texture coordinate of q_xy is ((x+0.5)d, (y+0.5)d) and d is the distance between adjacent pixels in the texture patch. In practice, we select N = 10 and d = 4mm. Finally, we use our newly proposed "TextureConv" and max-pooling operators (Section 3.3) to extract the high-res feature f_i for each point p_i.
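The patch sampling above can be sketched as follows, with N = 10 and d = 4 mm taken from the text (`patch_texture_coords` is an illustrative helper name, not from the paper's code):

```python
import numpy as np

def patch_texture_coords(N=10, d=0.004):
    """Texture coordinates ((x+0.5)d, (y+0.5)d) for the N x N high-res patch,
    with integer x, y in [-N/2, N/2). Returns an (N, N, 2) array in meters."""
    idx = np.arange(-N // 2, N // 2)          # -N/2 .. N/2 - 1
    xs, ys = np.meshgrid((idx + 0.5) * d, (idx + 0.5) * d, indexing="ij")
    return np.stack([xs, ys], axis=-1)
```

With the defaults, the grid spans [-18 mm, 18 mm] in each axis, centered on the sample point's tangent frame.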
4-RoSy Surface parameterization
A critical aspect of our network is to define a consistently oriented geodesic surface parameterization for any position on a 3D mesh. Starting with some basic definitions, for a sampled point p on the surface, we can locally parameterize its tangent plane by two orthogonal tangent vectors i and j. Also, for any point q on the surface, there exists a shortest path on the surface connecting p and q, e.g., the orange path in figure 3(a). By unfolding it to the tangent plane, we can map q along the shortest path to q*. Using these constructs, we define the local texture coordinate of q in p's neighborhood as

t_p(q) = (i^T (q* − p), j^T (q* − p)).
We additionally define the local geodesic neighborhood of p with receptive field ρ as
Ω_ρ(p) = {q : ||t_p(q)||_∞ < ρ}.   (1)

The selection of the set of mesh sample positions {p} and their tangent vectors i and j is critical for the success of learning on a surface domain. Ideally, we would select points whose spacing is uniform and whose tangent directions are consistently oriented at neighbors, such that the underlying parameterization has no distortions or seams, as shown in Figure 4(a). With those properties, we could learn convolutional operators with translation invariance exactly as we would for images. Unfortunately, these properties are only achievable if the surface is a flat plane. For a general 3D surface, we can only hope to select a set of point samples and tangent vectors that minimize deviations between spacings of points and distortions of local surface parameterizations. Figure 4(b) shows an example where a harmonic surface parameterization introduces large-scale distortion: a 2D convolution would include a large receptive field at the nose but a small one at the neck. Figure 4(c) shows a geometry image [15] parameterization with high distortion in the orientation; convolutions on such a map would have randomly distorted and irregular receptive fields, making it difficult for a network to learn canonical features. Unfortunately, a smoothly varying direction field on the surface is usually hard to obtain. According to the study of direction field design [34,23], the best-known approach to mitigate the distortion is to compute a four-way rotationally symmetric (4-RoSy) orientation field, which minimizes the deviation by incorporating directional ambiguity. Additionally, the orientation field needs a consistent definition across different geometries, and the most intuitive way is to make it align with shape features such as the principal curvatures. Fortunately, the extrinsic energy of [20,18] realizes this.
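The local texture coordinate t_p(q) and the membership test of Eq. 1 can be sketched directly (a minimal version assuming q has already been unfolded to q* on p's tangent plane; function names are illustrative):

```python
import numpy as np

def texture_coord(p, i, j, q_star):
    """t_p(q): coordinates of the unfolded point q* in p's tangent frame (i, j)."""
    return np.array([np.dot(i, q_star - p), np.dot(j, q_star - p)])

def in_geodesic_neighborhood(p, i, j, q_star, rho):
    """Membership test for Omega_rho(p): ||t_p(q)||_inf < rho (Eq. 1)."""
    return bool(np.max(np.abs(texture_coord(p, i, j, q_star))) < rho)
```

The infinity norm makes the neighborhood a square patch in texture space, matching the square 3x3 grouping used by TextureConv later.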
Therefore, we compute the extrinsic 4-RoSy orientation field at a uniform distribution of point samples using QuadriFlow [18] and use it to define the tangent vectors at any position on the surface. Because of the directional ambiguity, we randomly pick one direction from the cross as i and compute j = n × i for any position.
Although there is a 4-way rotational ambiguity in this local parameterization of the surface (which will be addressed with a new convolutional operator in the next section), the resulting 4-RoSy field provides a way to extract geodesic neighborhoods consistently across the entire surface, even near singularities. Figure 5(a,b,c) shows the ambiguity of possible unfolded neighborhoods at a singularity. Since QuadriFlow [18] treats singularities as faces rather than vertices, all sampled positions have a well-defined orientation field. More importantly, the parameterization of every geodesic neighborhood is well-defined with our shortest-path patch parameterization. For example, only Figure 5(a) is a valid parameterization for the purple spot, while the locations of the blue and orange spots in Figures 5(b) and (c) are unfolded along paths that are not the shortest. Unfolding a geodesic neighborhood around the singularity also raises another potential issue: a seam cut is usually required, leading to a gap at a 3-singularity or multiple-surface coverage at a 5-singularity. For example, there is a gap at the bottom-right corner in Figure 5(a) caused by the seam cut shown as the green dotted line. Fortunately, the location of the seam is also well-defined with our shortest-path definition: it must be the shortest geodesic path going through the singularity. Therefore, our definition of the local neighborhood guarantees a canonical way of surface parameterization even around corners and singularities.
4-RoSy Surface Convolution Operator
TextureNet is a network architecture composed of convolutional operators acting on geodesic neighborhoods of sample points with 4-RoSy parameterizations. The input to each convolutional layer is three-fold: 1) a set of 3D sample points associated with features (e.g., RGB, normals, or features computed from high-resolution surface patches or previous layers); 2) a coordinate system stored as two tangent vectors representing the 4-RoSy cross field for each point sample; and 3) a coarse triangle mesh, where each face is associated with the set of extracted sample points and connectivity indices that support fast geodesic patch queries and texture coordinate computation for the samples inside a geodesic neighborhood, much like the PTex [5] representation for textures.

Our key contribution in this section is the design of a convolution operator suitable for 4-RoSy fields. The problem is that we cannot use traditional 3x3 convolution kernels on domains parameterized with 4-RoSy fields without inducing inconsistent feature aggregation at higher levels. Figure 6 demonstrates the problem for a simple example. Figure 6(a) shows 3x3 convolution in a traditional flat domain. Figure 6(b) shows the frames defined by our 4-RoSy orientation field on the 3D cube, where red spots represent the singularities. Although the cross field in the orange patch is consistent under the 4-RoSy metric, the frames are not parallel when they are unfolded into a plane (figure 6(c)). Aggregation of features inside such a patch is therefore problematic.
"TextureConv" is our solution to remove the directional ambiguity. It consists of four layers (in figure 2): geodesic patch search, texture space grouping, convolution, and aggregation. To extract the geodesic patch Ω_ρ(p) for each input point, we use breadth-first search with a priority queue to extract the face set in order of geodesic distance from the face center to p. We estimate the texture coordinate at the face center as well as its local tangent coordinate system, recorded as (t_f, i_f, j_f). To expand the search tree from face u to face v, we approximate the texture coordinate at the face center as

t_v = t_u + (i_u, j_u)^T (c_v − c_u),

where c_f denotes the center position of face f. i_v and j_v can be computed by rotating the coordinate system around the shared edge from face u to v. After obtaining the face set inside the geodesic patch, we can find the sampled points associated with these faces. We estimate the texture coordinate of every sampled point q associated with each face f as t_p(q) = t_f + (i_f, j_f)^T (q − c_f). By testing ||t_p(q)||_∞ < ρ, we determine the sampled points inside the geodesic patch Ω_ρ(p).
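The face-to-face coordinate propagation step can be sketched as follows (only the texture coordinate update is shown; in the full method the frame (i_v, j_v) would additionally be rotated around the shared edge):

```python
import numpy as np

def propagate_texture_coord(t_u, i_u, j_u, c_u, c_v):
    """Approximate the texture coordinate at a neighboring face center:
    t_v = t_u + (i_u, j_u)^T (c_v - c_u).

    t_u: (2,) coordinate of face u's center; i_u, j_u: (3,) tangent basis;
    c_u, c_v: (3,) face centers."""
    B = np.stack([i_u, j_u])          # 2x3: rows are the tangent basis of face u
    return t_u + B @ (c_v - c_u)
```

On a flat region, where the tangent frame is constant, this reduces to simply offsetting t_u by the in-plane displacement of the face centers.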
The texture space grouping layer segments the local neighborhood into 3x3 patches in texture space, each of which is a square with edge length 2ρ/3, as shown in figure 2 (after the "grouping" arrow). We could directly borrow the image convolution method and linearly transform each point feature with 9 different weights according to the patch it belongs to; instead, we propose a 4-RoSy convolution kernel to deal with the directional ambiguity. As shown in figure 2, all sampled points can be categorized as lying at the corners ({p_j^1}), edges ({p_j^2}), or the center ({p_j^3}). Each sampled point feature is convolved with a 1x1 convolution h_1, h_2, or h_3 based on its category. The extracted 4-RoSy feature removes the ambiguity and allows higher-level feature aggregation. The channel-wise aggregation operator g can be max-pooling or average-pooling followed by a ReLU layer. For the semantic segmentation task, we choose max-pooling since it is better at preserving salient signals.
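The weight sharing behind the 4-RoSy kernel can be sketched as follows. The key observation is that the corner/edge/center grouping of the 3x3 texture-space bins is unchanged by any 90-degree rotation of the patch, so the max-pooled output is invariant to the 4-way orientation ambiguity. This is a simplified sketch, not the paper's implementation: the matrices W1, W2, W3 stand in for the 1x1 convolutions h_1, h_2, h_3, and per-bin details are omitted.

```python
import numpy as np

def texture_conv(features, cells, W1, W2, W3):
    """features: (n, c_in) point features; cells: (n, 2) 3x3 grid bins in {0,1,2}.
    Each point is transformed by the 1x1 convolution for its bin category
    (corner, edge, or center), then aggregated channel-wise by max pooling."""
    out = []
    for f, (cx, cy) in zip(features, cells):
        category = int(cx == 1) + int(cy == 1)   # 0: corner, 1: edge, 2: center
        W = (W1, W2, W3)[category]
        out.append(W @ f)
    return np.maximum.reduce(out)                # ReLU would follow in the network
```

Rotating the patch by 90 degrees permutes corners among corners and edges among edges, so each point keeps its weight matrix and the channel-wise max is unchanged.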
Evaluation
To investigate the performance of TextureNet, we ran a series of 3D semantic segmentation experiments for indoor scenes. In all experiments, we train and test on the standard splits of the ScanNet [9] and Matterport3D [6] datasets. Following previous work, we report mean class intersection-over-union (mIoU) results for ScanNet and mean class accuracy for Matterport3D.
Comparison to State-of-the-Art. Our main result is a comparison of TextureNet to state-of-the-art methods for 3D semantic segmentation. For this experiment, all methods utilize both color and geometry in their native formats. Specifically, PointNet++ [33], Tangent Convolution [41], and SplatNet [39] use points with per-point normals and colors; 3DMV [10] uses 2D image features back-projected onto voxels; and Ours uses high-res 10x10 texture patches extracted from geodesic neighborhoods at sample points. Table 1 reports the mean IoU scores for all 20 classes on the ScanNet (v2) benchmark and the mean class accuracy on the Matterport3D dataset. They show that TextureNet (Ours) provides the best results on 18/20 classes for ScanNet and 12/20 classes for Matterport3D. Overall, the mean class IoU for Ours is 8.2% higher than the previous state-of-the-art (3DMV) on ScanNet (48.4% vs. 56.6%), and our mean class accuracy is 6.9% higher on Matterport3D (56.1% vs. 63.0%).
Qualitative visual comparisons of the results shown in Figures 7-9 suggest that the differences between methods are often where high-resolution surface patterns are discriminating (e.g., the curtain and pillows in the top row of Figure 7) and where geodesic neighborhoods are more informative than Euclidean ones (e.g., the lamp next to the bed). Figure 8 shows a case where convolutions with geodesic neighborhoods clearly outperform their Euclidean counterparts. In Figure 8(b), part of the table is predicted as chair, probably because it is in a Euclidean ball covering nearby chairs. This problem is solved with our method based on geodesic patch neighborhoods. As shown in Figure 8(c), the table and the chairs are clearly segmented.

Table 1: Comparison with the state-of-the-art methods for 3D semantic segmentation on the (a) ScanNet v2 and (b) Matterport3D [6] benchmarks. PN+, SplatNet, and Tangent Convolution use points with per-point normal and color as input. 3DMV uses 2D images and voxels. Ours uses grid points with high-res 10x10 texture patches.

Figure 7: Visualization on ScanNet (v2) [9]. In the first row, we correctly predict the lamp, pillow, picture, and part of the cabinet, while other methods fail. In the second row, we predict the window and the trash bin correctly, while 3DMV [10] predicts part of the window as the trash bin and other methods fail. The third row (zoom-in) highlights the differences.

Figure 8: Visual results using different neighborhoods: (a) Ground Truth, (b) Euclidean ball, (c) Ours. With a Euclidean ball as the neighborhood, part of the table is predicted as the chair, since they belong to the same Euclidean ball. This issue is solved by extracting features from geodesic patches.
Effect of 4-RoSy Surface Parameterization. Our second experiment is designed to test how different surface parameterizations affect semantic segmentation performance, i.e., how does the choice of the orientation field affect the learning process? The simplest choice is to pick an arbitrary direction on the tangent plane as the x-axis, similar to GCNN [26] (Figure 10(a)). A second option, adopted by Tangent Convolution [41], considers a set of points q in a Euclidean ball centered at p and parameterizes the tangent plane by the two eigenvectors corresponding to the largest two eigenvalues of the covariance matrix Σ_q (p − q)(p − q)^T. A critical problem of this formulation is that the principal directions cannot be robustly analyzed at planar regions or on noisy surfaces (Figure 10(b)). It also introduces inconsistency to the coordinate systems of neighboring points, which vexes feature aggregation at higher levels. A third alternative is to use the intrinsic energy function [20] or other widely used direction field synthesis techniques [34,23], which are not geometry-aware and therefore variant to 3D rigid transformation (Figure 10(c)). Our choice is to use the extrinsic energy to synthesize the direction field [18,20], which is globally consistent and only variant to the geometry itself (Figure 10(d)).

Table 2: Mean IoU for different direction fields on ScanNet (v2). The input is a point cloud with a normal and RGB color for each point. Random refers to randomly picking an arbitrary direction for each sampled point. Intrinsic refers to solving for a 4-RoSy field with the intrinsic energy. EigenVec refers to solving for a direction field with the principal curvature. Extrinsic is our method, which solves a 4-RoSy field with the extrinsic energy.

Figure 9: Visual results on Matterport3D [6]. In all examples, our method is better at predicting the door, the toilet, the sink, the bathtub, and the curtain.
To test the impact of this choice, we use each of these alternative direction fields to create the local neighborhood parameterizations for our architecture and compare the results of 3D semantic segmentation on the ScanNet (v1) test set. As shown in Table 2, the random direction field performs worst since it does not provide a consistent parameterization. The tangent convolution suffers from the same issue but gets a better result since it aligns with shape features. The intrinsic parameterization aligns with shape features but is not canonical: different rigid transformations of the same shape lead to different parameterizations. The extrinsic energy provides a canonical and consistent surface parameterization. As a result, the extrinsic 4-RoSy orientation field achieves the best results.
Effect of 4-RoSy Surface Convolution. Our third experiment is designed to test how the choice of surface convolution operator affects learning. In Table 4, we compare different convolution operators. PN+(A) and PN+ represent PointNet++ [33] with average and max pooling, respectively. GCNN1 and GCNN are geodesic convolutional neural networks [26] with N_ρ = 3, N_θ = 1 and N_ρ = N_θ = 3, respectively. ACNN represents anisotropic convolutional neural networks [3] with N_ρ = 3, N_θ = 1. RoSy1 means a 3x3 convolution along the direction of a 1-RoSy orientation field. RoSy4 picks an arbitrary direction from the cross in the 4-RoSy field. RoSy4(m) applies a 3x3 convolution for each direction of the cross in the 4-RoSy field, aggregated by max pooling. Ours(A) and Ours represent our method with average and max pooling aggregation. We find that GCNN, ACNN, and RoSy4 produce the lowest IoUs, because they suffer from inconsistency of frames when features are aggregated. GCNN1 does not suffer from this issue since there is only a single bin in the angle dimension. RoSy4(m) uses max-pooling to canonicalize the feature extraction, which is independent of the orientation selection, and produces better results than RoSy4. RoSy1 achieves a higher score by generating a more globally consistent orientation field, at the cost of higher distortion. From this study, the combination of the 4-RoSy orientation field and our TextureNet is the best option for the segmentation task among these methods. Since we precompute the local parametrization, our training efficiency is similar to that of GCNN. Please refer to Supplemental D for the detailed performance of each class.

Table 3: Mean IoU for different color inputs on ScanNet (v2). XYZ represents our network using raw point input, i.e., geometry only. NRGB represents our network taking as input the sampled points with per-point normal and color. Highres represents our network taking per-point normal and a 10x10 surface texture patch for each sampled point.

Table 4: Mean class IoU with different texture convolution operators on ScanNet (v2). The input is the point cloud for the first row (Geometry) and the point cloud associated with the normal and RGB signal for the second row (NRGB).
Effect of High-Resolution Color. Our fourth experiment tests how much convolving with high-resolution surface colors affects semantic segmentation. Table 3 compares the performance of our network with uncolored sampled points (XYZ), sampled points with the per-point surface normal and color (NRGB), and with the per-point normal and a 10x10 color texture patch (Highres) as input. According to Table 4, our network is already superior with only XYZ or additional NRGB because of the convolution operator. We find that providing TextureNet with Highres colors improves the mean class IoU by 3.3%. As expected, the impact is stronger for some semantic classes than for others; e.g., the IoUs for the bookshelf and picture classes increase 63.1→71.3% and 15.8→21.1%, respectively. We show an additional comparison to O-CNN [43], which enables high-res signals for voxels, in Supplemental E.
Comparisons Using Only Surface Geometry. As a final experiment, we evaluate the value of the proposed 3D network for semantic segmentation of inputs with only surface geometry (without color). During experiments on ScanNet, TextureNet achieves 50.6% mIoU, which is 6.4% better than the previous state of the art. In comparison, ScanNet [9] = 30.6%, Tangent Convolution [41] = 40.9%, PointNet++ [33] = 43.5%, and SplatNet [39] = 44.2%. Detailed class IoU results are provided in Supplemental F.
Conclusion
TextureNet bridges the gap between 2D image convolution and 3D deep learning using 4-RoSy surface parameterizations. We propose a new method for learning from high-resolution signals on 3D meshes by computing local geodesic neighborhoods with consistent 4-RoSy coordinate systems. We design a network of 4-RoSy texture convolution operators that are able to learn surface features that significantly improve over the state-of-the-art performance for 3D semantic segmentation of 3D surfaces with color (by 6.9-8.2%). Code and data will be publicly available. Topics for further work include investigating the utility of TextureNet for extracting features from other high-resolution signals on meshes (e.g., displacement maps, bump maps, curvature maps) and applications of TextureNet to other computer vision tasks (e.g., instance detection, pose estimation, part decomposition, texture synthesis).
Supplemental
A. Comparison to 2D Convolution on Texture Atlas
We ran an additional experiment to compare our convolution operator with traditional image convolutions on a color texture atlas created with a standard UV parameterization, as shown in Figure 11. For this experiment, we trained a state-of-the-art network (DenseNet [17]) on the semantic labels mapped to the texture map image. The results with that method are not very good: the mean class IoU is only 12.2%, compared to 56.6% with our method. We conjecture the reason is that UV parameterizations are not consistent across examples and convolutions are affected by texture seams. We additionally tried an as-rigid-as-possible parameterization, which achieves 16.8 IoU (ours is 58.1). The poor performance is mainly due to convolutions over regions with seams, large distortions, and inconsistent orientations; i.e., the main problems that our 4-RoSy approach aims to resolve.
B. Evaluation of Neighborhood Selection Methods
The next experiment tests whether the geodesic neighborhoods used by TextureNet's convolutional operators are better than the volumetric ones used by PointNet++. To test this, we compare the performance of the original PointNet++ network, which takes a Euclidean ball as the neighborhood, with slightly modified versions that take a cuboid or our geodesic patch as the neighborhood. As shown in Table 5, the geodesic patch achieves a slightly higher score. This may be because it is easier for the network to learn boundaries on the 2D subsurface than in 3D space.
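The difference between the two neighborhood types can be made concrete with a small hedged sketch (our own illustration, not the paper's code): on a thin two-sided structure, a Euclidean ball merges points from both sides, while a geodesic radius computed by Dijkstra expansion over the surface graph keeps only points reachable along the surface.

```python
import heapq

def geodesic_neighborhood(adj, src, radius):
    """Dijkstra ball on a weighted surface graph.
    adj: {node: [(neighbor, edge_length), ...]}"""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd <= radius and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return set(dist)

# Two parallel "sheets" 0-1-2-3 and 4-5-6-7, close in Euclidean space
# but only connected along the surface at one end (edge 3-4).
adj = {0: [(1, 1)], 1: [(0, 1), (2, 1)], 2: [(1, 1), (3, 1)],
       3: [(2, 1), (4, 1)], 4: [(3, 1), (5, 1)], 5: [(4, 1), (6, 1)],
       6: [(5, 1), (7, 1)], 7: [(6, 1)]}
print(geodesic_neighborhood(adj, 0, 2.0))  # {0, 1, 2} -- far sheet excluded
```

A Euclidean ball of the same radius centered on node 0 could include nodes of the opposite sheet even though they are far away along the surface.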
C. Effect of Point Sampling Method
The next experiment tests the impact of our proposed point sampling method. While PointNet++ [33] adopts furthest point sampling to preprocess the data, we use QuadriFlow [18] to sample points on the surface. It maintains uniform edge length in the surface parametrization, and therefore usually provides samples that are more uniformly distributed on the surface with respect to geodesic distance. Figure 13 shows the proportion of each class in the ScanNet dataset with QuadriFlow and furthest point sampling.
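For reference, the furthest point sampling baseline used by PointNet++ can be sketched as the standard greedy procedure (our own illustrative implementation, not the authors' code): repeatedly pick the point with the largest distance to the already-chosen set.

```python
import numpy as np

def furthest_point_sampling(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point furthest from the chosen set.

    points: (N, D) array; k: number of samples; seed: index of first sample.
    Keeps a running min-distance-to-set array, so total cost is O(N * k).
    """
    chosen = [seed]
    dist = np.linalg.norm(points - points[seed], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())            # furthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

pts = np.array([[0.0, 0], [0.1, 0], [1, 0], [1, 1]])
print(furthest_point_sampling(pts, 3))  # [0 3 2]
```

FPS spreads samples in Euclidean space, whereas QuadriFlow's parametrization-based sampling spreads them along the surface, which is what drives the class-distribution differences discussed above.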
We use TextureNet to learn the semantic labels with their input and our samples. Table 6 shows the class IoU for the prediction. With more samples for minor classes like counter, desk, and curtain, our sampling method performs better. Figure 12 shows a visualization of the different sampling results. Visually, our sampling method leads to more uniformly distributed points on the surface.

Table 5: PointNet++ prediction using different neighborhoods. The input is the sampled positions computed with our sampling method. Ball represents the Euclidean ball. CubeX represents a tangent cuboid with the same volume as that of the ball, but with width and length X times the ball radius. Ours uses the geodesic patch with the same radius as the ball.

Table 6: PointNet++ prediction taking the positions of the point cloud from different sampling methods, including furthest point sampling (FPS) and QuadriFlow (Quad).

D. Further Results on Effect of 4-RoSy Surface Convolution

Table 7 provides detailed results for the performance of different surface convolution operators on the ScanNet dataset [9], with input as the point cloud or the point cloud associated with the normal and RGB color for each point (expanding on Table 4 of the paper). PN+(A) and PN+ represent PointNet++ with average-pooling and max-pooling, respectively. GCNN1 and GCNN are geodesic convolutional neural networks [26] with Nρ = 3, Nθ = 1 and Nρ = Nθ = 3, respectively. ACNN represents anisotropic convolutional neural networks [3] with Nρ = 3, Nθ = 1. RoSy1 refers to a 3x3 convolution along the direction of the 1-RoSy orientation field. RoSy4 picks an arbitrary direction from the cross in the 4-RoSy field. RoSy4(m) applies a 3x3 convolution for each direction of the cross in the 4-RoSy field, aggregated by max pooling. Ours(A) and Ours represent our method with average-pooling and max-pooling aggregation.
E. Comparison to Octree-based Approaches
Existing volume-based octree methods have been used mostly for stand-alone objects from ShapeNet. For larger scenes, memory is a severe limitation. As a test, we tried O-CNN [43] on chunks of radius 1.35m using a 12GB GPU, which fits 6 conv/deconv layers and a feature dimension of 256 at resolution 256³. This test yielded a mean IoU of 30.8 with NRGB and 27.8 with pure geometry. In contrast, the surface-based convolution of TextureNet is much more efficient (2D rather than 3D), allowing for a total of 18 conv/deconv layers with a maximum feature dimension of 1024, and achieves 58.1 with high-res color, 54.8 with NRGB, and 50.6 with pure geometry.
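The memory gap between dense volumetric and surface-based convolution can be illustrated with rough arithmetic. This is our own back-of-the-envelope sketch: the channel counts follow the layer sizes quoted above, while the assumption that a surface touches on the order of 256² of the 256³ cells is ours.

```python
# Per-layer activation memory at float32 (4 bytes per value).
voxels = 256 ** 3                  # dense volumetric grid
surface_pts = 256 ** 2             # a 2D surface touches ~n^2 of n^3 cells (assumed)
bytes_3d = voxels * 256 * 4        # 256 feature channels per voxel
bytes_2d = surface_pts * 1024 * 4  # 1024 channels per surface sample
print(bytes_3d / 2 ** 30, "GiB vs", bytes_2d / 2 ** 30, "GiB")  # 16.0 GiB vs 0.25 GiB
```

Even with 4x wider features, the surface representation stays orders of magnitude cheaper, which is why the 2D formulation can afford many more layers on the same GPU.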
F. Further Comparisons Using Only Surface Geometry
This section provides more detailed results for the experiment described in the last paragraph of Section 4 of the paper, where we evaluate the value of the proposed 3D network for semantic segmentation of inputs with only surface geometry (without color). During experiments on ScanNet, TextureNet achieves 50.6% mIoU, which is 6.4% better than the previous state of the art. In comparison, ScanNet [9] = 30.6%, Tangent Convolution [41] = 40.9%, PointNet++ [33] = 43.5%, and SplatNet [39] = 44.2%. Detailed class IoU results are provided in Table 8.
G. Effect of 4-RoSy convolution on traditional image convolution
We also compared our 4-RoSy operator with traditional image convolution on the MNIST dataset [24]. We use a simple network containing two MLP layers and two fully connected layers. The performance of the original network is 99.1%. By replacing the convolution with our 4-RoSy operator in the MLP layers, we achieve 98.5% classification accuracy. Therefore, our 4-RoSy kernel is comparable to traditional convolutions even on standard images.
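On a regular image grid, the 4-RoSy operator amounts to evaluating the kernel under all four 90-degree rotations and max-pooling the responses. The following is our own illustrative sketch of that idea (not the paper's MNIST code); it also checks the resulting equivariance: rotating the input image by 90 degrees rotates the feature map accordingly.

```python
import numpy as np

def rosy4_image_conv(img, kernel):
    """3x3 valid cross-correlation evaluated under all four 90-degree
    rotations of the kernel, aggregated by max pooling (4-RoSy style)."""
    H, W = img.shape
    out = np.full((H - 2, W - 2), -np.inf)
    for k in range(4):
        kr = np.rot90(kernel, k)
        resp = np.zeros((H - 2, W - 2))
        for i in range(3):
            for j in range(3):  # plain valid cross-correlation, no padding
                resp += kr[i, j] * img[i:i + H - 2, j:j + W - 2]
        out = np.maximum(out, resp)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0.0, 1, 0], [0, 0, 0], [0, 0, 1]])
a = rosy4_image_conv(img, kernel)
b = rosy4_image_conv(np.rot90(img), kernel)
assert np.allclose(np.rot90(a), b)  # feature map rotates with the image
```

Because the max runs over the full rotation orbit of the kernel, the operator cannot distinguish features that differ only by a 90-degree rotation, which is consistent with the small accuracy drop (99.1% → 98.5%) reported above.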
H. Visual comparison of Different Resolutions
In Figure 14, we show the predictions of TextureNet with different color resolutions as input. The first column is the 3D model. The second column shows the ground-truth semantic labels. The high-res signals of the red regions are shown in the third column. The last two columns are predictions from TextureNet with per-point color (low-res) or a high-res texture patch as input. As the results show, TextureNet performs better given input with high-res signals.
I. Visualization of the Semantic Segmentation
We compare TextureNet with the state-of-the-art methods on the ScanNet dataset [9] and the Matterport3D dataset. On both datasets, we outperform existing methods (see the main paper). Figures 15 and 16 show examples of predictions from several methods on ScanNet. Figure 17 shows examples of predictions from different methods on the Matterport3D dataset.

Table 7: Texture convolution operator comparison. The input is the point cloud in (a) and the point cloud associated with the normal and the RGB color for each point in (b). PN+(A) and PN+ represent PointNet++ with average-pooling and max-pooling, respectively. GCNN1 and GCNN are geodesic convolutional neural networks [26] with Nρ = 3, Nθ = 1 and Nρ = Nθ = 3, respectively. ACNN represents anisotropic convolutional neural networks [3] with Nρ = 3, Nθ = 1. RoSy1 means a 3x3 convolution along the direction of the 1-RoSy orientation field. RoSy4 picks an arbitrary direction from the cross in the 4-RoSy field. RoSy4(m) applies a 3x3 convolution for each direction of the cross in the 4-RoSy field, aggregated by max pooling. Ours(A) and Ours represent our method with average-pooling and max-pooling aggregation.

Table 8: Geometry-only: comparison to the state of the art for 3D convolution with pure geometry as input; i.e., no RGB information is used in any of these experiments. Our method also outperforms existing geometry-only approaches.
Figure 3: Visualization of the geodesic patches. (a) Local texture coordinate. (b) Visualization of geodesic neighborhoods Ωρ (ρ = 20 cm) of a set of randomly sampled vertices.

Figure 4: (a) With an appropriate method like QuadriFlow, we can get a surface parameterization aligned to shape features with negligible distortion. (b) Harmonic parameterization leads to high distortion in scale. (c) Geometry images [15] result in high distortion in orientation.
Figure 5: At the singularity of the cube, (a)-(c) provide three different ways of unfolding the local neighborhood. Such ambiguity is removed around the singularity by our texture coordinate definition using the shortest path. For the purple point, (a) is a valid neighborhood, while the blue points in (b) and the orange points in (c) are unfolded along paths that are not the shortest. Similarly, the ambiguity of the gap location is removed.
Figure 6: (a) Traditional convolution kernel on a regular grid. (b) Frames defined by the orientation field on a 3D cube. (c) For the patch highlighted in orange in (b), multi-layer feature aggregation would be problematic with traditional convolution due to the frame inconsistency caused by the directional ambiguity of the orientation field.
Figure 10: Direction fields from different methods. (a) Random directions lead to inconsistent frames. (b) Eigenvectors suffer from the same issue in flat areas. (c) An intrinsic-energy-based orientation field does not align to the shape features. (d) Our extrinsic-based method generates consistent orientation fields aligned with surface features.
Figure 11: An example of the texture image.
Figure 12: Visualization of different sampling methods.
Figure 13: Class distribution with different sampling. The y-axis represents the portion of each class across all scenes. Except for the wall, floor, and bookshelf classes, our method yields more samples than furthest point sampling. As a result, PointNet++ achieves better results in most classes with our sampling method.
Figure 14: Visual comparison of different resolutions. The first column is the 3D model. The second column shows the ground-truth semantic labels. The high-res signals of the red regions are shown in the third column. The last two columns are predictions from TextureNet with per-point color (low-res) or a high-res texture patch as input.
Figure 15: Visualization of the Semantic Segmentation on ScanNet Dataset.

Figure 16: Visualization of the Semantic Segmentation on ScanNet Dataset.

Figure 17: Visualization of the Semantic Segmentation on Matterport3D Dataset.
Input         wall floor cab  bed  chair sofa table door wind shf  pic  cntr desk curt fridg show toil sink bath other avg
PN+ [33]      66.4 91.5  27.8 56.3 64.0  52.7 37.3  28.3 36.1 59.2 6.7  28.0 26.2 45.4 25.6  22.0 63.5 38.8 54.4 20.0  42.5
SplatNet [39] 69.9 92.5  31.1 51.1 65.6  51.0 38.3  19.7 26.7 60.6 0.0  24.5 32.8 40.5 0.0   24.9 59.3 27.1 47.2 22.7  39.3
Tangent [41]  63.3 91.8  36.9 64.6 64.5  56.2 42.7  27.9 35.2 47.4 14.7 35.3 28.2 25.8 28.3  29.4 61.9 48.7 43.7 29.8  43.8
3DMV [10]     60.2 79.6  42.4 53.8 60.6  50.7 41.3  37.8 53.9 64.3 21.4 31.0 43.3 57.4 53.7  20.8 69.3 47.2 48.4 30.1  48.4
Ours          68.0 93.5  49.4 66.4 71.9  63.6 46.4  39.6 56.8 67.1 22.5 44.5 41.1 67.8 41.2  53.5 79.4 56.5 67.2 35.6  56.6
(a) ScanNet (v2) (mean class IoU)
Input     wall floor cab  bed  chair sofa table door wind bkshf pic  cntr desk curt fridg show toil sink bath other ave
Random    37.6 92.5  37.0 63.7 28.5  56.9 27.6  15.3 31.0 47.6  16.5 36.6 53.3 51.2 15.4  24.7 59.3 47.6 53.3 27.0  41.1
Intrinsic 47.4 91.9  35.3 62.5 55.8  44.8 37.5  29.8 40.5 40.9  16.7 41.5 39.9 42.1 20.4  24.3 85.6 44.5 58.3 29.5  44.4
EigenVec  45.3 79.0  32.2 53.4 59.8  40.4 32.2  28.8 40.5 43.4  17.8 39.5 32.7 40.6 22.5  25.0 82.4 48.1 54.8 32.6  42.5
Extrinsic 69.8 92.3  44.8 69.4 75.8  67.1 56.8  39.4 41.1 63.1  15.8 57.4 46.5 48.3 36.9  40.0 78.1 54.0 65.4 34.4  54.8

Input     wall floor cab  bed  chair sofa table door wind bkshf pic  cntr desk curt fridg show toil sink bath other ave
XYZ       64.8 90.0  39.3 65.8 74.8  66.6 50.5  33.9 35.6 58.0  14.0 54.3 42.1 45.4 30.9  43.0 67.7 47.9 55.8 32.2  50.6
NRGB      69.8 92.3  44.8 69.4 75.8  67.1 56.8  39.4 41.1 63.1  15.8 57.4 46.5 48.3 36.9  40.0 78.1 54.0 65.4 34.4  54.8
Highres   75.0 94.4  46.8 67.3 78.1  64.0 63.5  44.8 46.0 71.3  21.1 44.4 47.5 52.5 35.2  51.3 80.3 51.7 67.6 40.2  58.1
Input wall floor cab  bed  chair sofa table door wind bkshf pic  cntr desk curt fridg show toil sink bath other ave
Ball  68.1 96.2  34.9 41.2 61.8  43.0 24.1  5.0  19.2 41.7  0.0  4.7  11.8 17.7 20.1  30.8 72.2 43.7 55.2 8.7   35.0
Cube1 65.3 95.8  29.0 57.0 61.2  46.2 42.7  17.8 11.8 35.1  0.7  37.3 39.0 55.4 8.5   43.9 63.0 30.6 52.4 15.0  40.4
Cube2 58.7 90.0  61.6 62.6 59.3  50.4 40.2  31.3 15.1 45.6  1.9  29.4 23.9 53.1 18.2  41.8 81.7 34.1 51.8 25.2  43.9
Cube4 32.7 86.8  59.6 49.1 51.3  33.7 30.0  27.0 11.8 33.8  0.9  20.9 19.5 40.3 15.1  29.8 54.1 27.7 41.7 17.0  34.2
Ours  61.5 95.0  40.1 60.0 74.9  52.8 46.1  31.6 19.7 50.3  5.9  33.9 25.9 58.2 30.0  48.6 85.2 47.1 48.8 28.5  47.2
Acknowledgements

This work is supported in part by Google, Intel, Amazon, a Vannevar Bush faculty fellowship, a TUM Foundation Fellowship, a TUM-IAS Rudolf Mößbauer Fellowship, the ERC Starting Grant Scan2CAD, and the NSF grants VEC-1539014/1539099, CHS-1528025 and IIS-1763268. It makes use of data from Matterport.
[1] I. Armeni, S. Sax, A. R. Zamir, and S. Savarese. Joint 2D-3D-semantic data for indoor scene understanding. arXiv preprint arXiv:1702.01105, 2017.
[2] M. Atzmon, H. Maron, and Y. Lipman. Point convolutional neural networks by extension operators. arXiv preprint arXiv:1803.10091, 2018.
[3] D. Boscaini, J. Masci, E. Rodolà, and M. Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems, pages 3189-3197, 2016.
[4] D. Boscaini, J. Masci, E. Rodolà, M. M. Bronstein, and D. Cremers. Anisotropic diffusion descriptors. In Computer Graphics Forum, volume 35, pages 431-441. Wiley Online Library, 2016.
[5] B. Burley and D. Lacewell. Ptex: Per-face texture mapping for production rendering. In Computer Graphics Forum, volume 27, pages 1155-1164. Wiley Online Library, 2008.
[6] A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Nießner, M. Savva, S. Song, A. Zeng, and Y. Zhang. Matterport3D: Learning from RGB-D data in indoor environments. arXiv preprint arXiv:1709.06158, 2017.
[7] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
[8] B. Curless and M. Levoy. A volumetric method for building complex models from range images. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pages 303-312. ACM, 1996.
[9] A. Dai, A. X. Chang, M. Savva, M. Halber, T. A. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In CVPR, 2017.
[10] A. Dai and M. Nießner. 3DMV: Joint 3D-multi-view prediction for 3D semantic scene segmentation. arXiv preprint arXiv:1803.10409, 2018.
[11] A. Dai, M. Nießner, M. Zollhöfer, S. Izadi, and C. Theobalt. BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics (TOG), 36(4):76a, 2017.
[12] A. Dai, C. R. Qi, and M. Nießner. Shape completion using 3D-encoder-predictor CNNs and shape synthesis. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.
[13] A. Dai, D. Ritchie, M. Bokeloh, S. Reed, J. Sturm, and M. Nießner. ScanComplete: Large-scale scene completion and semantic segmentation for 3D scans. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2018.
[14] T. Groueix, M. Fisher, V. G. Kim, B. C. Russell, and M. Aubry. A papier-mâché approach to learning 3D surface generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 216-224, 2018.
[15] X. Gu, S. J. Gortler, and H. Hoppe. Geometry images. ACM Transactions on Graphics (TOG), 21(3):355-361, 2002.
[16] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2980-2988. IEEE, 2017.
[17] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017.
[18] J. Huang, Y. Zhou, M. Nießner, J. R. Shewchuk, and L. J. Guibas. QuadriFlow: A scalable and robust method for quadrangulation. In Computer Graphics Forum, volume 37, pages 147-160. Wiley Online Library, 2018.
[19] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, et al. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pages 559-568. ACM, 2011.
[20] W. Jakob, M. Tarini, D. Panozzo, and O. Sorkine-Hornung. Instant field-aligned meshes. ACM Transactions on Graphics, 34(6):189:1-189:15, 2015.
[21] C. Jiang, J. Huang, K. Kashinath, P. Marcus, M. Niessner, et al. Spherical CNNs on unstructured grids. arXiv preprint arXiv:1901.02039, 2019.
[22] O. Kähler, V. A. Prisacariu, C. Y. Ren, X. Sun, P. Torr, and D. Murray. Very high frame rate volumetric integration of depth images on mobile devices. IEEE Transactions on Visualization and Computer Graphics, 21(11):1241-1250, 2015.
[23] Y.-K. Lai, M. Jin, X. Xie, Y. He, J. Palacios, E. Zhang, S.-M. Hu, and X. Gu. Metric-driven RoSy field design and remeshing. IEEE Transactions on Visualization and Computer Graphics, 16(1):95-108, 2010.
[24] Y. LeCun, C. Cortes, and C. Burges. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2010.
[25] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
[26] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks on Riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 37-45, 2015.
[27] D. Maturana and S. Scherer. VoxNet: A 3D convolutional neural network for real-time object recognition. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pages 922-928. IEEE, 2015.
[28] F. Monti, D. Boscaini, J. Masci, E. Rodola, J. Svoboda, and M. M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. In Proc. CVPR, 2017.
[29] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohi, J. Shotton, S. Hodges, and A. Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In Mixed and Augmented Reality (ISMAR), 2011 10th IEEE International Symposium on, pages 127-136. IEEE, 2011.
[30] M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger. Real-time 3D reconstruction at scale using voxel hashing. ACM Transactions on Graphics (TOG), 32(6):169, 2013.
[31] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017.
[32] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. Guibas. Volumetric and multi-view CNNs for object classification on 3D data. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2016.
[33] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pages 5099-5108, 2017.
[34] N. Ray, B. Vallet, W. C. Li, and B. Lévy. N-symmetry direction field design. ACM Transactions on Graphics (TOG), 27(2):10, 2008.
[35] G. Riegler, A. O. Ulusoy, and A. Geiger. OctNet: Learning deep 3D representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[36] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015.
[37] S. Song, S. P. Lichtenberg, and J. Xiao. SUN RGB-D: A RGB-D scene understanding benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 567-576, 2015.
[38] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser. Semantic scene completion from a single depth image. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 190-198. IEEE, 2017.
[39] H. Su, V. Jampani, D. Sun, S. Maji, E. Kalogerakis, M.-H. Yang, and J. Kautz. SPLATNet: Sparse lattice networks for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2530-2539, 2018.
[40] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multi-view convolutional neural networks for 3D shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 945-953, 2015.
[41] M. Tatarchenko, J. Park, V. Koltun, and Q.-Y. Zhou. Tangent convolutions for dense prediction in 3D. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3887-3896, 2018.
[42] N. Verma, E. Boyer, and J. Verbeek. FeaStNet: Feature-steered graph convolutions for 3D shape analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2598-2606, 2018.
[43] P.-S. Wang, Y. Liu, Y.-X. Guo, C.-Y. Sun, and X. Tong. O-CNN: Octree-based convolutional neural networks for 3D shape analysis. ACM Transactions on Graphics (TOG), 36(4):72, 2017.
[44] T. Whelan, R. F. Salas-Moreno, B. Glocker, A. J. Davison, and S. Leutenegger. ElasticFusion: Real-time dense SLAM and light source estimation. The International Journal of Robotics Research, 35(14):1697-1716, 2016.
[45] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912-1920, 2015.
[46] H. Xu, M. Dong, and Z. Zhong. Directionally convolutional networks for 3D shape segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2698-2707, 2017.
Operator wall floor cab bed chair sofa table door wind bkshf pic cntr desk curt fridg show toil sink bath other ave PN + (A) 55. 367 80.2 23.1 41.6 54.1 55.9 68.6 11.2 20.0 41.1 5.3 37.5Operator wall floor cab bed chair sofa table door wind bkshf pic cntr desk curt fridg show toil sink bath other ave PN + (A) 55.7 80.2 23.1 41.6 54.1 55.9 68.6 11.2 20.0 41.1 5.3 37.5 36.2
6 (a) Pointcloud Operator wall floor cab bed chair sofa table door wind bkshf pic cntr desk curt fridg show toil sink bath other ave PN + (A) 66. 58.0 14.0 54.3 42.1 45.4 30.9 43.0 67.7 47.9 55.8 32.2833Ours(A) 51.5 87.1 26.0 44.7 65.0 46.4 42.5 18.5 31.4 29.0 8.0 40.6 24.9 11.5 18.9 34.9 61.2 43.0 50.2 23.8 38.0 Ours 64.8 90.0 39.3 65. 6 94.7 29.9 50.5 64.9 52.9 56.5 17.4 19.7 45.0 0.0 36.5 30.4 21.5 13.5 19.1 49.6 30.3 45.6 16.6 38.1Ours(A) 51.5 87.1 26.0 44.7 65.0 46.4 42.5 18.5 31.4 29.0 8.0 40.6 24.9 11.5 18.9 34.9 61.2 43.0 50.2 23.8 38.0 Ours 64.8 90.0 39.3 65.8 74.8 66.6 50.5 33.9 35.6 58.0 14.0 54.3 42.1 45.4 30.9 43.0 67.7 47.9 55.8 32.2 50.6 (a) Pointcloud Operator wall floor cab bed chair sofa table door wind bkshf pic cntr desk curt fridg show toil sink bath other ave PN + (A) 66.6 94.7 29.9 50.5 64.9 52.9 56.5 17.4 19.7 45.0 0.0 36.5 30.4 21.5 13.5 19.1 49.6 30.3 45.6 16.6 38.1
| [] |
Almost Surely √T Regret Bound for Adaptive LQR

Yiwen Lu, Yilin Mo

arXiv:2301.05537 (https://export.arxiv.org/pdf/2301.05537v1.pdf)

Abstract: The Linear-Quadratic Regulation (LQR) problem with unknown system parameters has been widely studied, but it has remained unclear whether Õ(√T) regret, which is the best known dependence on time, can be achieved almost surely. In this paper, we propose an adaptive LQR controller with an almost sure Õ(√T) regret upper bound. The controller features a circuit-breaking mechanism, which circumvents potential safety breaches and guarantees the convergence of the system parameter estimate, but is shown to be triggered only finitely often and hence has negligible effect on the asymptotic performance of the controller. The proposed controller is also validated via simulation on the Tennessee Eastman Process (TEP), a commonly used industrial process example.

The authors are with the Department of Automation and BNRist,
I. INTRODUCTION
Adaptive control, the study of decision-making under parametric uncertainty of dynamical systems, has been pursued for decades [1]–[4]. Although early research mainly focused on the aspects of convergence and stability, recent years have witnessed significant advances in the quantitative performance analysis of adaptive controllers, especially for multivariate systems. In particular, in the adaptive Linear-Quadratic Regulation (LQR) setting considered in this paper, the controller attempts to solve the stochastic LQR problem without access to the true system parameters, and its performance is evaluated via regret, the cumulative deviation from the optimal cost over time. This adaptive LQR setting has been widely studied in the past decade [5]–[11], but it has remained unclear whether adaptive controllers for LQR can achieve Õ(√T) regret, whose asymptotic dependence on the time T is the best known (up to poly-logarithmic factors), almost surely.
Existing regret upper bounds for adaptive LQR, summarized in Table I, are either weaker than almost-sure in terms of the type of probabilistic guarantee, or suboptimal in terms of asymptotic dependence on time. By means of optimism in the face of uncertainty [7], Thompson sampling [6], or an ε-greedy algorithm [9], an Õ(√T) regret can be achieved with probability at least 1 − δ. In other words, these algorithms may fail to converge to the optimal controller, or could even be destabilizing, with a non-zero probability δ. In practice, such a failure probability may hinder the application of these algorithms in safety-critical scenarios. From a theoretical perspective, we argue that it is highly difficult to extend these algorithms to provide stronger performance guarantees. To be specific, despite different exploration strategies, the aforementioned methods all compute the control input using a linear feedback gain synthesized from a least-squares estimate of the system parameters. Due to Gaussian process noise, the derived linear feedback gain may be destabilizing, regardless of the amount of data collected. For these algorithms, the probability of such a catastrophic event is bounded by a positive δ > 0, which is a predetermined design parameter and cannot be changed during online operation.
Alternative methods that may preclude the above-described failure probability include introducing additional stability assumptions, using parameter estimation algorithms with stronger convergence guarantees, and adding a layer of safeguard around the linear feedback gain. Faradonbeh et al. [10] achieve Õ(√T) regret almost surely under the assumption that the closed-loop system remains stable all the time, based on a stabilization set obtained from adaptive stabilization [12]. Since the stabilization set is estimated from finite data and violates the desired property with a nonzero probability, this method essentially has a nonzero failure probability as well. Guo [13] achieves sub-linear regret almost surely by adopting a variant of ordinary least squares with annealing weights assigned to recent data, a parameter estimation algorithm that converges even with unstable trajectory data. However, the stronger convergence guarantee may come at the cost of a less sharp asymptotic rate, and it is unclear whether the regret of this method can achieve Õ(√T) dependence on time. Wang et al. [11] achieve Õ(√T) regret with a convergence-in-probability guarantee, via the use of a switched, rather than linear, feedback controller, which falls back to a known stabilizing gain on the detection of large states. However, the controller design of this work does not rule out frequent switching between the learned and fallback gains, a typical source of instability in switched linear systems [14], which restricts the correctness of their results to the case of commutative closed-loop system matrices. Moreover, the regret analysis in this work is not sufficiently refined to lead to almost sure guarantees. In this paper, we present an adaptive LQR controller with Õ(√T) regret almost surely, only assuming the availability of a known stabilizing feedback gain, which is a common assumption in the literature.
This is achieved by a "circuit-breaking" mechanism motivated similarly to [11], which circumvents safety breaches by supervising the norm of the state and deploying the known stabilizing gain when necessary. In contrast to [11], however, by enforcing a properly chosen dwell time on the fallback mode, the stability of the closed-loop system under our proposed controller is unaffected by switching. Another insight underlying our analysis is that the above-mentioned circuit-breaking mechanism is triggered only finitely often. This fact implies that the conservativeness of the proposed controller, which prevents the system from being destabilized in the early stage and hence ensures the convergence of the system parameter estimates, may have negligible effect on the asymptotic performance of the system. Although similar phenomena have also been observed in [11], [15], we derive an upper bound on the time of the last trigger (Theorem 3, item 7)), a property missing from previous works that paves the way to our almost sure regret guarantee.
Outline
The remainder of this manuscript is organized as follows: Section II introduces the problem setting. Section III describes the proposed controller. Section IV states and proves the theoretical properties of the closed-loop system under the proposed controller, and establishes the main regret upper bound. Section V validates the theoretical results using a numerical example. Finally, Section VI summarizes the manuscript and discusses directions for future work.
Notations
The set of nonnegative integers is denoted by $\mathbb{N}$, and the set of positive integers by $\mathbb{N}^*$. The $n$-dimensional Euclidean space is denoted by $\mathbb{R}^n$, and the $n$-dimensional unit sphere by $\mathbb{S}^n$. The $n \times n$ identity matrix is denoted by $I_n$. For a square matrix $M$, $\rho(M)$ denotes its spectral radius and $\operatorname{tr}(M)$ its trace. For a real symmetric matrix $M$, $M \succ 0$ denotes that $M$ is positive definite. For any matrix $M$, $M^\dagger$ denotes its Moore–Penrose inverse. For two vectors $u, v \in \mathbb{R}^n$, $\langle u, v\rangle$ denotes their inner product. For a vector $v$, $\|v\|$ denotes its 2-norm, and $\|v\|_P = \|P^{1/2}v\|$ for $P \succ 0$. For a matrix $M$, $\|M\|$ denotes its induced 2-norm, and $\|M\|_F$ its Frobenius norm. For a random vector $x$, $x \sim \mathcal{N}(\mu, \Sigma)$ denotes that $x$ is Gaussian distributed with mean $\mu$ and covariance $\Sigma$. For a random variable $X$, $X \sim \chi^2(n)$ denotes that $X$ has a chi-squared distribution with $n$ degrees of freedom. $\mathbb{P}(\cdot)$ denotes the probability operator, and $\mathbb{E}[\cdot]$ the expectation operator.
For non-negative quantities $f, g$, which can be deterministic or random, we write $f \lesssim g$ to denote that $f \le C_1 g + C_2$ for some universal constants $C_1 > 0$ and $C_2 > 0$, and $f \gtrsim g$ to denote that $g \lesssim f$. For a random function $f(T)$ and a deterministic function $g(T)$, we write $f(T) = O(g(T))$ to denote that $\limsup_{T\to\infty} f(T)/g(T) < \infty$, and $f(T) = \tilde O(g(T))$ to denote that $f(T) = O(g(T)(\log T)^\alpha)$ for some $\alpha > 0$.
II. PROBLEM FORMULATION
This paper considers a fully observed discrete-time linear system with Gaussian process noise, specified as follows:
\[ x_{k+1} = A x_k + B u_k + w_k, \quad k \in \mathbb{N}^*, \quad x_1 = 0, \tag{1} \]
where $x_k \in \mathbb{R}^n$ is the state, $u_k \in \mathbb{R}^m$ is the control input, and $w_k \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, W)$ is the process noise, where $W \succ 0$. It is assumed without loss of generality that $W = I_n$, but all the conclusions apply to general positive definite $W$ up to the scaling of constants. It is also assumed that the system and input matrices $A, B$ are unknown to the controller, but $(A, B)$ is controllable, and that the system is open-loop stable, i.e., $\rho(A) < 1$. Consequently, there exists $P_0 \succ 0$ that satisfies the discrete Lyapunov equation
\[ A^\top P_0 A - P_0 + Q = 0, \tag{2} \]
and there exists a scalar $0 < \rho_0 < 1$ such that
\[ A^\top P_0 A \prec \rho_0 P_0. \tag{3} \]
Remark 1. It has been commonly assumed in the literature [9], [11] that $(A, B)$ is stabilizable by a known feedback gain $K_0$, i.e., $\rho(A + BK_0) < 1$. In such a case, the system can be rewritten as
\[ x_{k+1} = \bar A x_k + B \bar u_k + w_k, \tag{4} \]
where $\bar A = A + BK_0$ and $\bar u_k = u_k - K_0 x_k$, which reduces the problem to the case of open-loop stable systems. Therefore, we may assume without loss of generality that the system is open-loop stable, i.e., $K_0 = 0$, for simplicity of notation.
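As a quick numerical check of this reduction (an illustrative example system and gain, not from the paper): driving the original system with $u_k = K_0 x_k + \bar u_k$ reproduces exactly the trajectory of the transformed system (4) driven by $\bar u_k$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.2, 0.5], [0.0, 0.9]])   # unstable example: rho(A) = 1.2
B = np.array([[0.0], [1.0]])
K0 = np.array([[-0.6, -0.9]])            # a known stabilizing gain (assumption)

A_bar = A + B @ K0                       # transformed system matrix in (4)
assert max(abs(np.linalg.eigvals(A_bar))) < 1   # rho(A + B K0) < 1

T = 50
w = rng.standard_normal((T, 2))          # shared process noise
u_new = rng.standard_normal((T, 1))      # shared input u'_k

x = np.zeros(2)                          # original system, u_k = K0 x_k + u'_k
z = np.zeros(2)                          # transformed system (4), input u'_k
for k in range(T):
    x = A @ x + B @ (K0 @ x + u_new[k]) + w[k]
    z = A_bar @ z + B @ u_new[k] + w[k]
assert np.allclose(x, z)                 # identical trajectories
```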
The following average infinite-horizon quadratic cost is considered:
\[ J = \limsup_{T\to\infty} \frac{1}{T}\, \mathbb{E}\Big[ \sum_{k=1}^{T} x_k^\top Q x_k + u_k^\top R u_k \Big], \tag{5} \]
where $Q \succ 0$, $R \succ 0$ are fixed weight matrices specified by the system operator. It is well known that the optimal control law is the linear feedback control law of the form $u(x) = K_* x$, where the optimal feedback gain $K_*$ can be specified as
\[ K_* = -\big(R + B^\top P_* B\big)^{-1} B^\top P_* A, \tag{6} \]
and $P_*$ is the unique positive definite solution to the discrete algebraic Riccati equation
\[ P_* = A^\top P_* A - A^\top P_* B\big(R + B^\top P_* B\big)^{-1} B^\top P_* A + Q. \tag{7} \]
The corresponding optimal cost is
\[ J_* = \operatorname{tr}\big(\mathbb{E}[w_k w_k^\top] P_*\big) = \operatorname{tr}(W P_*) = \operatorname{tr}(P_*). \tag{8} \]
The matrix $P_*$ also satisfies the discrete Lyapunov equation
\[ (A + BK_*)^\top P_* (A + BK_*) - P_* + Q + K_*^\top R K_* = 0, \tag{9} \]
which implies that there exists a scalar $0 < \rho_* < 1$ such that
\[ (A + BK_*)^\top P_* (A + BK_*) \prec \rho_* P_*. \tag{10} \]
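Equations (6)–(10) can be checked numerically. The sketch below (an illustration with an arbitrary example system, not code from the paper) solves the Riccati equation (7) by fixed-point iteration; in practice an off-the-shelf solver such as SciPy's `solve_discrete_are` could be used instead.

```python
import numpy as np

A = np.array([[0.9, 0.2], [0.0, 0.8]])   # open-loop stable example (assumption)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

# Solve the DARE (7) by fixed-point (value) iteration.
P = np.eye(2)
for _ in range(1000):
    G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = A.T @ P @ A - A.T @ P @ B @ G + Q

K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain (6)

# Fixed point of (7):
G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
assert np.allclose(P, A.T @ P @ A - A.T @ P @ B @ G + Q)
# Closed-loop Lyapunov identity (9):
Acl = A + B @ K
assert np.allclose(Acl.T @ P @ Acl - P + Q + K.T @ R @ K, np.zeros((2, 2)))
# rho(A + B K*) < 1, consistent with (10):
assert max(abs(np.linalg.eigvals(Acl))) < 1
print("J* = tr(P*) =", np.trace(P))      # optimal cost (8) with W = I
```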
Since $A, B$ are unknown to the controller in the considered setting, it is not possible to directly compute the optimal control law from (6) and (7). Instead, the controller learns the optimal control law online, and its performance is measured via the regret defined as follows:
\[ R(T) = \sum_{k=1}^{T} \big(x_k^\top Q x_k + u_k^\top R u_k\big) - T J_*. \tag{11} \]
The goal of the controller is to minimize the asymptotic growth of R(T ).
III. CONTROLLER DESIGN
The logic of the proposed controller is presented in Algorithm 1. It can also be illustrated by the block diagram in Fig. 1. The remainder of this section is devoted to explaining the components of the controller.
Algorithm 1 Proposed controller
1: $\hat K_0 \leftarrow 0$
2: $\xi \leftarrow 0$
3: for $k = 1, 2, \ldots$ do
4:   Update parameter estimates $\hat A_k, \hat B_k$ using (12)
5:   if $(\hat A_k, \hat B_k)$ is controllable then
6:     Update $\hat K_k$ from (6)–(7), replacing $(A, B)$ with $(\hat A_k, \hat B_k)$
⋮
19:   $u^{pr}_k \leftarrow k^{-1/4} v_k$, where $v_k \sim \mathcal{N}(0, I_m)$
20:   Apply $u_k \leftarrow u^{cb}_k + u^{pr}_k$
The proposed controller is a variant of the certainty equivalent controller [16], where the latter applies the input $u^{ce}_k = \hat K_k x_k$, with the feedback gain $\hat K_k$ calculated from (6)–(7) by treating the current estimates $\hat A_k, \hat B_k$ of the system parameters as the true values. The proposed controller differs from the standard certainty equivalent controller in that i) it includes a "circuit-breaking" mechanism, which replaces $u^{ce}_k$ with zero in certain circumstances; and ii) it superposes a probing noise $u^{pr}_k$ on the control input at each step. The circuit-breaking mechanism replaces $u^{ce}_k$ with zero for the subsequent $t_k$ steps when the norm of the certainty equivalent control input $u^{ce}_k$ exceeds a threshold $M_k$. The intuition behind this design is that a large certainty equivalent control input is indicative of having applied a destabilizing feedback gain recently, and circuit-breaking may prevent the state from exploding by leveraging the innate stability of the system, and hence help with the convergence of the parameter estimator and the asymptotic performance of the controller. The threshold $M_k$ is increased with the time index $k$, so that the "false alarm rate" of circuit-breaking, caused by the occasional occurrence of large noise vectors, decays to zero. The dwell time $t_k$ is also increased with time, in order to circumvent the potential oscillation of the system caused by frequent switching between $\hat K$ and $0$ (cf. [17]). Both $M_k$ and $t_k$ are chosen to grow logarithmically with $k$, which supports our technical guarantees.

Fig. 1: Block diagram of the closed-loop system under the proposed controller. The control input is the superposition of a deterministic input $u^{cb}$ and a random probing input $u^{pr}$. The deterministic part $u^{cb}$ is normally the same as the certainty equivalent input $u^{ce}$, but takes the value zero when circuit-breaking is triggered, where $\xi$ is a counter for circuit-breaking. The certainty equivalent gain is updated using the parameter estimator in the meantime.
Similarly to [9], [11], a probing noise $u^{pr}_k$ is superposed on the control input at each step to provide sufficient excitation to the system, which is required for the estimation of the system parameters. Specifically, the probing noise is chosen to be $u^{pr}_k = k^{-1/4} v_k$, where $v_k \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_m)$, which corresponds to the optimal rate of regret.
The estimates $\hat A_k, \hat B_k$ of the system parameters are updated using an ordinary least squares estimator. Denote $\Theta = [A\ B]$, $\hat\Theta_k = [\hat A_k\ \hat B_k]$, and $z_k = [x_k^\top\ u_k^\top]^\top$; then, according to $x_{k+1} = \Theta z_k + w_k$, the ordinary least squares estimator can be specified as
\[ \hat\Theta_k = \Big(\sum_{t=1}^{k-1} x_{t+1} z_t^\top\Big)\Big(\sum_{t=1}^{k-1} z_t z_t^\top\Big)^{\dagger}. \tag{12} \]
Remark 2. In the presented algorithm, the certainty equivalent gain is updated at every step $k$ (see line 6 of Algorithm 1), but it may also be updated "logarithmically often" [11] (e.g., at steps $k = 2^i$, $i \in \mathbb{N}^*$), which is more computationally efficient in practice. It can be verified that all our theoretical results also apply to the case of logarithmically often updates.
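The controller logic above can be sketched as follows. This is a self-contained NumPy illustration, not code from the paper: the specific threshold $M_k = 10\log(k+1)$, the dwell time $t_k = \lceil\log(k+1)\rceil$, and the finiteness guard standing in for the controllability test of line 5 are assumptions for this sketch, since the corresponding lines of Algorithm 1 are not reproduced here.

```python
import numpy as np

def dare_gain(A, B, Q, R, iters=300):
    """Certainty equivalent gain: iterate the Riccati map (7), then apply (6)."""
    n, m = A.shape[0], B.shape[1]
    P = np.eye(n)
    with np.errstate(over="ignore", invalid="ignore"):
        for _ in range(iters):
            M = R + B.T @ P @ B
            if not (np.all(np.isfinite(P)) and np.all(np.isfinite(M))):
                return np.full((m, n), np.nan)  # diverged: estimate not stabilizable
            P = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(M, B.T @ P @ A) + Q
        M = R + B.T @ P @ B
        if not np.all(np.isfinite(M)):
            return np.full((m, n), np.nan)
        return -np.linalg.solve(M, B.T @ P @ A)

def run_controller(A, B, T, seed=0):
    n, m = B.shape
    Q, R = np.eye(n), np.eye(m)
    rng = np.random.default_rng(seed)
    K = np.zeros((m, n))                 # K_hat_0 = 0: open-loop fallback
    xi = 0                               # circuit-breaking counter
    x = np.zeros(n)
    xs, zs = [x.copy()], []
    for k in range(1, T + 1):
        if len(zs) > n + m:              # OLS estimate (12) once enough data
            Sxz = sum(np.outer(xp, z) for xp, z in zip(xs[1:], zs))
            Szz = sum(np.outer(z, z) for z in zs)
            Theta = Sxz @ np.linalg.pinv(Szz)
            Knew = dare_gain(Theta[:, :n], Theta[:, n:], Q, R)
            if np.all(np.isfinite(Knew)):   # stand-in for the controllability test
                K = Knew
        M_k = 10.0 * np.log(k + 1)       # threshold (assumed logarithmic growth)
        t_k = int(np.log(k + 1)) + 1     # dwell time (assumed logarithmic growth)
        u_ce = K @ x                     # certainty equivalent input
        if np.linalg.norm(u_ce) > M_k:   # trigger circuit-breaking for t_k steps
            xi = t_k
        if xi > 0:
            u_cb, xi = np.zeros(m), xi - 1
        else:
            u_cb = u_ce
        u_pr = k ** -0.25 * rng.standard_normal(m)   # probing noise k^(-1/4) v_k
        u = u_cb + u_pr
        zs.append(np.concatenate([x, u]))            # z_k = [x_k; u_k]
        x = A @ x + B @ u + rng.standard_normal(n)   # w_k ~ N(0, I)
        xs.append(x.copy())
    return np.array(xs)

# Example run on a small open-loop stable system (illustrative values).
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
traj = run_controller(A, B, T=300)
assert np.all(np.isfinite(traj)) and traj.shape == (301, 2)
```

Note the design choice echoed from the text: when the estimated pair yields no usable gain (here detected via divergence of the Riccati iteration), the previous gain is kept, and the circuit breaker alone guards against a destabilizing $\hat K_k$.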
IV. MAIN RESULTS
Underlying our analysis of the closed-loop system under the proposed controller are two random times, defined below:
\[ T_{\mathrm{stab}} := \inf\Big\{T \ \Big|\ \big(A^{t_k}\big)^\top P_* A^{t_k} \prec \rho P_*,\ \big(A+B\hat K_k\big)^\top P_*\big(A+B\hat K_k\big) \prec \rho P_*,\ \forall k \ge T\Big\}, \tag{13} \]
where $\rho = (1+\rho_*)/2$, and $t_k$ is the dwell time defined in line 12 of Algorithm 1. And
\[ T_{\mathrm{nocb}} := \inf\big\{T \mid u^{cb}_k \equiv u^{ce}_k,\ \forall k \ge T\big\}. \tag{14} \]
With the above two random times, the evolution of the system can be divided into three stages: 1) From the beginning to $T_{\mathrm{stab}}$, the adaptive controller gradually refines its estimate of the system parameters, and hence improves the performance of the certainty equivalent feedback gain, until the system becomes stabilized in the sense that there is a common Lyapunov function between the two modes under circuit-breaking, as indicated in (13). 2) From $T_{\mathrm{stab}}$ to $T_{\mathrm{nocb}}$, the closed-loop system is stable, as is ensured by the aforementioned common Lyapunov function, and under mild regularity conditions on the noise, an upper bound on the magnitude of the certainty equivalent control input $u^{ce}_k$ eventually drops below the circuit-breaking threshold $M_k$.
3) From $T_{\mathrm{nocb}}$ on, circuit-breaking is not triggered any more, and the system behaves similarly to the closed-loop system under the optimal controller, with only small perturbations on the feedback gain, which stem from the parameter estimation error and converge to zero. In the following theorem, we state several properties of the closed-loop system, based on which the above two random times are quantified:

Theorem 3. Let $n, m$ be the dimensions of the state and input vectors respectively, and let $P_0, \rho_0, P_*, \rho_*$ be defined in (2), (3), (7), (10) respectively. Then the following properties hold:
1) For $0 < \delta \le 1/2$, the event
\[ E_{\mathrm{noise}}(\delta) := \Big\{ \max\{\|w_k\|, \|v_k\|\} \le 2\sqrt{(n+1)\log(k/\delta)},\ \forall k \in \mathbb{N}^* \Big\} \tag{15} \]
occurs with probability at least $1 - 2\delta$.
2) For $0 < \delta \le 1/(8n^2)$, the event
\[ E_{\mathrm{cov}}(\delta) := \Big\{ \Big\|\sum_{i=1}^{k}\big(w_i w_i^\top - I_n\big)\Big\| \le 7n\sqrt{k}\,\log(8n^2k/\delta),\ \forall k \in \mathbb{N}^* \Big\} \tag{16} \]
occurs with probability at least $1 - \delta$.
3) On the event $E_{\mathrm{noise}}(\delta)$, it holds that
\[ \|x_k\| \le C_x \log(k/\delta),\ \forall k \in \mathbb{N}^*, \tag{17} \]
where
\[ C_x = \frac{(\|B\|+1)\big(2\sqrt{n+1}+1\big)\sqrt{\|P_0\|\|P_0^{-1}\|}}{1 - \rho_0^{1/2}}. \tag{18} \]
4) For $0 < \delta \le 1/6$, the event
\[ E_{\mathrm{cross}}(\delta) := \Big\{ \Big|\sum_{i=1}^{k} w_i^\top P_* \big(A x_i + B u^{cb}_i\big)\Big| \le C_{\mathrm{cross}}\sqrt{k}\,(\log(k/\delta))^2,\ \forall k \in \mathbb{N}^* \Big\} \tag{19} \]
occurs with probability at least $1 - 6\delta$, where
\[ C_{\mathrm{cross}} = 4\sqrt{n+1}\,\|P_*\|\big(\|A\|C_x + \|B\|\big). \tag{20} \]
5) For $\delta$ satisfying
\[ 0 < \delta < \min\big\{(800 C_V)^{-1},\ \exp\big(-24(m+n)^{1/3}\big)\big\}, \tag{21} \]
the event
\[ E_{\mathrm{est}}(\delta) := \Big\{ \|\hat\Theta_k - \Theta\|^2 \le C_\Theta\, k^{-1/2}\log(k/\delta),\ \forall k \ge k_0 \Big\} \tag{22} \]
occurs with probability at least $1 - 6\delta$, where
\[ k_0 = 600(m+n)\log(1/\delta) + 5400, \tag{23} \]
\[ C_V = C_x + 2\sqrt{n+1} + 1, \tag{24} \]
\[ C_\Theta = (3200n/9)(5n/2 + 2). \tag{25} \]
6) On the event $E_{\mathrm{est}}(\delta)$, it holds that
\[ T_{\mathrm{stab}} \lesssim (\log(1/\delta))^2. \tag{26} \]
7) On the event $E_{\mathrm{noise}}(\delta) \cap E_{\mathrm{est}}(\delta)$, for any $\alpha > 0$, it holds that
\[ T_{\mathrm{nocb}} \lesssim (1/\delta)^{\alpha}. \tag{27} \]
Remark 4. The conclusions of Theorem 3 can be explained as follows. Items 1) and 2) define two high-probability events on the regularity of the noise. Item 3) bounds the state norm under regular noise, based on which item 4) bounds the growth of a cross term between noise and state that is useful in the regret analysis. Item 5) bounds the parameter estimation error, based on which item 6) bounds $T_{\mathrm{stab}}$. Finally, item 7) states that the circuit-breaking mechanism is triggered only finitely often, and bounds $T_{\mathrm{nocb}}$, the time after which circuit-breaking is not triggered any more.
The following corollary characterizes the tail probabilities of T stab and T nocb , which follows directly from items 6) and 7) of Theorem 3:
Corollary 5. For $T_{\mathrm{stab}}$ defined in (13) and $T_{\mathrm{nocb}}$ defined in (14), as $T \to \infty$, it holds that
1)
\[ \mathbb{P}(T_{\mathrm{stab}} \ge T) = O\big(\exp(-c\sqrt{T})\big), \tag{28} \]
where $c > 0$ is a system-dependent constant.
2) For any $\alpha > 0$,
\[ \mathbb{P}(T_{\mathrm{nocb}} \ge T) = O(T^{-\alpha}). \tag{29} \]
Building upon the properties stated in Theorem 3, a high-probability bound on the regret under the proposed controller can be established:

Theorem 6. Given a failure probability $\delta$, there exists a constant $T_0 \lesssim (1/\delta)^{1/4}$ such that for any fixed $T > T_0$, it holds with probability at least $1 - \delta$ that the regret defined in (11) satisfies
\[ R(T) \lesssim (1/\delta)^{1/4} + \sqrt{T}\,(\log(T/\delta))^3. \tag{30} \]
As corollaries of Theorem 6, one can obtain a bound on the tail probability of R(T ) (Theorem 7), and the main conclusion of this work, i.e., an almost sure bound on R(T ) (Theorem 8):
Theorem 7. For sufficiently large $T$, it holds that
\[ \mathbb{P}\big(R(T) \ge C_R\sqrt{T}(\log T)^3\big) \le \frac{1}{T^2}, \tag{31} \]
where C R is a system-dependent constant.
Proof. The conclusion follows from invoking Theorem 6 with δ = 1/T 2 .
Theorem 8. It holds almost surely that
\[ R(T) = \tilde O\big(\sqrt{T}\big). \tag{32} \]
Proof. By Theorem 7, we have
\[ \sum_{T=1}^{\infty} \mathbb{P}\big(R(T) \ge C_R\sqrt{T}(\log T)^3\big) < +\infty. \tag{33} \]
By the Borel–Cantelli lemma, it holds almost surely that the event
\[ \big\{R(T) \ge C_R\sqrt{T}(\log T)^3\big\} \tag{34} \]
occurs only finitely often, i.e.,
\[ R(T) = \tilde O\big(\sqrt{T}\big) \quad \text{a.s.} \tag{35} \]
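As a side illustration of the Borel–Cantelli step (not from the paper): when the event probabilities are summable, only finitely many events occur on almost every sample path, and the expected total number of occurrences equals the sum. A quick Monte Carlo check with $\mathbb{P}(A_T) = 1/T^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
Tmax, paths = 2_000, 2_000
p = 1.0 / np.arange(1, Tmax + 1) ** 2        # P(A_T) = 1/T^2, summable
occ = rng.random((paths, Tmax)) < p          # independent events A_T on each path
counts = occ.sum(axis=1)                     # number of events per sample path
assert abs(counts.mean() - np.pi ** 2 / 6) < 0.15   # E[#events] = sum 1/T^2
assert counts.max() < 20                     # every sampled path sees only a few events
```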
The remainder of this section is dedicated to proving Theorems 3 and 6.
A. Proof of Theorem 3, item 1)
Proof. Since $w_k \sim \mathcal{N}(0, I_n)$, it holds that $\|w_k\|^2 \sim \chi^2(n)$. Applying the Chernoff bound, for $X \sim \chi^2(n)$, any $a > 0$, and any $0 < t < 1/2$, it holds that
\[ \mathbb{P}(X \ge a) \le \mathbb{E}\big[e^{tX}\big]/e^{ta} = (1-2t)^{-n/2}\exp(-ta). \tag{36} \]
Choosing $t = 1/4$, we have
\[ \mathbb{P}(\|w_k\| \ge a) \le 2^{n/2}\exp(-a^2/4) \tag{37} \]
for any $k \in \mathbb{N}^*$ and $a > 0$. Invoking (37) with $a_k = 2(\log(ck^2/\delta))^{1/2}$, where $c = 2^{n/2}\pi^2/6$, we have
\[ \sum_{k=1}^{\infty} \mathbb{P}(\|w_k\| \ge a_k) \le \sum_{k=1}^{\infty} 2^{n/2}c^{-1}\delta/k^2 = \delta, \tag{38} \]
i.e., it holds with probability at least $1-\delta$ that
\[ \|w_k\| \le 2\big(\log(ck^2/\delta)\big)^{1/2} \le 2\sqrt{(n+1)\log(k/\delta)},\ \forall k \in \mathbb{N}^*. \tag{39} \]
Similarly, it also holds with probability at least $1-\delta$ that
\[ \|v_k\| \le 2\sqrt{(n+1)\log(k/\delta)},\ \forall k \in \mathbb{N}^*. \tag{40} \]
Combining (39) and (40) leads to the conclusion.
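As a quick numerical sanity check of the tail bound (37) (an illustration, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, N = 3, 4.0, 200_000
w = rng.standard_normal((N, n))                      # w ~ N(0, I_n)
emp = np.mean(np.linalg.norm(w, axis=1) >= a)        # empirical P(||w|| >= a)
bound = 2 ** (n / 2) * np.exp(-a ** 2 / 4.0)         # RHS of (37)
assert emp <= bound                                  # the Chernoff bound holds
```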
B. Proof of Theorem 3, item 2)
We start with a concentration bound on sums of products of Gaussian random variables:

Lemma 9. Let $X_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1)$, $Y_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1)$, and let $\{X_i\}, \{Y_i\}$ be mutually independent. Then:
1) With probability at least $1 - 4\delta$,
\[ \Big|\sum_{i=1}^{k} X_i^2 - k\Big| \le 7\sqrt{k}\log(k/\delta),\ \forall k \in \mathbb{N}^*. \tag{41} \]
2) With probability at least $1 - 8\delta$,
\[ \Big|\sum_{i=1}^{k} X_i Y_i\Big| \le 5\sqrt{k}\log(k/\delta),\ \forall k \in \mathbb{N}^*. \tag{42} \]
Proof. Since $\sum_{i=1}^k X_i^2 \sim \chi^2(k)$, according to [18, Lemma 1], for any $a > 0$ and any $k \in \mathbb{N}^*$, it holds that
\[ \mathbb{P}\Big(\Big|\sum_{i=1}^{k} X_i^2 - k\Big| \ge 2\sqrt{ka} + 2a\Big) \le 2\exp(-a). \tag{43} \]
Fix $k$ and choose $a = \log(k^2/\delta)$; it follows that with probability at least $1 - 2\delta/k^2$,
\[ \Big|\sum_{i=1}^{k} X_i^2 - k\Big| \le 2\sqrt{k\log(k^2/\delta)} + 2\log(k^2/\delta) \le 7\sqrt{k}\log(k/\delta). \tag{44} \]
Taking the union bound over $k \in \mathbb{N}^*$, one can show that (41) holds with probability at least $1 - 2\delta\sum_{k=1}^{\infty}(1/k^2) > 1 - 4\delta$, and hence claim 1) is proved. Since $X_iY_i = (X_i+Y_i)^2/4 - (X_i-Y_i)^2/4$, and $X_i+Y_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,2)$, $X_i-Y_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,2)$, claim 2) follows from applying claim 1) to $\{(X_i+Y_i)/\sqrt{2}\}$ and $\{(X_i-Y_i)/\sqrt{2}\}$ respectively and taking the union bound.
Theorem 3, item 2) follows from the above lemma:

Proof. Applying Lemma 9, item 1) to the diagonal elements, and Lemma 9, item 2) to the off-diagonal elements, of $\sum_{i=1}^k (w_iw_i^\top - I_n)$, and taking the union bound, one can show that with probability at least $1 - 8n^2\delta$,
\[ \Big|\Big[\sum_{i=1}^{k}\big(w_iw_i^\top - I_n\big)\Big]_{jl}\Big| \le 7\sqrt{k}\log(k/\delta), \tag{45} \]
where the inequality holds component-wise. Hence, with probability at least $1 - 8n^2\delta$,
\[ \Big\|\sum_{i=1}^{k}\big(w_iw_i^\top - I_n\big)\Big\|^2 \le \Big\|\sum_{i=1}^{k}\big(w_iw_i^\top - I_n\big)\Big\|_F^2 \le n^2\big(7\sqrt{k}\log(k/\delta)\big)^2, \tag{46} \]
and scaling the failure probability leads to the conclusion.
C. Proof of Theorem 3, item 3)
Proof. Notice that
\[ x_k = A^{k-2}d_1 + A^{k-3}d_2 + \cdots + d_{k-1}, \tag{47} \]
where $d_k = B(u^{cb}_k + u^{pr}_k) + w_k$. On $E_{\mathrm{noise}}(\delta)$, it holds that
\[ \|d_k\| \le \|B\|\log(k) + 2(\|B\|+1)\sqrt{(n+1)\log(k/\delta)} \le (\|B\|+1)\big(2\sqrt{n+1}+1\big)\log(k/\delta). \tag{48} \]
Furthermore, from (2) and (47), it holds that
\[ \|x_k\|_{P_0} \le \rho_0^{(k-2)/2}\|d_1\|_{P_0} + \rho_0^{(k-3)/2}\|d_2\|_{P_0} + \cdots + \|d_{k-1}\|_{P_0} \le \frac{1}{1-\rho_0^{1/2}}\max_{i<k}\|d_i\|_{P_0}, \tag{49} \]
from which the conclusion follows.
D. Proof of Theorem 3, item 4)
This result is a corollary of a time-uniform version of the Azuma–Hoeffding inequality [19], stated below:

Lemma 10. Let $\{\varphi_k\}_{k\ge1}$ be a martingale difference sequence adapted to the filtration $\{\mathcal{F}_k\}$ satisfying $|\varphi_k| \le d_k$ a.s. Then with probability at least $1 - 4\delta$, it holds that
\[ \Big|\sum_{i=1}^{k}\varphi_i\Big| \le 2\sqrt{\Big(\sum_{i=1}^{k}d_i^2\Big)\log(k/\delta)},\quad \forall k. \tag{50} \]
Proof. By the Azuma–Hoeffding inequality [19], for a fixed $k$, it holds with probability at least $1 - 2\delta/k^2$ that
\[ \Big|\sum_{i=1}^{k}\varphi_i\Big| \le \sqrt{2\Big(\sum_{i=1}^{k}d_i^2\Big)\log(k^2/\delta)} \le 2\sqrt{\Big(\sum_{i=1}^{k}d_i^2\Big)\log(k/\delta)}. \tag{51} \]
Taking the union bound over $k \in \mathbb{N}^*$, one can prove that (50) holds with probability at least $1 - 2\delta\sum_{k=1}^{\infty}(1/k^2) > 1 - 4\delta$.
Theorem 3, item 4) follows from the above lemma:
Proof. According to Theorem 3, item 1), we only need to prove
\[ \mathbb{P}\big(E_{\mathrm{cross}}(\delta)\mid E_{\mathrm{noise}}(\delta)\big) \ge 1 - 4\delta. \tag{52} \]
Therefore, we condition the remainder of the proof on the event $E_{\mathrm{noise}}(\delta)$. Let $\mathcal{F}_k$ be the $\sigma$-algebra generated by $v_1, w_1, v_2, w_2, \ldots, v_{k-1}, w_{k-1}, v_k$. Since $x_k \in \mathcal{F}_{k-1}$, $u^{cb}_k \in \mathcal{F}_{k-1}$, and $\mathbb{E}[w_k\mid\mathcal{F}_{k-1}] = 0$ due to symmetry, it holds that
\[ \mathbb{E}\big[w_k^\top P_*\big(Ax_k + Bu^{cb}_k\big)\,\big|\,\mathcal{F}_{k-1}\big] = \mathbb{E}[w_k\mid\mathcal{F}_{k-1}]^\top P_*\big(Ax_k + Bu^{cb}_k\big) = 0, \tag{53} \]
i.e., $\{w_k^\top P_*(Ax_k + Bu^{cb}_k)\}$ is a martingale difference sequence adapted to the filtration $\{\mathcal{F}_k\}$. Furthermore, by Theorem 3, item 3), we have
\[ \big|w_k^\top P_*\big(Ax_k + Bu^{cb}_k\big)\big| \le \|w_k\|\,\|P_*\|\big(\|A\|\|x_k\| + \|B\|\|u^{cb}_k\|\big) \le \tfrac{1}{2}C_{\mathrm{cross}}(\log(k/\delta))^{3/2}. \tag{54} \]
Hence, the conclusion follows from applying Lemma 10.
E. Proof of Theorem 3, item 5)
This subsection is devoted to deriving the time-uniform upper bound on the estimation error stated in Theorem 3, item 5). Throughout this subsection, we denote $\Theta = [A\ B]$, $\hat\Theta_k = [\hat A_k\ \hat B_k]$, $z_k = [x_k^\top\ u_k^\top]^\top$, and $V_k = \sum_{i=1}^{k-1} z_iz_i^\top$. The proof can be split into three parts. Firstly, we characterize the estimation error $\hat\Theta_k - \Theta$ in terms of the maximum and minimum eigenvalues of the regressor covariance matrix $V_k$, using a result on martingale least squares [5]. Secondly, an upper bound on $\|V_k\|$, which is a consequence of the non-explosiveness of the states, is derived as a corollary of Theorem 3, item 3). Finally, an upper bound on $\|V_k^{-1}\|$, or equivalently a lower bound on the minimum eigenvalue of $V_k$, which is a consequence of sufficient excitation of the system, is proved using an anti-concentration bound on block martingale small-ball (BMSB) processes [20]. The three parts are discussed respectively below.
1) Upper bound on the least squares error, in terms of $V_k$:

Lemma 11 ([5, Corollary 1 of Theorem 3]). Let
\[ S_k = \sum_{i=1}^{k}\eta_i m_{i-1}, \qquad U_k = \sum_{i=1}^{k} m_{i-1}m_{i-1}^\top, \tag{55} \]
where $\{\mathcal{F}_k\}_{k\in\mathbb{N}^*}$ is a filtration, $\{\eta_k\}_{k\in\mathbb{N}^*}$ is a random scalar sequence with $\eta_k\mid\mathcal{F}_k$ conditionally $\sigma^2$-sub-Gaussian, and $\{m_k\}_{k\in\mathbb{N}^*}$ is a random vector sequence with $m_k \in \mathcal{F}_k$. Then with probability at least $1-\delta$,
\[ \|S_k\|^2_{(U_0+U_k)^{-1}} \le 2\sigma^2\log\big(\det(U_0)^{-1/2}\det(U_0+U_k)^{1/2}/\delta\big),\quad \forall k \in \mathbb{N}^*, \tag{56} \]
where $U_0 \succ 0$ is an arbitrarily chosen constant positive definite matrix.
Proposition 12. With probability at least $1-\delta$,
\[ \|\hat\Theta_k - \Theta\|^2 \le 2n\|V_k^{-1}\|\Big(\log\frac{n}{\delta} + \frac{m+n}{2}\log\big(1+\|V_k\|\big)\Big),\quad \forall k \ge m+n+1. \tag{57} \]
Proof. Let $\mathcal{F}_k$ be the $\sigma$-algebra generated by $v_1, w_1, v_2, w_2, \ldots, v_{k-1}, w_{k-1}, v_k$. From $x_{k+1} = \Theta z_k + w_k$, we have
\[ \hat\Theta_k - \Theta = \Big(\sum_{i=1}^{k-1} w_iz_i^\top\Big)\Big(\sum_{i=1}^{k-1} z_iz_i^\top\Big)^{\dagger}, \tag{58} \]
where $w_k\mid\mathcal{F}_k \sim \mathcal{N}(0, I_n)$ and $z_k \in \mathcal{F}_k$. With $V_k = \sum_{i=1}^{k-1}z_iz_i^\top$, we have $V_k \succ 0$ a.s. for $k \ge m+n+1$. Now we can apply Lemma 11 to each row of $\hat\Theta_k - \Theta$: for each $j = 1, \ldots, n$, let $S_{j,k} = \sum_{i=1}^{k-1}(e_j^\top w_i)z_i$, where $e_j$ is the $j$-th standard unit vector. By invoking Lemma 11 with $U_k = V_k$ and $U_0 = I_{m+n}$, we have: with probability at least $1-\delta$,
\[ \|e_j^\top(\hat\Theta_k-\Theta)\|^2 = \|V_k^{-1}S_{j,k}\|^2 \le \|V_k^{-1}\|\,\|S_{j,k}\|^2_{(I+V_k)^{-1}} \le 2\|V_k^{-1}\|\log\big(\det(I+V_k)^{1/2}/\delta\big) \le 2\|V_k^{-1}\|\Big(\log\frac{1}{\delta} + \frac{m+n}{2}\log(1+\|V_k\|)\Big). \tag{59} \]
Taking the union bound over $j = 1, \ldots, n$, we have: with probability at least $1-n\delta$,
\[ \|\hat\Theta_k - \Theta\|^2 \le \|\hat\Theta_k - \Theta\|_F^2 \le \sum_{j=1}^{n}\|e_j^\top(\hat\Theta_k-\Theta)\|^2 \le 2n\|V_k^{-1}\|\Big(\log\frac{1}{\delta} + \frac{m+n}{2}\log(1+\|V_k\|)\Big). \tag{60} \]
Scaling the failure probability results in the conclusion.
2) Upper bound on $\|V_k\|$:

Proposition 13. On the event $E_{\mathrm{noise}}(\delta)$ defined in (15), it holds that
\[ \|V_k\| \le C_V\,k(\log(k/\delta))^2, \tag{61} \]
where $C_V$ is defined in (24).

Proof. On $E_{\mathrm{noise}}(\delta)$, we have
\[ \|u_k\| \le \log(k) + 2\sqrt{(n+1)\log(k/\delta)}, \tag{62} \]
and by Theorem 3, item 3), we have
\[ \|x_k\| \le C_x\log(k/\delta). \tag{63} \]
Hence,
\[ \|z_k\| \le \|x_k\| + \|u_k\| \le C_V\log(k/\delta), \tag{64} \]
which implies
\[ \|V_k\| \le \sum_{i=1}^{k-1}\|z_i\|^2 \le C_V\,k(\log(k/\delta))^2. \tag{65} \]
3) Upper bound on $\|V_k^{-1}\|$: We shall borrow the techniques for analyzing BMSB processes from Simchowitz et al. [20] to bound $\|V_k^{-1}\|$. The BMSB process is defined as follows:

Definition 14 ([20, Definition 2.1]). Suppose that $\{\varphi_k\}_{k\in\mathbb{N}^*}$ is a real-valued stochastic process adapted to the filtration $\{\mathcal{F}_k\}$. We say the process $\{\varphi_k\}$ satisfies the $(l, \nu, p)$ block martingale small-ball (BMSB) condition if
\[ \frac{1}{l}\sum_{i=1}^{l}\mathbb{P}\big(|\varphi_{k+i}| \ge \nu \mid \mathcal{F}_k\big) \ge p,\quad \forall k \in \mathbb{N}^*. \tag{66} \]
The following lemma verifies that $\{z_k\}$, projected along an arbitrary direction, is BMSB:

Lemma 15. For any $\mu \in \mathbb{S}^{m+n}$, the process $\{\langle z_i, \mu\rangle\}_{i=1}^{k-1}$ satisfies the $(1, k^{-1/4}, 3/10)$ BMSB condition.

Proof. Let $\mathcal{F}_k$ be the $\sigma$-algebra generated by $v_1, w_1, v_2, w_2, \ldots, v_{k-1}, w_{k-1}, v_k$. Since
\[ z_i = \begin{bmatrix} x_i \\ u_i \end{bmatrix} = \begin{bmatrix} x_i \\ u^{cb}_i + u^{pr}_i \end{bmatrix}, \tag{67} \]
and $x_{i+1} = \bar A_i x_i + Bu^{pr}_i + w_i$, where $\bar A_i$ takes value from $\{A, A+B\hat K_i\}$ and belongs to $\mathcal{F}_i$, we have
\[ |\langle z_{i+1}, \mu\rangle| = \big|\langle x_{i+1}, \mu_1\rangle + \langle u^{cb}_{i+1}, \mu_2\rangle + \langle u^{pr}_{i+1}, \mu_2\rangle\big| \ge \Big|\Big\langle \begin{bmatrix} \bar A_i x_i + Bu^{pr}_i + w_i \\ u^{pr}_{i+1} \end{bmatrix}, \mu \Big\rangle\Big|, \tag{68} \]
where $\mu_1 = [I_n\ 0]\mu$ and $\mu_2 = [0\ I_m]\mu$. Therefore, we only need to verify
\[ \mathbb{P}\Big(\Big|\Big\langle \begin{bmatrix} \bar A_i x_i + Bu^{pr}_i + w_i \\ u^{pr}_{i+1} \end{bmatrix}, \mu \Big\rangle\Big| \ge k^{-1/4}\ \Big|\ \mathcal{F}_i\Big) \ge \frac{3}{10}. \tag{69} \]
Since $\bar A_i x_i \in \mathcal{F}_i$, and $u^{pr}_i\mid\mathcal{F}_i$, $w_i\mid\mathcal{F}_i$, $u^{pr}_{i+1}\mid\mathcal{F}_i$ are all Gaussian,
\[ \Big\langle \begin{bmatrix} \bar A_i x_i + Bu^{pr}_i + w_i \\ u^{pr}_{i+1} \end{bmatrix}, \mu \Big\rangle, \tag{70} \]
as an affine function of the above terms, is also $\mathcal{F}_i$-conditionally Gaussian, with conditional mean $\langle \bar A_i x_i, \mu_1\rangle$ and conditional variance
\[ \mu^\top\begin{bmatrix} i^{-1/2}BB^\top + I & 0 \\ 0 & (i+1)^{-1/2}I \end{bmatrix}\mu \ \ge\ \mu^\top\begin{bmatrix} I & 0 \\ 0 & k^{-1/2}I \end{bmatrix}\mu \ \ge\ k^{-1/2}. \tag{71} \]
The conclusion (69) then follows from the fact that for any $X \sim \mathcal{N}(\bar\mu, \sigma^2)$, it holds that $\mathbb{P}(|X| \ge \sigma) \ge \mathbb{P}(|X-\bar\mu| \ge \sigma) \ge 3/10$.
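The final fact can be checked numerically (an illustration, not from the paper): for $X \sim \mathcal{N}(\bar\mu, \sigma^2)$, the two-sided tail $\mathbb{P}(|X| \ge \sigma)$ is smallest at $\bar\mu = 0$, where it equals $2(1-\Phi(1)) \approx 0.317 > 3/10$.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tail(mu_over_sigma):
    # P(|X| >= sigma) for X ~ N(mu, sigma^2), as a function of mu/sigma:
    # equals P(Z >= 1 - mu/sigma) + P(Z <= -1 - mu/sigma) for Z ~ N(0, 1).
    return (1.0 - phi(1.0 - mu_over_sigma)) + phi(-1.0 - mu_over_sigma)

worst = min(tail(m / 100.0) for m in range(-300, 301))  # sweep mu/sigma over [-3, 3]
assert worst >= 3 / 10                                  # the 3/10 lower bound holds
assert abs(tail(0.0) - 2 * (1 - phi(1.0))) < 1e-12      # worst case at mu = 0
```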
An upper bound on V −1 k can be obtained by applying an anti-concentration property of BMSB, along with a covering argument in [20]: Lemma 16. For any fixed k ≥ 2 and 0 < δ k ≤ 1/2, it holds
P V −1 k ≥ 1600 9 k −1/2 ≤ δ k + e − 9 800 k+(m+n) log(800C V k 1/2 (log(k/δ k )) 2 ) ,(72)
where C V is defined in Theorem 3, item 5).
Proof. Firstly, for any fixed μ ∈ S^{m+n}, applying [20, Prop. 2.5] to the process {⟨z_i, μ⟩}_{i=1}^{k−1}, which is (1, k^{−1/4}, 3/10)-BMSB by Lemma 15, we have

P( Σ_{i=1}^{k−1} ⟨z_i, μ⟩² ≤ k^{−1/2} · ((3/10)²/8) · k ) ≤ e^{−(3/10)² k / 8},  (73)

i.e.,

P( μᵀ V_k μ ≤ (9/800) k^{1/2} ) ≤ e^{−(9/800) k}.  (74)
Next, we shall choose multiple μ's and use a covering argument to lower bound the minimum eigenvalue of V_k, and hence to upper bound ‖V_k^{−1}‖. Let Γ = C_V k (log(k/δ_k))² I_{m+n} and Γ̲ = (9/800) k^{1/2} I_{m+n}, and let T be a minimal 1/4-net of S_Γ in the norm ‖·‖_Γ; then by [20, Lemma D.1], we have

log|T| ≤ (m+n) log(9) + log det(Γ Γ̲^{−1}) ≤ (m+n) log(800 C_V k^{1/2} (log(k/δ_k))²).  (75)

According to (74) and (75), we have

P( ∃μ ∈ T : μᵀ V_k μ ≤ (9/800) k^{1/2} ) ≤ |T| e^{−(9/800)k} ≤ e^{−(9/800)k + (m+n) log(800 C_V k^{1/2} (log(k/δ_k))²)}.  (76)
On the other hand, according to Proposition 13 and Theorem 3, item 1), we have
P( ‖V_k‖ ≥ C_V k (log(k/δ_k))² ) ≤ δ_k.  (77)

From (76), (77) and [20, Lemma D.1], it follows that the probability that V_k ⪰ Γ̲/2 fails is no greater than the RHS of (72), which is equivalent to the conclusion.
The next proposition converts Lemma 16 into a time-uniform bound:

Proposition 17. For δ satisfying (21) and k_0 defined in (23), it holds

P( ∃k ≥ k_0 : ‖V_k^{−1}‖ ≥ (1600/9) k^{−1/2} ) ≤ 3δ.  (78)
Proof. For k ≥ k_0, with δ_k = δ/k², it holds

k ≥ (1300(m+n))^{4/3},  (79)

and hence,

(m+n) log(800 C_V k^{1/2} (log(k/δ_k))²) ≤ (m+n)(3 log(1/δ) + (13/2) log(k)) ≤ (1/200)k + (13/2)(m+n)k^{1/4} ≤ (1/100)k.  (80)

Substituting (80) into (72), we have

P( ‖V_k^{−1}‖ ≥ (1600/9) k^{−1/2} ) ≤ δ/k² + e^{−k/800}.  (81)
Taking the union bound over k = k_0, k_0 + 1, …, we have

P( ∃k ≥ k_0 : ‖V_k^{−1}‖ ≥ (1600/9) k^{−1/2} ) ≤ (π²/6) δ + 801 e^{−k_0/800} ≤ 3δ.  (82)
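The numeric constants in (82) can be sanity-checked directly: Σ_{k≥k_0} e^{−k/800} is a geometric series bounded by 801 e^{−k_0/800}, and Σ_{k≥1} k^{−2} = π²/6 < 3 (a quick side check, not part of the proof).

```python
import math

# Geometric-series constant: sum_{k >= k0} e^{-k/800}
#   = e^{-k0/800} / (1 - e^{-1/800}) <= 801 * e^{-k0/800}.
geom_const = 1.0 / (1.0 - math.exp(-1.0 / 800.0))
assert 800.0 < geom_const < 801.0   # ~800.5

# Basel constant for the delta/k^2 part of the union bound.
assert math.pi ** 2 / 6.0 < 3.0     # ~1.645
```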
Theorem 3, item 5) follows from Propositions 12, 13 and 17.
F. Proof of Theorem 3, item 6)
Proof. Let
T_1 = inf{ T : (A^{t_k})ᵀ P* A^{t_k} ≺ ρ P*, ∀k ≥ T },  (83)
T_2 = inf{ T : (A + BK̂_k)ᵀ P* (A + BK̂_k) ≺ ρ P*, ∀k ≥ T }.  (84)

We shall bound T_1 and T_2 respectively:
1) From the assumption ρ(A) < 1, it holds

lim_{k→∞} (A^k)ᵀ P* A^k = 0,  (85)

which, together with t_k = ⌈log(k)⌉, implies that T_1 is a finite constant independent of δ, i.e., T_1 ≲ 1.
2) Since (A + BK)ᵀ P* (A + BK) is a continuous function of K, from (10), there exists a system-dependent constant ε_K > 0 such that (A + BK)ᵀ P* (A + BK) ≺ ρ P* whenever ‖K − K*‖ < ε_K. On the other hand, since K̂_k is a continuous function of Θ̂_k [9, Proposition 6], there exists a system-dependent constant ε_Θ > 0 such that ‖K̂_k − K*‖ < ε_K as long as ‖Θ̂_k − Θ‖ < ε_Θ. It follows from Theorem 3, item 5) that ‖Θ̂_k − Θ‖ < ε_Θ whenever k ≥ (9C_Θ²/ε_Θ²)(log(1/δ))², and hence T_2 ≲ (log(1/δ))². In summary, it holds T_stab = max{T_1, T_2} ≲ (log(1/δ))².
G. Proof of Theorem 3, item 7)
This subsection is dedicated to bounding the time after which the circuit-breaking is not triggered any more. The outline of the proof is stated as follows: firstly, we define a subsequence notation to deal with the dwell time t k of circuit breaking. Secondly, an upper bound on the state and the certainty equivalent input after T stab is derived, which is shown to be asymptotically smaller than the circuit-breaking threshold M k = log(k). Based on the above upper bound on the certainty equivalent input, we can finally bound T nocb , i.e., the time it takes for the certainty equivalent input to stay below the threshold M k , which leads to the desired conclusion. 1) A subsequence notation: Consider the subsequence of states and inputs, where steps within the circuit-breaking period are skipped, defined below:
i(1) = 1,  i(k+1) = { i(k) + 1, if u^cb_{i(k)} ≠ 0;  i(k) + t_{i(k)}, if u^cb_{i(k)} = 0 },  (86)

x̃_k = x_{i(k)},  ũ^ce_k = u^ce_{i(k)},  ũ^cb_k = u^cb_{i(k)}.  (87)
Consider T_stab defined in (13). We can define T̃_stab as the first index in the above subsequence for which the stabilization condition is satisfied, i.e.,

T̃_stab = inf{ T : i(T) ≥ T_stab }.  (88)
2) Upper bound on the state and the certainty equivalent input after T̃_stab:

Proposition 18. On the event E_noise(δ) ∩ E_est(δ), it holds

‖x̃_{T̃_stab+k}‖ ≲ ρ̄^{k/2} log(1/δ) + log(k/δ),  (89)
‖ũ^ce_{T̃_stab+k}‖ ≲ ρ̄^{k/2} log(1/δ) + log(k/δ),  (90)

where ρ̄ = (1 + ρ*)/2.
Proof. We can expand x̃_{T̃_stab+k} as:

x̃_{T̃_stab+k} = Ã_{T̃_stab+k−1} Ã_{T̃_stab+k−2} ⋯ Ã_{T̃_stab} x̃_{T̃_stab} + Ã_{T̃_stab+k−1} Ã_{T̃_stab+k−2} ⋯ Ã_{T̃_stab+1} w̃_{T̃_stab} + ⋯ + w̃_{T̃_stab+k−1},  (91)

where:
• Ã_j ∈ {A + BK̂_{i(j)}, A^{t_{i(j)}}}, and must satisfy Ã_jᵀ P* Ã_j ≺ ρ P* for j ≥ T̃_stab, by the definition of T_stab in (13);
• w̃_j ∈ {w_{i(j)}, Σ_{τ=0}^{t_{i(j)}−1} A^τ w_{i(j−1)+τ}}, and on the event E_noise(δ), it must satisfy ‖w̃_j‖ ≲ Ā log(j/δ) ≲ log(j/δ) for any j, where Ā = Σ_{τ=0}^{∞} ‖A^τ‖.

Combining the above two items, we have

‖x̃_{T̃_stab+k}‖ ≲ ρ̄^{k/2} ‖x̃_{T̃_stab}‖ + log( i(T̃_stab + k)/δ ).  (92)
We shall next bound ‖x̃_{T̃_stab}‖ and i(T̃_stab + k) respectively:
1) According to the definitions of T̃_stab and i(·) in (86) and (88), we have

i(T̃_stab) ≤ T_stab + log(T_stab).  (93)

On E_est(δ), according to Theorem 3, item 6), we have T_stab ≲ (log(1/δ))², and hence

i(T̃_stab) ≲ (log(1/δ))² + log((log(1/δ))²) ≲ (log(1/δ))².  (94)

Furthermore, according to Theorem 3, item 3), on E_noise(δ), we have ‖x_k‖ ≲ log(k/δ) for any k, and hence

‖x̃_{T̃_stab}‖ = ‖x_{i(T̃_stab)}‖ ≲ log( (log(1/δ))²/δ ) = log(1/δ) + log((log(1/δ))²) ≲ log(1/δ).  (95)
2) By the definition of i(·) in (86), we have

i(T̃_stab + k + 1) ≤ i(T̃_stab + k) + log( i(T̃_stab + k) ), ∀k ∈ ℕ*.  (96)

Applying induction to (93) and (96), we can obtain

i(T̃_stab + k) ≲ k log(T_stab k) + T_stab.  (97)

Substituting T_stab ≲ (log(k/δ))², which holds on E_est(δ) according to Theorem 3, item 6), into (97), we have

i(T̃_stab + k) ≲ k (log(k/δ))².  (98)
Hence, inequality (89) follows from substituting (95) and (98) into (92). Since ‖K̂_{i(T̃_stab+k)}‖ is uniformly bounded by the definition of T_stab in (13), we have

‖ũ^ce_{T̃_stab+k}‖ ≤ ‖K̂_{i(T̃_stab+k)}‖ ‖x̃_{T̃_stab+k}‖ ≲ ‖x̃_{T̃_stab+k}‖,  (99)

which implies (90).
3) Upper bound on T_nocb: Now we are ready to prove Theorem 3, item 7):
Proof. For any ε > 0, according to (98), we have i(T̃_stab + k) ≲ (1/δ)^{α+ε} as long as k ≲ (1/δ)^{α−ε}. Hence, we only need to prove

u^cb_k ≡ u^ce_k, ∀k ≥ i(T̃_stab + k_0),  (100)

for some k_0 ≲ (1/δ)^{α−ε}. Notice that (100) is equivalent to

ũ^cb_{T̃_stab+k} ≡ ũ^ce_{T̃_stab+k}, ∀k ≥ k_0,  (101)

which is then equivalent to

‖ũ^ce_{T̃_stab+k}‖ ≤ M_k = log(k), ∀k ≥ k_0.  (102)

According to Proposition 18, we only need to verify

ρ̄^{k/2} log(1/δ) + log(k/δ) ≲ log(k),  (103)

whenever k ≳ (1/δ)^{α−ε}. In such a case, we have

ρ̄^{k/2} log(1/δ) + log(k/δ) ≲ log(1/δ) + log(k) + log(1/δ) ≲ log(k) + log(k) ≲ log(k),  (104)

from which the conclusion follows.
H. Proof of Theorem 6
In this subsection, we first decompose the regret of the proposed controller into multiple terms, and then derive upper bounds on these terms respectively, to obtain the high-probability regret bound stated in Theorem 6.
1) Regret decomposition: We first state a supporting lemma:

Lemma 19. Let K*, P* be defined in (6), (7) respectively, and let K = K* + ΔK; then

Q + Kᵀ R K + (A + BK)ᵀ P* (A + BK) − P* = ΔKᵀ (R + Bᵀ P* B) ΔK.  (105)

Proof. Substituting the Lyapunov equation (9) into the LHS of (105), we have

Q + Kᵀ R K + (A + BK)ᵀ P* (A + BK) − P*
= Kᵀ R K − (K*)ᵀ R K* + (A + BK)ᵀ P* (A + BK) − (A + BK*)ᵀ P* (A + BK*)
= ΔKᵀ (R + Bᵀ P* B) ΔK + G + Gᵀ,  (106)

where

G = ΔKᵀ ( (R + Bᵀ P* B) K* + Bᵀ P* A ).  (107)

By the definition of K* in (6), we have G = 0, and hence the conclusion holds.
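Lemma 19 can also be verified numerically on a synthetic system (a sketch with assumed toy dimensions; here P* is computed by a plain Riccati fixed-point iteration rather than the paper's machinery):

```python
import numpy as np

# Numeric check of Lemma 19 on a randomly generated synthetic system.
rng = np.random.default_rng(1)
n, m = 3, 2
A = 0.4 * rng.standard_normal((n, n))   # scaled down; (A, B) is stabilizable
B = rng.standard_normal((n, m))
Q, R = np.eye(n), np.eye(m)

# Solve the DARE for P* by fixed-point (Riccati) iteration.
P = np.eye(n)
for _ in range(10_000):
    APB = A.T @ P @ B
    Pn = Q + A.T @ P @ A - APB @ np.linalg.solve(R + B.T @ P @ B, APB.T)
    if np.max(np.abs(Pn - P)) < 1e-12:
        P = Pn
        break
    P = Pn

K_star = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain, x+ = (A+BK)x

dK = rng.standard_normal((m, n))        # arbitrary perturbation Delta K
K = K_star + dK
Acl = A + B @ K
lhs = Q + K.T @ R @ K + Acl.T @ P @ Acl - P
rhs = dK.T @ (R + B.T @ P @ B) @ dK
assert np.allclose(lhs, rhs, atol=1e-6)  # identity (105) to numerical precision
```

The identity holds for any perturbation ΔK, which is what turns the gain error into a quadratic penalty in the regret decomposition that follows.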
Now we are ready to state the decomposition of regret:
Proposition 20. The regret of the proposed controller defined in (11) can be decomposed as

R(T) = Σ_{i=1}^{7} R_i(T),  (108)

with the terms R_i(T) defined as:

R_1(T) = Σ_{k=1}^{T} x_kᵀ (K̄_k − K*)ᵀ (R + Bᵀ P* B) (K̄_k − K*) x_k,  (109)
R_2(T) = 2 Σ_{k=1}^{T} (u^pr_k)ᵀ Bᵀ P* (A + BK̄_k) x_k,  (110)
R_3(T) = 2 Σ_{k=1}^{T} w_kᵀ P* (A + BK̄_k) x_k,  (111)
R_4(T) = Σ_{k=1}^{T} (s_kᵀ P* s_k − w_kᵀ P* w_k),  (112)
R_5(T) = Σ_{k=1}^{T} w_kᵀ P* w_k − T J*,  (113)
R_6(T) = x_1ᵀ P* x_1 − x_{T+1}ᵀ P* x_{T+1},  (114)
R_7(T) = Σ_{k=1}^{T} ( 2 (u^pr_k)ᵀ R u^cb_k + (u^pr_k)ᵀ R u^pr_k ),  (115)

where K̄_k, s_k are defined as:

K̄_k = { K̂_k, if u^cb_k = u^ce_k;  0, otherwise },  s_k = B u^pr_k + w_k.  (116)
Proof. From u_k = u^cb_k + u^pr_k = K̄_k x_k + u^pr_k, it holds

R(T) = Σ_{k=1}^{T} ( x_kᵀ Q x_k + u_kᵀ R u_k ) − T J*
= Σ_{k=1}^{T} ( x_kᵀ (Q + K̄_kᵀ R K̄_k) x_k + 2 (u^pr_k)ᵀ R u^cb_k + (u^pr_k)ᵀ R u^pr_k ) − T J*
= Σ_{k=1}^{T} ( x_kᵀ (Q + K̄_kᵀ R K̄_k) x_k + x_{k+1}ᵀ P* x_{k+1} − x_kᵀ P* x_k ) − T J* + R_6(T) + R_7(T).  (117)

We can further expand the summands on the RHS of the above equality: from x_{k+1} = (A + BK̄_k) x_k + s_k, we have

x_kᵀ (Q + K̄_kᵀ R K̄_k) x_k + x_{k+1}ᵀ P* x_{k+1} − x_kᵀ P* x_k
= x_kᵀ ( Q + K̄_kᵀ R K̄_k + (A + BK̄_k)ᵀ P* (A + BK̄_k) − P* ) x_k + 2 s_kᵀ P* (A + BK̄_k) x_k + s_kᵀ P* s_k
= x_kᵀ (K̄_k − K*)ᵀ (R + Bᵀ P* B) (K̄_k − K*) x_k + 2 s_kᵀ P* (A + BK̄_k) x_k + s_kᵀ P* s_k,  (118)

where the last equality follows from Lemma 19. It follows from simple algebra that

Σ_{k=1}^{T} ( x_kᵀ (Q + K̄_kᵀ R K̄_k) x_k + x_{k+1}ᵀ P* x_{k+1} − x_kᵀ P* x_k ) − T J* = Σ_{i=1}^{5} R_i(T),  (119)

and hence the conclusion follows.
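The decomposition can be checked end-to-end on a short synthetic rollout (a sketch with assumed toy dimensions; the gain is frozen at K* with decaying exploration noise, so R_1(T) = 0 and R(T) = Σ_i R_i(T) holds to machine precision):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
A = 0.4 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Q, R = np.eye(n), np.eye(m)

P = np.eye(n)                       # P* via Riccati fixed-point iteration
for _ in range(10_000):
    APB = A.T @ P @ B
    Pn = Q + A.T @ P @ A - APB @ np.linalg.solve(R + B.T @ P @ B, APB.T)
    if np.max(np.abs(Pn - P)) < 1e-12:
        P = Pn
        break
    P = Pn
K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # here K_k = K* for all k
J = np.trace(P)                                      # J* = tr(P*), cf. (8)

T = 50
x = np.zeros(n)
X, U_cb, U_pr, W, S = [x], [], [], [], []
cost = 0.0
for k in range(1, T + 1):
    u_cb = K @ x                                     # certainty equivalent part
    u_pr = (k ** -0.5) * rng.standard_normal(m)      # decaying exploration noise
    w = rng.standard_normal(n)
    u = u_cb + u_pr
    cost += x @ Q @ x + u @ R @ u
    U_cb.append(u_cb); U_pr.append(u_pr); W.append(w); S.append(B @ u_pr + w)
    x = A @ x + B @ u + w
    X.append(x)

Rg = cost - T * J                                    # regret R(T)
R1 = 0.0                                             # K_k = K*  =>  R_1 = 0
R2 = sum(2 * U_pr[k] @ B.T @ P @ (A + B @ K) @ X[k] for k in range(T))
R3 = sum(2 * W[k] @ P @ (A + B @ K) @ X[k] for k in range(T))
R4 = sum(S[k] @ P @ S[k] - W[k] @ P @ W[k] for k in range(T))
R5 = sum(W[k] @ P @ W[k] for k in range(T)) - T * J
R6 = X[0] @ P @ X[0] - X[T] @ P @ X[T]
R7 = sum(2 * U_pr[k] @ R @ U_cb[k] + U_pr[k] @ R @ U_pr[k] for k in range(T))
assert abs(Rg - (R1 + R2 + R3 + R4 + R5 + R6 + R7)) < 1e-6
```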
2) Upper bounds on regret terms: Next we shall bound the terms R_i(T) (i = 1, …, 7) respectively:

Proposition 21. The regret terms defined in (109)–(115) can be bounded as follows:
1) On the event E_noise(δ) ∩ E_est(δ), for T > T_nocb, it holds

R_1(T) ≲ (1/δ)^{1/4} (log(1/δ))⁴ + √T (log(T/δ))³.  (120)
R 1 (T ) (1/δ) 1/4 (log(1/δ)) 4 + √ T (log(T /δ)) 3 . (120)
2) On the event E noise (δ), it holds
|R 2 (T )| √ T (log(T /δ)) 3/2 .(121)
3) On the event E cross (δ), it holds
|R 3 (T )| √ T (log(T /δ)) 2 .(122)
4) On the event E noise (δ), it holds
|R 4 (T )| ≤ √ T log(T /δ).(123)
5) On the event E cov (δ), it holds
|R 5 (T )| T log(1/δ).(124)
6) On the event E noise (δ), it holds |R 6 (T )| (log(T /δ)) 2 .
7) On the event E noise (δ), it holds
|R 7 (T )| √ T log(T /δ).(126)
Proof. 1) Let

r_{1k} = x_kᵀ (K̄_k − K*)ᵀ (R + Bᵀ P* B) (K̄_k − K*) x_k.  (127)

We shall next bound Σ_{k=1}^{T_nocb} r_{1k} and Σ_{k=T_nocb+1}^{T} r_{1k} respectively:
a) For k ≤ T_nocb, we have ‖x_k‖ ≲ log(k/δ) by Theorem 3, item 3), and ‖K̄_k x_k‖ = ‖u^cb_k‖ ≤ log(k). Therefore, it holds

r_{1k} ≲ (log(k/δ))²,  (128)

and hence,

Σ_{k=1}^{T_nocb} r_{1k} ≲ T_nocb (log(T_nocb/δ))².  (129)

Invoking Theorem 3, item 7) with α = 1/4, we get

Σ_{k=1}^{T_nocb} r_{1k} ≲ (1/δ)^{1/4} (log(1/δ))⁴.  (130)

b) For k > T_nocb, by the definition of T_nocb, we have K̄_k = K̂_k. Hence, by the definition of E_est and the fact that K̂_k is a continuous function of Θ̂_k [9, Proposition 6], we have

‖K̄_k − K*‖ = ‖K̂_k − K*‖ ≲ ‖Θ̂_k − Θ‖ ≲ k^{−1/4} (log(k/δ))^{1/2}.  (131)

Furthermore, by Theorem 3, item 3), we have ‖x_k‖ ≲ log(k/δ), and hence,

r_{1k} ≲ ‖K̄_k − K*‖² ‖x_k‖² ≲ k^{−1/2} (log(k/δ))³.  (132)

Therefore,

Σ_{k=T_nocb+1}^{T} r_{1k} ≲ √T (log(T/δ))³.  (133)

Summing up (130) and (133) leads to (120).
2) Let

r_{2k} = (u^pr_k)ᵀ Bᵀ P* (A + BK̄_k) x_k,  (134)

whose factors can be bounded as follows:
• ‖u^pr_k‖ = k^{−1/2} ‖v_k‖ ≲ k^{−1/2} (log(k/δ))^{1/2} by the definition of E_noise;
• ‖x_k‖ ≲ log(k/δ) by Theorem 3, item 3);
• ‖K̄_k x_k‖ = ‖u^cb_k‖ ≤ M_k = log(k) according to the proposed controller.

Hence,

|r_{2k}| ≲ k^{−1/2} (log(k/δ))^{3/2},  (135)

and summing up (135) from k = 1 to T leads to (121).
3) The inequality (122) follows directly from the definition of E_cross(δ) in (19).
4) Let

r_{4k} = s_kᵀ P* s_k − w_kᵀ P* w_k.  (136)

From s_k = w_k + B u^pr_k = w_k + k^{−1/2} B v_k, we have

r_{4k} = 2 k^{−1/2} w_kᵀ P* B v_k + k^{−1} v_kᵀ Bᵀ P* B v_k.  (137)

Hence, by the definition of E_noise(δ), we have

|r_{4k}| ≲ k^{−1/2} log(k/δ).  (138)

Summing up (138) from k = 1 to T leads to (123).
5) Since J* = tr(P*) (see (8)), we have

|R_5(T)| = | Σ_{k=1}^{T} tr(w_k w_kᵀ P*) − T tr(P*) | = | tr( ( Σ_{k=1}^{T} (w_k w_kᵀ − I_n) ) P* ) | ≲ ‖ Σ_{k=1}^{T} (w_k w_kᵀ − I_n) ‖ ≲ √T log(1/δ),  (139)
where the last inequality follows from the definition of E_cov(δ), which proves (124).
6) The inequality (125) is a direct corollary of Theorem 3, item 3).
7) Let

r_{7k} = 2 (u^pr_k)ᵀ R u^cb_k + (u^pr_k)ᵀ R u^pr_k,  (140)

where u^pr_k and u^cb_k satisfy:
• ‖u^pr_k‖ = k^{−1/2} ‖v_k‖ ≲ k^{−1/2} (log(k/δ))^{1/2} by the definition of E_noise(δ);
• ‖u^cb_k‖ ≤ M_k = log(k) according to the proposed controller.

Hence,

|r_{7k}| ≲ k^{−1/2} log(k/δ),  (141)

and summing up (141) from k = 1 to T leads to (126).
Theorem 6 follows from combining Propositions 20 and 21.
V. SIMULATION
In this section, the proposed controller is validated on the Tennessee Eastman Process (TEP) [21]. In particular, we consider a simplified version of TEP similar to the one in [22], with full state feedback. The system is open-loop stable, and has state dimension n = 8 and input dimension m = 4. The process noise distribution is chosen to be w_k i.i.d. ∼ N(0, I_n). The weight matrices of the LQR are chosen to be Q = I_n and R = I_m. The plant under the proposed controller is simulated for 10000 independent trials, each with T = 3 × 10⁸ steps. As mentioned in Remark 2, the certainty equivalent gain K̂_k is updated only at steps k = 2^i, i ∈ ℕ*, for the sake of fast computation.
The evolution of regret against time is plotted in Fig. 2. For ease of observation, we plot the relative average regret R(T)/(T J*) against the total time step T, where J* is the optimal cost. Fig. 2 shows 5 among the 10000 trials, from which one can observe a 1/√T convergence rate of the relative average regret (i.e., a 1 order-of-magnitude increase in T corresponds to a 0.5 order-of-magnitude decrease in R(T)/(T J*)), which matches the √T theoretical growth rate of the regret. To inspect the statistical properties of all the trials, we sort them by the average regret at the last step, and plot the worst, median and mean cases in Fig. 2b. One can observe that the average regret converges to zero even in the worst case, which validates the almost-sure guarantee in Theorem 8. A noteworthy aspect of the proposed controller's behavior is that the circuit-breaking mechanism is triggered only finitely often, and that the time of the last trigger, T_nocb, as stated in Corollary 5, has a super-polynomial tail. This is also empirically validated: among all the 10000 trials, circuit-breaking is never triggered after step 1.4 × 10⁶, and a histogram of T_nocb is shown in Fig. 3, from which one can observe that the empirical distribution of T_nocb has a fast decaying tail.

Fig. 2: Double-log plot of average regret against time step

Fig. 3: Histogram of T_nocb among all sample paths

VI. CONCLUSION

In this paper, we propose an adaptive LQR controller that can achieve Õ(√T) regret almost surely. A key ingredient of the controller design is a circuit-breaking mechanism, which ensures the convergence of the parameter estimate, but is triggered only finitely often and hence has negligible effect on the asymptotic performance. A future direction would be extending such a circuit-breaking mechanism to the partially observed LQG setting.
TABLE I: Comparison with selected existing works on adaptive LQR

Work          | Rate         | Type of guarantee
[13]          | Not provided | Almost sure
[6], [7], [9] | Õ(√T)        | Probability 1 − δ
[10]          | Õ(√T)        | Almost sure*
[11]          | Õ(√T)        | Convergence in probability*
This work     | Õ(√T)        | Almost sure
The authors are with
REFERENCES

[1] K. J. Åström, "Adaptive control around 1960," IEEE Control Systems Magazine, vol. 16, no. 3, pp. 44-49, 1996.
[2] K. J. Åström and B. Wittenmark, "On self tuning regulators," Automatica, vol. 9, no. 2, pp. 185-199, 1973.
[3] A. Morse, "Global stability of parameter-adaptive control systems," IEEE Transactions on Automatic Control, vol. 25, no. 3, pp. 433-439, 1980.
[4] T. Lai and C.-Z. Wei, "Extended least squares and their applications to adaptive control and prediction in linear systems," IEEE Transactions on Automatic Control, vol. 31, no. 10, pp. 898-906, 1986.
[5] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári, "Online least squares estimation with self-normalized processes: An application to bandit problems," arXiv preprint arXiv:1102.2670, 2011.
[6] M. Abeille and A. Lazaric, "Improved regret bounds for Thompson sampling in linear quadratic control problems," in International Conference on Machine Learning. PMLR, 2018, pp. 1-9.
[7] A. Cohen, T. Koren, and Y. Mansour, "Learning linear-quadratic regulators efficiently with only √T regret," in International Conference on Machine Learning. PMLR, 2019, pp. 1300-1309.
[8] S. Dean, H. Mania, N. Matni, B. Recht, and S. Tu, "Regret bounds for robust adaptive control of the linear quadratic regulator," Advances in Neural Information Processing Systems, vol. 31, 2018.
[9] M. Simchowitz and D. Foster, "Naive exploration is optimal for online LQR," in International Conference on Machine Learning. PMLR, 2020, pp. 8937-8948.
[10] M. K. S. Faradonbeh, A. Tewari, and G. Michailidis, "On adaptive linear-quadratic regulators," Automatica, vol. 117, p. 108982, 2020.
[11] F. Wang and L. Janson, "Exact asymptotics for linear quadratic adaptive control," J. Mach. Learn. Res., vol. 22, pp. 265-1, 2021.
[12] M. K. S. Faradonbeh, A. Tewari, and G. Michailidis, "Finite-time adaptive stabilization of linear systems," IEEE Transactions on Automatic Control, vol. 64, no. 8, pp. 3498-3505, 2018.
[13] L. Guo, "Self-convergence of weighted least-squares with applications to stochastic adaptive control," IEEE Transactions on Automatic Control, vol. 41, no. 1, pp. 79-89, 1996.
[14] H. Lin and P. J. Antsaklis, "Stability and stabilizability of switched linear systems: a survey of recent results," IEEE Transactions on Automatic Control, vol. 54, no. 2, pp. 308-322, 2009.
[15] L. Guo and H. Chen, "Convergence rate of an ELS-based adaptive tracker," System Science and Mathematical Sciences, vol. 1, no. 2, p. 131, 1988.
[16] H. Mania, S. Tu, and B. Recht, "Certainty equivalence is efficient for linear quadratic control," Advances in Neural Information Processing Systems, vol. 32, 2019.
[17] Y. Lu and Y. Mo, "Ensuring the safety of uncertified linear state-feedback controllers via switching," arXiv preprint arXiv:2205.08817, 2022.
[18] B. Laurent and P. Massart, "Adaptive estimation of a quadratic functional by model selection," Annals of Statistics, pp. 1302-1338, 2000.
[19] K. Azuma, "Weighted sums of certain dependent random variables," Tohoku Mathematical Journal, Second Series, vol. 19, no. 3, pp. 357-367, 1967.
[20] M. Simchowitz, H. Mania, S. Tu, M. I. Jordan, and B. Recht, "Learning without mixing: Towards a sharp analysis of linear system identification," in Conference On Learning Theory. PMLR, 2018, pp. 439-473.
[21] J. J. Downs and E. F. Vogel, "A plant-wide industrial process control problem," Computers & Chemical Engineering, vol. 17, no. 3, pp. 245-255, 1993.
[22] H. Liu, Y. Mo, J. Yan, L. Xie, and K. H. Johansson, "An online approach to physical watermark design," IEEE Transactions on Automatic Control, vol. 65, no. 9, pp. 3895-3902, 2020.
A Longitudinal Analysis of a Social Network of Intellectual History

Cindarella Petz, Raji Ghawi, and Jürgen Pfeffer
Bavarian School of Public Policy, Technical University of Munich, Munich, Germany

Abstract—The history of intellectuals consists of a complicated web of influences and interconnections of philosophers, scientists, writers, their work, and ideas. How did these influences evolve over time? Who were the most influential scholars in a period? To answer these questions, we mined a network of influence of over 12,500 intellectuals, extracted from the Linked Open Data provider YAGO. We enriched this network with a longitudinal perspective, and analysed time-sliced projections of the complete network, differentiating between within-era, inter-era, and accumulated-era networks. We thus identified various patterns of intellectuals and eras, and studied their development in time. We show which scholars were most influential in different eras, and who took prominent knowledge broker roles. One essential finding is that the highest impact of an era's scholars was on their contemporaries, and that the inter-era influence of each period was strongest on its consecutive one. Further, we see quantitative evidence that there was no re-discovery of Antiquity during the Renaissance, but a continuous reception since the Middle Ages.
I. INTRODUCTION
"No self is of itself alone", wrote Erwin Schrödinger in 1918 [16] and noted, "It has a long chain of intellectual ancestors". The history of intellectuals is comprised of a myriad of such long chains, embedded in a tapestry of competing influences of "ageless" ideas, which -in the words of the French scholar Bonaventura D'Argonne in 1699 -"embrace [...] the whole world" [10].
To understand the dynamics of influence and the spread of ideas through history, the embeddedness and interconnections of scholarship should be taken into account. A network approach makes it possible to identify the most influential scholars via their positions in a network of intellectual influence through history. This allows studying their social relations [26], [12], [20], and provides deep insights into the underlying social structure.
A recent study by Ghawi et al. [6] addressed the analysis of such a social network of intellectual influence, incorporating over 12,500 scholars of international origins since the beginning of historiography. In this paper, we build upon [6], and extend the analysis of that network by incorporating a temporal dimension. We analyze the network of scholars in relation to their time, adding a longitudinal perspective on how scholars formed networks. As such, we opt for an inclusive, global perspective on the history of intellectuals. This perspective of a vast longitudinal global network of intellectuals responds to recent discussions about insufficiently global research within intellectual history [11]. We thus attempt to go beyond the traditional "master narratives" [5] of a Western European centrist view on intellectual history [24]. The goal of this paper is not only to understand how the influence relations among scholars evolved over time, but also to gain deep insights into their influence on historical periods. The questions we seek to answer are:
• How did these influence networks evolve over time?
• Who were the most influential scholars in a period?
• Which patterns of influence emerged?
To answer these questions, we analyze the evolution of influences in time in order to identify periods and scholars that stand out.
The contributions of this paper are:
• We incorporate a longitudinal perspective on the social network analysis of intellectuals, based on a global periodization of history.
• We identify patterns of influence, and their distribution in within-era, inter-era, and accumulated-era influence networks.
• We identify influence signatures of scholars and eras.
• We identify scholars with various knowledge broker roles.
This paper is organized as follows. Section II reviews related work. In Section III, we briefly outline the data set's characteristics and pre-processing. Section IV presents the network analysis of the entire network and its time-sliced projections into partial influence networks (within-era, inter-era, and accumulated-era), featuring their basic network metrics, degree distribution, and connectivity. In Section V, we identify different influence patterns of scholars and eras. Section VI is devoted to the longitudinal analysis of brokerage roles of scholars.
II. RELATED WORK
The term intellectual history combines a plethora of approaches on discourse analysis, the evolution of ideas, intellectual genealogies, and the history of books, various scientific disciplines, political thought, and intellectual social context [27], [8]. These studies are usually limited to certain regions or time spans as a trade-off for thorough comparative and textual analysis. Endeavors to write a "Global Intellectual History" [17] were criticized for focusing on the more well-known intellectual thinkers despite including a transnational comparative perspective [23].
Network methodologies make it possible to analyze intellectual history, and with it the history of intellectuals, as big data encompassing time and space, with a focus on their interconnections. So far, computational methods have been used in the study of communication networks of the respublica litteraria, in which various studies modeled the Early Modern scholarly book and letter exchanges as networks. Among the first was "Mapping the Republic of Letters" at Stanford University in 2008 [2].
More recent studies incorporated a temporal perspective on these epistolary networks [25].
A recent study [6] proposed to study the entire history of intellectuals by means of a network approach. That study defined the most influential scholars as those with the longest-reaching influence (influence cascades), identifying Antique and medieval Islamic scholars as such, and Karl Marx as the scholar with the most outgoing influences. In this paper, we extend this analysis by incorporating a temporal dimension, in order to establish a deeper insight into how these influences evolved in time.
Much research has been devoted to the area of longitudinal social networks [18], [14], [22], [13]. Longitudinal network studies aim at understanding how social structures develop or change over time usually by employing panel data [12]. Snapshots of the social network at different points in time are analyzed in order to explain the changes in the social structure between two (or more) points in time, in terms of the characteristics of the actors, their positions in the network, or their former interactions.
In this paper, we do not use the classical notion of network snapshot, which is a static network depicted at a given point in time. Rather, we split the time span (i.e., the history) into consecutive periods (eras), and embed the network nodes (actors) into the eras in which they lived. This way, the micro-level influence among actors can be viewed as a macro-level influence among periods of history. This enables the analysis of the influence network within each era, between different eras, and in an accumulative manner.
At its core, a network of scholarly influence is a citation network, answering who influenced whom [1]. However, in citation networks the influence is indirect, as the relation is originally among documents, from which a social relation among the authors is inferred. In the data set used here, the influence relation, by contrast, is direct among intellectuals.
III. DATA
A. Data Acquisition and Preprocessing
The source of information used in this paper originated from YAGO (Yet Another Great Ontology) [15], a pioneering semantic knowledge base that links open data on people, cities, countries, and organizations from Wikipedia, WordNet, and GeoNames. In YAGO, an influence relation appears in terms of the influences predicate, which relates one scholar to another when the latter is influenced by the ideas, thoughts, or works of the former. The accuracy of this relation was evaluated by YAGO at 95%. We extracted a data set that encompasses all influence relationships available in YAGO, using appropriate SPARQL queries that implement techniques for social network mining from LOD [7]. The result consisted of 22,818 directed links among 12,705 intellectuals, which made up the nodes and edges of our target social network of influence. In order to incorporate a time dimension into our analysis, we extracted the birth and death dates of each scholar. Some scholars had missing birth and/or death dates, which we deduced by subtracting 60 years from the death date (or, conversely, adding 60 years to the birth date), capping at the symbolic year 2020. When both dates were missing, we verified them manually. During this process, we had to remove some entities, as they did not correspond to intellectuals. Those were either 1) concepts, e.g., 'German philosophy' and 'Megarian school', 2) legendary characters, e.g., 'Gilgamesh' and 'Scheherazade', or 3) bands, e.g., 'Rancid' and 'Tube'. As a result, we obtained a new data set of 12,577 actors with complete dates of birth and death.
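The extraction step can be sketched as follows; the endpoint URL and the exact YAGO namespace below are assumptions for illustration and must be checked against the YAGO release actually used:

```python
# Build the SPARQL query for the `influences` predicate mentioned above.
# NOTE: the prefix URI and endpoint are hypothetical placeholders; only the
# predicate name `influences` is taken from the description of the data set.
YAGO_ENDPOINT = "https://yago-knowledge.org/sparql/query"   # assumed endpoint
INFLUENCE_QUERY = """
PREFIX yago: <http://yago-knowledge.org/resource/>
SELECT ?influencer ?influenced
WHERE { ?influencer yago:influences ?influenced . }
"""
# Executing this query (e.g. via an HTTP POST to the endpoint) yields one row
# per directed influence link; the rows form the edge list of the network.
edge_list_columns = ("influencer", "influenced")
assert "influences" in INFLUENCE_QUERY
```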
B. Periodization
Introducing a longitudinal perspective, we opted for a periodization taking global events into account. Any periodization is a construct of analysis, as each field of research has its own timeline characterizing periods [21], which are dependent on different caesura for the respective object of research [19]. This complicates an overarching longitudinal perspective on a global scale. We used Osterhammel's global periodization [19] to match the internationality of the scholars, and worked with six consecutive periods (eras): Antiquity (up to 600 AD), Middle Ages (600, 1350), Early Modern Period (1350, 1760), Transitioning Period (1760, 1870), Modern Age (1870, 1945), and Contemporary Period (1945, 2020).
One conceptual challenge was to map actors into eras. Many actors fit to more than one period's timeline. We opted for a single era membership approach since it is more intuitive and easier to conceive, and reduces the complexity of analysis and computations, while grasping the essential membership to an era of each scholar. It also offers adequate results when we compare eras, since it avoids redundancy.
In order to assign a single era to an actor, we used the following method: We assumed that scholars would not be active in the first 20 years of their lives. Therefore, we calculated the mid point of the scholar's lifespan ignoring the first 20 years of their age, then we assigned the era in which this mid point occurs.
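The assignment rule can be rendered as a short function. This is a minimal sketch of the method described above; how the exact boundary years are resolved (here: a mid point on a boundary falls into the later era) is our assumption:

```python
# Assign each scholar to a single era via the mid point of their
# lifespan, ignoring the first 20 years (following the method in the
# text; era boundaries follow Osterhammel's periodization).

ERAS = [                      # (name, upper bound year, exclusive)
    ("Antiquity", 600),
    ("Middle Ages", 1350),
    ("Early Modern", 1760),
    ("Transitioning", 1870),
    ("Modern Age", 1945),
    ("Contemporary", 2021),
]

def assign_era(birth, death):
    """Era in which the 'active' mid point of the lifespan falls."""
    active_start = birth + 20          # assume no activity before age 20
    mid = (active_start + death) / 2
    for name, upper in ERAS:
        if mid < upper:
            return name
    return ERAS[-1][0]

# e.g. Avicenna (980-1037): mid point (1000 + 1037) / 2 = 1018.5
assign_era(980, 1037)   # -> "Middle Ages"
```

The same function reproduces, for instance, Karl Marx (1818-1883) landing in the Transitioning period rather than the Modern Age.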
After this initial assignment process, we verified the global validity of the assignments by counting the number of influence links from one era to another. We observed some reverse links between eras, i.e., influence relations from an actor in a recent era towards an actor assigned to an older era. These anomalous cases (about 200) were basically due to:
• Errors in dates: some dates were stated in the Hijri calendar instead of the Gregorian calendar, and some dates were BC but missing the negative sign.
• Errors in the direction of the relationship: source and target actors were wrongly switched.
• Inappropriate era-actor assignments.
The anomalies due to errors were corrected manually. The cases of inappropriate assignment were technically not erroneous; they usually occurred when the influencer lived much longer than the influenced, elevating the influencer's period into a more recent one. We solved this by iteratively re-assigning either the influencer backward to the era of the influenced, or the influenced forward to the era of the influencer. As a result, each actor is assigned to exactly one era, such that no reverse links between eras exist. The final cleaned data set consists of 22,485 influence links among 12,506 actors.
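The iterative re-assignment can be sketched as follows. For simplicity, this sketch always applies one of the two repair options named above (pulling the influencer backward); in the actual cleaning, the choice between moving the influencer or the influenced was a per-case decision:

```python
# Sketch of the iterative repair of era-reversed links: whenever an
# influence edge points from a later era back to an earlier one, pull
# the influencer back to the era of the influenced. Terminates because
# era ranks can only decrease.

ERA_ORDER = ["A", "ML", "EM", "T", "MR", "C"]
RANK = {e: i for i, e in enumerate(ERA_ORDER)}

def repair_reverse_links(era_of, edges):
    """era_of: dict actor -> era; edges: list of (influencer, influenced)."""
    changed = True
    while changed:
        changed = False
        for src, dst in edges:
            if RANK[era_of[src]] > RANK[era_of[dst]]:
                era_of[src] = era_of[dst]   # move influencer backward
                changed = True
    return era_of
```

After the loop, no edge points from a later era to an earlier one, which is the invariant the analysis below relies on.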
IV. ANALYSIS

Figure 1 shows each era's continuous density of scholars based on their lifespan.
With scholars embedded in their respective eras, the entire influence network can be time-sliced: we projected it into several partial networks based on the source era (of the influencer) and the target era (of the influenced scholar). When the source and target eras are the same, we call the partial network a within-era influence network; when they differ, we call it an inter-era influence network. There are no reverse links from a later era to a previous one due to the pre-processing.
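The time-slicing amounts to grouping edges by their (source era, target era) pair, which can be sketched as follows (the actor names and edges here are toy data, not the real data set):

```python
# Time-slice the full influence network into within-era and inter-era
# partial networks, keyed by (source era, target era).
from collections import defaultdict

def time_slice(era_of, edges):
    partial = defaultdict(list)
    for src, dst in edges:
        partial[(era_of[src], era_of[dst])].append((src, dst))
    return partial

era_of = {"Plato": "A", "Aristotle": "A", "Avicenna": "ML"}
edges = [("Plato", "Aristotle"), ("Aristotle", "Avicenna")]
slices = time_slice(era_of, edges)
# slices[("A", "A")]  -> within-era network of Antiquity
# slices[("A", "ML")] -> inter-era network Antiquity -> Middle Ages
```

With six eras and no reverse links, this yields at most the 6 within-era and 15 inter-era networks analysed below.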
After time-slicing the whole network, we obtained six within-era networks corresponding to the six eras, and 15 inter-era networks corresponding to all chronologically ordered, but not necessarily consecutive, pairs of different eras. Additionally, we constructed six accumulated-era influence networks of all scholars living up to and including the target era. Figure 2 shows the proportion of influence links among all pairs of eras. There, we can already make two major observations for inter- and within-era influence relations. For one, the highest fraction of influence received by scholars of each era comes from its own era. This means that the internal impact of any era is in general higher than its external impact. In absolute numbers, the vast majority of links occur within the Contemporary era, followed by links from the Modern Age to the Contemporary period, and within the Modern Age, which is clearly owed to the increased number of scholars in these periods. The inter-era influence of each period is strongest on its consecutive period. As our earliest period, Antiquity receives influence links only from itself, whereas the influence received in the Middle Ages is 82% internal and 18% from Antiquity. Subsequently, the share of within-era influence shrinks throughout the consecutive periods, but still remains the biggest influence. Noteworthy here is the high proportion of influences of Antiquity on the Early Modern Period, which represents their increased reception during the Renaissance. However, the proportionately many links of Antiquity to the Middle Ages reassert the shift in historical research that the Renaissance did not "re-discover" Antiquity, but that it was received in the Middle Ages as well [4, p. 3-4].
A. Within-Eras Influence Networks
In the following, we analyse the six within-era influence networks, which represent the internal impact of an era. We extracted the following metrics, as shown in Table I:
• Number of nodes N, number of edges E, and density D.
• Average out-degree (= average in-degree, due to the properties of a directed graph).
• Maximum in-degree, maximum out-degree, and maximum degree.
• WCC: number of weakly connected components.
• LWCC: size of the largest weakly connected component.
• SCC: number of strongly connected components (with more than one node).
• Reciprocity and transitivity.
We included the ratio N/A in Table I to convey that the number of nodes N in a within-era network can be less than the number of actors A of that era. This is owed to the fact that not all scholars of an era necessarily participated in its within-era influence network; some scholars influenced or were influenced by actors of different eras only. However, around 80% of the scholars in each era were active in these within-era networks. The highest value of 86% in the Middle Ages reflects their relative self-containment as an era.
Over all eras, the number of nodes and edges steadily increased, while the density of the networks decreased. On average, the out-degree is relatively stable around 1.25, with the highest value of 1.5 occurring in Antiquity and the lowest of 1.14 in the Early Modern period. When we compare the evolution of the maximum out-degree over time, we find that the expected continuous increase did not always hold, due to two exceptionally high observations in Antiquity and the Modern Age. Mutual ties among contemporaries were in general very rare. We can report none in Antiquity, and only one in the Middle Ages, between Avicenna and Al-Biruni. In the Early Modern period, eight mutual relations were observed, including e.g. Gottfried Leibniz (1646-1716) and Daniel Bernoulli (1700-1782), and 13 mutual relations in the Transitioning period, such as Friedrich Engels (1820-1895) and Karl Marx (1818-1883), or Johann Goethe (1749-1832) and Friedrich Schelling (1775-1854). In the Modern Age, the number of mutual ties increased to 51 (e.g. Jean-Paul Sartre and Simone de Beauvoir (1908-1986)), and to 54 in the Contemporary period. Who was most influential on their contemporaries? Table II lists the top five scholars per era based on their out-degree in the within-era influence networks.
The highest within-era out-degree over all times was achieved by Friedrich Nietzsche (1844-1900) of the Modern Age with 68 outgoing influence links to other scholars of his era.
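The per-network metrics of Table I follow standard definitions for directed graphs. A pure-Python sketch (not the authors' actual tooling) of the simplest of them:

```python
# Standard directed-graph metrics: density = E / (N * (N - 1)),
# average out-degree = E / N, reciprocity = fraction of edges whose
# reverse edge also exists.

def directed_metrics(edges):
    nodes = {n for e in edges for n in e}
    N, E = len(nodes), len(edges)
    edge_set = set(edges)
    return {
        "N": N,
        "E": E,
        "density": E / (N * (N - 1)) if N > 1 else 0.0,
        "avg_out_degree": E / N if N else 0.0,
        "reciprocity": sum((b, a) in edge_set for a, b in edge_set) / E
                       if E else 0.0,
    }

m = directed_metrics([("a", "b"), ("b", "a"), ("a", "c")])
# N = 3, E = 3, density = 3/6 = 0.5, reciprocity = 2/3
```

Applying such a function to each time slice produced per era yields one column of Table I.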
B. Inter-Era Influence Networks
Inter-era influence networks are partial networks where the source era precedes the target era. We interpret these networks as bipartite, as the actors belong to two different groups, the source era and the target era; therefore, only edges between the two node sets are possible. Table III shows the metrics for these inter-era influence networks. In general, each era has the most links with its consecutive era, and additionally with the Contemporary period's scholars. The exception to this rule is Antiquity, which saw its first peak with the Early Modern period, relating to Renaissance interests. The densities again decrease through the era combinations, except for those periods that have fewer links to other periods, such as the Middle Ages to the Transitioning period.
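Because these networks are bipartite, their density is computed over the possible source-to-target edges only, i.e. D = E / (Ns · Nt), which reproduces the D column of Table III:

```python
# Density of an inter-era (bipartite) network: only edges from the
# source-era node set (Ns nodes) to the target-era node set (Nt nodes)
# are possible.

def bipartite_density(E, Ns, Nt):
    return E / (Ns * Nt)

round(bipartite_density(87, 38, 44), 3)    # A -> ML row: 0.052
round(bipartite_density(432, 159, 233), 3) # EM -> T row: 0.012
```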
Which scholar influenced a successive era the most? Table IV shows the scholars with the highest out-degrees in the inter-era networks. Noteworthy here is Karl Marx, who had the highest out-degree over all times, from the Transitioning period to the Contemporary age, followed by the Modern philosophers Friedrich Nietzsche and Martin Heidegger with their influence on Contemporary scholars.
C. Accumulative Influence Networks
For each era, we constructed an accumulative influence network of all influence links among scholars who lived up to and including that era. We performed essential social network analysis on these six accumulated-era networks, which combine the internal and external impact of eras. The final network of the Contemporary age is the same as the complete network over all periods [6]. Fig. 4 shows the best connected scholars of each era that influence at least 10 others in the final accumulated network. We clearly see two joined networks of hubs. The right part is very diverse in terms of including different eras and different fields, such as philosophy, theology, and science scholars. The left part consists mainly of writers since the long 19th century (1789-1914); Alexander Pushkin (1799-1837) is one of the oldest nodes there. This writers' network shows little diversity with respect to other historical periods and consists mostly of Modern and Contemporary age writers. That the writers are less connected to the philosophy, theology, and science scholars shows that these groups referenced themselves more consistently. Table V shows the metrics of the accumulated-era networks. Regarding how node degrees change over consecutively accumulated eras, we observe that in all eras the maximum out-degree is greater than the maximum in-degree. Moreover, these maximum degrees continuously increase over the eras, in contrast to the within-era networks. The average out-degree changes slightly over time, taking its lowest value of 1.45 in the Middle Ages and its highest value of 1.8 in the Contemporary age. Noteworthy is the drastic collapse of the largest weak component in the Early Modern period, after which it rose steadily again.
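Constructing the accumulated networks amounts to filtering the full edge list by era rank (toy data below; after the pre-processing above, no edge can point backward in time, so the target-era check is redundant but kept for clarity):

```python
# Build the six accumulated-era networks: for each era, keep all
# influence links among scholars assigned to that era or any earlier one.

ERA_ORDER = ["A", "ML", "EM", "T", "MR", "C"]
RANK = {e: i for i, e in enumerate(ERA_ORDER)}

def accumulated_networks(era_of, edges):
    acc = {}
    for i, era in enumerate(ERA_ORDER):
        acc[era] = [(s, t) for s, t in edges
                    if RANK[era_of[s]] <= i and RANK[era_of[t]] <= i]
    return acc

era_of = {"plato": "A", "aristotle": "A", "avicenna": "ML"}
edges = [("plato", "aristotle"), ("aristotle", "avicenna")]
acc = accumulated_networks(era_of, edges)
# acc["A"] holds only the Antiquity-internal link;
# acc["C"] equals the complete network over all periods.
```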
Who was the most influential intellectual in an era? Figure 5 shows the evolution of the ten most influential scholars in the complete network based on their out-degree, progressing through the accumulative networks. The top two ranks of the most prolific scholars were consistently taken by the Antique philosophers Plato and Aristotle (who among his contemporaries was only in rank 6). More recent scholars came in third rank in the Middle Ages (Avicenna), in the Early Modern period (Ibn Tufail, John Locke, René Descartes), and in the Transitioning period (John Locke, Johann Goethe). This changed in the Modern Age, when the Transitioning period scholars Immanuel Kant and Hegel took the first ranks, while Aristotle still remained in the top five. The highest out-degree over all times is observed in the Contemporary age, where Karl Marx had 158 outgoing influence links to other scholars of all eras, followed by Nietzsche, Hegel, and Kant.
V. PATTERNS OF INFLUENCE OVER ERAS
In this section, we study the influence patterns of scholars over eras. We construct influence signatures based on how much on average a scholar influenced an era, and which patterns of directed influences characterize an era.
1) Influence Power of Scholars: For each scholar, we construct their influence signature as the sequence of their influence links towards each era, starting from their own. For example, the influence signature of Aristotle is [10, 12, 19, 11, 16, 46], meaning he had 10 influence links within Antiquity, 12 links towards the Middle Ages, and so on. Using these signatures, we define the longitudinal influence power of a scholar as the average of their influence signature. A scholar has a high influence power when they have (1) a high number of influence links (2) over all or many eras. In contrast, having few influence links over several eras, or many links over few eras, yields a low value of this measure. For example, with an average of around 19, Aristotle and Shakespeare had similar influence powers. In absolute numbers, Aristotle had almost twice as many influence links as Shakespeare (114 to 73, respectively), but while Aristotle influenced all 6 eras and Shakespeare only 4, the ratio of links per era decreased for Aristotle, resulting in their similar influence powers. This measure provides an indicator of the influence power of an intellectual throughout history, combining both the intensity and the diversity of influence.
It also allows us to compare scholars from different eras. Table VI shows the top 5 scholars based on the longitudinal influence power. Here, Aristotle, Thomas Aquinas, William Shakespeare, Karl Marx, Friedrich Nietzsche, and the writer Vladimir Nabokov are identified by their influence power as the most influential intellectuals of their respective periods. The highest longitudinal influence powers over all times had Nietzsche (73), followed by Nabokov (58) and Marx (52).
2) Influence Patterns: Which directed influences were most common in an era? We derive the influence patterns of eras by replacing any non-zero entry of the scholars' influence signatures by X, and aggregating all occurrences of each pattern for each era. We thus ignore the actual values of influence (intensity), but keep the temporal effect (diversity). For example, the influence pattern [X, 0, · · · , 0] means that the scholarly influence goes to the first (own) era only, with no influence on other eras. The pattern [X, X, · · · , X] signifies that the influence is distributed over all applicable eras, regardless of the actual values. Table VII gives the top patterns of each era, with each pattern's frequency of occurrence with regard to the respective era.
Era            Pattern (own era first)   Frequency
Antiquity      [X,0,0,0,0,0]             43%
               [0,0,0,0,0,X]              8%
               [0,X,0,0,0,0]              7%
               [0,0,X,0,0,0]              7%
Middle Ages    [X,0,0,0,0]               56%
               [0,X,0,0,0]                9%
               [X,X,0,0,0]                7%
               [0,0,0,0,X]                6%
Early Modern   [X,0,0,0]                 51%
               [0,X,0,0]                 13%
               [0,0,0,X]                  7%
               [X,X,X,X]                  7%
Transition     [X,0,0]                   35%
               [0,X,0]                   29%
               [X,X,X]                   11%
               [X,X,0]                    9%
               [0,0,X]                    8%
               [0,X,X]                    7%
Modern Age     [0,X]                     38.8%
               [X,0]                     36.7%
               [X,X]                     24.5%
Contemporary   [X]                       100%
(Patterns list the eras from the scholar's own era, left, to the Contemporary period, right.)
For example, for the Middle Ages the most frequent pattern is [X, 0, 0, 0, 0], which indicates that 56% of its scholars only influenced contemporaries, with no influence on other eras. Over all eras, the most common pattern is that of within-era influence, followed by influence on the consecutive period. The exception to this rule is the Modern period, which has this order reversed, with a higher influence on the Contemporary period than on its own. From the Early Modern period on, the pattern of influencing all successive eras including one's own becomes more frequent (from 7% on), rising in each successive period.
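The three quantities defined in this section can be computed directly from a scholar's per-era link counts. A minimal sketch following the definitions above:

```python
# Influence signature, longitudinal influence power, and influence
# pattern of a scholar. The signature starts at the scholar's own era;
# the pattern keeps only which eras were influenced at all ('X' vs '0').

def influence_power(signature):
    """Average of the influence signature."""
    return sum(signature) / len(signature)

def influence_pattern(signature):
    """Binarized signature: intensity dropped, diversity kept."""
    return ["X" if v > 0 else "0" for v in signature]

aristotle = [10, 12, 19, 11, 16, 46]
influence_power(aristotle)     # 114 / 6 = 19.0
influence_pattern(aristotle)   # ['X', 'X', 'X', 'X', 'X', 'X']
```

Aggregating `influence_pattern` outputs per era and counting their frequencies yields Table VII.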
VI. BROKERAGE ROLE
Which roles did scholars play with regard to their influence on others? We examine roles following the brokerage approach of Gould and Fernandez [9], analyzing non-transitive triads in which a node A has a tie to node B, and B has a tie to node C, but there is no tie between A and C. In these triads, B is thought to play a structural role called a broker.
The possible roles are shown in Figure 6, adapted from the work of Gould and Fernandez [9] and Everett and Borgatti [3].1 This allows us to consider to what extent a node's importance is based on joining two nodes that are members of the node's own era, or on joining nodes outside their group. We interpret nodal membership in groups as membership in eras. In Table VIII, we analyse the brokerage roles described above for each period. Over all eras, 23% of all scholars on average had at least one of these brokerage roles. Since the Early Modern period, the share of scholars with exactly one brokerage role has stayed very stable at about 12-13%, and is slightly higher in Antiquity and the Middle Ages. Both the first and the last of the periods could have a maximum of two different brokerage roles, because pre-processing did not allow reverse links: Representative and Liaison brokerage was impossible for the Contemporary period, as was Liaison and Gatekeeper brokerage for Antiquity. Coordinator and Gatekeeper roles represent a scholar's importance within their own period; Gatekeepers received inter-period influences and in turn influenced their contemporaries. From the Middle Ages to the Modern Age, the number of scholars with all four brokerage roles steadily increased. Noteworthy here are Thomas Aquinas (Middle Ages), Gottfried Leibniz (Early Modern period), Georg Hegel (Transitioning period), and Martin Heidegger (Modern Age), who appeared most often in super brokerage roles: they combined the Liaison, Gatekeeper, Representative, and Coordinator roles alike in their respective periods. Surprisingly though, scholars with exactly three brokerage roles were roughly ten times less common than those with all four (compare Table VIII).
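With groups interpreted as eras, the role of the middle node in an open triad reduces to comparing the three era labels. A sketch of the Gould-Fernandez classification used here:

```python
# Gould-Fernandez brokerage role of the middle node B in an open triad
# A -> B -> C (no direct A -> C tie), with group membership = era.
# The Consultant role (A and C in one era, B in another) cannot occur
# in our network because reverse era links were removed in preprocessing.

def brokerage_role(era_a, era_b, era_c):
    if era_a == era_b == era_c:
        return "Coordinator"      # all three in B's own era
    if era_a != era_b and era_b == era_c:
        return "Gatekeeper"       # influenced from outside, passes it on inside
    if era_a == era_b and era_b != era_c:
        return "Representative"   # within-era influence spreading to a later era
    if era_a == era_c and era_a != era_b:
        return "Consultant"       # impossible here, kept for completeness
    return "Liaison"              # all three eras differ
```

Counting, per scholar, in how many distinct roles they appear as the middle node yields the 1-to-4 role counts of Table VIII.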
VII. CONCLUSIONS
In this paper, we incorporated a longitudinal aspect into the study of the influence networks of scholars. First, we extracted their social network of influence from YAGO, a pioneering data source of Linked Open Data. Rigorous pre-processing resulted in a network of 12,506 intellectuals with 22,485 edges, including information on each scholar's era. We opted for a global approach to the periodization of history, resulting in six consecutive eras to study.
Our main question was whether we could identify patterns of influence, and their change over time. Therefore, we performed essential network analysis on every time-sliced projection of the entire network: the within-era, inter-era, and accumulated-era influence networks. We investigated their social network metrics, degree distribution, and connectivity. An influence pattern throughout all eras was that the internal impact of any era was higher than its external impact. The vast majority of scholars influenced scholars of their own period (= within-era influence), with a relatively stable average out-degree. There were only few instances of reciprocity. When accumulating eras, the maximum degrees drastically increased; however, over all eras, the maximum out-degree stayed greater than the maximum in-degree. In the inter-era influence networks, each era influenced its consecutive one the most, and additionally the Contemporary period. An exception to this rule was a spike in the absolute number of links of antique influences on the Early Modern period, representing the increased reception of antique scholars during the Renaissance. However, proportionally, Antiquity's influence on Early Modernity was as high as on the Middle Ages, which reasserts the shift in historical research that the Renaissance thinkers did not "re-discover" Antiquity, but that medieval scholars also received it [4, p. 3-4].
With a longitudinal perspective, we can add a more pronounced view of who the most influential intellectuals were. The scholar with the highest out-degree on contemporaries over all periods (= within-era) was the Modern Age scholar Friedrich Nietzsche. Plato in Antiquity, Avicenna in the Middle Ages, John Locke in the Early Modern period, Johann Goethe in the Transitioning period, and Vladimir Nabokov in the Contemporary period were the most influential on the contemporaries of their respective periods.
When accumulating eras, the most influential intellectuals of an era change: there, Plato was the most influential for Antiquity and the Middle Ages, Aristotle for the Early Modern and Transitioning periods, and Immanuel Kant for the Modern Age. In the Contemporary period, and therefore for the complete network of intellectuals, it was Karl Marx.
In the inter-era network analysis, the Transitioning period scholar Karl Marx had the highest out-degree over all times, towards the Contemporary age. Second over all times were the Modern intellectuals Friedrich Nietzsche and Martin Heidegger with their influence on the Contemporary period.
We constructed the longitudinal influence power of intellectuals as the average of their influences on eras, which favours consistency of influence. Here, again Aristotle, Thomas Aquinas, William Shakespeare, Karl Marx, Friedrich Nietzsche, and Vladimir Nabokov were the most consistently influential intellectuals of their respective periods; the highest influence powers had Nietzsche, Nabokov, and Marx.
In terms of knowledge brokering, we identified Coordinator, Gatekeeper, Representative, and Liaison knowledge brokers, whom we interpreted as passing influence between and within eras. The scholars with all four different brokerage roles were the medieval scholar Thomas Aquinas, the Early Modern polymath Gottfried Leibniz, Georg Hegel of the Transitioning period, and the Modern philosopher Martin Heidegger.
This study of longitudinal patterns of influence is suited to further the insights into the interconnections of influence among thinkers and the dynamics of eras alike. Therefore, we plan to study the evolution of communities in these accumulated networks in future work. In addition, we would like to compare this YAGO network of intellectual influence with a more detailed network of scholars based on the main books on intellectual history, in order to establish their differences and gain insights into this field.
Fig. 1. Number of actors alive in each year based on their assigned eras.
Fig. 2. Percentage of received influences in each era.
Fig. 3. Weakly connected components in within-era influence networks.

Figure 3 shows the number of weakly connected components (WCCs) in the within-era networks of each era, and the relative size of the largest ones w.r.t. the whole corresponding network. The number of WCCs increased gradually over the consecutive eras. In general, the networks consisted of one giant component, which encompassed the majority of nodes, while the remaining components were relatively small. This was particularly pronounced in Antiquity and the Middle Ages, where the giant component constituted 82% and 77% of the nodes, respectively.
Fig. 4. Network of the most influential actors with at least 10 out-going influences. Node size = proximity prestige, node color = era; links within an era are colored with the color of the era, the other links are gray.
Fig. 5. Top 10 of the most influential intellectuals of the complete network based on their out-degree, and their progression in the accumulated-era networks.
Fig. 6. Brokerage roles of the top right node of each triad, adapted from Gould and Fernandez (1989) [9].
TABLE I
METRICS OF WITHIN-ERA NETWORKS

Era              A      ML     EM     T      MR     C
N                219    303    610    761    2102   6081
N/A              82%    86%    81%    70%    73%    85%
E                327    387    694    927    2806   7960
Density          .0068  .0042  .0019  .0016  .0006  .0002
avg. out-degree  1.49   1.28   1.14   1.22   1.33   1.31
max in-degree    12     9      17     27     21     26
max out-degree   20     16     23     32     68     58
max degree       32     20     32     41     73     58
WCC              11     21     94     108    208    582
Largest WCC      179    233    245    436    1495   4379
                 82%    77%    40%    57%    71%    72%
SCC              0      2      6      8      31     38
Reciprocity      0      0.005  0.023  0.028  0.036  0.014
Transitivity     0.064  0.066  0.071  0.042  0.029  0.017

The lowest value, 70% in the Transitioning period, is owed to its high out-going influences.
TABLE II
TOP 5 ACTORS, PER ERA, BASED ON OUT-DEGREE IN WITHIN-ERA INFLUENCE NETWORKS

Antiquity           Middle Ages          Early Modern
Plato          20   Avicenna        16   John Locke        23
Aesop          13   Muhammad        11   René Descartes    22
Pythagoras     10   Al-Ghazali      11   Isaac Newton      15
Plotinus       10   Banū Mūsā        8   Hugo Grotius      13
Euhemerus      10   J. S. Eriugena   8   Leibniz           11

Transition          Modern               Contemporary
Goethe         32   Nietzsche       68   Vladimir Nabokov  58
Hegel          29   Jules Verne     35   Friedrich Hayek   50
Lord Byron     24   Henri Bergson   35   Richard Pryor     50
Immanuel Kant  22   Leo Tolstoy     24   Jacques Derrida   48
von Schelling  17   Edmund Husserl  22   Michel Foucault   47

The second largest components were at 6% and 3% of the nodes, respectively. The Early Modern period constitutes an exception to this giant-component rule: the largest component was at 40% only, and the second largest at 16%. Looking at their composition, the first consisted of natural scientists, mathematicians, and philosophers, such as Descartes, Newton, and Leibniz, while the smaller one comprised artists and painters, such as Rembrandt and Raphael. The single-giant-component phenomenon appeared again in subsequent eras. For instance, in the Transitioning period, there were 108 WCCs, where the largest two incorporated 57% and 1.3% of the nodes. In the Modern and Contemporary age, the largest components comprised about 70% of the nodes.
TABLE III
METRICS OF INTER-ERA INFLUENCE NETWORKS

source →                                   in-degree    out-degree
target      N      E      Ns     Nt     D      avg  max   avg  max
A → ML      82     87     38     44    .052   1.98   7   2.29   12
A → EM      117    145    46     71    .044   2.04   7   3.15   19
A → T       66     66     29     37    .062   1.78   5   2.28   11
A → MR      101    114    42     59    .046   1.93  11   2.71   23
A → C       169    177    49     120   .030   1.47   6   3.61   46
ML → EM     149    144    66     83    .026   1.73   9   2.18   21
ML → T      52     36     22     30    .055   1.20   5   1.64    6
ML → MR     77     62     27     50    .046   1.24   4   2.30   12
ML → C      146    121    50     96    .025   1.26   6   2.42   34
EM → T      392    432    159    233   .012   1.85  16   2.72   24
EM → MR     262    269    101    161   .016   1.67  13   2.66   15
EM → C      437    432    125    312   .011   1.38   7   3.46   35
T → MR      1,111  1,373  436    675   .005   2.03  19   3.15   53
T → C       888    1,041  212    676   .007   1.54   9   4.91  112
MR → C      3,817  4,885  1,271  2,546 .002   1.92  17   3.84   78
TABLE IV
TOP SCHOLARS WITH HIGHEST OUT-DEGREE IN THE INTER-ERA NETWORKS

s → t       First Rank               Second Rank
A → ML      Aristotle           12   Augustine of Hippo    6
A → EM      Aristotle           19   Plato                14
A → T       Aristotle           11   Plato                 9
A → MR      Plato               23   Aristotle            16
A → C       Aristotle           46   Plato                32
ML → EM     Ibn Tufail          21   Thomas Aquinas        9
ML → T      Petrarch             6   Dante Alighieri       5
ML → MR     Dante Alighieri     12   Thomas Aquinas       11
ML → C      Thomas Aquinas      34   Dante Alighieri      10
EM → T      J. J. Rousseau      24   Shakespeare          21
EM → MR     Baruch Spinoza      15   Shakespeare          15
EM → C      Shakespeare         35   David Hume           25
T → MR      Immanuel Kant       53   Karl Marx            43
T → C       Karl Marx          112   Hegel                67
MR → C      Nietzsche           78   Martin Heidegger     73
TABLE V
METRICS OF ACCUMULATIVE-ERA NETWORKS

Era              A      ML     EM     T      MR     C
N                219    552    1,227  2,141  4,697  12,506
E                327    801    1,784  3,245  7,869  22,485
Nsrc             54     155    388    677    1,501  3,890
Ninner           71     178    353    597    1,331  3,080
Nsink            94     219    486    867    1,865  5,536
Density          .0068  .0026  .0012  .0007  .0004  .0001
avg. out-degree  1.49   1.45   1.45   1.5    1.68   1.80
max in-degree    12     16     26     38     48     48
max out-degree   20     24     41     52     75     158
max degree       32     36     50     60     116    196
WCC              11     30     110    211    390    817
Largest WCC      179    441    797    1,513  3,550  10,192
                 82%    80%    65%    71%    76%    81%
SCC              0      2      8      16     47     85
Reciprocity      0      0.002  0.010  0.014  0.019  0.011
Transitivity     0.064  0.067  0.064  0.056  0.039  0.021
TABLE VI
TOP 5 ACTORS BASED ON THE LONGITUDINAL INFLUENCE POWER

Antiquity                  Middle Ages                Early Modern
Aristotle           19.0   Thomas Aquinas      12.6   William Shakespeare  18.2
Plato               17.0   Dante Alighieri      6.0   Baruch Spinoza       14.8
Augustine of Hippo   6.0   Ibn Tufail           5.8   René Descartes       14.0
Plotinus             4.7   Avicenna             4.6   John Locke           13.0
Heraclitus           4.2   Al-Ghazali           3.6   David Hume           12.5

Transition                 Modern Age                 Contemporary
Karl Marx           52.6   Friedrich Nietzsche 73.0   Vladimir Nabokov     58.0
Hegel               45.7   Martin Heidegger    45.0   Friedrich Hayek      50.0
Immanuel Kant       45.0   Ludwig Wittgenstein 40.0   Richard Pryor        50.0
Søren Kierkegaard   25.3   James Joyce         39.5   Jacques Derrida      48.0
Fyodor Dostoyevsky  23.0   Sigmund Freud       32.0   Michel Foucault      47.0
TABLE VII
TOP FREQUENT INFLUENCE PATTERNS OF ERAS (FROM LEFT TO RIGHT)
The scholars with the highest scores for the Gatekeeper role in their respective periods were the medieval polymath Avicenna (980-1037), the Early Modern philosopher René Descartes (1596-1650), as well as Immanuel Kant (1724-1804), Friedrich Nietzsche (1844-1900), and Michel Foucault (1926-1984). The highest scores as Coordinators had Plato, again Avicenna, John Locke (1632-1704), Johann Goethe (1749-1832), again Friedrich Nietzsche, and the contemporary horror writer Stephen King (born 1947). As Coordinators, these scholars represented a within-period influence. Liaison brokers have the longest time frame of influence, which spans three successive periods; the highest scores as Liaisons had the Dominican friar Thomas Aquinas (1225-1274), the Early Modern philosopher Baruch Spinoza (1632-1677), and again Immanuel Kant and Friedrich Nietzsche. Representatives take the reverse role of a Gatekeeper: they have a within-era influence that spreads to a successive era. Here, Plato, Thomas Aquinas, David Hume (1711-1776), Karl Marx (1818-1883), and Martin Heidegger (1889-1976) stood out.
TABLE VIII
NUMBER AND FRACTION OF ACTORS TAKING 1, 2, 3 OR 4 ROLES

No. of Roles    1           2           3          4
Antiquity       55 (21%)    30 (11%)    -          -
Middle Ages     62 (18%)    32 (9%)     -          12 (3%)
Early Modern    101 (13%)   51 (7%)     2 (0.3%)   38 (5%)
Transition      136 (12%)   87 (8%)     6 (0.8%)   70 (6%)
Modern Age      363 (13%)   269 (9%)    5 (0.7%)   200 (7%)
Contemporary    879 (12%)   536 (7%)    -          -
overall         1,596       1,005       13         320
                12.8%       8.0%        0.1%       2.6%
1 The fifth brokerage role, the Consultant, where A and C belong to one period and B belongs to another, is not possible in our network, as we did not allow reverse influences of a more recent period onto a previous one by pre-processing.
REFERENCES
[1] Vladimir Batagelj, Patrick Doreian, Anuska Ferligoj, and Natasa Kejzar. Understanding Large Temporal Networks and Spatial Networks: Exploration, Pattern Searching, Visualization and Network Evolution. Wiley Series in Computational and Quantitative Social Science. John Wiley & Sons, Ltd, first edition, 2014.
[2] Dan Edelstein, Paula Findlen, Giovanna Ceserani, Caroline Winterer, and Nicole Coleman. Historical Research in a Digital Age: Reflections from the Mapping the Republic of Letters Project. The American Historical Review, 122(2):400-424, April 2017.
[3] Martin Everett and Stephen Borgatti. Categorical Attribute based Centrality: E-I and G-F Centrality. Social Networks, 34:562-569, 2012.
[4] Jane Fejfer, Tobias Fischer-Hansen, and Annette Rathje. Introduction. In Jane Fejfer, Tobias Fischer-Hansen, and Annette Rathje, editors, The Rediscovery of Antiquity: The Role of the Artist, pages 11-20. Museum Tusculanum Press, Copenhagen, 2003.
[5] Stefanie Gänger and Su Lin Lewis. Forum: A world of ideas: New pathways in global intellectual history, c.1880-1930. Modern Intellectual History, 10(2), 2013.
[6] Raji Ghawi, Cindarella Petz, and Jürgen Pfeffer. 'On the Shoulders of Giants', Analysis of a Social Network of Intellectual Influence. In Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS), Granada, Spain, pages 248-255, 2019.
[7] Raji Ghawi and Jürgen Pfeffer. Mining Social Networks from Linked Open Data. In Dominik Endres, Mehwish Alam, and Diana Şotropa, editors, Graph-Based Representation and Reasoning, pages 221-229, Cham, 2019. Springer International Publishing.
[8] Peter E. Gordon. What is Intellectual History? A Frankly Partisan Introduction to a Frequently Misunderstood Field. 2013.
[9] Roger V. Gould and Roberto M. Fernandez. Structures of Mediation: A Formal Approach to Brokerage in Transaction Networks. Sociological Methodology, 19:89-126, 1989.
[10] Anthony Grafton. Worlds Made by Words: Scholarship and Community in the Modern West. Harvard University Press, Cambridge, MA, 2009.
[11] Knud Haakonssen and Richard Whatmore. Global possibilities in intellectual history: a note on practice. Global Intellectual History, 2(1):18-29, 2017.
[12] Marina Hennig, Ulrik Brandes, Jürgen Pfeffer, and Ines Mergel. Studying Social Networks: A Guide to Empirical Research. Campus Verlag, 2012.
[13] Petter Holme and Jari Saramäki. A Map of Approaches to Temporal Networks. In Petter Holme and Jari Saramäki, editors, Temporal Network Theory, Computational Social Sciences, pages 1-24. Springer International Publishing, Cham, 2019.
[14] Mark Huisman and Tom A. B. Snijders. Statistical Analysis of Longitudinal Network Data With Changing Composition. Sociological Methods & Research, 32(2):253-287, 2003.
YAGO3: A knowledge base from multilingual Wikipedias. Farzaneh Mahdisoltani, Joanna Biega, Fabian M Suchanek, CIDR 2015, Seventh Biennial Conference on Innovative Data Systems Research. Asilomar, CA, USAOnline ProceedingsFarzaneh Mahdisoltani, Joanna Biega, and Fabian M. Suchanek. YAGO3: A knowledge base from multilingual Wikipedias. In CIDR 2015, Seventh Biennial Conference on Innovative Data Systems Re- search, Asilomar, CA, USA, Online Proceedings, 2015.
A Life of Erwin Schrödinger. Walter Moore, Cambridge University PressWalter Moore. A Life of Erwin Schrödinger. Cambridge University Press, 1994.
Global Intellectual History. Samuel Moyn and Andrew SartoriNew YorkColumbia University PressSamuel Moyn and Andrew Sartori, editors. Global Intellectual History. Columbia University Press, New York, 2013.
The Acquaintance Process as a prototype of human interaction. M Theodore, Newcomb, Theodore M Newcomb. The Acquaintance Process as a prototype of human interaction. 1961.
über die Periodisierung der neueren Geschichte (Vortrag in der Geisteswissenschaftlichen Klasse am 29. Jürgen Osterhammel, Berichte und Abhandlungen. 10Jürgen Osterhammel.über die Periodisierung der neueren Geschichte (Vortrag in der Geisteswissenschaftlichen Klasse am 29. November 2002). volume 10 of Berichte und Abhandlungen, pages 45-64.
Social Network Analysis: a Powerful Strategy, also for the Information Sciences. Evelien Otte, Ronald Rousseau, Information Science. 286Evelien Otte and Ronald Rousseau. Social Network Analysis: a Powerful Strategy, also for the Information Sciences. Information Science, 28(6):441-453, 2002.
Sinndeutung und Periodisierung der Geschichte. Eine systematischeÜbersicht der Theorien und Auffassungen. Johan Hendrik , Jacob Van Der Pot, BrillLeiden Bosten KölnJohan Hendrik Jacob van der Pot. Sinndeutung und Periodisierung der Geschichte. Eine systematischeÜbersicht der Theorien und Auffassun- gen. Brill, Leiden Bosten Köln, 1999.
Introduction to Actor-Based Models for Network Dynamics. Tom Snijders, Gerhard G Bunt, Christian Steglich, Social Networks. 32Tom Snijders, Gerhard G. Bunt, and Christian Steglich. Introduction to Actor-Based Models for Network Dynamics. Social Networks, 32:44- 60, 01 2010.
Sanjay Subrahmanyam, Global Intellectual History Beyond Hegel and Marx. History and Theory. 54Sanjay Subrahmanyam. Global Intellectual History Beyond Hegel and Marx. History and Theory, 54(1):126-137, 2015.
Beyond the usual suspects: on intellectual networks in the early modern world. Sanjay Subrahmanyam, Global Intellectual History. 21Sanjay Subrahmanyam. Beyond the usual suspects: on intellectual networks in the early modern world. Global Intellectual History, 2(1):30-48, 2017.
Using Multi-Layered Networks to Disclose Books in the Republic of Letters. Ingeborg Van Vugt, Journal of Historical Network Research. 11Ingeborg van Vugt. Using Multi-Layered Networks to Disclose Books in the Republic of Letters. Journal of Historical Network Research, 1(1):25-51, October 2017.
Social Network Analysis: Methods and Applications. Structural analysis in the social sciences. Stanley Wasserman, Katherine Faust, Cambridge University Press8New York1 editionStanley Wasserman and Katherine Faust. Social Network Analysis: Methods and Applications. Structural analysis in the social sciences, 8. Cambridge University Press, New York, 1 edition, November 1994.
Publisher: Routledge eprint. Daniel Wickberg, Intellectual History vs. the Social History of Intellectuals. Rethinking History. 5Daniel Wickberg. Intellectual History vs. the Social History of Intel- lectuals. Rethinking History, 5(3):383-395, November 2001. Publisher: Routledge eprint.
| [] |
Engaging Teachers to Co-Design Integrated AI Curriculum for K-12 Classrooms

Jessica Van Brummelen (Massachusetts Institute of Technology, USA)
Phoebe Lin (Harvard University, USA)

DOI: 10.1145/3411764.3445377 · arXiv: 2009.11100

CCS Concepts: • Human-centered computing → Participatory design; User centered design
Additional Key Words and Phrases: Artificial intelligence, K-12 education, co-design workshop

Fig. 1. Representations of the four integrated curricula [6, 9, 20]. Left: The "exemplar" physics and AI curriculum. Mid-left: Social studies and AI curriculum. Mid-right: ESL and AI curriculum. Right: Literacy and AI curriculum for students with learning disabilities.

Abstract: Artificial Intelligence (AI) education is an increasingly popular topic area for K-12 teachers. However, little research has investigated how AI education can be designed to be more accessible to all learners. We organized co-design workshops with 15 K-12 teachers to identify opportunities to integrate AI education into core curriculum to leverage learners' interests. During the co-design workshops, teachers and researchers co-created lesson plans in which AI concepts were embedded into various core subjects. We found that K-12 teachers need additional scaffolding in the curriculum to facilitate ethics and data discussions, and value supports for learner engagement, collaboration, and reflection. We identify opportunities for researchers and teachers to collaborate to make AI education more accessible, and present an exemplar lesson plan that shows entry points for teaching AI in non-computing subjects. We also reflect on co-designing with K-12 teachers in a remote setting.
INTRODUCTION
Artificial intelligence (AI) education is becoming an increasingly popular subject in the eyes of educators due to the rapid integration of AI technologies in user-facing services and products [16,34,41]. Researchers have called for formal K-12 education to prioritize AI literacy and teach children to interact with AI using a critical lens [42]. The AI4K12 research community has also published guidelines for what AI concepts K-12 curriculum should cover, known as the Big AI Ideas, and calls for AI researchers to help teachers and students understand AI [35]. As children interact more with AI technologies, it is critical that they are able to recognize AI, understand how AI algorithms work, use those algorithms to solve problems meaningful to them, and evaluate the impact of AI technologies on society [5].
Teachers of all subjects should feel empowered to teach AI curriculum, yet teachers often feel they lack sufficient understanding to teach AI or the capacity to add more material on top of their existing curriculum [39]. Despite the proliferation of AI tools and curricula in response to the recent calls to action, few are widely implemented due to classroom challenges that prevent these curricula from being accessible [27]. In order to introduce new practices, researchers and developers should consider teachers' contexts and invest in additional supports that make AI resources accessible to them.
Similarly, AI as a discipline can span many other topics, such as government, journalism, and art [10,18,32]; therefore, AI should not be confined to computing subjects such as computer science or data science. Tools and curricula today often teach AI as an extension of computer science curricula or as standalone curricula that are difficult to adapt to other contexts [8,25,33]. Adapting those tools and curricula then becomes especially difficult for teachers of core subjects, including English, math, social studies, and science, who may not have any AI experience. The lack of integrated AI curricula in core subjects has become one of the barriers to exposing AI to students with little access to computing disciplines.
In this paper, we partner with K-12 teachers to design AI curriculum that is integrated with core subjects. We aim to empower all teachers to incorporate AI into their classrooms and leverage learners' interests for other subjects through a two-day co-design workshop with 15 teachers from different schools. We set out to understand what is necessary and valuable to K-12 teachers to effectively implement integrated AI curricula, and co-create lesson plans that address those needs and values. Specifically, our research questions are:
RQ1: How might we address K-12 teachers' values and considerations when designing AI curriculum? (Teaching needs)

RQ2: How might AI be integrated into core subject curriculum? (Integrated curriculum design)

To answer these research questions, we organized a multi-session workshop that spanned two days with fifteen teachers who teach various subjects. The first day of the workshop involved presentations and group discussions to level-set everyone's basic understanding of AI. Between the first and second day of the workshop, participants were asked to complete a brainstorming assignment in which they identified curriculum of their own to use as a potential base for an integrated AI curriculum. During the second day of the workshop, we split participants into three small groups to work together and design a lesson plan that integrates AI into a non-computing subject curriculum. The co-design process revealed that when teachers design curriculum, they consider four practical needs: evaluation, engagement, logistics, and collaboration. Furthermore, our analysis of the co-designed lesson plans showed opportunities for connections between AI and a core subject, with three points of integration: data, reflection, and scaffolding for ethics.
The contributions of this work are (1) identifying the values and needs of K-12 teachers teaching AI in the classroom and opportunities to address them, (2) showing an exemplar integrated AI curriculum as an output of the co-design session [6], and (3) reflecting on co-design sessions involving K-12 teachers in a remote setting to solicit design considerations of AI curriculum.
RELATED WORK
To the authors' knowledge, there are no papers describing a co-design process with teachers to integrate AI concepts into core curriculum, and few papers that purposefully integrate AI concepts into core curriculum. Other related research includes the development of AI education tools and curricula, as well as co-design of other course materials with teachers.
AI Education for K-12
Many K-12 AI tools and curricula exist as standalone products or extend computer science curriculum. Two widely-used AI teaching tools are Teachable Machine [9] and Machine Learning for Kids [20], which empower learners to develop classification models without needing to program. Other standalone AI teaching tools include Any-Cubes, which are toys to teach machine learning (ML) concepts [29]; Calypso for Cozmo, which is AI curriculum for a toy robot [37]; and extensions for MIT App Inventor, which enable students to develop AI-powered mobile apps [23]. Each of these tools could be integrated and taught in core classes; however, are presented as standalone AI tools.
In terms of K-12 AI education research involving instructors, most involve researchers rather than K-12 teachers as the instructors, and likely miss valuable expertise and feedback from professionals who have worked in the classroom. Nevertheless, some works involving K-12 teachers include an AI summer program for high school girls [38], an AI engineering course for high school students [31], and a STEM workshop for middle school students [28]. Each of these studies saw value in engaging with K-12 teachers.
Other works have also integrated core curriculum content into AI tools and curricula; however, most of these involve researchers as instructors and are often not in regular classroom settings. For example, one physical education curriculum involves students developing sports gesture classification models with researchers as facilitators [43]. Another science-based curriculum involves students teaching a conversational agent about animals, and observing it classify the animals into ecosystems with researchers as facilitators [21]. Although these works are state-of-the-art in K-12 AI education, it is unknown whether they are suitable for K-12 classrooms, since they have not been tested in regular classrooms and teachers were not involved in the design process.
In our literature review, we found one example of AI curriculum that was both integrated into a core course and designed or taught by K-12 teachers alongside researchers. This curriculum involved AI and science concepts, and was taught in Australian K-6 classrooms [17]. Although this example is insightful, much further research is needed to integrate teacher expertise and address widespread, integrated AI curriculum in K-12 classrooms [39].
Co-Design in Education
Although K-12 AI education has not yet benefited from tools and curriculum co-designed with K-12 teachers, other areas of education have. For example, in one study, researchers collaborated with teachers to develop new science curriculum materials. Researchers recognized the value in teachers' K-12 expertise and in promoting their agency throughout the design process [30]. Another science curriculum co-design study argued that the process of working with teachers had substantial effects on adoption of the tools and curricula, in addition to bringing social value and innovative ideas [13]. In order to catalyze such benefits, one paper presents key considerations to co-designing with teachers. These include addressing a "concrete, tangible innovation challenge", investigating "current practice and classroom contexts", and involving a "central accountability for the quality of the products of the co-design", among others [27]. In our study, we utilize these considerations and present a co-design for AI-integrated core curricula development.
METHOD
We conducted a two-day co-design workshop with fifteen instructors, ranging from K-12 teachers to educational directors. Participants completed pre-work before each day's activities, as well as pre- and post-workshop surveys. The co-design activity was split into three smaller group sessions to enable us to better identify differences in values and processes among different teachers. This study was approved by the university's Institutional Review Board (IRB).
Participants
Fifteen teachers participated in the study, whom we recruited from a mailing list and our personal network. The only inclusion criterion was that they teach or previously taught in a K-12 classroom and were able to commit to the time and pre-work for the two-day workshop. Seven participants identified as female, four participants identified as male, and the remaining did not say. Their age ranged from 25 to 50 (M = 40.6, SD = 11.6). In our selection process, we prioritized participants who primarily taught non-computing subjects, such as English language arts (ELA), and then participants who taught computer science, with the idea that small group sessions could have diverse perspectives. Their work background is detailed in Tab. 1. All participants provided informed consent to participate in compliance with our institution's IRB. As the workshop was conducted in groups, we collected participants' availability and selected times when the greatest number of participants could join. Each day of the workshop lasted 2.5 hours. Every workshop session involved two researchers.
Co-Design Workshop
The entire co-design workshop spanned two days, Session 1 on the first day and Session 2 on the second day. Session 1 consisted of discussions and a "What is AI" presentation to level set all participants (see Tab. 2), and Session 2 consisted of the co-design activity and an ethics presentation (see Tab. 3). Here, we describe our rationale and the activities in detail.
3.2.1 Session 1. Before the first session, we asked participants to complete a pre-workshop questionnaire asking about participants' familiarity with AI, whether they had taught AI in the classroom before, and if so, what their experience was. This was to understand their backgrounds and enable us to tailor the content of Session 1 appropriately. Participants were also given detailed instructions on how to install and use Zoom [3], Slack [4], and Miro [2], the tools used throughout the entire workshop. We started Session 1 by breaking participants into small groups on Zoom to discuss why participants thought AI is or is not important to teach their students. Having them describe what about AI was important, and why, allowed us to understand their preconceptions about AI and their priorities as teachers. During the "Let's learn AI" presentation, participants learned the Big AI Ideas [36], categories of AI, and how to recognize what is and is not AI. During the "Let's learn AI tools" presentation, we demoed four distinct AI learning tools and provided participants with resources and links to explore further. We then used Miro for a card sorting activity [40], where we asked participants to generate categories for Google's A to Z of AI cards [1], where categories were limited to subjects taught in the classroom. The card sorting activity showed participants' enthusiasm for integrating AI topics into every classroom subject, including English language arts (ELA), writing and reading, social studies, math, science, economics, and social-emotional learning.

3.2.2 Session 2. Participants were asked to complete "pre-work" before Session 2, and had two days to do so between Session 1 and Session 2.
The pre-work asked participants to explore the rest of the AI learning tools, select one of the tools to go along with a curriculum they currently use or have used in their classrooms, and identify areas where they see potential to teach AI using the selected tool. Participants uploaded their submissions into a shared Google Drive folder. Participants had access to the workshop Google Drive folder, which contained all of the presentations and resources from Session 1, at all times, and could also post questions in the workshop Slack group, which was monitored closely by the researchers. From the pre-work submissions, we selected one idea to develop into an exemplar curriculum (see [6]).
For Session 2, participants were split into three groups of 4-5. Each group was asked to analyze the exemplar curriculum and discuss what they noticed. The co-design activity part 1 then began with each group responding to a prompt asking them to devise integrated AI curricula for specific subjects. We created the prompts from the pre-work submissions and organized the groups such that each would have a domain expert. For example, the group responding to the prompt asking participants to create a curriculum for students who are learning English as a Second Language (ESL) had an ESL teacher, who would be familiar with ESL students' needs. Each group was also paired with a researcher, who provided technical input and answered participants' questions about AI or learning tools. During the "Ethics & Diversity" presentation, we presented definitions of AI ethics, diversity statistics within the field, and resources for teaching and learning AI ethics. Participants then continued working in their groups on their integrated curriculum in co-design activity part 2.
Every group was successful in producing a first draft of an implementable AI curriculum that integrated with a core subject. The drafts can be found in the appendices [6]. Lastly, participants discussed why they thought AI was or was not important to teach for a second time, which acted as a reflection and a way to see if their mindset or preconceptions changed after the workshop. Participants were asked to complete a post-workshop questionnaire that asked how familiar they were with AI, how comfortable they felt teaching AI in their class, as well as feedback on the workshop itself and their demographics (i.e. age, gender, ethnicity).
Data Analysis
Our dataset consists of the audio recordings of the entire co-design workshop, participant questionnaires, and the deliverables of each participant, which include their pre-work submissions and their group work during the co-design activity. All audio recordings were transcribed to text and thematically coded by two researchers using open coding. We specifically examined their process, priorities, and challenges.
Nine out of 15 participants had never taught AI in the classroom. While some participants had experience teaching AI, they were interested in learning how to let non-CS students experience AI and how to integrate AI into their own teaching. Participants came into the workshop rating their own familiarity with AI an average of 4.8 out of 7, and finished the workshop with an average rating of 5.8 out of 7. Teachers also rated their confidence about integrating AI into their own curriculum with an average rating of 5.6 out of 7.
RESULTS AND DISCUSSION
Our teacher participants teach students with diverse needs. The co-design activity prompted rich discussion with three groups completing three curriculum drafts that integrated AI with a topic of their choice. The topics were: (1) "How Does Data Affect Government Policy?" (Social Studies Curriculum), (2) "Learn Vocabulary with an AI" (Literacy curriculum for students with learning disabilities), and (3) "Build an AI-powered Pronunciation Application" (ESL curriculum), as shown in the appendices [6]. During the co-designing process, all groups shared certain considerations for the curriculum, though each group addressed them differently. In the first section of results, we answer the first research question by outlining what the shared values and considerations were and showing how each group addressed them. We then answer the second research question by showing how each curriculum effectively integrated AI.
4.1 RQ1: How might we address the values and considerations of K-12 teachers when designing AI curriculum?

We identified four categories of values and considerations that our teachers had while creating the curriculum drafts: Evaluation, Engagement, Logistics, and Collaboration.
4.1.1 Evaluation. All groups considered student evaluation to be critical to a curriculum. Teachers wanted to see evidence of learning and know their students understand relevant concepts correctly. To do so, teachers first considered their own objectives: "Do we have an end goal in mind, or like, what do we consider a success?" (P9). In the ESL curriculum, P12 referred to the Big AI Ideas to identify what the group called the "AI objective". P12 and P15 also frequently referred to the exemplar curriculum, suggesting that teachers require frameworks and scaffolding to devise the AI objective. To evaluate students, P5 and P12 both suggested non-traditional forms of evaluation, such as an "exit interview or on-the-fly assessments where students talk through all of the details, so we get a really good idea from a conversation with them whether they understand what they were doing" (P5) and "an engineer's log where you've got their design and you've got to do it all official" (P12). In these drafts, teachers wanted to evaluate students on their conceptual knowledge, and not on their technical knowledge.
4.1.2 Engagement. In a K-12 setting, engagement tends to be particularly challenging, which was a concern for our teachers. P8 and P10 grounded the Social Studies curriculum in law and government discourse by having students review an article around the Crown Act. Introducing context to the project gives students an "anchor" (P8) or hook to prompt further inquiry. Other anchors included asking students the "hard questions" about real-world applications of AI, such as "how do Siri and other personal assistants get to be at that point?" and "who used the machine learning and designed the app?" (P7). P5 and P15 both mentioned student-driven learning as a way to leverage students' interests. For example, "I can see a sixth grader coming in and going, I went to the baseball game and I couldn't say all these words. And they decide they're going to do baseball that day" (P12). Lastly, multiple groups brought up competition and gamification as effective methods of engagement: "the class creates a game that students use to quiz themselves on vocab by trying to be better than the system" and "module 1 can be a rock paper scissors game so that students get familiar with the interface" (P2).
4.1.3 Logistics. By logistics, we mean factors that enable the curriculum to run smoothly in the classroom. Teachers tended to think about how the lesson itself would take shape before addressing which core standards the lesson intended to cover. For example, at the beginning of the co-design, P10 explained that what would be most beneficial was "thinking of how to structure the lesson and what resources we can use to pull in to have the engagement component". Most teachers struggled with identifying which technology resources and learning tools to use, for example, whether to use Machine Learning for Kids [20] or Google Quick Draw [19]. Our teacher participants generally looked to the researcher for guidance, suggesting that tools could be more explicit about when and how they can be applied in K-12 classrooms. Teachers also paid close attention to grade-level considerations. They felt more comfortable having older students drive their own learning, but recognized even younger students are capable of deep reflection: "posing some challenging questions will vary a little depending on age, but you can get pretty deep with some-even fifth graders. They can get into this, and I think it's a good way of opening the door" (P15).
4.1.4 Collaboration. All groups discussed the value of collaboration. In the ESL curriculum, teachers had their students collect data in groups and input the data into multiple models using Machine Learning for Kids. In the literacy curriculum, teachers had every student contribute 10 images to a class dataset to input into Google Teachable Machine. The presence of group work not only helps overcome the need to create many training examples for a machine learning model, but also provides students with opportunities to discuss design and ethics decisions with their peers and teacher. This also aligns with Long and Magerko's design consideration for Social Interaction [22].
Teachers also consider how collaboration can be implemented most effectively when designing curricula. For example, P8 described how "it's important to think about the group size because you want to make sure that students have a voice in the work. And when you start doing large group things those kids that process information internally never get to be heard. " She went on to describe how, in her experience, "duos [of students] work really, really well" and how it is generally better to "go with smaller groups [of students in the classroom], but if you're using technology [...] you're bound by what you have. " Thus, it is important to consider how AI tools can best facilitate group work to ensure all students have a chance to contribute and learn.
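The "class dataset" idea above, where each student contributes a fixed number of labeled images before training, can be sketched in a few lines. This is a hedged illustration only: the student names, labels, and counts are hypothetical, and tools like Teachable Machine handle this pooling through their interface rather than code.

```python
from collections import Counter

# Hypothetical per-student contributions: each student labels 10 images.
contributions = {
    "student_a": ["cat"] * 5 + ["dog"] * 5,
    "student_b": ["cat"] * 5 + ["dog"] * 5,
    "student_c": ["cat"] * 3 + ["dog"] * 7,
}

# Pool every student's labels into one class dataset.
pooled = [label for labels in contributions.values() for label in labels]
counts = Counter(pooled)
print(counts)  # Counter({'dog': 17, 'cat': 13})

# Flag imbalance before training, so no label is under-represented.
imbalanced = max(counts.values()) - min(counts.values()) > 2
if imbalanced:
    print("Warning: dataset is imbalanced; collect more examples.")
```

A check like this could prompt exactly the kind of data-quality discussion teachers wanted to scaffold: students see how unequal contributions skew what the model learns.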
4.2 RQ2: How might AI curricula support teachers when they teach AI?
During the co-design, teachers made connections between the core subject material (e.g., social studies) and AI in three main ways: (1) relating an AI tool or concept to the core subject, (2) relating content from the core subject to AI, and (3) noticing overlapping concepts in AI and the core subject. For example, P14 related the AI tool, Arbitrary Style Transfer [24], to the core subject of history when he said, "If we give an image as input and try to modify [it] according to the old art [using] Style [Transfer][...] This can give us an idea about the history when we look at the picture, [...] but if you change the picture, the students may understand how people thought in the past". Other teachers related real-life applications of AI to core subjects, like how YouTube suggestion algorithms can be "tunnel visioned" in what they suggest, similar to how people can be "tunnel visioned" when considering politics or how recidivism risk analysis algorithms [7] can be related to social studies concepts (P10).
Teachers also often made connections by starting with a core subject concept and relating it to AI. For instance, one teacher connected physics data from one of their student's 3D printing projects to an AI flight prediction algorithm (P12). The same teacher also started with an English unit and asked, "What tools do we know that we [can] connect to language?", ultimately connecting English to a Shakespeare natural language processing algorithm. Another teacher began with the ELA concept of "argumentation" and connected it to the reflection and "data analysis" processes in AI (P8).
In terms of overlapping concepts between AI and core subjects, teachers often found connections using the Big AI Ideas [36]. For instance, the Big AI Ideas of Societal Implications and Representation and Reasoning are also core concepts in social studies. The AI concept of iterative development in ML was also directly connected to the social studies concept of iterative opinion making through "go[ing] back and forth" (P8) and adjusting beliefs.
Using these methods of connection, participants co-designed integrated curricula containing AI concepts and supports for teaching core subject requirements. The curricula contained three main points of integration: (1) data, (2) reflection, and (3) ethics.
4.2.1 Data. Educational activities often produce data, and AI systems often require data. This provides an obvious access point for AI systems to be integrated into ready-made educational activities. In our co-design workshops, participants used this fact to generate integrated curriculum. For instance, in the exemplar curriculum (see Fig. 1 and [6]) (which was based on a teacher's idea during the workshops) students would produce data as they construct airplanes for a physics activity. The paper airplane dimensions and time-of-flight data would then be used to train a ML model to predict the effectiveness of other potential paper airplanes, combining AI systems and physics concepts into a single curriculum.
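To make the data-to-model step concrete, here is a minimal sketch of how students' paper-airplane data could train a flight-effectiveness predictor (illustrative only: the measurements are made up, and the plain least-squares line on a single feature is an assumed simplification, not the workshop's actual tool):

```python
# Made-up example measurements a class might collect: one airplane
# dimension (wingspan) and the observed flight time for each design.
wingspan_cm = [10.0, 12.0, 14.0, 16.0, 18.0]
flight_s = [1.1, 1.5, 1.8, 2.3, 2.6]

# Fit a least-squares line flight_s = intercept + slope * wingspan_cm.
n = len(wingspan_cm)
mean_x = sum(wingspan_cm) / n
mean_y = sum(flight_s) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(wingspan_cm, flight_s)) \
        / sum((x - mean_x) ** 2 for x in wingspan_cm)
intercept = mean_y - slope * mean_x

def predict(wingspan):
    """Predicted flight time (seconds) for a candidate design."""
    return intercept + slope * wingspan
```

Students could then use `predict` to compare potential designs before folding them, closing the loop between the physics activity and the model trained on its data.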
For the ESL integrated curriculum, students produced data as they practiced word pronunciation, which was then utilized in a pronunciation teaching app. For example, students would create data by recording themselves saying a word correctly (as guided by a teacher) and incorrectly, which would then train a classification model for an app developed in MIT App Inventor [23]. This app would then be used to further help students learn correct pronunciation. Future AI-integrated curricula might consider utilizing the data inherent in core curricula activities, such as speech pronunciation data, to teach AI. This may be through teaching data-related AI competencies, like Data Literacy, Learning from Data, and Critically Interpreting Data [22], or through using data to train ML models, which can teach other AI competencies.
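A minimal sketch of the classification idea behind the pronunciation app (illustrative only: the real App Inventor models work on audio recordings, whereas the 2-D feature vectors and the nearest-centroid rule below are invented stand-ins for a real audio pipeline):

```python
# Toy training set: made-up feature vectors standing in for audio
# features, labeled "correct" (teacher-guided) or "incorrect".
train = [
    ((0.9, 0.8), "correct"),
    ((0.8, 0.9), "correct"),
    ((0.2, 0.1), "incorrect"),
    ((0.1, 0.3), "incorrect"),
]

def centroid(label):
    # Average feature vector of all training examples with this label.
    vecs = [v for v, y in train if y == label]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

centroids = {y: centroid(y) for y in {"correct", "incorrect"}}

def classify(features):
    # Nearest-centroid rule: pick the label whose average training
    # example is closest to the new recording's features.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: dist(features, centroids[y]))
```

The app would record a student, extract features, and call something like `classify` to give feedback; the point for learners is that the model's judgments come entirely from the labeled examples they themselves supplied.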
For the literacy curriculum for students with learning disabilities, participants also used data from the core curriculum (vocabulary words) to integrate AI concepts. From the vocabulary words, students would find relevant images, generating further data, and use this to train a classification model. This addressed the aforementioned data-related AI competencies, as well as other competencies, including the ML Steps and Human Role in AI, in addition to relevant English literacy concepts.
4.2.2 Reflection.
Another point of AI integration was student reflection on core curriculum content and AI methods. Many common core standards as well as AI competencies can be addressed through student reflection. For example, the common core standard, 1-ESS1-1: "Use observations of the sun, moon, and stars to describe patterns that can be predicted." [26], and the AI concept, "Learning from Data", could be addressed by reflecting on patterns in a constellation classification model's input and output. In the exemplar curricula, students were asked to reflect on what did and did not work and why, and on the real-world implications of a biased dataset in airplane development. This reflection addressed both a standard from the common core, 3-5-ETS1-3: "Plans and carries out fair tests in which variables are controlled and failure points are considered to identify aspects of a model or prototype that can be improved" as well as a number of AI literacy competencies, including AI Strengths & Weaknesses, Critically Interpreting Data, and Ethics [22].
Teachers also used this method to integrate AI concepts into the social studies curriculum. For example, students were asked to reflect on the amount of data in each image category, social norms and peer opinion, people's ability to access resources, and consensus agreement in this curriculum. These reflection questions address a number of the AI competencies, including Data Literacy, Critically Interpreting Data, and Ethics [22], as well as core social studies and English language arts standards, including NSS-EC. 5
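One of these reflection prompts, examining the amount of data in each image category, can be made concrete with a short script students could run themselves (the labels and counts below are invented for illustration, loosely echoing the social studies curriculum's outfit categories):

```python
from collections import Counter

# Invented example: labels of the images a class collected for training.
labels = ["professional"] * 40 + ["unprofessional"] * 10

counts = Counter(labels)
total = sum(counts.values())
shares = {label: n / total for label, n in counts.items()}

# A lopsided split like this is a red flag for reflection: the model
# sees far more examples of one category and may be biased toward it.
most_common_share = max(shares.values())
imbalanced = most_common_share > 0.6
```

Discussing why `imbalanced` is True here, and what extra data collection would fix it, exercises Data Literacy and Critically Interpreting Data while staying anchored in the students' own dataset.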
4.2.3 Ethics.
The final point of integration we present is through ethics, which is one of the AI literacy competencies [22]. Ethics can also be found in many common core standards [14,26]. For example, environmental ethics can be found in life science standards (K-ESS3-3: "Communicate solutions that will reduce the impact of humans on the land, water, air, and/or other living things in the local environment. "), and engineering standards (MS-ETS1-1: "Define the criteria and constraints of a design problem [...] taking into account [...] potential impacts on people and the natural environment") [26]. Furthermore, social justice principles, which are highly related to AI ethics, are commonly advocated for within standards-based K-12 education [11,12]. By teaching ethical principles with respect to AI, teachers can also address standards related to the common core.
Each curricula designed in the workshops had an ethics component. In the exemplar, students would engage in a brainstorming session about how AI bias affected the accuracy of ML models and relevant implications in the real world. Similarly, the ESL curriculum addressed ethics through discussing AI bias, socioeconomic norms for "correct" pronunciations, and the implications of an AI system judging people's pronunciations in the real-world. The social studies curriculum was developed around the ethics of the "CROWN Act" [15], what it means for students to design AI algorithms to classify outfits and hairstyles as "professional" or "unprofessional", and how this might affect different people groups. The literacy curriculum for students with learning disabilities addressed ethics through discussion about the accuracy of the image classification system and reasons for any bias observed. Each of these curricula touched on environmental, social justice or other ethical issues, addressing both AI and common core ethics standards.
From the workshops, we found that teachers were highly interested in teaching ethics (e.g., the social studies curriculum was entirely focused on ethics); however, they also seemed apprehensive about actually implementing ethics activities in the classroom. For example, P5 described how there is a "barrier that comes up for teachers" because "kids often bring ethics up with questions and sometimes teachers will avoid it because they're afraid to say something wrong [...] even though those discussions would be so rich." Nevertheless, P5 also mentioned how if it was in a "planned lesson", it would be "less scary because you know what you're going to say". Designing scaffolding for AI ethics lessons would not only enable core curricula integration, but would also empower teachers to more confidently teach students about ethics.
Reflecting on Remote Co-design
Due to Covid-19, we organized and ran this co-design workshop completely remotely. Among our activities, the perceived helpfulness from most to least helpful was: Presentations (11 votes), Co-design activity (9 votes), Why AI? and Ethics discussions (both 7 votes), and the Card sorting activity (4 votes). Participants also indicated that meeting like-minded educators from around the world and having access to the list of tools and links were particularly rewarding takeaways. Overall, we noticed a slight increase in self-reported familiarity with AI after the workshop and a high level of confidence for integrating AI into classrooms, though we did not measure a formal baseline beforehand. When asked if the workshop changed their opinion about teaching AI, teachers wrote that "introducing AI is the gateway to so much learning... now I am seeing and starting to understand the vast world of opportunities that exist for coding beyond being video game designers" (anonymous), and described seeing the necessity of teaching AI and understanding that AI can be accessible to not "just the computery people" (anonymous).
At the beginning of the workshop, we established norms as an entire group to make facilitation easier. For example, setting expectations for "warm" calling to ensure equal representation of voices in the room meant participants expected to be called on to share their thoughts. Other norms included being present, having discussions in breakout rooms, and keeping cameras on. Since most teachers were unfamiliar with teaching AI, we grouped them into smaller groups of 4-5 so that each group could have a researcher co-designing with them. However, this meant some teachers worked on curricula unrelated to their discipline. We believed this trade-off was necessary given the complexity of the task, and that it was mitigated by the benefits of collaboration. This setup may have worked better if teachers from the same school had joined, so that groups could be formed by school. Several teachers also requested more time to play with the AI learning tools and digest the presentations. This could have been addressed by scheduling more time between Sessions 1 and 2, so teachers would have more time to complete the pre-work for Session 2. One suggestion from a participant was to introduce the AI tools using a jigsaw game in which every teacher explores an assigned AI tool and presents it back to the group.
Broader Implications and Limitations
The above findings contribute to the under-explored need to collaborate with teachers when designing AI curriculum, as well as the potential for AI to be integrated into K-12 core curriculum. Combining teaching expertise with research expertise through co-design allowed for thinking beyond the context of a research study and into actual classrooms. The adoption of learning tools and AI curriculum is influenced by complex factors outside the locus of control of the people creating the tools (i.e. designers and researchers) and the people using the tools (i.e. teachers). However, without teacher buy-in, adoption in the classroom would be impossible, and understanding their contexts is necessary and often understated. Through this co-design, teachers experienced the potential for AI to be embedded in subjects like social studies and English, which could allow non-CS and non-technical learners to experience AI in new classrooms. This method also leverages students' existing interests in non-technical subjects as a pathway into AI. This work serves as a push for further explorations to expose a wider range of students to AI.
While the above findings could provide useful insights to AI education researchers and designers, we acknowledge the limitations of our study. Because our explorations focused on integrating AI with non-technical subjects with a small group of teachers, applying and extending general implications beyond this context should be done with caution.
CONCLUSION
In this paper, we explored the needs of teachers in K-12 classrooms and how AI education can be integrated with existing core curriculum. We engaged K-12 teachers and researchers in a two-day co-design workshop, where we co-created lesson plans that embedded AI concepts into curricula for social studies, ESL, and literacy for students with learning disabilities. We found that teachers value curriculum that address evaluation and engagement of students, which could be built into the learning tool or curriculum. Teachers also successfully connected AI with their subject by having students examine subject-related datasets, as well as reflect on real-world implications and AI ethics. Our work highlights an opportunity to increase accessibility of K-12 AI education by embedding AI into core subjects (e.g., English, social studies), and reaching students outside of CS and technology classrooms.
5 NSS-EC.5-8.1: Scarcity, NSS-C.5-8.3: Principles of Democracy, NL-ENG.K-12.4: Communication Skills, and NL-ENG.K-12.7: Evaluating Data [14].
Table 1. Participants were selected to represent diverse profiles and/or subject areas.

ID | Grade taught | Subject taught | Location
P1 | 6th grade | English Language Arts (ELA) | North Carolina, USA
P2 | 5th grade | Science | Connecticut, USA
P3 | 6th-8th grade | Computer Science | Tunisia, North Africa
P4 | 9th-12th grade | Computer Science | Cuneo, Italy
P5 | 9th-12th grade | Chemistry and Math | British Columbia, Canada
P6 | 6th-8th grade | STEM | Florida, USA
P7 | 6th-8th grade | STEM | Florida, USA
P8 | - | STEM | Pennsylvania, USA
P9 | 6th-8th grade | Computer Science | California, USA
P10 | 9th-12th grade | Career Exploration | Rhode Island, USA
P11 | 9th-12th grade | Computer Science | Massachusetts, USA
P12 | 9th-12th grade | Library Science | Rome, Italy
P13 | 6th grade | History | California, USA
P14 | 6th-9th grade | Computer Science | Turkey
P15 | 6th-12th grade | English as a Second Language (ESL) | Pennsylvania, USA
Table 2. Schedule for Session 1

Time | Activity
15 min | Introduction
20 min | Why AI? (Discussion)
50 min | Let's learn AI! (Presentation)
15 min | Break
25 min | Card sorting activity
25 min | Let's learn AI tools! (Presentation)
Table 3. Schedule for Session 2

Time | Activity
60 min | Co-design activity part 1
15 min | Break
40 min | Ethics & Diversity (Presentation)
20 min | Co-design activity part 2
15 min | Why AI? (Discussion)
ACKNOWLEDGMENTS
We thank the teachers who were a part of this study; Randi Williams, who provided co-design guidance; and Hal Abelson, who made this work possible.
REFERENCES
Google. [n.d.]. A to Z of AI with Google. https://atozofai.withgoogle.com/intl/en-US/about/. Accessed: 2020-08-13.
Safinah Ali, Blakeley H Payne, Randi Williams, Hae Won Park, and Cynthia Breazeal. [n.d.]. Constructionism, Ethics, and Creativity: Developing Primary and Middle School Artificial Intelligence Education.
Anonymous Authors. 2020. Appendix. http://anonymized.for.peer.review. Accessed: 2020-09-16.
Richard Berk. 2017. An impact assessment of machine learning risk forecasts on parole board decisions and recidivism. Journal of Experimental Criminology 13, 2 (2017), 193-216.
Harald Burgsteiner, Martin Kandlhofer, and Gerald Steinbauer. 2016. iRobot: Teaching the basics of artificial intelligence in high schools. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. 4126-4127.
Michelle Carney, Barron Webster, Irene Alvarado, Kyle Phillips, Noura Howell, Jordan Griffith, Jonas Jongejan, Amit Pitaru, and Alexander Chen. 2020. Teachable Machine: Approachable Web-Based Tool for Exploring Machine Learning Classification. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1-8.
H. Chen. 2009. AI, E-government, and Politics 2.0. IEEE Intelligent Systems 24, 5 (2009), 64-86.
Andrea L Dixon, Catherine Tucker, and Mary Ann Clark. 2010. Integrating social justice advocacy with national standards of practice: Implications for school counselor education. Counselor Education and Supervision 50, 2 (2010), 103-115.
Alison G Dover. 2013. Getting "up to code": Preparing for and confronting challenges when teaching for social justice in standards-based classrooms. Action in Teacher Education 35, 2 (2013), 89-102.
Eva Durall, Merja Bauters, Iida Hietala, Teemu Leinonen, and Evangelos Kapros. 2020. Co-creation and co-design in technology-enhanced learning: Innovating science learning outside the classroom. Interaction Design and Architecture(s) 42 (2020), 202-226.
Torrie K Edwards. 2020. From the Editorial Board: Tangled Discrimination in Schools: Binding Hair to Control Student Identity. The High School Journal 103, 2 (2020), 53-56.
Ashok Goel. 2017. AI Education for the World. AI Magazine 38, 2 (2017), 3-4.
Clint Heinze, Janet Haase, and Helen Higgins. 2010. An action research report from a multiyear approach to teaching artificial intelligence at the K-6 level. In 1st Symposium on Educational Advances in Artificial Intelligence.
Aaron Hertzmann. 2020. Visual Indeterminacy in GAN Art. Leonardo 53, 4 (2020), 424-428.
Jonas Jongejan, Henry Rowley, Takashi Kawashima, Jongmin Kim, and Nick Fox-Gieg. 2017. Google Quick, Draw! https://quickdraw.withgoogle.com/. Accessed: 2020-09-16.
Dale Lane. 2020. Machine Learning for Kids. https://machinelearningforkids.co.uk/. Accessed: 2020-09-05.
Phoebe Lin, Jessica Van Brummelen, Galit Lukin, Randi Williams, and Cynthia Breazeal. 2020. Zhorai: Designing a Conversational Agent for Children to Explore Machine Learning Concepts. In AAAI. AAAI, New York, NY, USA, 13381-13388.
Duri Long and Brian Magerko. 2020. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI '20). Association for Computing Machinery, New York, NY, USA, 1-16. https://doi.org/10.1145/3313831.3376727
MIT App Inventor. 2020. Artificial Intelligence with MIT App Inventor. https://appinventor.mit.edu/explore/ai-with-mit-app-inventor. Accessed: 2020-09-10.
Reiichiro Nakano. 2019. Arbitrary Style Transfer in the Browser. https://reiinakano.com/arbitrary-image-stylization-tfjs/. Accessed: 2020-09-16.
Taro Narahara and Yoshihiro Kobayashi. 2018. Personalizing homemade bots with plug & play AI for STEAM education. In SIGGRAPH Asia 2018 Technical Briefs. 1-4.
NSTA. 2014. Access the Next Generation Science Standards by Topic. https://ngss.nsta.org/AccessStandardsByTopic.aspx. Accessed: 2020-09-14.
Jeremy Roschelle, William Penuel, and Nicole Shechtman. 2006. Co-design of innovations with teachers: Definition and dynamics. In Proceedings of the 7th International Conference on Learning Sciences. International Society of the Learning Sciences, 606-612.
Bawornsak Sakulkueakulsuk, Siyada Witoon, Potiwat Ngarmkajornwiwat, Pornpen Pataranutaporn, Werasak Surareungchai, Pat Pataranutaporn, and Pakpoom Subsoontorn. 2018. Kids making AI: Integrating Machine Learning, Gamification, and Social Context in STEM Education. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE). IEEE, 1005-1010.
Alexander Scheidt and Tim Pulver. 2019. Any-Cubes: A Children's Toy for Learning AI: Enhanced Play with Deep Learning and MQTT. In Proceedings of Mensch und Computer 2019. 893-895.
Samuel Severance, William R Penuel, Tamara Sumner, and Heather Leary. 2016. Organizing for teacher agency in curricular co-design. Journal of the Learning Sciences 25, 4 (2016), 531-564.
Ahuva Sperling and Dorit Lickerman. 2012. Integrating AI and machine learning in software engineering course for high school students. In Proceedings of the 17th ACM Annual Conference on Innovation and Technology in Computer Science Education. 244-249.
Jonathan Stray. 2019. Making artificial intelligence work for investigative journalism. Digital Journalism 7, 8 (2019), 1076-1097.
Danny Tang, Yuria Utsumi, and Natalie Lao. 2019. PIC: A Personal Image Classification Webtool for High School Students. In Proceedings of the 2019 IJCAI EduAI Workshop. IJCAI.
David Touretzky, Christina Gardner-McCune, Cynthia Breazeal, Fred Martin, and Deborah Seehorn. 2019. A year in K-12 AI education. AI Magazine 40, 4 (2019), 88-90.
David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn. 2019. Envisioning AI for K-12: What should every child know about AI? In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 9795-9799.
David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn. 2019. Envisioning AI for K-12: What should every child know about AI? In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. AAAI, Honolulu, Hawaii, USA, 9795-9799.
David S Touretzky and Christina Gardner-McCune. 2018. Calypso for Cozmo: Robotic AI for everyone. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education. ACM, Baltimore, MD, USA, 1110-1110.
Marie E Vachovsky, Grace Wu, Sorathan Chaturapruek, Olga Russakovsky, Richard Sommer, and Li Fei-Fei. 2016. Toward more gender diversity in CS through an artificial intelligence summer program for high school girls. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education. 303-308.
A. Vazhayil, R. Shetty, R. R. Bhavani, and N. Akshay. 2019. Focusing on Teacher Education to Introduce AI in Schools: Perspectives and Illustrative Findings. In 2019 IEEE Tenth International Conference on Technology for Education (T4E). 71-77.
Jed R. Wood and Larry E. Wood. 2008. Card Sorting: Current Practices and Beyond. 4, 1 (Nov. 2008), 1-6.
Xiaozhe Yang. 2019. Accelerated Move for AI Education in China. ECNU Review of Education 2, 3 (2019), 347-352.
Michelle Zimmerman. 2018. Teaching AI: Exploring New Frontiers for Learning. International Society for Technology in Education (12 2018).
Abigail Zimmermann-Niefield, R Benjamin Shapiro, and Shaun Kane. 2019. Sports and machine learning: How young people can use data from their own bodies to learn about machine learning. XRDS: Crossroads, The ACM Magazine for Students 25, 4 (2019), 44-49.
| [] |
[
"Go-Explore: a New Approach for Hard-Exploration Problems",
"Go-Explore: a New Approach for Hard-Exploration Problems"
] | [
"Adrien Ecoffet [email protected] \nUber AI Labs San Francisco\n94103CA\n",
"Joost Huizinga [email protected] \nUber AI Labs San Francisco\n94103CA\n",
"Joel Lehman [email protected] \nUber AI Labs San Francisco\n94103CA\n",
"Kenneth O Stanley [email protected] \nUber AI Labs San Francisco\n94103CA\n",
"Jeff Clune [email protected] \nUber AI Labs San Francisco\n94103CA\n"
] | [
"Uber AI Labs San Francisco\n94103CA",
"Uber AI Labs San Francisco\n94103CA",
"Uber AI Labs San Francisco\n94103CA",
"Uber AI Labs San Francisco\n94103CA",
"Uber AI Labs San Francisco\n94103CA"
] | [] | A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to encourage exploration and improve performance on hardexploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember states that have previously been visited, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through exploiting any available means (including by introducing determinism), then robustify (create a policy that can reliably perform the solution) via imitation learning. The combined effect of these principles generates dramatic performance improvements on hardexploration problems. On Montezuma's Revenge, without being provided any domain knowledge, Go-Explore scores over 43,000 points, almost 4 times the previous state of the art. Go-Explore can also easily harness human-provided domain knowledge, and when augmented with it Go-Explore scores a mean of over 650,000 points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record by an order of magnitude, thus meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean performance of almost 60,000 points also exceeds expert human performance. Because Go-Explore can produce many high-performing demonstrations automatically and cheaply, it also outperforms previous imitation learning work in which the solution was provided in the form of a human demonstration. 
Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in a variety of domains, especially the many that often harness a simulator during training (e.g. robotics). | null | [
"https://arxiv.org/pdf/1901.10995v1.pdf"
] | 59,413,825 | 1901.10995 | c520bf47db3360ae3a52219771390a354ed8a91f |
Go-Explore: a New Approach for Hard-Exploration Problems
Adrien Ecoffet [email protected]
Uber AI Labs San Francisco
94103CA
Joost Huizinga [email protected]
Uber AI Labs San Francisco
94103CA
Joel Lehman [email protected]
Uber AI Labs San Francisco
94103CA
Kenneth O Stanley [email protected]
Uber AI Labs San Francisco
94103CA
Jeff Clune [email protected]
Uber AI Labs San Francisco
94103CA
Go-Explore: a New Approach for Hard-Exploration Problems
*Co-senior authors
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to encourage exploration and improve performance on hardexploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember states that have previously been visited, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through exploiting any available means (including by introducing determinism), then robustify (create a policy that can reliably perform the solution) via imitation learning. The combined effect of these principles generates dramatic performance improvements on hardexploration problems. On Montezuma's Revenge, without being provided any domain knowledge, Go-Explore scores over 43,000 points, almost 4 times the previous state of the art. Go-Explore can also easily harness human-provided domain knowledge, and when augmented with it Go-Explore scores a mean of over 650,000 points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record by an order of magnitude, thus meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean performance of almost 60,000 points also exceeds expert human performance. Because Go-Explore can produce many high-performing demonstrations automatically and cheaply, it also outperforms previous imitation learning work in which the solution was provided in the form of a human demonstration. 
Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in a variety of domains, especially the many that often harness a simulator during training (e.g. robotics).
Introduction
Reinforcement learning (RL) has experienced significant progress in recent years, achieving superhuman performance in board games such as Go [1,2] and in classic video games such as Atari [3]. However, this progress obscures some of the deep unmet challenges in scaling RL to complex real-world domains. In particular, many important tasks require effective exploration to be solved, i.e. to explore and learn about the world even when rewards are sparse or deceptive. In sparse-reward problems, precise sequences of many (e.g. hundreds or more) actions must be taken between obtaining rewards. Deceptive-reward problems are even harder, because instead of feedback rarely being provided, the reward function actually provides misleading feedback for reaching the overall global objective, which can lead to getting stuck on local optima. Both sparse and deceptive reward problems constitute "hard-exploration" problems, and classic RL algorithms perform poorly on them [4]. Unfortunately, most challenging real-world problems are also hard-exploration problems. That is because we often desire to provide abstract goals (e.g. "find survivors and tell us their location," or "turn off the valve to the leaking pipe in the reactor"), and such reward functions do not provide detailed guidance on how to solve the problem (sparsity) while also often creating unintended local optima (deception) [5][6][7][8].
For example, in the case of finding survivors in a disaster area, survivors will be few and far between, thus introducing sparsity. Even worse, if we also instruct the robot to minimize damage to itself, this additional reward signal may actively teach the robot not to explore the environment, because exploration is initially much more likely to result in damage than it is to result in finding a survivor. This seemingly sensible additional objective thus introduces deception on top of the already sparse reward problem.
To address these challenges, this paper introduces Go-Explore, a new algorithm for hard-exploration problems that dramatically improves state-of-the-art performance in two classic hard-exploration benchmarks: the Atari games Montezuma's Revenge and Pitfall.
Prior to Go-Explore, the typical approach to sparse reward problems has been intrinsic motivation (IM) [4,[9][10][11], which supplies the RL agent with intrinsic rewards (IRs) that encourage exploration (augmenting or replacing extrinsic reward that comes from the environment). IM is often motivated by psychological concepts such as curiosity [12,13] or novelty-seeking [7,14], which play a role in how humans explore and learn. While IM has produced exciting progress in sparse reward problems, in many domains IM approaches are still far from fully solving the problem, including on Montezuma's Revenge and Pitfall. We hypothesize that, amongst other issues, such failures stem from two root causes that we call detachment and derailment.
Detachment is the idea that an agent driven by IM could become detached from the frontiers of high intrinsic reward (IR). To understand detachment, we must first consider that intrinsic reward is nearly always a consumable resource: a curious agent is curious about states to the extent that it has not often visited them (similar arguments apply for surprise, novelty, or prediction-error seeking agents [4,[14][15][16]). If an agent discovers multiple areas of the state space that produce high IR, its policy may in the short term focus on one such area. After exhausting some of the IR offered by that area, the policy may by chance begin consuming IR in another area. Once it has exhausted that IR, it is difficult for it to rediscover the frontier it detached from in the initial area, because it has already consumed the IR that led to that frontier (Fig. 1), and it likely will not remember how to return to that frontier due to catastrophic forgetting [17][18][19][20]. Each time this process occurs, a potential avenue of exploration can be lost, or at least be difficult to rediscover. In the worst case, there may be a dearth of remaining IR near the areas of state space visited by the current policy (even though much IR might remain elsewhere), and therefore no learning signal remains to guide the agent to further explore in an effective and informed way. One could slowly add intrinsic rewards back over time, but then the entire fruitless process could repeat indefinitely. In theory a replay buffer could prevent detachment, but in practice it would have to be large to prevent data about the abandoned frontier from being purged before it becomes needed, and large replay buffers introduce their own optimization stability difficulties [21,22]. The Go-Explore algorithm addresses detachment by explicitly storing an archive of promising states visited so that they can then be revisited and explored from later.
Derailment can occur when an agent has discovered a promising state and it would be beneficial to return to that state and explore from it. Typical RL algorithms attempt to enact such desirable behavior by running the policy that led to the initial state again, but with some stochastic perturbations to the existing policy mixed in to encourage a slightly different behavior (e.g. exploring further). The stochastic perturbation is performed because IM agents have two layers of exploration mechanisms: (1) the higher-level IR incentive that rewards when new states are reached, and (2) a more basic exploratory mechanism such as epsilon-greedy exploration, action-space noise, or parameter-space noise [23][24][25]. Importantly, IM agents rely on the latter mechanism to discover states containing high IR, and the former mechanism to return to them. However, the longer, more complex, and more precise a sequence of actions needs to be in order to reach a previously-discovered high-IR state, the more likely it is that such stochastic perturbations will "derail" the agent from ever returning to that state. That is because the needed precise actions are naively perturbed by the basic exploration mechanism, causing the agent to only rarely succeed in reaching the known state to which it is drawn, and from which further exploration might be most effective. To address derailment, an insight in Go-Explore is that effective exploration can be decomposed into first returning to a promising state (without intentionally adding any exploration) before then exploring further.

Figure 1: A hypothetical example of detachment in intrinsic motivation (IM) algorithms. Green areas indicate intrinsic reward, white indicates areas where no intrinsic reward remains, and purple areas indicate where the algorithm is currently exploring. (1) The agent starts each episode between the two mazes. (2) It may by chance start exploring the West maze and IM may drive it to learn to traverse, say, 50% of it. (3) Because current algorithms sprinkle in randomness (either in actions or parameters) to try to produce new behaviors to find explicit or intrinsic rewards, by chance the agent may at some point begin exploring the East maze, where it will also encounter a lot of intrinsic reward. After completely exploring the East maze, it has no explicit memory of the promising exploration frontier it abandoned in the West maze. It likely would also have no implicit memory of this frontier due to the problem of catastrophic forgetting [17][18][19][20]. (4) Worse, the path leading to the frontier in the West maze has already been explored, so no (or little) intrinsic motivation remains to rediscover it. We thus say the algorithm has detached from a frontier of states that provide intrinsic motivation. As a result, exploration can stall when areas close to where the current agent visits have already been explored. This problem would be remedied if the agent remembered and returned to previously discovered promising areas for exploration, which Go-Explore does.
Go-Explore is an explicit response to both detachment and derailment that is also designed to achieve robust solutions in stochastic environments. The version presented here works in two phases (Fig. 2): (1) first solve the problem in a way that may be brittle, such as solving a deterministic version of the problem (i.e. discover how to solve the problem at all), and (2) then robustify (i.e. train to be able to reliably perform the solution in the presence of stochasticity). Similar to IM algorithms, Phase 1 focuses on exploring infrequently visited states, which forms the basis for dealing with sparse-reward and deceptive problems. In contrast to IM algorithms, Phase 1 addresses detachment and derailment by accumulating an archive of states and ways to reach them through two strategies: (a) add all interestingly different states visited so far into the archive, and (b) each time a state from the archive is selected to explore from, first Go back to that state (without adding exploration), and then Explore further from that state in search of new states (hence the name "Go-Explore").
An analogy of searching a house can help one contrast IM algorithms and Phase 1 of Go-Explore. IM algorithms are akin to searching through a house with a flashlight, which casts a narrow beam of exploration first in one area of the house, then another, and another, and so on, with the light being drawn towards areas of intrinsic motivation at the edge of its small visible region. It can get lost if at any point the beam fails to fall on any area with intrinsic motivation remaining. Go-Explore more resembles turning the lights on in one room of a house, then its adjacent rooms, then their adjacent rooms, etc., until the entire house is illuminated. Go-Explore thus gradually expands its sphere of knowledge in all directions simultaneously until a solution is discovered.
If necessary, the second phase of Go-Explore robustifies high-performing trajectories from the archive such that they are robust to the stochastic dynamics of the true environment. Go-Explore robustifies via imitation learning (aka learning from demonstrations or LfD [26][27][28][29]), a technique that learns how to solve a task from human demonstrations. The only difference with Go-Explore is that the solution demonstrations are produced automatically by Phase 1 of Go-Explore instead of being provided by humans. The input to this phase is one or more high-performing trajectories, and the output is a robust policy able to consistently achieve similar performance. The combination of both phases instantiates a powerful algorithm for hard-exploration problems, able to deeply explore sparse- and deceptive-reward environments and robustify high-performing trajectories into reliable solutions that perform well in the unmodified, stochastic test environment.
Some of these ideas are similar to ideas proposed in related work. Those connections are discussed in Section 5. That said, we believe we are the first to combine these ideas in this way and demonstrate that doing so provides substantial performance improvements on hard-exploration problems.
To explore its potential, we test Go-Explore on two hard-exploration benchmarks from the Arcade Learning Environment (ALE) [30,31]: Montezuma's Revenge and Pitfall. Montezuma's Revenge has become an important benchmark for exploration algorithms (including intrinsic motivation algorithms) [4,16,[32][33][34][35][36][37][38][39] because precise sequences of hundreds of actions must be taken in between receiving rewards. Pitfall is even harder because its rewards are sparser (only 32 positive rewards are scattered over 255 rooms) and because many actions yield small negative rewards that dissuade RL algorithms from exploring the environment.
Classic RL algorithms (i.e. those without intrinsic motivation) such as DQN [3], A3C [40], Ape-X [41] and IMPALA [42] perform poorly on these domains even with up to 22 billion game frames of experience, scoring 2,500 or lower on Montezuma's Revenge and failing to solve level one, and scoring ≤ 0 on Pitfall. Those results exclude experiments that are evaluated in a deterministic test environment [43,44] or were given human demonstrations [26,27,45]. On Pitfall, the lack of positive rewards and frequent negative rewards causes RL algorithms to learn a policy that effectively does nothing, either standing completely still or moving back and forth near the start of the game (https://youtu.be/Z0lYamtgdqQ [46]).
These two games are also tremendously difficult for planning algorithms, even when allowed to plan directly within the game emulator. Classical planning algorithms such as UCT [47][48][49] (a powerful form of Monte Carlo tree search [49,50]) obtain 0 points on Montezuma's Revenge because the state space is too large to explore effectively, even with probabilistic methods [30,51].
Despite being specifically designed to tackle sparse reward problems and being the dominant method for them, IM algorithms also struggle with Montezuma's Revenge and Pitfall, although they perform better than algorithms without IM. On Montezuma's Revenge, the best such algorithms thus far average around 11,500 with a maximum of 17,500 [16,39]. One solved level 1 of the game in 10% of its runs [16]. Even with IM, no algorithm scores greater than 0 on Pitfall (in a stochastic test environment, without a human demonstration). We hypothesize that detachment and derailment are major reasons why IM algorithms do not perform better.
When exploiting easy-to-provide domain knowledge, Go-Explore on Montezuma's Revenge scores a mean of 666,474, and its best run scores almost 18 million and solves 1,441 levels. On Pitfall, Go-Explore scores a mean of 59,494 and a maximum of 107,363, which is close to the maximum of the game of 112,000 points. Without exploiting domain knowledge, Go-Explore still scores a mean of 43,763 on Montezuma's Revenge. All scores are dramatic improvements over the previous state of the art. This and all other claims about solving the game and producing state-of-the-art scores assume that, while stochasticity is required during testing, deterministic training is allowable (discussed in Section 2.1.3). We conclude that Go-Explore is a promising new algorithm for solving hard-exploration RL tasks with sparse and/or deceptive rewards.
The Go-Explore Algorithm
The insight that remembering and returning reliably to promising states is fundamental to effective exploration in sparse-reward problems is at the core of Go-Explore. Because this insight is so flexible and can be exploited in different ways, Go-Explore effectively encompasses a family of algorithms built around this key idea. The variant implemented for the experiments in this paper and described in detail in this section relies on two distinct phases. While it provides a canonical demonstration of the possibilities opened up by Go-Explore, other variants are also discussed (e.g. in Section 4) to provide a broader compass for future applications.
Phase 1: Explore until solved
In the two-phase variant of Go-Explore presented in this paper, the purpose of Phase 1 is to explore the state space and find one or more high-performing trajectories that can later be turned into a robust policy in Phase 2. To do so, Phase 1 builds up an archive of interestingly different game states, which we call "cells" (Section 2.1.1), and trajectories that lead to them. It starts with an archive that only contains the starting state. From there, it builds the archive by repeating the following procedures: choose a cell from the current archive (Section 2.1.2), return to that cell without adding any stochastic exploration (Section 2.1.3), and then explore from that location stochastically (Section 2.1.4). During this process, any newly encountered cells (as well as how to reach them) or improved trajectories to existing cells are added to the archive (Section 2.1.5).
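The Phase 1 loop just described can be condensed into a short sketch. The helper names (`cell_fn`, `select`, `explore`) and the archive's dictionary layout below are illustrative assumptions, not the authors' implementation; each helper corresponds to one of the subsections that follow:

```python
def phase1(initial_obs, initial_sim_state, cell_fn, select, explore, n_iters):
    # Archive maps a cell key to the best known way of reaching that cell.
    archive = {cell_fn(initial_obs): {"state": initial_sim_state,
                                      "traj": (), "score": 0.0}}
    for _ in range(n_iters):
        entry = select(archive)                 # choose a cell (Section 2.1.2)
        # `explore` is assumed to restore the simulator to entry["state"]
        # ("Go"), take random actions ("Explore"), and yield what it saw.
        for obs, sim_state, traj, score in explore(entry):
            key = cell_fn(obs)
            old = archive.get(key)
            better = old is None or score > old["score"] or (
                score == old["score"] and len(traj) < len(old["traj"]))
            if better:                          # new or improved cell
                archive[key] = {"state": sim_state,
                                "traj": traj, "score": score}
    return archive
```

Running this loop for enough iterations grows the archive outward from the starting state, which is the "turning the lights on room by room" behavior described below.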
Cell representations
One could, in theory, run Go-Explore directly in a high-dimensional state space (wherein each cell contains exactly one state); however doing so would be intractable in practice. To be tractable in high-dimensional state spaces like Atari, Phase 1 of Go-Explore needs a lower-dimensional space within which to search (although the final policy will still play in the same original state space, in this case pixels). Thus, the cell representation should conflate similar states while not conflating states that are meaningfully different.
In this way, a good cell representation should reduce the dimensionality of the observations into a meaningful low-dimensional space. A rich literature investigates how to obtain good representations from pixels. One option is to take latent codes from the middle of neural networks trained with traditional RL algorithms maximizing extrinsic and/or intrinsic motivation, optionally adding auxiliary tasks such as predicting rewards [52]. Additional options include unsupervised techniques such as networks that autoencode [53] or predict future states, and other auxiliary tasks such as pixel control [54].
While it will be interesting to test any or all of these techniques with Go-Explore in future work, for these initial experiments with Go-Explore we test its performance with two different representations: a simple one that does not harness game-specific domain knowledge, and one that does exploit easy-to-provide domain knowledge.
Cell representations without domain knowledge
We found that a very simple dimensionality reduction procedure produces surprisingly good results on Montezuma's Revenge. The main idea is simply to downsample the current game frame. Specifically, we (1) convert each game frame image to grayscale, (2) downscale it to an 11 × 8 image with area interpolation (i.e. using the average pixel value in the area of the downsampled pixel), and (3) rescale pixel intensities so that they are integers between 0 and 8, instead of the original 0 to 255 (Fig. 3). The downscaling dimensions and pixel-intensity range were found by grid search. The aggressive downscaling used by this representation is reminiscent of the Basic feature set from Bellemare et al. [30]. This cell representation requires no game-specific knowledge and is fast to compute.

Figure 3: Example cell representation without domain knowledge, which is simply to downsample each game frame. The full observable state, a color image, is converted to grayscale and downscaled to an 11 × 8 image with 8 possible pixel intensities.
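The three steps can be sketched with plain numpy block averaging; the exact grayscale weights and interpolation routine the authors used are assumptions here, only the 11 × 8 target and the 0-8 quantization come from the text:

```python
import numpy as np

def downsampled_cell(frame, target_h=8, target_w=11, depth=8):
    """Knowledge-free cell representation: grayscale -> 11x8 area
    downsample -> intensities quantized to 0..8. `frame` is an
    (H, W, 3) uint8 image."""
    gray = frame.mean(axis=2)                  # simple (unweighted) grayscale
    h, w = gray.shape
    # Area interpolation: average every source pixel that falls inside
    # each target cell (bin index found by integer scaling).
    rows = (np.arange(h) * target_h) // h
    cols = (np.arange(w) * target_w) // w
    sums = np.zeros((target_h, target_w))
    counts = np.zeros((target_h, target_w))
    np.add.at(sums, (rows[:, None], cols[None, :]), gray)
    np.add.at(counts, (rows[:, None], cols[None, :]), 1)
    small = sums / counts
    cell = (small * depth / 255).astype(np.uint8)  # 0..255 -> integers 0..8
    return tuple(cell.flatten())                   # hashable archive key
```

Because the result is a small tuple of integers, it can be used directly as a dictionary key for the archive, and many visually similar frames conflate to the same cell.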
Cell representations with domain knowledge
The ability of an algorithm to integrate easy-to-provide domain knowledge can be an important asset. In Montezuma's Revenge, domain knowledge is provided as unique combinations of the x, y position of the agent (discretized into a grid in which each cell is 16 × 16 pixels), room number, level number, and in which rooms the currently-held keys were found. In the case of Pitfall, only the x, y position of the agent and the room number were used. All this information was extracted directly from pixels with simple hand-coded classifiers to detect objects such as the main character's location combined with our knowledge of the map structure in the two games (Appendix A.3). While Go-Explore provides the opportunity to leverage domain knowledge in the cell representation in Phase 1, the robustified neural network produced by Phase 2 still plays directly from pixels only.
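The domain-knowledge representation amounts to a small tuple. A sketch (the authors' exact encoding is in their Appendix A.3, so the field layout here is an assumption):

```python
def domain_cell(x, y, room, level, key_rooms):
    """Cell key from easy-to-provide domain knowledge, as described above:
    agent position discretized into a 16x16-pixel grid, plus the room
    number, level number, and the rooms in which the currently-held keys
    were found (order-independent, hence the frozenset)."""
    return (x // 16, y // 16, room, level, frozenset(key_rooms))
```

For Pitfall, only the discretized position and the room number components would be used.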
Selecting cells
In each iteration of Phase 1, a cell is chosen from the archive to explore from. This choice could be made uniformly at random, but we can improve upon that baseline in many cases by creating (or learning) a heuristic for preferring some cells over others. In preliminary experiments, we found that such a heuristic can improve performance over uniform random sampling (data not shown). The exact heuristic differs depending on the problem being solved, but at a high level, the heuristics in our work assign a positive weight to each cell that is higher for cells that are deemed more promising. For example, cells might be preferred because they have not been visited often, have recently contributed to discovering a new cell, or are expected to be near undiscovered cells. The weights of all cells are normalized to represent the probability of each cell being chosen next. No cell is ever given a weight equal to 0, so that all cells in principle remain available for further exploration. The exact heuristics from our experiments are described in Appendix A.5.
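A minimal version of this selection step can be sketched as follows; the real heuristics (Appendix A.5) combine several signals, whereas here the weight simply decays with how often a cell has been chosen, which is an assumption for illustration:

```python
import random

def select_cell(archive, rng=random):
    """Pick one cell to explore from. Every weight is positive, so no
    cell is ever excluded from future exploration; `archive` is assumed
    to map cell keys to dicts carrying a 'times_chosen' counter."""
    keys = list(archive)
    # random.choices normalizes the weights into probabilities internally.
    weights = [1.0 / (archive[k]["times_chosen"] + 1) for k in keys]
    chosen = rng.choices(keys, weights=weights, k=1)[0]
    archive[chosen]["times_chosen"] += 1
    return chosen
```

Under this weighting, rarely-chosen cells are strongly preferred at first, and the preference evens out as they accumulate selections.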
Returning to cells and opportunities to exploit deterministic simulators
One of the main principles of Go-Explore is to return to a promising cell without added exploration before exploring from that cell. The Go-Explore philosophy is that we should make returning to that cell as easy as possible given the constraints of the problem. The easiest way to return to a cell is if the world is deterministic and resettable, such that one can reset the state of the simulator to a previous visit to that cell. Whether performing such resets is allowable for RL research is an interesting subject of debate that was motivated by the initial announcement of Go-Explore [55]. The ability to harness determinism and perform such resets forces us to recognize that there are two different types of problems we wish to solve with RL algorithms: those that require stochasticity at test time only, and those that require stochasticity during both testing and training.
We start with the former. Because current RL algorithms can take unsafe actions [56,57] and require tremendous amounts of experience to learn [41,42,58], the majority of applications of RL in the foreseeable future will likely require training in a simulator before being transferred to (and optionally fine-tuned in) the real world. For example, most work with learning algorithms for robotics train in a simulator before transferring the solution to the real world; that is because learning directly on the robot is slow, sample-inefficient, can damage the robot, and can be unsafe [59][60][61]. Fortunately, for many domains, simulators are available (e.g. robotics simulators, traffic simulators, etc.). An insight of Go-Explore is that we can take advantage of the fact that such simulators can be made deterministic to improve performance, especially on hard-exploration problems. For many types of problems, we want a reliable final solution (e.g. a robot that reliably finds survivors after a natural disaster) and there is no principled reason to care whether we obtain this solution via initially deterministic training. If we can solve previously unsolvable problems, including ones that are stochastic at evaluation (test) time, via making simulators deterministic, we should take advantage of this opportunity.
There are also cases where a simulator is not available and where learning algorithms must confront stochasticity during training. To create and test algorithms for this second type of problem, we cannot exploit determinism and resettability. Examples of this class of problems include when we must learn directly in the real world (and an effective simulator is not available and cannot be learned), or when studying the learning of biological animals, including ourselves. We believe Go-Explore can handle such situations by training goal-conditioned policies [62,63] that reliably return to cells in the archive during the exploration phase, which is an interesting area for future research. While computationally much more expensive, this strategy would result in a fully trained policy at the end of the exploration phase, meaning there would be no need for a robustification phase at the end. We note that there are some problems where the environment has forms of stochasticity that prevent the algorithm from reliably returning to a particular cell, regardless of which action the agent takes (e.g. in poker, there is no sequence of actions that reliably leads you to a state where you have two aces). We leave a discussion and study of whether Go-Explore helps in that problem setting for future work.
With this distinction in mind, we can now ask whether Montezuma's Revenge and Pitfall represent the first type of domain (where all we care about is a solution that is robust to stochasticity at test time) or the second (situations where the algorithm must handle stochasticity while training). We believe few people in the community had considered this question before our initial blog post on Go-Explore [55] and that it created a healthy debate on this subject. Because Atari games are proxies for the problems we want to solve with RL, and because both types of problems exist, a natural conclusion is that we should have benchmarks for each. One version of a task can require stochasticity during testing only, and another can require stochasticity during both training and testing. All results and claims in this version of this paper are for the version of these domains that does not require stochasticity during training (i.e. stochasticity is required during evaluation only). Applying Go-Explore when training is stochastic remains an exciting avenue of research for the near future.
For problems in which all we care about is a reliable policy at test time, a key insight behind Go-Explore is that we can first solve the problem (Phase 1), and then (if necessary) deal with making the solution more robust later (Phase 2). In contrast with the usual view of determinism as a stumbling block to producing agents that are robust and high-performing, it can be made an ally during exploration and then the solution extended to nondeterminism afterwards via robustification. An important domain where such insights can help is robotics, where training is often done in simulation before policies are transferred to the real world [59][60][61].
For the experiments in this paper, because we harness deterministic training, we could return to a cell by storing the sequence of actions that lead to it and subsequently replay those actions. However, simply saving the state of the emulator (in addition to this sequence of steps) and restoring that state when revisiting a cell gains additional efficiency. Doing so reduced the number of steps that needed to be simulated by at least one order of magnitude (Appendix A.8).
Due to the fact that the present version of Go-Explore operates in a deterministic setting during Phase 1, each cell is associated with an open-loop sequence of instructions that lead to it given the initial state, not a proper policy that maps any state to an action. A true policy is produced during robustification in Phase 2 (Section 2.2).
Exploration from cells
Once a cell is reached, any exploration method can be applied to find new cells. In this work the agent explores by taking random actions for k = 100 training frames, with a 95% probability of repeating the previous action at each training frame (frames at which the agent is allowed to take an action, thus not including any frames skipped due to frame skip, see Appendix A.1). Besides reaching the k = 100 training frame limit for exploration, exploration is also aborted at the episode's end (defined in Appendix A.2), and the action that led to the episode ending is ignored because it does not produce a destination cell.
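The exploration step described above needs no learned controller at all; a sketch (assuming a Gym-style `env.step` returning `(obs, reward, done)`):

```python
import random

def explore_from_cell(env, n_actions, k=100, repeat_prob=0.95):
    """Up to k random training frames, repeating the previous action
    with 95% probability. Exploration aborts at the episode's end, and
    the episode-ending step is discarded because it produces no
    destination cell."""
    action = random.randrange(n_actions)
    visited = []
    for _ in range(k):
        if random.random() >= repeat_prob:     # 5% chance: sample a new action
            action = random.randrange(n_actions)
        obs, reward, done = env.step(action)
        if done:                               # abort at episode end
            break
        visited.append((obs, reward))
    return visited
```

The high repeat probability makes the random walk directional enough to cover ground, rather than jittering in place with fully independent actions.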
Interestingly, such exploration does not require a neural network or other controller, and indeed no neural network was used for the exploration phase (Phase 1) in any of the experiments in this paper (we do not train a neural network until Phase 2). The fact that entirely random exploration works so well highlights the surprising power of simply returning to promising cells before exploring further, though we believe exploring intelligently (e.g. via a trained policy) would likely improve our results and is an interesting avenue for future work.
Updating the archive
While an agent is exploring from a cell, the archive updates in two conditions. The first condition is when the agent visits a cell that was not yet in the archive (which can happen multiple times while exploring from a given cell). In this case, that cell is added to the archive with four associated pieces of metadata: (1) how the agent got to that cell (here, a full trajectory from the starting state to that cell), (2) the state of the environment at the time of discovering the cell (if the environment supports such an operation, which is true for the two Atari-game domains in this paper), (3) the cumulative score of that trajectory, and (4) the length of that trajectory.
The second condition is when a newly-encountered trajectory is "better" than that belonging to a cell already in the archive. For the experiments below, we define a new trajectory as better than an existing trajectory when the new trajectory either has a higher cumulative score or when it is a shorter trajectory with the same score. In either case, the existing cell in the archive is updated with the new trajectory, the new trajectory length, the new environment state, and the new score. In addition, information affecting the likelihood of this cell being chosen (see Appendix A.5) is reset, including the total number of times the cell has been chosen and the number of times the cell has been chosen since leading to the discovery of another cell. Resetting these values is beneficial when cells conflate many different states because a new way of reaching a cell may actually be a more promising stepping stone to explore from (so we want to encourage its selection). We do not reset the counter that records the number of times the cell has been visited because that would make recently discovered cells indistinguishable from recently updated cells, and recently discovered cells (i.e. those with low visit counts) are more promising to explore because they are likely near the surface of our expanding sphere of knowledge.
Because cells conflate many states, we cannot assume that a trajectory from start state A through cell B to cell C will still reach C if we substitute a different, better way to get from A to B; therefore, the better way of reaching a cell is not integrated into the trajectories of other cells that built upon the original trajectory. However, performing such substitutions might work with goal-conditioned or otherwise robust policies, and investigating that possibility is an interesting avenue for future work.
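The two update conditions above can be sketched directly (field names are assumptions, and the stored emulator state, metadata item 2, is omitted for brevity):

```python
from dataclasses import dataclass

@dataclass
class CellEntry:
    traj: list                       # full trajectory from the start state
    score: float                     # cumulative score of that trajectory
    times_chosen: int = 0
    times_chosen_since_new: int = 0
    times_seen: int = 1

def update_cell(archive, key, traj, score):
    """Add unseen cells; replace an existing entry when the new
    trajectory scores higher, or ties the score with fewer steps. On
    replacement the selection counters reset, but the visit counter
    does not, as described in the text."""
    entry = archive.get(key)
    if entry is None:
        archive[key] = CellEntry(list(traj), score)
        return True
    entry.times_seen += 1
    if score > entry.score or (score == entry.score and
                               len(traj) < len(entry.traj)):
        entry.traj, entry.score = list(traj), score
        entry.times_chosen = 0           # reset selection statistics...
        entry.times_chosen_since_new = 0 # ...but keep times_seen
        return True
    return False
```

The boolean return value makes it easy for the caller to track which explorations contributed new or improved cells, one of the signals used when weighting cells for selection.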
Batch implementation
We implemented Phase 1 in parallel to take advantage of multiple CPUs (our experiments ran on a single machine with 22 CPU cores): at each step, a batch of b cells is selected (with replacement) according to the rules described in Section 2.1.2 and Appendix A.5, and exploration from each of these cells proceeds in parallel for each. Besides using the multiple CPUs to run more instances of the environment, a high b also saves time by recomputing cell selection probabilities less frequently, which is important as this computation accounts for a significant portion of run time as the archive gets large (though this latter factor could be mitigated in other ways in the future). Because the size of b also has an indirect effect on the exploration behavior of Go-Explore (for instance, the initial state is guaranteed to be chosen b times at the very first iteration), it is in effect a hyperparameter, whose values are given in Appendix A.6.
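The batching itself is a one-line change to selection: sample b cells with replacement, so the selection probabilities are recomputed once per batch rather than once per exploration. A sketch, with the weighting function left abstract:

```python
import random

def select_batch(archive, weight_fn, b):
    """Sample b cells (with replacement) for parallel exploration;
    `weight_fn` stands in for the cell-weighting heuristic and is an
    assumed parameter, not the authors' API."""
    keys = list(archive)
    weights = [weight_fn(archive[k]) for k in keys]
    return random.choices(keys, weights=weights, k=b)
```

Each returned key then seeds one worker's go-then-explore step.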
Phase 2: Robustification
If successful, the result of Phase 1 is one or more high-performing trajectories. However, if Phase 1 of Go-Explore harnessed determinism in a simulator, such trajectories will not be robust to any stochasticity, which is present at test time. Phase 2 addresses this gap by creating a policy robust to noise via imitation learning, also called learning from demonstration (LfD). Importantly, stochasticity is added during Phase 2 so that the final policy is robust to the stochasticity it will face during its evaluation in the test environment. Thus the policy being trained has to learn how to mimic and/or perform as well as the trajectory obtained from the Go-Explore exploration phase while simultaneously dealing with circumstances that were not present in the original trajectory. Depending on the stochasticity of the environment, this adjustment can be highly challenging, but nevertheless is far easier than attempting to solve a sparse-reward problem from scratch.
While most imitation learning algorithms could be used for Phase 2, different types of imitation learning algorithms can qualitatively affect the resulting policy. LfD algorithms that try to closely mimic the behavior of the demonstration may struggle to improve upon it. For this reason, we chose an LfD algorithm that has been shown capable of improving upon its demonstrations: the Backward Algorithm from Salimans and Chen [28]. It works by starting the agent near the last state in the trajectory, and then running an ordinary RL algorithm from there (in this case Proximal Policy Optimization (PPO) [64]). Once the algorithm has learned to obtain the same or a higher reward than the example trajectory from that starting place near the end of the trajectory, the algorithm backs the agent's starting point up to a slightly earlier place along the trajectory, and repeats the process until eventually the agent has learned to obtain a score greater than or equal to the example trajectory all the way from the initial state. Note that a similar algorithm was discovered independently at around the same time by Resnick et al. [65].
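The control flow of the Backward Algorithm can be sketched as follows. Here `train_rl_from` is a hypothetical stand-in for running the inner RL algorithm (PPO in our case) from a given state and reporting the best return it achieved; the names and the fixed `step_back` size are illustrative, not the authors' API:

```python
def backward_algorithm(demo, train_rl_from, demo_return, step_back=5):
    """Sketch of the Backward Algorithm (Salimans & Chen [28]).

    `demo` is a list of demonstration states. The starting point moves
    backward along the demonstration only once the agent matches or beats
    the demonstration's return from the current start. (A real
    implementation would also bound the training budget per start.)
    """
    start = len(demo) - 1  # begin near the last state of the demonstration
    while start > 0:
        score = train_rl_from(demo[start])
        if score >= demo_return:
            # Agent performs as well as the demo from here: back up the
            # starting point to a slightly earlier place in the trajectory.
            start = max(0, start - step_back)
        # Otherwise, keep training from the same starting point.
    return train_rl_from(demo[0])  # final training pass from the true start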
While this approach to robustification effectively treats the expert trajectory as a curriculum for the agent, the policy is only optimized to maximize its own score, and not actually forced to accurately mimic the trajectory. For this reason, this phase is able to further optimize the expert trajectories, as well as generalize beyond them, both of which we observed in practice in our experiments (Section 3). In addition to seeking a higher score than the original trajectory, because it is an RL algorithm with a discount factor that prizes near-term rewards more than those gathered later, it also has a pressure to improve the efficiency with which it collects rewards. Thus if the original trajectory contains unnecessary actions (like visiting a dead end and returning), such behavior could be eliminated during robustification (a phenomenon we also observed).
Additional experimental and analysis details
Comparing sample complexity for RL algorithms trained on Atari games can be tricky due to the common usage of frame skipping [31,66], wherein a policy only sees and acts every nth (here, 4) frame, and that action is repeated for intervening frames to save the computation of running the policy. Specifically, it can be ambiguous whether the frames that are skipped are counted (which we call "game frames") or ignored (which we call "training frames") when discussing sample complexity. In this work, we always qualify the word "frame" accordingly and all numbers we report are measured in game frames. Appendix A.1 further details the subtleties of this issue.
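To make the distinction concrete, here is a minimal frame-skip wrapper (a sketch assuming a Gym-style `step(action)` returning `(obs, reward, done, info)`, not the exact wrapper used in our experiments): each call consumes `skip` game frames but only one training frame.

```python
class FrameSkip:
    """The policy acts once per `skip` game frames; the chosen action is
    repeated for the intervening frames, and their rewards are summed."""

    def __init__(self, env, skip=4):
        self.env = env
        self.skip = skip
        self.game_frames = 0  # frames the emulator actually ran

    def step(self, action):
        total_reward, done, obs, info = 0.0, False, None, {}
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            self.game_frames += 1
            if done:
                break
        return obs, total_reward, done, info
```

With `skip=4`, a run reported as 1.2B game frames corresponds to only 300M training frames, which is why the qualification matters.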
Because the Atari games are deterministic by default, some form of stochasticity needs to be introduced to provide a stochastic test environment, which is desirable to make Atari an informative test bed for RL algorithms. Following previous work, we introduce stochasticity into the Atari environment with two previously employed techniques: random no-ops and sticky actions.
Random no-ops means that the agent is forced to take up to 30 no-ops (do nothing commands) at the start of the game. Because most Atari games run on a timer that affects whether hazards are present or not, or where different hazards, items, or enemies are located, taking a random number of no-ops puts the world into a slightly different state each time, meaning that fixed trajectories (such as the ones found by Go-Explore Phase 1) will no longer work. Random no-ops were first introduced by Mnih et al. [3], and they were adopted as a primary source of stochasticity in most subsequent papers working in the Atari domain [3, 26, 27, 34, 38, 41, 42, 45, 67-73].
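A random no-op reset can be implemented as a small wrapper (a sketch, assuming a Gym-style interface where action 0 is the no-op):

```python
import random

class RandomNoops:
    """On reset, execute a random number (0 to max_noops) of no-op
    actions so the game's internal timers start in slightly different
    states each episode."""

    def __init__(self, env, max_noops=30, noop_action=0):
        self.env = env
        self.max_noops = max_noops
        self.noop_action = noop_action

    def reset(self):
        obs = self.env.reset()
        for _ in range(random.randint(0, self.max_noops)):
            obs, _, done, _ = self.env.step(self.noop_action)
            if done:  # unlikely, but no-ops could end an episode
                obs = self.env.reset()
        return obs
```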
While random no-ops prevent single, memorized trajectories from solving Atari games, the remainder of the game remains deterministic, meaning there is still much determinism that can be exploited. While several other forms of stochasticity have been proposed (e.g. human restarts [74], random frame skips [75], etc.), a particularly elegant form is sticky actions [31], where at each game frame there exists some probability of repeating the previous action instead of performing a newly chosen action. This way of introducing stochasticity is akin to how humans are not frame perfect, but may hold a button for slightly longer than they intended, or may be slightly late in pressing a button. Because Atari games have been designed for human play, the addition of sticky actions generally does not prevent a game from being solvable, and it adds some stochasticity to every state in the game, not just the start. Although our initial blog post [55] only included random no-ops, in this paper our robustification and all post-robustification test results are produced with a combination of both random no-ops and sticky actions. All algorithms we compare against in Section 3 and in Appendix A.9 were likewise tested with some form of stochasticity (in the form of no-ops, sticky actions, human starts, or some combination thereof), though it is worth noting that, unlike Go-Explore, most also had to handle stochasticity throughout training. Relevant algorithms that were tested in a deterministic environment are discussed in Section 5.
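Sticky actions amount to a one-line change in `step` (a sketch; [31] proposes a repeat probability of 0.25):

```python
import random

class StickyActions:
    """With probability `p`, repeat the previous action instead of the
    newly chosen one, mimicking imperfect human button presses."""

    def __init__(self, env, p=0.25):
        self.env = env
        self.p = p
        self.last_action = 0

    def step(self, action):
        if random.random() < self.p:
            action = self.last_action  # the previous action "sticks"
        self.last_action = action
        return self.env.step(action)
```

Unlike random no-ops, this perturbs every state in the episode, not just the start.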
All hyperparameters were found by performing a separate grid-search for each experiment. The final, best performing hyperparameters are listed in Appendix A.6, tables 1 and 2. All confidence intervals given are 95% bootstrapped confidence intervals computed using the pivotal (also known as empirical) method [76], obtained by resampling 10,000 times. Confidence intervals are reported with the following notation: stat (CI: lower - upper), where stat is the statistic (a mean unless otherwise specified). In graphs containing shaded areas, those areas indicate the 95% percentile bootstrapped confidence interval of the mean, obtained by resampling 1,000 times. Graphs of the exploration phase (Phase 1) depict data at approximately every 4M game frames and graphs of the robustification phase (Phase 2) depict data at approximately every 130,000 game frames.
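For reference, a pivotal (empirical) bootstrap interval for a mean can be computed as in the following standard-library sketch (the `seed` parameter is an illustrative addition for reproducibility):

```python
import random

def pivotal_bootstrap_ci(data, n_resamples=10_000, alpha=0.05, seed=0):
    """95% pivotal bootstrap CI for the mean.

    The pivotal interval reflects the bootstrap distribution of the mean
    around the observed mean: (2*mean - q_hi, 2*mean - q_lo), where q_lo
    and q_hi are the alpha/2 and 1 - alpha/2 bootstrap quantiles.
    """
    rng = random.Random(seed)
    n = len(data)
    observed = sum(data) / n
    means = sorted(
        sum(rng.choice(data) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    q_lo = means[int((alpha / 2) * n_resamples)]
    q_hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return (2 * observed - q_hi, 2 * observed - q_lo)
```

Note that this differs from the simpler percentile interval, which would report `(q_lo, q_hi)` directly.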
Because the robustification process can diverge even after finding a solution, the neural network at the end of training does not necessarily perform well, even if a high-performing solution was found at some point during this process. To retrieve a neural network that performs well regardless of when it was found, all robustification runs (Phase 2) produced a checkpoint of the neural network approximately every 13M game frames. Because the performance values recorded during robustification are noisy, we cannot select the best performing checkpoint from those performance values alone. As such, at the end of each robustification run, out of the checkpoints with the lowest max_starting_point (or close to it), a random subset of checkpoints (between 10 and 50) was tested to evaluate the performance of the neural network stored within that checkpoint. We test a random subset because robustification runs usually produce more successful checkpoints than we can realistically test. The highest-scoring checkpoint for each run was then re-tested to account for the selection bias inherent in selecting the best checkpoint. The scores from this final retest are the ones we report.
The neural network from each checkpoint is evaluated with random no-ops and sticky actions until at least 5 scores for each of the 31 possible starting no-ops (from 0 to 30 inclusive) are obtained. The mean score for each no-op is then calculated and the final score for the checkpoint is the grand mean of the individual no-op scores. Unless otherwise specified, the default time limit of 400,000 game frames imposed by OpenAI Gym [75] is enforced.
Figure 5: Examples of maximum starting point over training when robustifying with different numbers of demonstrations; panel (b) shows successful robustification with 10 demonstrations. Success is achieved as soon as any of the curves gets sufficiently close (e.g. within 50 units) to 0, because that means the agent is able to perform as well as at least one of the demonstrations.
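The checkpoint scoring scheme (per-no-op means, then their grand mean) amounts to:

```python
def checkpoint_score(scores_by_noop):
    """Final score for a checkpoint: the grand mean of per-no-op means.

    `scores_by_noop` maps each starting no-op count (0..30) to the list
    of (at least 5) episode scores collected under that condition; the
    dict-of-lists layout is an illustrative assumption.
    """
    per_noop_means = [sum(s) / len(s) for s in scores_by_noop.values()]
    return sum(per_noop_means) / len(per_noop_means)
```

The grand mean weights each no-op condition equally even if they received different numbers of evaluation episodes.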
In this first experiment, we run Go-Explore on Montezuma's Revenge with the downsampled image cell representation, which does not require game-specific domain knowledge. Despite the simplicity of this cell representation, Phase 1 of Go-Explore solves level 1 in 57% of runs after 1.2B game frames (a modest number by modern standards [41,42]), with one of the 100 runs also solving level 2, and visits a mean of 35 rooms (CI: 33 -37) (Fig. 4a). The number of new cells being discovered is still increasing linearly after 1.2B game frames, indicating that results would likely be even better were it run longer (Fig. 4b). Phase 1 of Go-Explore achieves a mean score of 57,439 (CI: 47,843 -67,224) (Fig. 4c). Level 1 was solved after a mean of 640M (CI: 567M -711M) game frames, which took a mean of 10.8 (CI: 9.5 -12.0) hours on a single, 22-CPU machine (note that these level 1 numbers exclude the runs that never solved level 1 after 1.2B game frames). See Appendix A.8 for more details on performance.
Amusingly, Go-Explore discovered a little-known bug in Montezuma's Revenge called the "treasure room curse" [77]. If the agent performs a specific sequence of actions, it can remain in the treasure room (the final room before being sent to the next level) indefinitely, instead of being automatically moved to the next level after some time. Because gems giving 1,000 points keep appearing in the treasure room, it is possible to easily achieve very high scores once it has been triggered. Finding bugs in games and simulators, as Go-Explore did, is an interesting reminder of the power and creativity of optimization algorithms [6], and is commercially valuable as a debugging tool to identify and fix such bugs before shipping simulators and video games. A video of the treasure room curse as triggered by Go-Explore is available at https://youtu.be/civ6OOLoR-I.
In 51 out of the 57 runs that solved level 1, the highest-scoring trajectory found by Go-Explore exploited the bug. To prevent scores from being inflated due to this bug, we filtered out trajectories that triggered the treasure room curse bug when extracting the highest scoring trajectory from each run of Go-Explore for robustification (Appendix A.4 provides details).
As mentioned in Section 2.2, we used Salimans & Chen's Backward Algorithm [28] for robustification. However, we found it somewhat unreliable in learning from a single demonstration (Fig. 5a). Indeed, only 40% of our attempts at robustifying trajectories that solved level 1 were successful when using a single demonstration.
However, because Go-Explore can produce many demonstrations, we modified the Backward Algorithm to simultaneously learn from multiple demonstrations (details in Appendix A.7). To simulate the use case in which Phase 1 is run repeatedly until enough successful demonstrations (in this case 10) are found, we extracted the highest scoring non-bug demonstration from each of the 57 out of 100 Phase 1 runs that had solved level 1, and randomly assigned them to one of 5 non-overlapping groups of 10 demonstrations (7 demonstrations were left over and ignored), each of which was used for a robustification run. When training with 10 demonstration trajectories, all 5 robustification runs were successful. Fig. 5b shows an example of successful robustification with 10 trajectories.
In the end, our robustified policies achieve a mean score of 43,763 (CI: 36,718 -50,196), substantially higher than the human expert mean of 34,900 [27]. All policies successfully solve level 1 (with a 99.8% success rate over different stochastic evaluations of the policies), and one of our 5 policies also solves level 2 100% of the time. Fig. 6 shows how these results compare with prior work.
Surprisingly, the computational cost of Phase 2 is greater than that of Phase 1. These Phase 2 results were achieved after a mean of 4.35B (CI: 4.27B -4.45B) game frames of training, which took a mean of 2.4 (CI: 2.4 -2.5) days of training (details in Appendix A.8).
With domain knowledge in the cell representation
On Montezuma's Revenge, when harnessing domain knowledge in its cell representation (Section 2.1.1), Phase 1 of Go-Explore finds a total of 238 (CI: 231 -245) rooms, solves a mean of 9.1 (CI: 8.8 -9.4) levels (with every run solving at least 7 levels), and does so in roughly half as many game frames as with the downscaled image cell representation (Fig. 7a). Its scores are also extremely high, with a mean of 148,220 (CI: 144,580 -151,730) (Fig. 7c). These results are averaged over 50 runs.
As with the downscaled version, Phase 1 of Go-Explore with domain knowledge was still discovering additional rooms, cells, and ever-higher scores linearly when it was stopped (Fig. 7). Indeed, because every level of Montezuma's Revenge past level 3 is nearly identical to level 3 (except for the scores on the screen and the stochastic timing of events) and because each run had already passed level 3, it would likely continue to find new rooms, cells, and higher scores forever.
Domain knowledge runs spend less time exploiting the treasure room bug because we preferentially select cells in the highest level reached so far (Appendix A.5). Doing so encourages exploring new levels instead of exploring the treasure rooms on previous levels to keep exploiting the treasure room bug. The highest final scores thus come from trajectories that solved many levels. Because knowing the level number constitutes domain knowledge, non-domain knowledge runs cannot take advantage of this information and are thus affected by the bug more.
Phase 1 with domain knowledge is also fast, requiring only hours on a single 22-CPU machine. Solving level 3, which effectively means solving the entire game as discussed above, is accomplished in a mean of 173M (CI: 164M -182M) game frames, corresponding to 6.8 (CI: 6.2 -7.3) hours. Appendix A.8 provides full performance details.
For robustification, we chose trajectories that solve level 3, truncated to the exact point at which level 3 is solved because, as mentioned earlier, all levels beyond level 3 are nearly identical aside from the pixels that display the score, which of course keep changing, and some global counters that change the timing of aspects of the game like when laser beams turn on and off.
We performed 5 robustification runs with demonstrations from the Phase 1 experiments above, each of which had a demonstration from each of 10 different Phase 1 runs. All 5 runs succeeded. The resulting mean score is 666,474 (CI: 461,016 -915,557), far above both the prior state of the art and the non-domain knowledge version of Go-Explore. As with the downscaled frame version, Phase 2 was slower than Phase 1, taking a mean of 4.59B (CI: 3.09B -5.91B) game frames, corresponding to a mean of 2.6 (CI: 1.8 -3.3) days of training.
The networks show substantial evidence of generalization to the minor changes in the game beyond level 3: although the trajectories they were trained on only solve level 3, these networks solved a mean of 49.7 levels (CI: 32.6 -68.8). In many cases, the agents did not die, but were stopped by the maximum limit of 400,000 game frames imposed by default in OpenAI Gym [75]. Removing this limit altogether, our best single run from a robustified agent achieved a score of 17,986,800 and solved 1,441 levels during 6,198,985 game frames, corresponding to 28.7 hours of game play (at 60 game frames per second, Atari's original speed) before losing all its lives. This score is over an order of magnitude higher than the human world record of 1,219,200 [78], thus achieving the strictest definition of "superhuman" performance. A video of the agent solving the first ten levels can be seen here: https://youtu.be/gnGyUPd_4Eo. Fig. 8 compares the performance of Go-Explore to historical results (including the previous state of the art), the no-domain-knowledge version of Go-Explore, and previous imitation learning work that relied on human demonstrations to solve the game. The version of Go-Explore that harnesses domain knowledge dramatically outperforms them all. Specifically, Go-Explore produces scores over 9 times greater than those reported for imitation learning from human demonstrations [28] and over 55 times the score reported for the prior state of the art without human demonstrations [39].
That Go-Explore outperforms imitation learning plus human demonstrations is particularly noteworthy, as human-provided solutions are arguably a much stronger form of domain knowledge than that provided to Go-Explore. We believe that this result is due to the higher quality of demonstrations that Go-Explore was able to produce for Montezuma's Revenge vs. those provided by humans in the previous imitation learning work. The demonstrations used in our work range in score from 35,200 to 51,900 (lower than the final mean score of 148,220 for Phase 1 because these demonstrations are limited to only solving up to level 3) and most importantly, they all solve level 3. The demonstration originally used with the Backward Algorithm [28] reaches a score of 71,500 but doesn't solve level 3, thus preventing it from generalizing to further levels. The demonstrations used in DQfD and Ape-X DQfD [26,27] only range in score from 32,300 to 34,900. In this last case, it is not clear whether level 3 was solved in any of the demonstrations, but we believe this is unlikely given the reported scores because they are lower than the lowest level-3-solving scores found by Go-Explore and given the fact that the human demonstration used by the Backward Algorithm scored twice as high without solving level 3.
One interesting benefit of a robustification phase with an imitation learning algorithm that does not try to mimic the original demonstration is that it can improve upon that demonstration. Because of the discount on future rewards that exists in the base RL algorithm PPO, there is a pressure to remove inefficiencies in the demonstration. Videos of Go-Explore policies reveal efficient movements. In contrast, IM algorithms specifically reward reaching novel states, meaning that policies produced by them often do seemingly inefficient things like deviating to explore dead ends or jumping often to touch states only accessible by jumping, even though doing so is not necessary to gain real reward. An example of a Deep Curiosity Search agent [37] performing such inefficient jumps can be viewed at https://youtu.be/-Fy2va3IbQU, and a random network distillation [16] IM agent can be viewed at https://youtu.be/40VZeFppDEM. These results suggest that IM algorithms could also benefit from a robustification phase in which they focus only on real-game reward once the IM phase has sufficiently explored the state space.
Pitfall
We next test Go-Explore on the harder, more deceptive game of Pitfall, for which all previous RL algorithms scored ≤ 0 points, except those that were evaluated on the fully deterministic version of the game [43,44] or relied on human demonstrations [26,27,45]. As with Montezuma's Revenge, we first run Go-Explore with the simple, domain-general, downscaled representation described in Section 2.1.1, with the same hyperparameters. With these settings, Go-Explore is able to find 22 rooms, but it is unable to find any rewards (Fig. 9). We believe that this number of rooms visited is greater than the previous state of the art, but the number of rooms visited is infrequently reported so we are unsure. In preliminary experiments, Go-Explore with a more fine-grained downscaling procedure (assigning 16 different pixel values to the screen, rather than just 8) is able to find up to 30 rooms, but it then runs out of memory (Appendix A.10). Perhaps with a more efficient or distributed computational setup this representation could perform well on the domain, a subject we leave to future work. We did not attempt to robustify any of the trajectories because no positive reward was found.
Figure 9 (caption fragment): [...] found stagnates after about 2B game frames, score continues to go up for about another billion game frames. This is possible because, in Pitfall, there can exist many different trajectories to the same cell that vary in score. As such, once all reachable cells have been discovered, Go-Explore relies on replacing lower-scoring trajectories with higher-scoring trajectories to increase its score. The final score is not the maximum score that can be reached in Pitfall (the maximum score in Pitfall is 112,000), but Go-Explore finds itself in a local optimum where higher-scoring trajectories cannot be found starting from any of the trajectories currently in the archive. Lines represent the mean over 10 (without domain knowledge) and 40 (with domain knowledge) independent runs.
We believe the downscaled-image cell representation underperforms on Pitfall because the game is partially observable, and frequently contains many importantly different states that appear almost identical (even in the unaltered observation space of the game itself), but require different actions (Appendix A.12). One potential solution to this problem would be to change to a cell representation that takes previous states into account to disambiguate such situations. Doing so is an interesting direction for future work.
Next, we tested Go-Explore with domain knowledge (Section 2.1.1). The cell representation with domain knowledge is not affected by the partial observability of Pitfall because it maintains the room number, which is information that disambiguates the visually identical states (note that we can keep track of the room number from pixel information only by keeping track of all screen transitions that happened along the trajectory). With it, the exploration phase of Go-Explore (Phase 1) is able to visit all 255 rooms and its best trajectories collect a mean of 70,264 (CI: 67,287 -73,150) points (Fig. 9).
We attempted to robustify the best trajectories, but the full-length trajectories found in the exploration phase did not robustify successfully (Appendix A.11), possibly because different behaviors may be required for states that are visually hard to distinguish (Appendix A.12). Note that the domain-knowledge cell representation does not help in this situation, because the network trained in the robustification phase (Phase 2) is not presented with the cell representation from the exploration phase (Phase 1). The network thus has to learn to keep track of past information by itself. Remembering the past is possible, as the network of the agent does include a fully recurrent layer, but it is unclear to what degree this layer stores information from previous rooms, especially because the Backward Algorithm loads the agent at various points in the game without providing the agent with the history of rooms that came before. This can make it difficult for the agent to learn to store information from previous states. As such, robustifying these long trajectories remains a topic for future research. We found that shorter trajectories scoring roughly 35,824 (CI: 34,225 -37,437) points could be successfully robustified. To obtain these shorter trajectories, we truncated all trajectories in the archive produced in Phase 1 to 9,000 training frames (down from the total of 18,000 training frames), and then selected the highest scoring trajectory out of these truncated trajectories. We then further truncated this highest scoring trajectory so that it would end right after the collection of the last obtained reward, ensuring that the Backward Algorithm would always start right before obtaining a reward; this resulted in trajectories with a mean length of 8,304 (CI: 8,118 -8,507) training frames.
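The truncation procedure can be sketched as follows; representing a trajectory as a list of `(action, reward)` pairs, one per training frame, is an illustrative assumption:

```python
def truncate_for_robustification(trajectory, max_frames=9000):
    """Truncate a demonstration so the Backward Algorithm always starts
    just before a reward: first cut to `max_frames` training frames,
    then cut again right after the last positive reward."""
    truncated = trajectory[:max_frames]
    last_reward_idx = None
    for i, (_, reward) in enumerate(truncated):
        if reward > 0:
            last_reward_idx = i
    if last_reward_idx is None:
        return []  # no reward collected within the window
    return truncated[: last_reward_idx + 1]  # end right after last reward
```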
From the truncated trajectories, the robustification phase (Phase 2) of Go-Explore is able to produce agents that collect 59,494 (CI: 49,042 -72,721) points (mean over 10 independent runs), substantially outperforming both the prior state of the art and human experts (Fig. 10). These trajectories required a mean of 8.20B (CI: 6.73B -9.74B) game frames to robustify, which took a mean of 4.5 (CI: 3.7 -5.3) days. The best rollout of the best robustified policy obtained a score of 107,363 points, and a video of this rollout is available at: https://youtu.be/IJMdYOnsDpA.
Interestingly, the mean performance of the robustified networks of 59,494 is higher than the maximum performance among the demonstration trajectories of 45,643. This score difference is too large to be the result of small optimizations along the example trajectories (e.g. by avoiding more of the negative rewards in the environment), thus suggesting that, as with Montezuma's Revenge, these policies are able to generalize well beyond the example trajectories they were provided.
Discussion and Future Work
Three key principles enable Go-Explore to perform so well on hard-exploration problems: (1) remember good exploration stepping stones; (2) first return to a state, then explore; and (3) first solve a problem, then robustify (if necessary).
These principles do not exist in most RL algorithms, but it would be interesting to weave them in. As discussed in Section 1, contemporary RL algorithms do not follow principle 1, leading to detachment. Principle 2 is important because current RL algorithms explore by randomly perturbing the parameters or actions of the current policy in the hope of exploring new areas of the environment, which is ineffective when most changes break or substantially change a policy such that it cannot first return to hard-to-reach states before further exploring from them (an issue we call derailment). Go-Explore solves this problem by first returning to a state and then exploring from there. Doing so enables deep exploration that can find a solution to the problem, which can then be robustified to produce a reliable policy (principle 3).
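The first two principles can be captured in a minimal sketch of the Phase 1 loop. The `get_state`/`set_state` emulator-state interface, the `n_actions` attribute, and the uniform cell selection are simplifying assumptions (the real implementation uses the weighted selection of Section 2.1.2):

```python
import random

def go_explore_phase1(env, cell_fn, iterations=1000, explore_steps=100):
    """Select a cell from the archive, return to it exactly by restoring
    the emulator state (exploiting determinism), explore from there with
    random actions, and remember every newly discovered cell."""
    obs = env.reset()
    archive = {cell_fn(obs): env.get_state()}
    for _ in range(iterations):
        cell = random.choice(list(archive))    # 1. select a stepping stone
        env.set_state(archive[cell])           # 2. return to it exactly
        for _ in range(explore_steps):         # 3. explore from there
            obs, _, done, _ = env.step(random.randrange(env.n_actions))
            if done:
                break
            new_cell = cell_fn(obs)
            if new_cell not in archive:        # remember new stepping stones
                archive[new_cell] = env.get_state()
    return archive
```

Even with purely random exploration, each new trajectory departs from the frontier of a previous one, so coverage of the state space accumulates instead of restarting from scratch.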
The idea of preserving and exploring from stepping stones in an archive comes from the quality diversity (QD) family of algorithms (like MAP-elites [60,79] and novelty search with local competition [80]), and Go-Explore is an enhanced QD algorithm based on MAP-Elites. However, previous QD algorithms focus on exploring the space of behaviors by randomly perturbing the current archive of policies (in effect departing from a stepping stone in policy space rather than in state space), as opposed to explicitly exploring state space by departing to explore anew from precisely where in state space a previous exploration left off. In effect, Go-Explore offers significantly more controlled exploration of state space than other QD methods by ensuring that the scope of exploration is cumulative through state space as each new exploratory trajectory departs from the endpoint of a previous one.
It is remarkable that the current version of Go-Explore works by taking entirely random actions during exploration (without any neural network) and that it is effective even when applied on a very simple discretization of the state space. Its success despite such surprisingly simplistic exploration strongly suggests that remembering and exploring from good stepping stones is a key to effective exploration, and that doing so even with otherwise naive exploration helps the search more than contemporary deep RL methods for finding new states and representing those states. Go-Explore might be made even more powerful by combining it with effective, learned representations. It could further benefit from replacing the current random exploration with more intelligent exploration policies, which would allow the efficient reuse of skills required for exploration (e.g. walking). Both of these possible improvements are promising avenues for future work.
Go-Explore also demonstrates how exploration and dealing with environmental stochasticity are problems that can be solved separately by first performing exploration in a deterministic environment and then robustifying relevant solutions. The reliance on having access to a deterministic environment may initially seem like a drawback of Go-Explore, but we emphasize that deterministic environments are available in many popular RL domains, including video games, robotic simulators, and even learned world models. Once a brittle solution is found, or especially a diverse set of brittle solutions, a robust solution can then be produced in simulation. If the ultimate goal is a policy for the real world (e.g. in robotics), one can then use any of the many available techniques for transferring the robust policy from simulation to the real world [59,60,81]. In addition, we expect that future work will demonstrate that it is possible to substitute exploiting determinism to return to states with a goal-conditioned policy [62,63] that learns to deal with stochastic environments from the start (during training). Such an algorithm would still benefit from the first two principles of Go-Explore, and possibly the third too, as even a goal-conditioned policy could benefit from additional optimization once the desired goal is known.
A possible objection is that, while this method already works in the high-dimensional domain of Atari-from-pixels, it might not scale to truly high-dimensional domains like simulations of the real world. We believe Go-Explore can be adapted to such high-dimensional domains, but it will likely have to marry a more intelligent cell representation of interestingly different states (e.g. learned, compressed representations of the world) with intelligent (instead of random) exploration. Indeed, the more conflation (mapping more states to the same cell) one does, the more probable it is that one will need intelligent exploration to reach such qualitatively different cells.
Though our current implementation of Go-Explore can handle the deceptive reward structure found in Pitfall, its exploitation of determinism makes it vulnerable to a new form of deception we call the "busy-highway problem." Consider an environment in which the agent needs to cross a busy highway. One option is to traverse the highway directly on foot, but that creates so much risk of being hit by a car that no policy could reliably cross this way. A safer alternative would be to take a bridge that goes over the highway, which would constitute a detour, but be guaranteed to succeed. By making the environment deterministic for Phase 1, the current version of Go-Explore would eventually succeed in traversing the highway directly, leading to a much shorter trajectory than by taking the bridge. Thus all the solutions chosen for robustification will be ones that involve crossing the highway directly instead of taking the bridge, making robustification impossible.
One solution to this issue would be to provide robustification with more demonstrations from Phase 1 of Go-Explore (which could include some that take the bridge instead of crossing the highway), or even all of the trajectories it gathers during Phase 1. With this approach, robustification would be able to fall back on the bridge trajectories when the highway trajectories fail to robustify. While this approach should help, it may still be the case that so much of the experience gathered by Go-Explore Phase 1 is dependent on trajectories that are impossible to reproduce reliably that learning from these Go-Explore trajectories is less efficient than learning from scratch. How common this class of problem is in practice is an empirical question and an interesting subject for future work. However, we hypothesize that versions of Go-Explore that deal with stochasticity throughout training (e.g. by training goal-conditioned policies to return to states) would not be affected by this issue, as they would not succeed in crossing the highway reliably except by taking the bridge.
One promising area for future work is robotics. Many problems in robotics, such as figuring out the right way to grasp an object, how to open doors, or how to locomote, are hard-exploration problems. Even harder are tasks that require long sequences of actions, such as asking a robot to find survivors, clean a house, or get a drink from the refrigerator. Go-Explore could enable a robot to learn how to do these things in simulation. Because conducting learning in the real world is slow and may damage the robot, most robotic work already involves first optimizing in a simulator and then transferring the policy to the real world [59-61, 82]. Go-Explore's ability to exploit determinism can then be helpful because robotic simulators could be made deterministic for Phase 1 of Go-Explore. The full pipeline could look like the following: (1) Solve the problem in a deterministic simulator via Phase 1 of Go-Explore. (2) Robustify the policy in simulation by adding stochasticity to the simulation via Phase 2 of Go-Explore. (3) Transfer the policies to the real world, optionally adding techniques to help cross the simulation-reality gap [59-61], including optionally further learning via these techniques or any learning algorithm. Of course, this pipeline could also be changed to using a goal-conditioned version of Go-Explore if appropriate. Overall, we are optimistic that Go-Explore may make many previously unsolvable robotics problems solvable, and we are excited to see future research in this area from our group and others.
Interestingly, the Go-Explore algorithm has implications and applications beyond solving sparse-or deceptive-reward problems. The algorithm's ability to broadly explore the state space can unearth important facets of the domain that go beyond reward, e.g. the distribution of states that contain a particular agent (e.g. a game character or robot) or are near to catastrophic outcomes. For example, within AI safety [5] one open problem is that of safe exploration [83], wherein the process of training an effective real-world policy is constrained by avoiding catastrophe-causing actions during that training. In the robotics setting where Go-Explore is applied in simulation (before attempting transfer to the real world), the algorithm could be driven explicitly to search for diverse simulated catastrophes (in addition to or instead of reward). Such a catastrophe collection could then be leveraged to train agents that act more carefully in the real world, especially while learning [84,85]. Beyond this example, there are likely many other possibilities for how the data produced by Go-Explore could be productively put to use (e.g. as a source of data for generative models, to create auxiliary objectives for policy training, or for understanding other agents in the environment by inverse reinforcement learning).
Related Work
Go-Explore is reminiscent of earlier work that separates exploration and exploitation (e.g. Colas et al. [86]), in which exploration follows a reward-agnostic Goal Exploration Process [87] (an algorithm similar to novelty search [7]), from which experience is collected to prefill the replay buffer of an off-policy RL algorithm, in this case DDPG [88]. This algorithm then extracts the highest-rewarding policy from the experience gathered. In contrast, Go-Explore further decomposes exploration into three elements: Accumulate stepping stones (interestingly different states), return to promising stepping stones, and explore from them in search of additional stepping stones (i.e. principles 1 and 2 above). The impressive results Go-Explore achieves by slotting in very simple algorithms for each element shows the value of this decomposition.
The aspect of Go-Explore of first finding a solution and then robustifying around it has precedent in Guided Policy Search [89]. However, this method requires a non-deceptive, non-sparse, differentiable loss function to find solutions, meaning it cannot be applied directly to problems where rewards are discrete, sparse, or deceptive, as both Atari and many real-world problems are. Further, Guided Policy Search requires having a differentiable model of the world or learning a set of local models, which to be tractable requires the full state of the system to be observable during training time.
Another algorithm related to the idea of first returning before exploring is Bootstrapped DQN [90]. It trains an ensemble of networks that approximate the Q function, bootstrapping the data so that each network is trained on a different random subset of it. Each training episode, it picks one of the networks and acts according to the policy it implies. In frequently visited areas of the search space, all of the networks will have lots of data and are likely to converge to the same policy (thus, exploration will be low). However, in rarely visited areas of the state space, the networks would ideally have different Q-value predictions, meaning that in different episodes different choices will be made, yielding exploration. At a high level, these dynamics can thus allow an agent to first return to an area of the search space with little exploration before exploring from it. That said, this algorithm will still try to focus on returning to one narrow area of the search space (the one it is currently exploring; see the flashlight metaphor of IM algorithms in Section 1) before exploring, and thus is still likely to suffer from the issue of detachment described in Section 1. Indeed, empirically Bootstrapped DQN scores only 100 on Montezuma's Revenge, and detachment may be a large reason why.
Closely related to the first two principles of Go-Explore is the work by Liu et al. [43], which takes a hierarchical reinforcement learning approach in which an abstract MDP is created through the conflation of multiple states into abstract states, which are similar to the cells in Go-Explore. This abstract MDP stores all abstract states (i.e. cells) that it encounters, thus keeping track of promising states to explore from, and it navigates the MDP in a reliable way before exploring from a selected abstract-MDP state, thus implementing the idea of returning before exploring. One difference with Go-Explore is that this algorithm does not use a trajectory of actions to return to a cell, but instead relies on a set of sub-policies, called skills, which are executed in sequence to navigate the abstract MDP. While this set of skills is flexible, in that it allows the same skill to be reused for different transitions, it takes time to train a new skill, potentially making it computationally expensive to explore as deep into the game as Go-Explore does. Another difference is that the algorithm by Liu et al. [43] does not implement a robustification phase, but instead relies on the abstract MDP, even at evaluation time. While this means the algorithm does not require any additional training, it also means the algorithm can never improve upon the limits of the constructed MDP. The algorithm from Liu et al. [43], which harnesses domain knowledge, scores 12,500 on Montezuma's Revenge and 15,000 on Pitfall, though these scores come from evaluation in the deterministic version of the environment (they do provide results on stochastic test environments for a different game: Private Eye). Go-Explore scores substantially more in both Montezuma's Revenge and Pitfall despite being tested in a stochastic environment and, in the case of Montezuma's Revenge, even when not relying on domain knowledge.
In a similar vein, Dong et al. [91] maintain an explicit memory of novel states and explore after returning to them via a goal-conditioned policy, though their algorithm only reaches scores of around 1,000 on Montezuma's Revenge, substantially less than Go-Explore. We speculate that this is due to (1) its use of a fixed-capacity pool of potential next states to visit, which might not be able to keep up with the large number of possible interestingly different states present in Montezuma's Revenge, and (2) its pixel-based measure for determining whether a goal is reached: the goal-conditioned policy could have a hard time learning to return to a previously visited state, as a pixel-based match requires all moving objects, such as enemies, to be in very similar locations before a goal is considered reached. The insight of keeping an archive of known states and exploring to discover new states to add to the archive dates back at least to the E^3 algorithm [92], although the E^3 authors note that it does not work in high-dimensional problems for which tabular methods are intractable and function approximation (or some form of conflation) is required. Go-Explore can be seen as an E^3-like algorithm that adapts some of its principles to high-dimensional domains.
The idea of planning (searching in a deterministic model of the world to find a good strategy) and then training a policy to mimic what was learned is reminiscent of Guo et al. [93]. It plans (in the Atari emulator) with UCT [47][48][49], which is slow, and then trains a much faster policy with supervised learning to imitate the planning algorithm. At first glance it seems that in Guo et al. [93] UCT serves a similar role to the exploration phase in Go-Explore, but UCT is quite different in several ways that make it inferior for domains that are either high-dimensional or hard-exploration. That is true even though UCT does have a form of exploration bonus.
UCT plans in a model of the world so as to decide on the next action to take in the real environment. An exploration bonus is used during the planning phase, but only extrinsic rewards are considered when choosing the next action to take. This approach can improve performance in domains with relatively dense rewards, but fails in sparse rewards domains as rewards are likely to be beyond the planning horizon of the algorithm. Once planning what to do from one state is done, an action is taken and the planning process is run again from the next state. UCT does not try to explore all states, and each run of UCT is independent of which states were visited in previous planning steps. As such, UCT (either within an episode, or across episodes) does not try to discover new terrain: instead its exploration bonus only helps it within the current short-horizon planning phase. As mentioned in Section 1, UCT scores 0 on Montezuma's Revenge and Pitfall [30,51].
Another approach to planning is Fractal Monte Carlo (FMC) [94]. When choosing the next action, it takes into account both the expected reward and novelty of that action, and in that way is more similar to Go-Explore. In FMC, a planning process is initiated from each state the agent visits. Planning is done within a deterministic version of the game emulator. A fixed number of workers are started in the state from which planning is occurring, and they perform random walks in state space. Periodically, workers that have accumulated lower reward and/or are in less novel states are replaced by "clones" of more successful workers. Novelty is approximated as the Euclidean distance of the worker's state (in the original, raw, observation space) to that of a randomly selected other worker.
FMC reaches a score of 5,600 on Montezuma's Revenge, substantially higher than UCT. We believe this increased performance is due to at least three factors: (1) its planning process puts more emphasis on depth than breadth due to its finite number of workers, as opposed to the exponential branching factor that UCT needs to handle; (2) it favors novel states within a planning iteration, so actions that lead to hard-to-reach states, such as jumping over an enemy, are more likely to be chosen; (3) having an exploration bonus based on Euclidean distance is more informative than UCT's exact-match state bonus, because more distant states are recognized as being more novel than states that differ by, say, one pixel. One major reason we believe FMC performs worse than Go-Explore is that, like UCT, it restarts its planning process from scratch each time an action is taken. That means it can cycle indefinitely between the same few states, because it does not have a means of remembering over time which states it has visited in order to attempt to explore all states, and instead must rely on random chance to break out of cycling. This phenomenon is apparent when watching its agent play: https://youtu.be/FgaXa0uCBR4. Although its greater focus on depth rather than breadth (relative to UCT) extends its planning horizon enough to reach the first few rewards available in Montezuma's Revenge, that was seemingly insufficient for it to reach the even sparser rewards found later in the game that are easily found by Go-Explore.
On Pitfall, SOORL [44] was the first planning algorithm to achieve a non-zero score, but did so in a deterministic test environment. It does so through a combination of learning a model of the environment, domain knowledge, and a value function that is optimistic about the value of unseen states, thus effectively providing an exploration bonus. At the end of 50 episodes of training, which was the maximum reported number of episodes, SOORL achieves an average of about 200 points across runs, and its best run scored an average of 606.6 with a maximum of 4,000.
Another way to view Phase 1 of Go-Explore is as being similar to a graph-search algorithm over nodes that are made up of the conflated states, and with unknown edges between the different nodes, meaning that nodes can never fully be marked as "closed". Specifically, the algorithm has to empirically discover the existence of an edge between two nodes, for example by executing a sequence of random actions that leads from one node to another node, and, as a result, it is never clear whether a node is closed because it is always possible that additional edges from this node exist, but that they have not been discovered yet. Prioritizing which nodes to explore by assigning a weight to them is reminiscent of graph-search algorithms such as Dijkstra's algorithm [95] and A* [96]. Graph-search algorithms as a means of exploration in planning have been investigated in algorithms such as Rapidly-exploring Random Trees (RRTs) [97], which were recently used to explore Atari games by Zhan et al. [98]. Indeed, Go-Explore exhibits important similarities with RRTs as they both keep track of an archive of states and trajectories to those states. However, there are some crucial differences, including: (1) RRTs proceed by first sampling a goal to attempt to reach, which can be impractical in environments where reachable states are not known a priori (and which is particularly pernicious in high-dimensional state spaces, such as pixels or even learned encodings, where most randomly selected goals are unreachable), such as Atari, and (2) RRTs do not have the concept of "cells" present in Go-Explore and thus RRTs can add many very similar states to their archive that do little to help the algorithm reach meaningfully different unexplored areas of the search space. In general, we believe that Go-Explore points to an interesting future research direction in adapting the principles behind graph-search algorithms to high dimensional state spaces.
Even more distantly related are the many variants of intrinsically motivated model-free reinforcement learning algorithms. The relation between Go-Explore and these algorithms is discussed in Section 1 and many specific algorithms are included in our comparison in Appendix A.9, as they account for most of the high-scoring work on Montezuma's Revenge prior to Go-Explore.
Conclusion
Go-Explore represents an exciting new family of algorithms for solving hard-exploration reinforcement learning problems, meaning those with sparse and/or deceptive rewards. It opens up a large number of new research directions beyond the simple version described in this paper, including experimenting with different archives, different methods for choosing which cells to return to, different cell representations, different exploration methods, and different robustification methods. We expect Go-Explore will accelerate progress in a variety of challenging domains such as robotics. It will also be interesting to see not only the domains in which it excels, but also those in which it fails. Go-Explore thus opens a new playground of possibilities for future research, and we hope the community will join us in investigating this new terrain.
A.2 Episode end
In the case of Montezuma's Revenge, the end of an episode is defined as a loss of life, while in the case of Pitfall it is the game-over signal. Both definitions of the end of an episode appear in the literature [31], and our use of differing approaches in Montezuma's Revenge and Pitfall was due to the greater difficulty of tracking room location based on pixels in Montezuma's Revenge if the character is allowed to lose lives (a difficulty which does not exist in Pitfall). Additionally, death in Pitfall grants the agent additional affordances, which is not the case in Montezuma's Revenge. These factors are further explained in Appendix A.3 below.
A.3 Extraction of domain knowledge features from pixels
Phase 1 of Go-Explore used the following domain knowledge features: the x, y position of the agent, the current room, the current level and the rooms in which the currently held keys were found (these last two only apply to Montezuma's Revenge). Although these features can be found in RAM, they were extracted from pixels in our implementation for two reasons: (1) extracting information from pixels is more similar to how a real world environment would be tackled and ensures we do not exploit any non-visible information that might be stored in the RAM, and (2) we found that extracting values from the RAM could be unreliable at times: in Montezuma's Revenge, when the character moves into a new room, a black transition image is shown for a few frames. The current room and current x, y position are updated at different times during these transition frames, so that reading these values from RAM would give a room number and x, y position that are inconsistent.
The location of the agent could be extracted by training a simple classifier, or in an unsupervised way through contingency-awareness [39], but it turns out that, in both Montezuma's Revenge and Pitfall, some pixel values only occur in the character sprite, making it trivial to identify the character location by searching for these values in the current frame. Coincidentally, searching for pixels with a red channel value of 228 is enough to find the character in both Montezuma's Revenge and Pitfall.
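The pixel-matching trick described above can be sketched as follows (a minimal illustration assuming frames are H × W × 3 NumPy RGB arrays; the function name and the handling of off-screen frames are our own):

```python
import numpy as np

def find_character(frame, red_value=228):
    """Locate the character sprite by searching for its unique red-channel value.

    frame: H x W x 3 uint8 RGB array. Returns the mean (x, y) position of
    matching pixels, or None if the sprite is not on screen (e.g. during
    room-transition frames).
    """
    ys, xs = np.where(frame[:, :, 0] == red_value)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

Averaging the matching pixel coordinates gives a single stable position even though the sprite spans several pixels.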
Room changes are identified by detecting sudden changes in x, y position: if the character was located at the far right of the screen and is now located at the far left, it likely moved to a room on the right of the current room. In the case of Pitfall, additional domain knowledge is required: underground transitions move the players 3 rooms at a time instead of just 1, and the map wraps around so that the last room is situated to the left of the first room. In Montezuma's Revenge, knowledge of the map is not strictly necessary for room tracking, as the room transition rules are simple, but it is necessary for level tracking: any transition away from the treasure room is an increase in level.
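The edge-jump heuristic can be illustrated as follows (the margin threshold and function name are our own; Pitfall's underground transitions and map wrap-around would need the extra domain knowledge described above):

```python
def detect_room_change(prev_x, new_x, screen_width=160, margin=40):
    """Heuristic room-change detector: a sudden jump of the character from
    one edge of the screen to the other suggests a room transition.

    Returns +1 (moved to the room on the right), -1 (moved to the room on
    the left), or 0 (no transition detected).
    """
    if prev_x > screen_width - margin and new_x < margin:
        return 1   # exited on the right, reappeared on the left
    if prev_x < margin and new_x > screen_width - margin:
        return -1  # exited on the left, reappeared on the right
    return 0
```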
Loss of life needs to be taken into account when tracking room changes: in Montezuma's Revenge, losing a life causes the character to be brought back to the exact location where it entered the room, so that if the character entered the room from the left and dies on the right of the room, the sudden change of x value due to the character reviving on the left side of the room could be mistaken for a room change. Handling this behavior is possible, but we believe unnecessarily complicated for our purposes. For this reason, we end episodes on loss of life in Montezuma's Revenge. By contrast, in Pitfall, the character is brought to a fixed location on the left side of the screen that cannot be confused with a room change, so that there is no need to end the episode on life loss to simplify room tracking. Further, while losing a life is a strict waste of time in Montezuma's Revenge since it brings the agent back to a previously seen location, in Pitfall it can be used as a form of teleportation: if an agent enters a room from the right and loses a life soon after, it will be teleported all the way to the left of the room, thus skipping the hazards that may be in the middle. For this reason, we did not choose to end episodes on life loss in Pitfall.
Finally, key tracking in Montezuma's Revenge is done simply by pattern-matching for keys in the section of the screen that shows the current inventory, and tracking the room number associated with any increase in the current number of keys.
A.4 Filtering out bug trajectories
As mentioned in Section 3.1, we filtered out trajectories that triggered the treasure room bug when robustifying Montezuma's Revenge without domain knowledge. Such filtering was not necessary when using domain knowledge because none of the highest scoring trajectories triggered the bug, as explained in Section 3.1.2.
The filtering of bug trajectories was done by excluding all trajectories whose level was lower than the maximum level in the archive. That works because the bug makes it impossible to leave the treasure room and advance to the next level, so any trajectory that makes it to a new level did not trigger the bug in the previous level.
A.5 Cell selection details
As mentioned in Section 2.1.2, cells are selected at each iteration by first assigning them a score, which is then normalized across all cells in the archive, yielding the probability of each cell being selected. The score of a cell is the sum of separate subscores, which we now describe.
One important set of such subscores, called the count subscores, is computed from attributes that represent the number of times a cell was interacted with in different ways. Specifically: the number of times a cell has already been chosen (i.e. selected as a cell to explore from), the number of times a cell was visited at any point during the exploration phase, and the number of times a cell has been chosen since exploration from it last produced the discovery of a new or better cell. In the case of each of these attributes, a lower count likely indicates a more promising cell to explore from (e.g. a cell that has been chosen more times already is less likely to lead to new cells than a cell that has been chosen fewer times). The count subscore for each of these attributes is given by:
$$\mathrm{CntScore}(c, a) = w_a \cdot \left(\frac{1}{v(c, a) + \varepsilon_1}\right)^{p_a} + \varepsilon_2 \qquad (1)$$
Here c is the cell for which we are calculating the score, v(c, a) is a function that returns the value of attribute a for cell c, w_a is the weight hyperparameter for attribute a, and p_a is the power hyperparameter for attribute a. ε_1 helps prevent division by 0 and determines the relative weight of cells for which a given value is 0. ε_2 helps guarantee that no cell ever has a 0 probability of being chosen. In our implementation, ε_1 = 0.001 and ε_2 = 0.00001, which we chose after preliminary experiments showed that they worked well.
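Equation (1) translates directly into code (a sketch; the function and variable names are our own, with ε_1 and ε_2 as given above):

```python
EPS1 = 0.001    # prevents division by zero; sets relative weight of zero counts
EPS2 = 0.00001  # guarantees no cell ever has a 0 selection probability

def cnt_score(value, weight, power=0.5):
    """Count subscore (Eq. 1): lower interaction counts yield higher scores.

    value  -- v(c, a): e.g. times chosen, times chosen since new, times seen
    weight -- w_a, the per-attribute weight hyperparameter
    power  -- p_a (grid search found 0.5 for all attributes)
    """
    return weight * (1.0 / (value + EPS1)) ** power + EPS2
```

As intended, a never-chosen cell (value 0) scores far higher than a frequently chosen one.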
When cell representations are informed by domain knowledge (Section 3.1.2), giving us the x, y position of the agent, it is possible to determine the possible neighbors of given cells, and whether these neighbors are already present in the archive. For those cases, we define a set of neighbor subscores. Each neighbor subscore is defined as w_n if neighbor n does not exist in the archive, and is 0 otherwise. The motivation behind these neighbor subscores is that cells that are lacking neighbors are likely at the edge of the current frontier of knowledge and are thus more likely to yield new cells. We consider 3 types of neighbors: vertical (2 neighbors), horizontal (2 neighbors), and (in the case of Montezuma's Revenge) cells that are in the same level, room and x, y position, but are holding a larger number of keys (the intuition is that if a cell lacks a "more keys" neighbor, then it is the cell that is most capable of opening doors from its location). Neighbors of the same type share the same value for w_n (Table 2). These definitions result in the following neighbor subscore, assuming a function called HasNeighbor(c, n) which returns 1 if neighbor n of cell c is present in the archive, and which returns 0 otherwise:
$$\mathrm{NeighScore}(c, n) = w_n \cdot \left(1 - \mathrm{HasNeighbor}(c, n)\right) \qquad (2)$$
In cases without domain knowledge, it is unclear what exactly would constitute a cell's neighbor, and so NeighScore is defined as 0 in this case in our experiments.
In the case of Pitfall (where no notion of level exists) and Montezuma's Revenge without domain knowledge (where we do not know what the level of a given cell is), LevelWeight is always 1.
The final cell score is then computed as follows:
$$\mathrm{CellScore}(c) = \mathrm{LevelWeight}(c) \cdot \left(\sum_{n} \mathrm{NeighScore}(c, n) + \sum_{a} \mathrm{CntScore}(c, a) + 1\right) \qquad (4)$$

Note that CellScore(c) > 0 for all cells c. The cell selection probability is given by:
$$\mathrm{CellProb}(c) = \frac{\mathrm{CellScore}(c)}{\sum_{c'} \mathrm{CellScore}(c')} \qquad (5)$$
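Putting Equations (2), (4) and (5) together, the selection step might look like the following (a sketch with hypothetical data structures; in the real implementation these attributes are tracked in the archive):

```python
def neigh_score(has_neighbor, weight):
    """Neighbor subscore (Eq. 2): weight w_n if neighbor n is absent from
    the archive, 0 otherwise."""
    return weight * (1 - int(has_neighbor))

def cell_prob(cells):
    """Combine subscores into selection probabilities (Eqs. 4-5).

    cells: list of dicts with keys 'level_weight', 'neigh_subscores' and
    'cnt_subscores' (lists of per-neighbor and per-attribute subscores).
    """
    scores = [
        c['level_weight'] * (sum(c['neigh_subscores']) + sum(c['cnt_subscores']) + 1)
        for c in cells
    ]
    total = sum(scores)
    return [s / total for s in scores]
```

Because every cell's score is strictly positive, every cell retains a nonzero chance of being selected.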
Hyperparameters (the different values of w_a, p_a and w_n) were found through separate grid searches on each game (Montezuma's Revenge and Pitfall) and for each treatment (with or without domain knowledge). Detailed hyperparameter tables are found in Appendix A.6 below.
A.6 Phase 1 hyperparameters
Hyperparameter values were found through grid search. The power hyperparameter p_a (see Section A.5) turned out to be 0.5 for all attributes a in every experiment, so it is excluded from the tables for conciseness.
The "count-based" attributes are as follows: "Times chosen" is the number of times a cell was selected from the archive so far, "Times chosen since new" is the number of times the cell was selected from the archive since last time it led to a new cell being found or to a cell being improved, and "Times seen" is the number of times the cell was seen during exploration, regardless of whether it was chosen. Note: the Atari emulator resolution is 160 × 210, which results in "tall" frames. However, Atari games were meant to be displayed with wide pixels, resulting in frames wider than they are tall. The common way to achieve this effect is to duplicate pixels horizontally, resulting in a 320 × 210 frame. We divide the frame into a 16 × 16 grid after the frame is adjusted to 320 × 210, so that in the original frame space our cells would be 8 × 16.
A.7 Modifications to the Backward Algorithm
A.7.1 Multiple demonstrations
As mentioned, we modified the Backward Algorithm to robustify with multiple demonstrations, 10 in the case of Montezuma's Revenge. For Pitfall with domain knowledge (we did not robustify any trajectories without domain knowledge) and with the truncated trajectories (Section 3.2), we robustified with 4 demonstrations. We did not robustify the long Pitfall trajectories with multiple demonstrations. While doing so is expected to improve robustification performance, it is unclear whether multiple demonstrations would enable successful robustification of the full-length Pitfall runs, and we leave this question for future work.
Handling multiple demonstrations in the Backward Algorithm was implemented by choosing a demonstration uniformly at random each time the Backward Algorithm selects a demonstration state from which to start a rollout. Demonstration-specific information such as the current max_starting_point (the latest frame in the demonstration that the Backward Algorithm will start from) and success rates (the proportion of runs starting from a given starting point that performed at least as well as the demonstration) were tracked separately for each demonstration (see Salimans and Chen [28] for details on the various attributes used by the Backward Algorithm).
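A minimal sketch of this per-demonstration bookkeeping follows (the class, window size, and starting-point decrement rule are illustrative only; see Salimans and Chen [28] for the actual update logic):

```python
import random

class DemoPool:
    """Track per-demonstration Backward Algorithm state across multiple demos.

    Each rollout samples a demonstration uniformly at random; its
    max_starting_point and success statistics are kept separately.
    """
    def __init__(self, demos):
        self.demos = demos
        self.max_start = {i: len(d) - 1 for i, d in enumerate(demos)}
        self.successes = {i: [] for i in range(len(demos))}

    def sample(self):
        i = random.randrange(len(self.demos))
        return i, self.demos[i], self.max_start[i]

    def record(self, i, success, threshold=0.1):
        # Move the starting point backward (toward the start of the demo)
        # once the recent success rate for this demo exceeds the threshold.
        self.successes[i].append(success)
        window = self.successes[i][-20:]
        if sum(window) / len(window) > threshold and self.max_start[i] > 0:
            self.max_start[i] -= 1
            self.successes[i] = []
```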
A.7.2 Modified hyperparameters
For robustification, we kept the default hyperparameters given by Salimans and Chen [28], with the following exceptions: we added random no-ops at the beginning of the trajectory when the starting point was equal to 0 and we also added sticky actions throughout learning (unless otherwise specified).
In addition, to improve performance when robustifying from multiple demonstrations, we set the success rate parameter to 0.1 instead of 0.2, and we changed the parameter that determines how frequently the starting point can be updated to 200 · nDemos steps instead of a fixed 1024 steps. To avoid cases where the reward would be hard to find from the first checkpoint (i.e. the checkpoint closest to the end of the game), we also changed an internal starting-frame parameter (i.e. the number of frames before the end that the backward process would start robustifying from) from 256 to 0. We found that these parameters seemed to work better empirically, though we did not experiment with them extensively.
A.7.3 Pitfall-specific changes
The small negative rewards in combination with the large positive rewards encountered on Pitfall required two additional changes in this particular game. The first change is to replace reward clipping with reward scaling: instead of rewards being clipped to the range [-1, 1], rewards are multiplied by 0.001. This change was necessary because, in Pitfall, negative rewards can have values as small as -1 while positive rewards have values between 2,000 and 5,000. Because negative rewards are so common and positive rewards so rare, clipping rewards gives a huge relative boost to avoiding negative rewards relative to obtaining positive rewards, which makes learning nearly impossible. With reward scaling, the relative importance of the two types of rewards is preserved, and learning succeeds. The scaling factor of 0.001 for Pitfall's rewards creates a reward range similar to that of clipped rewards, facilitating the use of the same hyperparameters (learning rate, entropy coefficient etc.) across Montezuma's Revenge and Pitfall. We chose to make a special case of Pitfall instead of using reward scaling in general for our method because reward clipping is more amenable to sharing hyperparameters across many different games [3]. An alternative to these domain-specific adjustments would be to implement automated reward scaling methods such as Pop-Art [70].
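The difference between the two reward schemes is easy to see in code (a sketch; Pitfall's reward magnitudes are those given above):

```python
def clip_reward(r):
    """Standard Atari reward clipping to [-1, 1]."""
    return max(-1.0, min(1.0, r))

def scale_reward(r, factor=0.001):
    """Pitfall reward scaling: multiply instead of clip, preserving the
    relative magnitude of rare large treasures vs. common small penalties."""
    return r * factor
```

Under clipping, a -1 penalty looks as large as a +4,000 treasure; under scaling, their 1:4,000 ratio is preserved while keeping values in a range similar to clipped rewards.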
Another change to the canonical Backward Algorithm relates to fixing an issue with early termination and negative rewards. To quickly eliminate rollouts that are slow in collecting the rewards from the demonstration, the original Backward Algorithm implements early termination, where it terminates all rollouts that do not get the same (or a higher) cumulative reward as the demonstration within a certain number of steps (50 in our case). The early termination is implemented in the form of a sliding window, where the cumulative reward of the current rollout is compared with the cumulative reward of the demonstration from 50 time steps ago, and if the cumulative reward of the current rollout is lower, the rollout is terminated. For example, if the demonstration collected a reward of 100 points at time step 20 (counting from the starting point of the rollout, and assuming no other rewards were collected), then a rollout will have to collect at least 100 points before time step 70, otherwise the rollout will be terminated at time step 70.
The sliding window method for early termination works fine when only positive rewards exist, as the only reason the rollout can have a lower score than the demonstration is because it failed to collect a particular positive reward within the given time frame. However, if negative rewards exist, a rollout can also be terminated by collecting a negative reward, even if the demonstration collected the same negative reward. For example, if the demonstration collected a negative reward of -1 at time step 20 (once again, counting from the starting point of the rollout and assuming no other rewards were collected), the rollout needs to avoid this negative reward at all costs; otherwise it will be terminated at time step 50, even though it followed the same behavior as the demonstration. The reason for such early termination is that, at time step 50, the rollout will be compared with the performance of the demonstration at time step 0, and at that time step, the demonstration has not collected the negative reward yet.
To avoid this kind of spurious termination, we give the agent an allowed score deficit of 250 points, meaning a rollout will only be terminated if its score is more than 250 points lower than that of the demonstration from 50 time steps earlier. This convention means that, as long as the demonstration did not collect more than 250 points of negative reward within the given 50 time steps, the rollout will not be terminated if it follows the demonstration. The value of 250 points was found empirically on Pitfall, though future work could look for a more general method of implementing early termination in domains with negative reward.
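The full sliding-window termination rule, including the allowed deficit, can be sketched as follows (function and argument names are our own):

```python
def should_terminate(rollout_returns, demo_returns, t, window=50, deficit=250):
    """Sliding-window early termination with an allowed score deficit.

    rollout_returns[t] and demo_returns[t] are cumulative rewards at step t,
    counted from the rollout's starting point. The rollout is terminated at
    step t if its cumulative reward is more than `deficit` points below the
    demonstration's cumulative reward from `window` steps earlier.
    """
    if t < window:
        return False
    return rollout_returns[t] < demo_returns[t - window] - deficit
```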
A.8 Performance
All Phase 1 runs were done on single virtual machines with 22 CPU cores and 50GB of RAM.

(Caption of the performance table, not reproduced here:) It is a hypothetical metric, since we did not replay the trajectories, but instead reset the environment. The "Solved %" column shows the proportion of runs that solved a given level. All other metrics are computed only for the subset of runs that did solve the level.
It is worth noting that time scales superlinearly with game frames, primarily for two reasons:

(1) Cell selection happens at a fixed interval, but takes an amount of time proportional to the number of cells, which is constantly growing. We note that our cell selection is a form of Roulette-Wheel Selection (RWS) [99], which we implement naively with an O(n) algorithm. O(log n) and even O(1) implementations for RWS are possible [100], so cell selection could be sped up substantially in the future.

(2) Trajectory concatenation is implemented in a naive way where each cell contains an array that represents the entire trajectory needed to reach it, such that if cell B was reached from cell A, cell B's trajectory will contain a copy of the trajectory that leads to cell A, plus the actions that lead from cell A to cell B. The copying of trajectories of ever-increasing length is negligible at the start of the algorithm, but takes up more and more time as the algorithm goes on. An alternative representation with better memory and computational efficiency would be to represent trajectories as linked lists of actions, in reverse order, so that each action links to its predecessor. With this representation, if cell B is reached from cell A, only the actions leading from cell A to cell B need to be stored in cell B, with the first of these actions linking to the last action needed to reach cell A. Adding cell B would then take constant time, instead of time proportional to the length of the longest trajectories in memory. Further, memory usage would grow linearly, and the number of actions stored in memory would be bounded by the number of actions ever taken during exploration.

Because Pitfall without domain knowledge did not obtain any rewards, it is hard to define good thresholds at which to compare game frames.
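The reverse-linked-list representation proposed above can be sketched as follows. This is our own illustrative code, not the paper's implementation; the class and function names are invented for the example. Each trajectory is identified by its last action node, extension stores only the new actions, and common prefixes are shared rather than copied.

```python
class ActionNode:
    """One action in a trajectory, linked backwards to its predecessor."""
    __slots__ = ("action", "prev")

    def __init__(self, action, prev=None):
        self.action = action
        self.prev = prev


def extend(tail, actions):
    """Extend the trajectory ending at `tail` by `actions`.

    Costs O(len(actions)) time and memory, independent of how long the
    existing trajectory is, because the prefix is shared, not copied.
    """
    for a in actions:
        tail = ActionNode(a, tail)
    return tail


def to_action_list(tail):
    """Materialize the full action sequence (only needed when replaying)."""
    out = []
    while tail is not None:
        out.append(tail.action)
        tail = tail.prev
    return out[::-1]
```

If cell B is reached from cell A, `extend(cell_A_tail, new_actions)` gives cell B's trajectory while cell A's trajectory object is untouched and shared as a prefix.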
In addition, regardless of the threshold we choose, the resulting data would not be representative of the resources Go-Explore would need to make progress on Pitfall (instead, it would represent the resource usage when Go-Explore fails to make progress). For those two reasons, we do not include Pitfall without domain knowledge in the remainder of this section.
We did not monitor the precise memory usage of Phase 1, beyond the fact that all our runs succeeded on machines with 50GB of RAM. Another indicator is the size of the serialized checkpoints produced at the end of each run, as these checkpoints contain all the necessary data to run Go-Explore, including the complete set of all cells, the metadata used in cell selection (see Appendix A.5), and the trajectories needed to reach the cells. Uncompressed, these files serialized using pickle have a mean size of 341.2MB (CI: 292.3MB - 389.3MB) in the case of Montezuma's Revenge without domain knowledge, and 2.8GB (CI: 2.7GB - 2.9GB) with domain knowledge. For Pitfall with domain knowledge, the mean uncompressed checkpoint size was 1.30GB (CI: 1.29GB - 1.31GB).

For robustification, each run used 16 workers, each equipped with a single GPU, for a total of 16 GPUs per run. For Montezuma's Revenge without domain knowledge, runs lasted up to 5B game frames, though the selected checkpoints were produced after a mean of 4.35B (CI: 4.27B - 4.45B) game frames, which took a mean of 2.4 (CI: 2.4 - 2.5) days. For Montezuma's Revenge with domain knowledge, runs lasted up to 10B game frames, but selected checkpoints were produced after a mean of 4.59B (CI: 3.09B - 5.91B) game frames, which took a mean of 2.6 (CI: 1.8 - 3.3) days. For Pitfall with domain knowledge, runs lasted for about 12B game frames and selected checkpoints were produced after a mean of 8.20B (CI: 6.73B - 9.74B) game frames, which took a mean of 4.5 (CI: 3.7 - 5.3) days.
A.9 Scores

Table 5 compares the results of Go-Explore with many other algorithms. The scores for the other algorithms are reported with stochastic testing in the form of random no-ops, sticky actions, human restarts, or a combination thereof. In the case of Go-Explore, both random no-ops and sticky actions were present in testing. As mentioned in Section 2.1.3, Go-Explore was trained partially without sticky actions or random no-ops, whereas many of the algorithms in this table also handled stochasticity throughout training.
A.10 Fine-grained cell representation for Pitfall without domain knowledge
As mentioned in Section 3.2, we attempted an experiment on Pitfall without domain knowledge using the same parameters as with Montezuma's Revenge. This approach did not succeed, as Go-Explore quickly stopped finding new rooms and failed to find any rewards (Fig. 9). One potential reason for this failure is that the downscaled cell representation optimized for Montezuma's Revenge conflates too many states into the same cell in Pitfall. This hypothesis is supported by the fact that Go-Explore stops discovering new cells, both when measured post-hoc as domain knowledge cells (Fig. 9) and in the downscaled representation of cells in which it is actually searching (Fig. 11). To resolve this issue, we looked at different cell representations that would be able to distinguish a larger number of states. A particularly promising cell representation assigns 16, rather than 8, different pixel values to each pixel in the 11 × 8 downscaled representation. While this cell representation did result in a larger number of rooms visited and the number of downscaled cells found did not stagnate, the runs terminated prematurely after exhausting the 50GB of memory available on the virtual machine (Fig. 12). Better hardware, distributed computation, or algorithmic improvements are all potential methods to resolve this issue, but we leave their implementation to future work.
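To make the downscaled cell representation concrete, the following sketch maps a grayscale Atari frame to an 11 × 8 grid with `n_values` gray levels per pixel (8 in the Montezuma's Revenge runs, 16 in the finer-grained Pitfall attempt described above). This is an illustrative approximation using naive block-mean downscaling; the paper's exact resizing procedure may differ, and the function name is our own.

```python
import numpy as np


def cell_key(frame, width=11, height=8, n_values=8):
    """Map a (210, 160) grayscale frame to a hashable, coarse cell key."""
    h, w = frame.shape
    # crop so the frame divides evenly into height x width blocks,
    # then take the mean of each block (naive downscaling)
    cropped = frame[: h - h % height, : w - w % width]
    small = cropped.reshape(
        height, cropped.shape[0] // height, width, cropped.shape[1] // width
    ).mean(axis=(1, 3))
    # quantize each downscaled pixel to one of n_values gray levels
    quantized = (small * n_values / 256).astype(np.uint8)  # values 0 .. n_values-1
    return quantized.tobytes()  # hashable key for the cell archive
```

Raising `n_values` from 8 to 16 doubles the resolution of each pixel bucket and therefore distinguishes many more states, at the cost of a much larger (and, as observed above, memory-exhausting) cell archive.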
A.11 Failure robustifying long trajectories in Pitfall
While Go-Explore on Pitfall with domain knowledge is able to find trajectories that score over 70,000 points (Fig. 9), the Backward Algorithm was unable to robustify these trajectories (Fig. 13).

Figure 13: Maximum starting point over training when robustifying the full-length trajectories produced by Go-Explore in Phase 1 on Pitfall with domain knowledge. Unlike in Fig. 5b, the lines in this figure represent separate robustification attempts, each of which was applied to a single demonstration taken from a different run of Go-Explore Phase 1. None of the 5 robustification attempts reaches a starting point near 0, meaning that robustification failed on these demonstrations. We did not try to robustify from multiple demonstrations for want of time, although doing so may have worked better.
A.12 Nearly identical states in Pitfall
Pitfall contains many rooms located in different parts of the game that contain the exact same objects and hazards. These identical rooms can result in nearly identical-looking states that require different actions to be navigated optimally (Fig. 14), and they indicate that Pitfall is a Partially Observable Markov Decision Process (POMDP). These nearly identical looking states can pose a problem, both when robustifying trajectories that visit some of these states, and when designing a domain-agnostic cell representation that should, ideally, treat these states as being in different cells.
The general method for handling POMDPs is to condition the current action on all previously observed states, for example by training a recurrent, rather than feed-forward, neural network. For the robustification phase, our method already implements a recurrent layer in the neural network, but, possibly due to the way the network is trained with the Backward Algorithm (i.e. whenever the agent is started from a particular state, it is not presented with the states that would have come before), this recurrent layer does not appear to completely solve the issue (see also Section 3.2). A similar approach could be applied for obtaining cell representations (e.g. a cell representation could be conditioned on all observations of the trajectory to a particular state, rather than just the observation at that state), but care would have to be taken to ensure that actually identical (or nearly identical) states are still recognized as such.
Figure 14: Two nearly identical looking states that require different actions to be navigated optimally. The two screenshots are taken from the same Go-Explore demonstration, but at different times. The rooms are conceptually identical: they both contain a blue pool, a swinging vine, three rolling logs, and a scorpion. However, because the two rooms are located in different areas of the game, the correct actions for navigating the two rooms can be different. In this case, the Go-Explore demonstration navigates the left room right to left, whereas it navigates the right room from left to right. When training a policy in this situation, there will be many similar looking states that require opposite actions. While the moving aspects of the room (i.e. the vine, the logs, and the scorpion) are likely to be in different locations in the two rooms of the demonstration, the fact that they will also be in different locations when entering the rooms at different times makes them poor features for differentiation. Probably the most informative features that can be used to determine in which direction to move are the score counter and the clock (the white numbers in the top left of each image), though, in practice, these small, frequently-changing features seem insufficient to provide the necessary guidance.
Figure 2: A high-level overview of the Go-Explore algorithm.
Figure 4: Performance of the exploration phase of Go-Explore with downscaled frames on Montezuma's Revenge. Lines indicating human and the algorithmic state of the art are for comparison, but recall that the Go-Explore scores in this plot are on a deterministic version of the game (unlike the post-Phase 2 scores presented in this section).
Figure 6: History of progress on Montezuma's Revenge vs. the version of Go-Explore that does not harness domain knowledge. Go-Explore significantly improves on the prior state of the art. These data are presented in tabular form in Appendix A.9.
In terms of computational performance, Phase 1 with domain knowledge solves the first level after a mean of only 57.6M (CI: 52.7M - 62.3M) game frames, corresponding to 0.9 (CI: 0.8 - 1.0) hours.

Figure 7: Performance on Montezuma's Revenge of Phase 1 of Go-Explore with and without domain knowledge. The algorithm finds more rooms, cells, and higher scores with the easily provided domain knowledge, and does so with a better sample complexity. For (b), we plot the number of cells found in the no-domain-knowledge runs according to the more intelligent cell representation from the domain-knowledge run to allow for an equal comparison.
Figure 8: Historical progress on Montezuma's Revenge vs. the version of Go-Explore that harnesses domain knowledge. With domain knowledge, Go-Explore dramatically outperforms prior work, the no-domain-knowledge version of Go-Explore, and even prior work with imitation learning that was provided the solution in the form of human demonstrations. The data are presented in tabular form in Appendix A.9.
Figure 9: Performance on Pitfall of Phase 1 of Go-Explore with and without domain knowledge. Without domain knowledge, the exploration phase finds about 22 rooms (a), but it then quickly stops finding new rooms (a) or cells (b) (here, we display discovery of domain-knowledge cells to enable a fair comparison, see Appendix A.10 for progress on the domain-agnostic cell representation), and it doesn't find any rewards (c). With domain knowledge, the exploration phase of Go-Explore finds all 255 rooms (a) and trajectories scoring a mean 70,264 points (c), even though the number of rooms (a) and the number of cells (b)
Figure 10: Historical progress on Pitfall vs. the version of Go-Explore that harnesses domain knowledge. Go-Explore achieves a mean of over 59,000 points, greatly outperforming the prior state of the art. The data are presented in tabular form in Appendix A.9.
Finally, in the case of Montezuma's Revenge with domain knowledge, cells are exponentially downweighted based on the distance to the maximum level currently reached, thereby favoring progress in the furthest level reached, while still keeping open the possibility of improving previous levels' trajectories: LevelWeight(c) = 0.1^(MaxLevel − Level(c))
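As a worked example of this downweighting (an illustrative sketch only; the full selection weight multiplies in the other factors from Appendix A.5, and the function name is our own):

```python
def level_weight(cell_level, max_level):
    """Exponential downweighting of cells by distance to the furthest level."""
    return 0.1 ** (max_level - cell_level)


# With the furthest level reached being 3, cells on level 3 keep full weight,
# while each earlier level is ten times less likely to be selected:
weights = [level_weight(lvl, max_level=3) for lvl in range(4)]
# approximately [0.001, 0.01, 0.1, 1.0]
```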
Figure 11: Number of without-domain-knowledge cells found during Phase 1 on Pitfall without domain knowledge. Most cells are found within the first 500M game frames, after which very few new cells are found. This observation suggests that Pitfall without domain knowledge fails because there are too many different states that are mapped to the same Go-Explore cell.
Figure 12: Go-Explore Phase 1 on Pitfall without domain knowledge with a more fine-grained (16 different pixel values instead of 8) cell representation. While the number of rooms (a) and the number of cells (b) found continues to increase, even after 600M game frames, the runs do not continue beyond this point because they run out of memory. Despite visiting more rooms, Go-Explore still does not find any rewards, although it may have were it able to continue for longer (c). The noise at the end of sub-figure (a) is caused by different runs crashing at different times. The plot shows the mean and 95% bootstrapped confidence interval over 20 runs initially, but the number of runs declines over time. The first run crashes around 500M game frames.
Table 1 shows the hyperparameters for the runs with downsampled frames, i.e. the ones without domain knowledge, and Table 2 shows the hyperparameters for the runs with domain knowledge.
Table 2: Hyperparameter values for Montezuma's Revenge and Pitfall with domain knowledge.
Table 3 shows various performance metrics as a function of the level reached during Phase 1 of the "domain knowledge" experiment on Montezuma's Revenge. The full experiment ran for 600M game frames, which took a mean of 74.9 (CI: 72.6 - 77.2) hours.

| Level | Solved % | Game Frames (excl. replay) | Time (hours) | Hypothetical Game Frames (incl. replay) |
|-------|----------|----------------------------|--------------------|------------------------------------------|
| 1 | 100% | 58M (53M - 62M) | 0.9 (0.9 - 1.0) | 1.5B (1.3B - 1.6B) |
| 2 | 100% | 104M (97M - 111M) | 2.5 (2.3 - 2.7) | 4.3B (3.9B - 4.6B) |
| 3 | 100% | 173M (164M - 182M) | 6.8 (6.2 - 7.3) | 12B (12B - 13B) |
| 4 | 100% | 242M (230M - 253M) | 12.7 (11.7 - 13.6) | 24B (23B - 26B) |
| 5 | 100% | 305M (292M - 318M) | 19.9 (18.6 - 21.3) | 38B (36B - 41B) |
| 6 | 100% | 373M (358M - 388M) | 29.4 (27.8 - 31.2) | 57B (53B - 60B) |
| 7 | 100% | 432M (416M - 448M) | 39.1 (37.0 - 41.2) | 75B (71B - 79B) |
| 8 | 94% | 487M (471M - 503M) | 49.8 (47.4 - 52.1) | 96B (92B - 101B) |
| 9 | 70% | 533M (518M - 548M) | 61.4 (58.5 - 64.3) | 117B (113B - 121B) |
| 10 | 38% | 561M (550M - 572M) | 70.4 (67.7 - 73.2) | 139B (134B - 144B) |
| 11 | 12% | 582M (570M - 595M) | 77.6 (72.9 - 82.0) | 151B (147B - 155B) |

Table 3: Mean computational complexity to reach different levels for Montezuma's Revenge with domain knowledge. The "Game Frames (incl. replay)" metric shows the number of game frames that would have been played if we replayed trajectories instead of resetting the emulator state. It is a hypothetical metric, since we did not replay the trajectories, but instead reset the environment. The "Solved %" column shows the proportion of runs that solved a given level. All other metrics are computed only for the subset of runs that did solve the level.
For Montezuma's Revenge without domain knowledge, performance metrics are shown in Table 4. The full experiment ran for 1.2B game frames, which took 26.9 (CI: 25.6 - 28.2) hours. It is notable that this is faster than the experiment with domain knowledge in spite of processing twice as many frames. This is likely due to the same reasons that domain knowledge runs get slower over time: runs without domain knowledge find fewer cells and shorter trajectories, and are thus less affected by the slowdown.

| Level | Solved % | Game Frames (excl. replay) | Time (hours) | Hypothetical Game Frames (incl. replay) |
|-------|----------|----------------------------|--------------------|------------------------------------------|
| 1 | 57% | 640M (567M - 711M) | 10.8 (9.5 - 12.0) | 33B (28B - 37B) |
| 2 | 1% | 592M (592M - 592M) | 11.4 (11.4 - 11.4) | 32B (32B - 32B) |

Table 4: Mean computational complexity to reach different levels for Montezuma's Revenge without domain knowledge. The "Game Frames (incl. replay)" metric shows the number of game frames that would have been played if we replayed trajectories instead of resetting the emulator state. It is a hypothetical metric, since we did not replay the trajectories, but instead reset the environment. The "Solved %" column shows the proportion of runs that solved a given level. All other metrics are computed only for the subset of runs that did solve the level.

For Pitfall with domain knowledge, the threshold at which to compare game frames is not as clear as it is for Montezuma's Revenge. In order to include data from all of our 40 runs, we report the required game frames for reaching the lowest score achieved out of those runs, which is 47,534. Reaching this threshold required a mean of 794.0M (CI: 715.9M - 869.8M) game frames, which takes 25.0 (CI: 21.4 - 28.3) hours, and it would have required a mean of 100.8B (CI: 84.1B - 116.0B) game frames if trajectories had to be replayed from the start of the game. The full experiment lasted for 4.0B game frames, which took a mean of 186.3 (CI: 184.9 - 187.8) hours. The full experiment would have required 1,060.4B (CI: 1,048.5B - 1,071.7B) game frames if trajectories had to be replayed from the start of the game.
Note that this second phase is in principle not necessary if Phase 1 itself produces a policy that can handle stochastic environments (Section 2.1.3).
Acknowledgments

We thank the following for helpful discussions on the Go-Explore algorithm and the ideas behind it: Peter Dayan, Zoubin Ghahramani, Shimon Whiteson, Juergen Schmidhuber, Ian Osband, and Kevin Clune. We also appreciate input from all of the members of Uber AI Labs, especially Vashisht Madhavan, Felipe Petroski Such, John Sears, and Thomas Miconi. We are also deeply appreciative of the machine learning community at large for providing feedback that refined our thinking and exposition of Go-Explore, including all of those that provided commentary on Reddit, Twitter, and via other online mediums such as blog posts about our work. Finally, we are grateful to Leon Rosenshein, Joel Snow, Thaxton Beesley, the Colorado Data Center team and the entire OpusStack Team at Uber for providing our computing platform and for technical support.

A Appendix

A.1 The meaning of "frames"

It is common practice to introduce "frame skipping" during training in the Atari domain, so that the agent only selects actions every k frames instead of every single frame (the action then persists across k frames). The most common value of k is 4, and both our exploration and robustification phases were implemented with frame skipping with k = 4.

Following the recommendations in Petroski Such et al. [66], we call the total number of frames produced by the underlying emulator "game frames" and the number of frames seen and acted on by the agent during training "training frames." It can sometimes be difficult to know whether a reported number of frames corresponds to training frames or game frames, and the difference can be significant because the number of game frames is usually 4 times the number of training frames. In this work, frame counts are always reported as game frames, as recommended by Petroski Such et al. [66] and Machado et al. [31]. Further, we always qualify the word "frame" with either "training" or "game."
This clarification is particularly important for the rare cases in which we are indeed referring to training frames and not game frames, such as in Section 2.1.4, where we mention that in the exploration phase, actions are repeated with 95% probability each training frame.

| Algorithm | Montezuma's Revenge | Pitfall |
|-----------|---------------------|---------|
|  | 3,705 | 0 |
| A3C-CTS [4] | 1,127 | -259 |
| BASS-hash [33] | 238 | - |
| DQN-PixelCNN [36] | 2,514 | 0 |
| ES [71] | 0 | - |
| Reactor [34] | 2,643.5 | -3.5 |
| Feature-EB [35] | 2,745 | - |
| C51 [72] | 0 | 0 |
| UBE [38] | 3,000 | 0 |
| Rainbow [73] | 154 | 0 |
| IMPALA [42] | 2,643.5 | -1.2 |
| Ape-X [41] | 2,500 | -1 |
| DeepCS [37] | 3,500 | -186 |
| RND [16] | 11,347 | -3 |
| PPO+CoEX [39] | 11,540 | - |
| Go-Explore | 43,763 | - |
| Go-Explore (domain knowledge) | 666,474 | 59,494 |
| Go-Explore (best) | 17,986,800 | 107,363 |
| DQfD [26] | 4,638 | 57 |
| TDC+CMC [45] | 41,098 | 76,813 |
| Ape-X DQfD [27] | 29,384 | 3,997 |
| LfSD (best) [28] | 74,500 | - |
| Average Human [27] | 4,753 | 6,464 |
| Human Expert [27] | 34,900 | 47,821 |
| Human World Record [78] | 1,219,200 | 114,000 |

Table 5: Results are given in order of first public release (including preprints). Many historical papers did not consider Pitfall, in which case the score is displayed as "-". Two references are given in cases where the score from a given method does not appear in its original paper, but appears in another paper (usually in a comparison section).
References

[1] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

[2] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy P. Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550:354-359, 2017.

[3] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

[4] Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In NIPS, pages 1471-1479, 2016.

[5] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul F. Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. CoRR, abs/1606.06565, 2016.

[6] Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Julie Beaulieu, Peter J. Bentley, Samuel Bernard, Guillaume Beslon, David M. Bryson, Patryk Chrabaszcz, Nick Cheney, Antoine Cully, Stéphane Doncieux, Fred C. Dyer, Kai Olav Ellefsen, Robert Feldt, Stephan Fischer, Stephanie Forrest, Antoine Frénoy, Christian Gagné, Leni K. Le Goff, Laura M. Grabowski, Babak Hodjat, Frank Hutter, Laurent Keller, Carole Knibbe, Peter Krcah, Richard E. Lenski, Hod Lipson, Robert B. MacCurdy, Carlos Maestre, Risto Miikkulainen, Sara Mitri, David E. Moriarty, Jean-Baptiste Mouret, Anh Tuan Le Nguyen, Charles Ofria, Marc Parizeau, David P. Parsons, Robert T. Pennock, William F. Punch, Thomas S. Ray, Marc Schoenauer, Eric Shulte, Karl Sims, Kenneth O. Stanley, François Taddei, Danesh Tarapore, Simon Thibault, Westley Weimer, Richard Watson, and Jason Yosinksi. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. CoRR, abs/1803.03453, 2018.

[7] Joel Lehman and Kenneth O. Stanley. Novelty search and the problem with objectives. In Genetic Programming Theory and Practice IX (GPTP 2011), 2011.

[8] Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. Back to basics: Benchmarking canonical evolution strategies for playing Atari. In IJCAI, 2018.

[9] Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proc. of the international conference on simulation of adaptive behavior: From animals to animats, pages 222-227, 1991.

[10] Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1:6, 2009.

[11] Andrew G. Barto. Intrinsic motivation and reinforcement learning. In Intrinsically motivated learning in natural and artificial systems, pages 17-47. Springer, 2013.

[12] Jürgen Schmidhuber. Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connect. Sci., 18:173-187, 2006.

[13] Jürgen Schmidhuber. Curious model-building control systems. In Neural Networks, 1991 IEEE International Joint Conference on, pages 1458-1463. IEEE, 1991.

[14] Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. arXiv preprint arXiv:1712.06560, 2017.

[15] Joshua Achiam and S. Shankar Sastry. Surprise-based intrinsic motivation for deep reinforcement learning. CoRR, abs/1703.01732, 2017.

[16] Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018.

[17] Roby Velez and Jeff Clune. Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks. PloS one, 12(11):e0187736, 2017.

[18] Kai Olav Ellefsen, Jean-Baptiste Mouret, Jeff Clune, and Josh C. Bongard. Neural modularity helps organisms evolve to learn new skills without forgetting old skills. PLoS Comput Biol, 11(4):e1004128, 2015.

[19] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, page 201611835, 2017.

[20] Robert M. French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128-135, 1999.

[21] Shangtong Zhang and Richard S. Sutton. A deeper look at experience replay. CoRR, abs/1712.01275, 2017.

[22] Ruishan Liu and James Zou. The effects of memory replay in reinforcement learning. CoRR, abs/1710.06574, 2017.

[23] Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction, volume 1. Bradford, 1998.

[24] Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y. Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905, 2017.

[25] Thomas Rückstieß, Martin Felder, and Jürgen Schmidhuber. State-dependent exploration for policy gradient methods. In ECML/PKDD, 2008.

[26] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, Gabriel Dulac-Arnold, John Agapiou, Joel Z. Leibo, and Audrunas Gruslys. Deep q-learning from demonstrations. In AAAI, 2018.

[27] Tobias Pohlen, Bilal Piot, Todd Hester, Mohammad Gheshlaghi Azar, Dan Horgan, David Budden, Gabriel Barth-Maron, Hado van Hasselt, John Quan, Mel Večerík, et al. Observe and look further: Achieving consistent performance on Atari. arXiv preprint arXiv:1805.11593, 2018.

[28] Tim Salimans and Richard Chen. Learning Montezuma's Revenge from a single demonstration. arXiv preprint arXiv:1812.03381, 2018.

[29] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In NIPS, 2016.

[30] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR), 47:253-279, 2013.

[31] Marlos C. Machado, Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew J. Hausknecht, and Michael H. Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. J. Artif. Intell. Res., 61:523-562, 2018.

[32] Adrià Garriga Alonso. Solving Montezuma's Revenge with planning and reinforcement learning, 2017.

[33] Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schulman, Filip DeTurck, and Pieter Abbeel. #Exploration: A study of count-based exploration for deep reinforcement learning. In NIPS, pages 2750-2759, 2017.

[34] Audrunas Gruslys, Mohammad Gheshlaghi Azar, Marc G. Bellemare, and Remi Munos. The reactor: A sample-efficient actor-critic architecture. arXiv preprint arXiv:1704.04651, 2017.
Count-based exploration in feature space for reinforcement learning. Jarryd Martin, Suraj Narayanan, Tom Sasikumar, Marcus Everitt, Hutter, IJCAI. Jarryd Martin, Suraj Narayanan Sasikumar, Tom Everitt, and Marcus Hutter. Count-based exploration in feature space for reinforcement learning. In IJCAI, 2017.
Aäron van den Oord, and Rémi Munos. Count-based exploration with neural density models. Georg Ostrovski, Marc G Bellemare, Georg Ostrovski, Marc G. Bellemare, Aäron van den Oord, and Rémi Munos. Count-based exploration with neural density models. In ICML, 2017.
Deep curiosity search: Intra-life exploration improves performance on challenging deep reinforcement learning problems. Christopher Stanton, Jeff Clune, abs/1806.00553CoRRChristopher Stanton and Jeff Clune. Deep curiosity search: Intra-life exploration improves performance on challenging deep reinforcement learning problems. CoRR, abs/1806.00553, 2018.
The uncertainty bellman equation and exploration. Ian Brendan O'donoghue, Rémi Osband, Volodymyr Munos, Mnih, ICML. Brendan O'Donoghue, Ian Osband, Rémi Munos, and Volodymyr Mnih. The uncertainty bellman equation and exploration. In ICML, 2018.
Contingency-aware exploration in reinforcement learning. Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, Honglak Lee, abs/1811.01483CoRR. Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, and Honglak Lee. Contingency-aware exploration in reinforcement learning. CoRR, abs/1811.01483, 2018.
Asynchronous methods for deep reinforcement learning. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu, ICML. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, pages 1928-1937, 2016.
. Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, David Hado Van Hasselt, Silver, Distributed prioritized experience replay. CoRR, abs/1803.00933Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, and David Silver. Distributed prioritized experience replay. CoRR, abs/1803.00933, 2018.
Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu, ICML. Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In ICML, 2018.
Learning abstract models for long-horizon exploration. Ramtin Evan Zheran Liu, Sudarshan Keramati, Kelvin Seshadri, Panupong Guu, Emma Pasupat, Percy Brunskill, Liang, Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, and Percy Liang. Learning abstract models for long-horizon exploration, 2019. URL https://openreview.net/forum?id=ryxLG2RcYX.
Fast exploration with simplified models and approximately optimistic planning in model based reinforcement learning. Ramtin Keramati, Jay Whang, Patrick Cho, Emma Brunskill, Ramtin Keramati, Jay Whang, Patrick Cho, and Emma Brunskill. Fast exploration with simplified models and approximately optimistic planning in model based reinforcement learning, 2019. URL https://openreview.net/forum?id=HygS7n0cFQ.
Yusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyu Wang, Nando De Freitas, arXiv:1805.11592Playing hard exploration games by watching youtube. arXiv preprintYusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. arXiv preprint arXiv:1805.11592, 2018.
An atari model zoo for analyzing, visualizing, and comparing deep reinforcement learning agents. Felipe Petroski Such, Vashisht Madhavan, Rosanne Liu, Rui Wang, Pablo Samuel Castro, Yulun Li, Ludwig Schubert, Marc Bellemare, Jeff Clune, Joel Lehman, arXiv:1812.07069arXiv preprintFelipe Petroski Such, Vashisht Madhavan, Rosanne Liu, Rui Wang, Pablo Samuel Castro, Yulun Li, Ludwig Schubert, Marc Bellemare, Jeff Clune, and Joel Lehman. An atari model zoo for analyzing, visualizing, and comparing deep reinforcement learning agents. arXiv preprint arXiv:1812.07069, 2018.
Bandit based monte-carlo planning. Levente Kocsis, Csaba Szepesvári, ECML. Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In ECML, 2006.
Improved monte-carlo search. Levente Kocsis, Csaba Szepesvári, Jan Willemson, Univ. Tartu, Estonia, Tech. Rep. 1Levente Kocsis, Csaba Szepesvári, and Jan Willemson. Improved monte-carlo search. Univ. Tartu, Estonia, Tech. Rep, 1, 2006.
A survey of monte carlo tree search methods. Cameron Browne, Edward Jack Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez Liebana, Spyridon Samothrakis, Simon Colton, IEEE Transactions on Computational Intelligence and AI in Games. 4Cameron Browne, Edward Jack Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez Liebana, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4:1-43, 2012.
Monte-carlo tree search: A new framework for game ai. Guillaume Chaslot, Sander Bakkes, István Szita, Pieter Spronck, AIIDE. Guillaume Chaslot, Sander Bakkes, István Szita, and Pieter Spronck. Monte-carlo tree search: A new framework for game ai. In AIIDE, 2008.
Classical planning with simulators: Results on the atari video games. Nir Lipovetzky, Miquel Ramírez, Hector Geffner, IJCAI. Nir Lipovetzky, Miquel Ramírez, and Hector Geffner. Classical planning with simulators: Results on the atari video games. In IJCAI, 2015.
The predictron: End-to-end learning and planning. David Silver, Matteo Hado Van Hasselt, Tom Hessel, Arthur Schaul, Tim Guez, Gabriel Harley, David P Dulac-Arnold, Neil C Reichert, André Rabinowitz, Thomas Barreto, Degris, ICML. David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac- Arnold, David P. Reichert, Neil C. Rabinowitz, André Barreto, and Thomas Degris. The predictron: End-to-end learning and planning. In ICML, 2017.
Deep auto-encoder neural networks in reinforcement learning. Sascha Lange, Martin Riedmiller, The International Joint Conference on Neural Networks. IEEESascha Lange and Martin Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In The International Joint Conference on Neural Networks, pages 1-8. IEEE, 2010.
Reinforcement learning with unsupervised auxiliary tasks. Max Jaderberg, Volodymyr Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu, abs/1611.05397CoRRMax Jaderberg, Volodymyr Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. CoRR, abs/1611.05397, 2016.
Montezuma's revenge solved by go-explore, a new algorithm for hard-exploration problems (sets records on pitfall, too). Adrien Ecoffet, Joost Huizinga, Joel Lehman, O Kenneth, Jeff Stanley, Clune, Uber Engineering BlogAdrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Montezuma's revenge solved by go-explore, a new algorithm for hard-exploration problems (sets records on pitfall, too). Uber Engineering Blog, Nov 2018. URL http://eng.uber.com/go-explore.
Robustness to out-of-distribution inputs via task-aware generative uncertainty. Rowan Mcallister, Gregory Kahn, Jeff Clune, Sergey Levine, arXiv:1812.10687arXiv preprintRowan McAllister, Gregory Kahn, Jeff Clune, and Sergey Levine. Robustness to out-of-distribution inputs via task-aware generative uncertainty. arXiv preprint arXiv:1812.10687, 2018.
Uncertainty-aware reinforcement learning for collision avoidance. Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, Sergey Levine, arXiv:1702.01182arXiv preprintGregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, and Sergey Levine. Uncertainty-aware reinforcement learning for collision avoidance. arXiv preprint arXiv:1702.01182, 2017.
Building machines that learn and think like people. The Behavioral and brain sciences. M Brenden, Lake, D Tomer, Joshua B Ullman, Samuel J Tenenbaum, Gershman, 40253Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. The Behavioral and brain sciences, 40:e253, 2017.
The transferability approach: Crossing the reality gap in evolutionary robotics. Jean-Baptiste Sylvain Koos, Stéphane Mouret, Doncieux, IEEE Transactions on Evolutionary Computation. 171Sylvain Koos, Jean-Baptiste Mouret, and Stéphane Doncieux. The transferability approach: Crossing the reality gap in evolutionary robotics. IEEE Transactions on Evolutionary Computation, 17(1):122-145, 2013.
Robots that can adapt like animals. A Cully, J Clune, D Tarapore, J.-B Mouret, 10.1038/nature14422Nature. 521A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret. Robots that can adapt like animals. Nature, 521: 503-507, 2015. doi: 10.1038/nature14422.
Learning dexterous in-hand manipulation. Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob Mcgrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, arXiv:1808.00177arXiv preprintMarcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, et al. Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177, 2018.
OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob Mcgrew, Josh Tobin, Advances in Neural Information Processing Systems. Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048-5058, 2017.
Universal value function approximators. Tom Schaul, Daniel Horgan, Karol Gregor, David Silver, International Conference on Machine Learning. Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International Conference on Machine Learning, pages 1312-1320, 2015.
Proximal policy optimization algorithms. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov, abs/1707.06347CoRRJohn Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017.
Backplay: "man muss immer umkehren. Cinjon Resnick, Roberta Raileanu, Sanyam Kapoor, Alex Peysakhovich, Kyunghyun Cho, Joan Bruna, abs/1807.06919Cinjon Resnick, Roberta Raileanu, Sanyam Kapoor, Alex Peysakhovich, Kyunghyun Cho, and Joan Bruna. Backplay: "man muss immer umkehren". CoRR, abs/1807.06919, 2018.
Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O Stanley, Jeff Clune, arXiv:1712.06567arXiv preprintFelipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017.
Deep reinforcement learning with double q-learning. Arthur Hado Van Hasselt, David Guez, Silver, AAAI. Phoenix, AZ2Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In AAAI, volume 2, page 5. Phoenix, AZ, 2016.
Dueling network architectures for deep reinforcement learning. Ziyu Wang, Marc Nando De Freitas, Lanctot, ICML. Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. In ICML, 2016.
. Tom Schaul, John Quan, Ioannis Antonoglou, David Silver, arXiv:1511.05952Prioritized experience replay. arXiv preprintTom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
Learning values across many orders of magnitude. Arthur Hado P Van Hasselt, Matteo Guez, Volodymyr Hessel, David Mnih, Silver, Advances in Neural Information Processing Systems. Hado P van Hasselt, Arthur Guez, Matteo Hessel, Volodymyr Mnih, and David Silver. Learning values across many orders of magnitude. In Advances in Neural Information Processing Systems, pages 4287- 4295, 2016.
Evolution strategies as a scalable alternative to reinforcement learning. Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever, arXiv:1703.03864arXiv preprintTim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
A distributional perspective on reinforcement learning. G Marc, Will Bellemare, Rémi Dabney, Munos, ICML. Marc G. Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In ICML, 2017.
Rainbow: Combining improvements in deep reinforcement learning. Matteo Hessel, Joseph Modayil, Tom Hado Van Hasselt, Georg Schaul, Will Ostrovski, Dan Dabney, Bilal Horgan, Mohammad Gheshlaghi Piot, David Azar, Silver, AAAI. Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Gheshlaghi Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In AAAI, 2018.
Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, arXiv:1507.04296Massively parallel methods for deep reinforcement learning. arXiv preprintArun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.
Openai gym. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba, Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016.
Bootstrap methods and applications. A M Zoubir, D Robert Iskander, IEEE Signal Processing Magazine. 24A. M. Zoubir and D. Robert Iskander. Bootstrap methods and applications. IEEE Signal Processing Magazine, 24:10-19, 2007.
Atari vcs/2600 easter egg list. Atari vcs/2600 easter egg list, 2018. URL http://www.ataricompendium.com/game_library/ easter_eggs/vcs/easter_eggs.html.
Atari vcs/2600 scoreboard. Atari vcs/2600 scoreboard, Dec 2018. URL http://www.ataricompendium.com/game_library/ high_scores/high_scores.html.
Illuminating search spaces by mapping elites. Jean-Baptiste Mouret, Jeff Clune, abs/1504.04909Jean-Baptiste Mouret and Jeff Clune. Illuminating search spaces by mapping elites. ArXiv e-prints, abs/1504.04909, 2015. URL http://arxiv.org/abs/1504.04909.
Evolving a diversity of virtual creatures through novelty search and local competition. Joel Lehman, Kenneth O Stanley, GECCO '11: Proceedings of the 13th annual conference on Genetic and evolutionary computation. Joel Lehman and Kenneth O. Stanley. Evolving a diversity of virtual creatures through novelty search and local competition. In GECCO '11: Proceedings of the 13th annual conference on Genetic and evolutionary computation, pages 211-218, 2011.
Domain randomization for transferring deep neural networks from simulation to the real world. Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, Pieter Abbeel, Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on. IEEEJosh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pages 23-30. IEEE, 2017.
Evolving gaits for physical robots with the hyperneat generative encoding: the benefits of simulation. S Lee, J Yosinski, K Glette, H Lipson, J Clune, Applications of Evolutionary Computing. SpringerS. Lee, J. Yosinski, K. Glette, H. Lipson, and J. Clune. Evolving gaits for physical robots with the hyperneat generative encoding: the benefits of simulation. In Applications of Evolutionary Computing. Springer, 2013.
A comprehensive survey on safe reinforcement learning. Javier Garcıa, Fernando Fernández, Journal of Machine Learning Research. 161Javier Garcıa and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437-1480, 2015.
Combating reinforcement learning's sisyphean curse with intrinsic fear. Kamyar Zachary C Lipton, Abhishek Azizzadenesheli, Lihong Kumar, Jianfeng Li, Li Gao, Deng, arXiv:1611.01211arXiv preprintZachary C Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, and Li Deng. Combating reinforcement learning's sisyphean curse with intrinsic fear. arXiv preprint arXiv:1611.01211, 2016.
Trial without error: Towards safe reinforcement learning via human intervention. William Saunders, Girish Sastry, Andreas Stuhlmueller, Owain Evans, Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. the 17th International Conference on Autonomous Agents and MultiAgent SystemsInternational Foundation for Autonomous Agents and Multiagent SystemsWilliam Saunders, Girish Sastry, Andreas Stuhlmueller, and Owain Evans. Trial without error: Towards safe reinforcement learning via human intervention. In Proceedings of the 17th International Confer- ence on Autonomous Agents and MultiAgent Systems, pages 2067-2069. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
Gep-pg: Decoupling exploration and exploitation in deep reinforcement learning algorithms. Cédric Colas, Olivier Sigaud, Pierre-Yves Oudeyer, In JFPDA. Cédric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. Gep-pg: Decoupling exploration and exploitation in deep reinforcement learning algorithms. In JFPDA, 2018.
Intrinsically motivated goal exploration processes with automatic curriculum learning. Sébastien Forestier, Yoan Mollard, Pierre-Yves Oudeyer, arXiv:1708.02190arXiv preprintSébastien Forestier, Yoan Mollard, and Pierre-Yves Oudeyer. Intrinsically motivated goal exploration processes with automatic curriculum learning. arXiv preprint arXiv:1708.02190, 2017.
Continuous control with deep reinforcement learning. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra, abs/1509.02971CoRRTimothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
End-to-end training of deep visuomotor policies. Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel, The Journal of Machine Learning Research. 171Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334-1373, 2016.
Deep exploration via bootstrapped dqn. Ian Osband, Charles Blundell, Alexander Pritzel, Benjamin Van Roy, NIPS. Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via boot- strapped dqn. In NIPS, 2016.
Explicit recall for efficient exploration. Honghua Dong, Jiayuan Mao, Xinyue Cui, Lihong Li, Honghua Dong, Jiayuan Mao, Xinyue Cui, and Lihong Li. Explicit recall for efficient exploration, 2019. URL https://openreview.net/forum?id=B1GIB3A9YX.
Near-optimal reinforcement learning in polynomial time. Michael Kearns, Satinder Singh, Machine learning. 492-3Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine learning, 49(2-3):209-232, 2002.
Deep learning for real-time atari game play using offline monte-carlo tree search planning. Xiaoxiao Guo, P Satinder, Honglak Singh, Richard L Lee, Xiaoshi Lewis, Wang, NIPS. Xiaoxiao Guo, Satinder P. Singh, Honglak Lee, Richard L. Lewis, and Xiaoshi Wang. Deep learning for real-time atari game play using offline monte-carlo tree search planning. In NIPS, 2014.
Fractal ai: A fragile theory of intelligence. Sergio Hernandez Cerezo, Guillem Duran Ballester, abs/1803.05049CoRRSergio Hernandez Cerezo and Guillem Duran Ballester. Fractal ai: A fragile theory of intelligence. CoRR, abs/1803.05049, 2018.
A note on two problems in connexion with graphs. E W Dijkstra, 10.1007/BF01386390Numer. Math. 11E. W. Dijkstra. A note on two problems in connexion with graphs. Numer. Math., 1(1):269-271, December 1959. ISSN 0029-599X. doi: 10.1007/BF01386390. URL http://dx.doi.org/10.1007/ BF01386390.
A formal basis for the heuristic determination of minimum cost paths. P E Hart, N J Nilsson, B Raphael, 10.1109/TSSC.1968.300136IEEE Transactions on Systems Science and Cybernetics. 42P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100-107, July 1968. ISSN 0536-1567. doi: 10.1109/TSSC.1968.300136.
Rapidly-exploring random trees: A new tool for path planning. M Steven, Lavalle, Iowa State UniversityTechnical reportSteven M. Lavalle. Rapidly-exploring random trees: A new tool for path planning. Technical report, Iowa State University, 1998.
Zeping Zhan, Batu Aytemiz, Adam M Smith, arXiv:1812.03125Taking the scenic route: Automatic exploration for videogames. arXiv preprintZeping Zhan, Batu Aytemiz, and Adam M Smith. Taking the scenic route: Automatic exploration for videogames. arXiv preprint arXiv:1812.03125, 2018.
Genetic Algorithms in Search, Optimization and Machine Learning. David E Goldberg, Addison-WesleyReading, MADavid E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
Roulette-wheel selection via stochastic acceptance. CoRR, abs/1109. Adam Lipowski, Dorota Lipowska, 3627Adam Lipowski and Dorota Lipowska. Roulette-wheel selection via stochastic acceptance. CoRR, abs/1109.3627, 2011.
Investigating contingency awareness using atari 2600 games. G Marc, Joel Bellemare, Michael H Veness, Bowling, AAAI. Marc G. Bellemare, Joel Veness, and Michael H. Bowling. Investigating contingency awareness using atari 2600 games. In AAAI, 2012.
Incentivizing exploration in reinforcement learning with deep predictive models. C Bradly, Sergey Stadie, Pieter Levine, Abbeel, arXiv:1507.00814arXiv preprintBradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
Asynchronous methods for deep reinforcement learning. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu, International conference on machine learning. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928-1937, 2016.
| [] |
Title: Higher-Order Spectra of Weak Lensing Convergence Maps in Parameterized Theories of Modified Gravity
Authors: D. Munshi and J. D. McEwen
Affiliation: Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK
Venue: Mon. Not. R. Astron. Soc.
DOI: 10.1093/mnras/staa2706
arXiv: 2004.07021 (https://arxiv.org/pdf/2004.07021v1.pdf)
Corpus ID: 215768772

Abstract: We compute the low-ℓ limit of the family of higher-order spectra for projected (2D) weak lensing convergence maps. In this limit these spectra are computed to an arbitrary order using tree-level perturbative calculations. We use the flat-sky approximation and Eulerian perturbative results based on a generating function approach. We test these results for the lower-order members of this family, i.e. the skew- and kurt-spectra, against state-of-the-art simulated all-sky weak lensing convergence maps and find our results to be in very good agreement. We also show how these spectra can be computed in the presence of a realistic sky-mask and Gaussian noise. We generalize these results to three dimensions (3D) and compute the equal-time higher-order spectra. These results will be valuable in analyzing higher-order statistics from future all-sky weak lensing surveys such as the Euclid survey at low ℓ-modes. As illustrative examples, we compute these statistics in the context of the Horndeski and beyond-Horndeski theories of modified gravity. They will be especially useful in constraining theories such as the Gleyzes-Langlois-Piazza-Vernizzi (GLPV) theories and Degenerate Higher-Order Scalar-Tensor (DHOST) theories, as well as the commonly used normal branch of the Dvali-Gabadadze-Porrati (nDGP) model, clustering quintessence models and scenarios with massive neutrinos.
Mon. Not. R. Astron. Soc. 000, 000-000 (0000)    Printed 16 April 2020    (MN LaTeX style file v2.2)

Key words: Cosmology - Weak Lensing - Methods: analytical, statistical, numerical
INTRODUCTION
We have a standard model of cosmology thanks to recently completed Cosmic Microwave Background Radiation (CMBR) experiments such as the Planck Surveyor (Planck Collaboration 2013). However, many outstanding questions, pertaining e.g. to the nature of dark matter and dark energy or to possible modifications of General Relativity on cosmological scales, remain unanswered (Joyce et al. 2015; Clifton et al. 2012). Ongoing and planned large scale structure (LSS) surveys may resolve, or at least provide clues to, these questions using weak lensing analyses; in addition they will provide an estimate of the sum of the neutrino masses (Lesgourgues & Pastor 2006). The observational programs of many such surveys, including Euclid (Laureijs et al. 2011), LSST (Tyson et al. 2003), BOSS (Eisenstein et al. 2011), KiDS (Kuijken et al. 2015) and WFIRST (National Research Council 2010), list weak lensing as their main science driver.
From the early days of detection, weak lensing studies have matured to a point where weak lensing results from Euclid are expected to constrain the cosmological parameters to sub-percent accuracy. However, weak lensing at smaller angular scales probes the nonlinear regime of gravitational clustering, and is thus key to understanding the non-Gaussianity induced by this nonlinearity and to fully exploiting the information in the weak lensing maps. Higher-order statistics are also useful for breaking parameter degeneracies present in power spectrum analyses alone, and they are important for understanding the variance, or error of estimation, of the lower-order statistics. These higher-order statistics, including the cumulants and their correlators (Munshi et al. 2012; Riquelme & Spergel 2012; Calabrese et al. 2010), are among the best-known diagnostics of the deviation from Gaussianity that characterizes the non-linear regime (Bartolo et al. 2004), with a long history of analytical modeling. Most of these studies use extensions of perturbative results in the quasilinear regime, valid at large smoothing angular scales, or variants of halo models (Cooray & Sheth 2002). Early studies concentrated on measurements of the higher-order correlation hierarchy in angular (real) space, owing to small survey sizes (Bernardeau, Waerbeke & Mellier 2003). However, the near all-sky coverage of future surveys such as Euclid will let us estimate higher-order statistics in the harmonic domain with unprecedented accuracy (Amendola et al. 2013). While measurements of real-space correlations are much simpler in the presence of a complicated survey design, the measurements at different angular scales can be highly correlated (Munshi & Jain 2000; Munshi 2000). In comparison, measurements in the harmonic domain are less correlated, and each mode contains (nearly) independent information in the limit of all-sky coverage.
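For reference, the lowest-order cumulant correlator referred to above is the two-to-one correlator of the convergence field κ, and its harmonic-domain counterpart is the skew-spectrum, i.e. the cross power spectrum of the squared field with the field itself. The expressions below follow the standard construction in the skew-spectrum literature and are written here only for orientation:

```latex
% Two-to-one cumulant correlator of the convergence field:
\xi_{21}(\theta) \;=\; \big\langle\, \kappa^2(\hat{\mathbf n}_1)\, \kappa(\hat{\mathbf n}_2) \,\big\rangle_c ,
\qquad \cos\theta = \hat{\mathbf n}_1 \cdot \hat{\mathbf n}_2 .

% Harmonic-domain counterpart: the skew-spectrum, the cross power
% spectrum of the squared field with the field itself:
C^{(2,1)}_\ell \;=\; \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell}
\mathrm{Re}\,\big\langle\, [\kappa^2]_{\ell m}\, \kappa^{*}_{\ell m} \,\big\rangle .
```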
The primary motivation of this study is to develop analytical predictions for one such statistic, the skew-spectrum, and to test them against numerical simulations. We will borrow the concepts developed for constructing skew-spectra in the study of non-Gaussianity in the context of CMBR observations (Planck Collaboration 2016); however, here we also include gravity-induced secondary non-Gaussianity. The skew-spectrum is the lowest-order member of the family of higher-order spectra (Munshi et al. 2011a,b). In a series of papers the one-point statistics such as the skewness and kurtosis were generalized to two-point cumulant correlators, e.g. the two-to-one correlator and its higher-order generalizations. These can be represented in the harmonic domain by their associated spectra, such as the skew-spectrum and its higher-order generalizations (Munshi et al. 2011a,b). These spectra have already been used to analyze WMAP 9 (Smidt et al. 2012) as well as Planck data (Planck Collaboration 2016). They are useful tools to separate individual contributions and estimate systematics. In this paper we will concentrate on the projected skew-spectrum and kurt-spectrum in the context of weak lensing surveys.
Other similar estimators also exist, including the morphological estimators (Munshi et al. 2012), e.g. position-dependent power spectra (Munshi 2017), phase correlations (Matsubara 2007), line-correlations (Eggemeier & Smith 2012), peak-statistics (Peel et al. 2012), peak-correlations (Heavens & Gupta 2012) and extreme value statistics (Harrison & Coles 2012).
Many modified gravity theories are now severely constrained following the first detection of GW170817 (Abbott et al. 2017) and its electromagnetic counterpart GRB 170817A (Goldstein et al. 2017), implying that gravitational waves travel at the speed of light with a deviation smaller than a few × 10 −15 ; see e.g. Baker et al. (2017); Sakstein & Jain (2017); Lombriser & Lima (2017); Creminelli & Vernizzi (2017). However, some of the models we consider here are designed to evade this constraint. It is expected that the constraints on these models will be further tightened by the observations of large scale structure by Euclid and LSST. The higher-order statistics we develop here can be used very effectively to test these models, independently or jointly with power spectrum estimates. As a concrete example of the higher-order spectra we take the modified gravity theories known as Horndeski's theories of gravity. These are the most general theories of gravity that have second-order equations of motion. Horndeski theory was first proposed in 1974 and it was later realised that it contains many other theories of gravity as special cases. These include General Relativity, Jordan-Brans-Dicke theories of gravity, Dilaton and Chameleon theories of gravity, theories involving covariant Galileons, as well as models of Quintessence. All of these models have found use in the construction of cosmological models of inflation as well as dark energy (see e.g. Deffayet et al. (2011); Barthelemy (2019); Gleyzes et al. (2015a,b); Langois & Noui (2016a,b) for an incomplete list of recent references). We use a recent parametrization of the gravity-induced bispectrum in this model, as well as in models known as beyond-Horndeski theories, to compute the skew-spectrum in the low-ℓ limit. This paper is organized as follows. In §2 we review results regarding the convergence bispectrum in the context of tree-level Standard Perturbation Theory (SPT).
In §3 we introduce the skew-spectrum and relate it to the bispectrum. The corresponding results for the trispectrum and kurt-spectra are derived in §4. Theoretical predictions in the context of generating functions are derived in §5. The generalization to higher-order spectra is presented in §6. The higher-order spectra can also be derived in the presence of a mask; the corresponding results are presented in §7. The simulations are discussed in §8, and the numerical results are presented in §9. We present the results for various modified gravity and other beyond-ΛCDM scenarios in §10. Finally, the conclusions are drawn in §11.
MODELLING OF HIGHER-ORDER WEAK LENSING SPECTRA
In this section we review the aspects of standard tree-level perturbation theory which we use to compute the bispectrum as well as the trispectrum, and eventually the skew-spectrum and kurt-spectrum.
Tree-level Perturbative Calculations
In the quasilinear regime (δ ≤ 1), the evolution of the density contrast δ can be described using SPT. However, the treatment based on perturbation theory breaks down when the density contrast at a given length scale becomes nonlinear (δ ≥ 1), which significantly increases the growth of clustering. We will denote the Fourier transform of the density contrast δ(r) by δ(k), where r is the comoving coordinate and k denotes the comoving wavenumber. Expanding δ(k) in a perturbative series, and assuming the density contrast is less than unity so that the perturbative series converges, we get:
δ(k) = δ (1) (k) + δ (2) (k) + δ (3) (k) + · · · .(1a)
The n-th order perturbative term denoted as δ (n) is ∝ [δ (1) ] n where δ (1) is the linear density contrast. The term δ (n) is expressed using a kernel Fn using the following convolution:
δ (n) (k) = dk1 · · · dkn δ3D(k1 + · · · + kn − k) Fn(k1, · · · , kn) δ (1) (k1) · · · δ (1) (kn); dk = d 3 k (2π) 3/2 .(1b)
The Dirac delta function in 3D is denoted by δ3D and k1, k2, · · · , kn denotes different wavenumbers. The second-order kernel F2 has the following expression. For the higher-order kernels see :
F2(k1, k2) = 5 7 + 1 2 k1 k2 + k2 k1 k1 · k2 k1k2 + 2 7 k1 · k2 k1k2 2 , ki = |ki|.(1c)
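As a concrete illustration of Eq.(1c), the following sketch (not code from the paper; the function names are ours) evaluates F2 for a pair of 3-vectors and assembles the standard tree-level SPT bispectrum, B = 2 F2(k1, k2) P(k1) P(k2) + two cyclic terms, for a user-supplied linear power spectrum:

```python
import numpy as np

def F2(k1, k2):
    """Second-order SPT mode-coupling kernel of Eq. (1c); k1, k2 are 3-vectors."""
    k1n, k2n = np.linalg.norm(k1), np.linalg.norm(k2)
    mu = np.dot(k1, k2) / (k1n * k2n)  # cosine of the angle between k1 and k2
    return 5.0/7.0 + 0.5*(k1n/k2n + k2n/k1n)*mu + (2.0/7.0)*mu**2

def B_tree(k1, k2, k3, P_lin):
    """Tree-level SPT bispectrum: B = 2 F2(k1,k2) P(k1) P(k2) + two cyclic terms.
    P_lin is a callable returning the linear power spectrum at |k|."""
    p1, p2, p3 = (P_lin(np.linalg.norm(k)) for k in (k1, k2, k3))
    return 2.0*(F2(k1, k2)*p1*p2 + F2(k2, k3)*p2*p3 + F2(k3, k1)*p3*p1)
```

For aligned wavevectors F2 → 2, and for an equilateral triangle with P ≡ 1 the tree-level bispectrum is 3 × 2 × (2/7) = 12/7.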
Throughout we will use the following convention for the three-dimensional (3D) Fourier Transform (FT) and its inverse:
δ(k) = dr exp(−ik · r)δ(r); δ(r) = dk exp(i k · r)δ(k); dr = d 3 r (2π) 3/2 .(2)
We have suppressed the temporal dependence in Eq.(1a)-Eq.(1b); it will be reintroduced later in this section. The power spectrum P δ and the bispectrum B δ of δ are defined respectively as the two- and three-point correlations of the variable δ(k). Here P lin (k) denotes the linear power spectrum, i.e. δ lin (k) = δ (1) (k) and δ lin (k1)δ lin (k2) c = (2π) 3 δ3D(k1 + k2)P lin (k1). Throughout, angular brackets represent ensemble averaging and the subscript lin stands for linear-order contributions. The linearized solution for the density field is δ (1) (k); higher-order terms yield corrections to this linear solution. Using an ideal fluid approach, known to be valid at large scales (and before shell crossing), one can write the second-order correction to the linearized density field using the kernel F2(k1, k2). Cosmological structure formation is described by a set of equations describing Newtonian gravity coupled to the Euler and continuity equations. This system of non-linear, coupled integro-differential equations is used to compute the kernels F2(k1, k2), F3(k1, k2, k3) and their higher-order counterparts. This is typically done perturbatively in an order-by-order manner.
Weak Lensing Statistics in Projection (2D)
We will now specialize our discussion to weak lensing surveys. The weak lensing convergence κ is a line of sight projection of the 3D density contrast δ(r):

κ( Ω; rs) = rs 0 dr w(r, rs) δ(r, Ω); w(r, rs) = 3ΩM 2 H 2 0 a c 2 dA(r) dA(rs − r) dA(rs) .(3)
Here r is the comoving distance, Ω = (θ, φ) is a unit vector that defines the position of the pixel on the surface of the sky, with θ and φ respectively representing the azimuthal and polar co-ordinates; d Ω = sin θ dθ dϕ is the measure of integration, rs is the radial comoving distance to the source plane, c is the speed of light, a represents the scale factor, H0 the Hubble parameter, dA(r) the comoving angular diameter distance, δ the three-dimensional (3D) density contrast, and ΩM the cosmological matter density parameter. We will ignore the source distribution and assume the sources to be localized on a single source plane; we will also ignore photometric redshift errors. However, such complications are essential to link predictions to observational data and can readily be included in our analysis.
To avoid cluttering, we will suppress the rs dependence of κ( Ω, rs) and w(r, rs) defined in Eq.(3) in the following. The corresponding 3D power spectrum P δ , bispectrum B δ and trispectrum T δ for δ are:
δ(k1)δ(k2) c = (2π) 3 δ3D(k1 + k2)P δ (k1); k = |k|; (4a) δ(k1)δ(k2)δ(k3) c = (2π) 3 δ3D(k1 + k2 + k3)B δ (k1, k2, k3); (4b) δ(k1) · · · δ(k4) c = (2π) 3 δ3D(k1 + · · · + k4)T δ (k1, · · · , k4).(4c)
The subscript c denotes the fact that only connected diagrams are included in computing these statistics. The flat-sky power spectrum P κ and bispectrum B κ are similarly defined through :
κ(l1)κ(l2) c = (2π) 2 δ2D(l1 + l2)P κ (l1); (5a) κ(l1)κ(l2)κ(l3) c = (2π) 2 δ2D(l1 + l2 + l3)B κ (l1, l2, l3); (5b) κ(l1) · · · κ(l4) c = (2π) 2 δ2D(l1 + · · · + l4)T κ (l1, l2, l3, l4).(5c)
The wavenumbers l, l1, · · · , l4 are wavenumbers defined on the flat patch of the sky. For a given radial distance r they are related to the transverse 3D wavenumber by k⊥ = l/dA(r), with dA(r) the comoving angular diameter distance defined before and l = |l|. Using the flat-sky approximation as well as the Limber and prefactor-unity approximations, the projected power spectrum P κ (l) and bispectrum B κ (l1, l2, l3) can be expressed respectively in terms of the 3D power spectrum P δ (k) and bispectrum B δ (k1, k2, k3):
P κ (l) = rs 0 dr ω 2 (r) d 2 A (r) P δ l dA(r) ; r ; (6a) B κ (l1, l2, l3) = rs 0 dr ω 3 (r) d 4 A (r) B δ l1 dA(r) , l2 dA(r) , l3 dA(r) ; r ; (6b) T κ (l1, l2, l3, l4) = rs 0 dr ω 4 (r) d 6 A (r) T δ l1 dA(r) , l2 dA(r) , l3 dA(r) , l4 dA(r) ; r .(6c)
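The Limber projection of Eq.(6a) reduces to a one-dimensional radial integral. A minimal numerical sketch (ours, not the paper's code; P_delta, w and d_A are user-supplied callables standing in for the actual kernels) is:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def limber_pk_kappa(ell, P_delta, w, d_A, r_s, n_steps=2048):
    """Limber projection of Eq. (6a):
    P_kappa(l) = int_0^{r_s} dr  w(r)^2 / d_A(r)^2  P_delta(l/d_A(r); r)."""
    r = np.linspace(1e-6, r_s, n_steps)  # avoid r = 0 where d_A vanishes
    integrand = w(r)**2 / d_A(r)**2 * P_delta(ell / d_A(r), r)
    return trapezoid(integrand, r)
```

A quick consistency check: with the toy choices P_delta = 1, w(r) = r and d_A(r) = r, the integrand is unity and the projection returns r_s.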
The superscript κ indicates the convergence field to which these statistics correspond. The function ω ≡ w is defined in Eq.(3). We will use the different approximations introduced in §2 in Eq.(6a)-Eq.(6b) to compute the convergence (κ) bispectrum.
BISPECTRUM AND SKEW-SPECTRUM
The spherical harmonic transform of a convergence map κ( Ω), defined over the surface of the sky using the spherical harmonics Yℓm( Ω), can be used to define the multipoles κℓm:
κℓm = d Ω Yℓm( Ω) κ( Ω); Ω = (θ, ϕ).(7)
A Gaussian field is completely characterized by its power spectrum C κ ℓ, which is defined as C κ ℓ = κℓm κ * ℓm . Here κ * ℓm represents the complex conjugate of κℓm. The flat-sky power spectrum P κ (l) is identical to C κ ℓ at high ℓ with the identification l = ℓ. The bispectrum is the lowest-order statistic that characterizes departure from Gaussianity and is defined through the three-point coupling of harmonic coefficients. Assuming isotropy and homogeneity, the all-sky bispectrum B κ ℓ1ℓ2ℓ3 is defined as (Bartolo et al. 2004):
κℓ1m1 κℓ2m2 κℓ3m3 c ≡ B κ ℓ1ℓ2ℓ3 ( ℓ1 ℓ2 ℓ3 ; m1 m2 m3 ).(8)
The quantity in parentheses is the well-known Wigner 3j symbol, which enforces rotational invariance. It is only non-zero for triplets (ℓ1, ℓ2, ℓ3) that satisfy the triangle condition and for which ℓ1 + ℓ2 + ℓ3 is even. The reduced bispectrum b κ ℓ1ℓ2ℓ3 for the convergence κ is defined through the following expression (Bartolo et al. 2004):
B κ ℓ1ℓ2ℓ3 = [ (2ℓ1 + 1)(2ℓ2 + 1)(2ℓ3 + 1) / 4π ] 1/2 ( ℓ1 ℓ2 ℓ3 ; 0 0 0 ) b κ ℓ1ℓ2ℓ3 .(9)
The skew-spectrum is defined as the cross power spectrum formed by cross-correlating the squared map κ 2 against the original map κ:
S (21) ℓ = 1/(2ℓ + 1) Σm Real{[κ 2 ]ℓm [κ] * ℓm } = Σℓ1ℓ2 B κ ℓℓ1ℓ2 Jℓ1ℓ2ℓ ; (10a) Jℓ1ℓ2ℓ = [ (2ℓ1 + 1)(2ℓ2 + 1) / 4π(2ℓ + 1) ] 1/2 ( ℓ1 ℓ2 ℓ ; 0 0 0 ).(10b)
To avoid cluttering we will not explicitly display smoothing windows in our equations. The beam-smoothed versions of the expressions can be recovered by using the smoothed harmonics, i.e. replacing κℓm with κℓm bℓ, where bℓ is the smoothing beam in the harmonic domain, which can be tophat or Gaussian. For Gaussian smoothing the expressions are derived in an order-by-order manner; for tophat smoothing they can be derived using a generating function to arbitrary order (Matsubara 2012). The normalized one-point skewness parameter S3 = κ 3 c/ κ 2 2 c can be recovered from the skew-spectrum by constructing the beam-smoothed third-order moment κ 3 c:
µ3 = κ 3 c = 1/4π Σℓ (2ℓ + 1)S (21) ℓ = 1/4π Σℓ (2ℓ + 1) Σℓ1ℓ2 Jℓ1ℓ2ℓ B κ ℓℓ1ℓ2 .(11)
The normalized skewness parameter S3 is defined as S3 = µ3/µ2 2 , with µN = κ N c and, more generally, SN = µN/µ2 N−1 .
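The geometric factor Jℓ1ℓ2ℓ of Eq.(10b) only involves the (ℓ1 ℓ2 ℓ; 0 0 0) Wigner symbol, which has a closed form for integer arguments. A hypothetical direct evaluation of the sum in Eq.(10a) for a model bispectrum could then be sketched as follows (our function names; B_kappa is a user-supplied model):

```python
from math import factorial, sqrt, pi

def w3j_000(l1, l2, l3):
    """Closed form for the Wigner 3j symbol (l1 l2 l3; 0 0 0)."""
    L = l1 + l2 + l3
    if L % 2 == 1 or l3 < abs(l1 - l2) or l3 > l1 + l2:
        return 0.0  # parity or triangle condition violated
    h = L // 2
    sign = -1.0 if h % 2 else 1.0
    num = factorial(L - 2*l1) * factorial(L - 2*l2) * factorial(L - 2*l3)
    return sign * sqrt(num / factorial(L + 1)) * factorial(h) / (
        factorial(h - l1) * factorial(h - l2) * factorial(h - l3))

def J(l1, l2, l):
    """Geometric factor J_{l1 l2 l} of Eq. (10b)."""
    return sqrt((2*l1 + 1) * (2*l2 + 1) / (4.0*pi*(2*l + 1))) * w3j_000(l1, l2, l)

def skew_spectrum(l, B_kappa, lmax):
    """Direct sum of Eq. (10a) for a model bispectrum B_kappa(l1, l2, l3)."""
    return sum(B_kappa(l1, l2, l) * J(l1, l2, l)
               for l1 in range(lmax + 1) for l2 in range(lmax + 1))
```

The selection rules are built into `w3j_000`: an odd ℓ1 + ℓ2 + ℓ3 or a violated triangle inequality returns zero, so only allowed configurations contribute to the sum.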
The real space two-to-one correlation function can be defined in terms of the skew-spectrum as :
ξ (21) (θ12) = κ 2 ( Ω1)κ( Ω2) c = 1/4π Σℓ (2ℓ + 1)S (21) ℓ Pℓ(cos θ12).(12)
Here Pℓ represents the Legendre polynomial, and the angular positions Ω1 and Ω2 are separated by an angle θ12. The suitably normalized two-to-one correlator is the lowest order of a family of statistics known as cumulant correlators (Munshi et al. 2012; Riquelme & Spergel 2012; Calabrese et al. 2010), which have also been used in the context of weak-lensing surveys (Munshi et al. 2012; Munshi 2000). In our notation δ2D is the 2D Dirac delta function. The flat-sky bispectrum B κ (l1, l2, l3) is identical to the reduced bispectrum b ℓ1ℓ2ℓ3 at high multipoles (Bartolo et al. 2004). This can be shown by noting the following asymptotic relationship:
G ℓ1m1,ℓ2m2,ℓ3m3 ≡ d Ω Yℓ1m1( Ω) Yℓ2m2( Ω) Yℓ3m3( Ω) = [ (2ℓ1 + 1)(2ℓ2 + 1)(2ℓ3 + 1) / 4π ] 1/2 ( ℓ1 ℓ2 ℓ3 ; 0 0 0 ) ( ℓ1 ℓ2 ℓ3 ; m1 m2 m3 ) ≈ (2π) 2 δ2D(l1 + l2 + l3).(13)
A few comments about the skew-spectrum are in order. One-point statistics such as the skewness parameter have the advantage of a high signal-to-noise. However, they lack distinguishing power, as all the information available in the bispectrum is compressed into a single number. In contrast, the skew-spectrum encodes some information on the shape of the bispectrum, and in principle allows us to separate the contribution of gravity-induced non-Gaussianity from possible contamination by systematics. Though primordial non-Gaussianity is highly constrained in the light of Planck data, such contributions can also be tested using the skew-spectrum.
In this paper we consider a direct estimator for the skew-spectrum, as opposed to the optimal estimator developed previously, where optimality was achieved by applying suitable weights to the harmonics that incorporate a matched filter and saturate the Cramér-Rao bound in the weakly non-Gaussian limit. A simple Fisher-matrix-based analysis will, however, no longer be adequate for moderately non-Gaussian weak lensing maps. In any case, optimality is not of crucial importance for the analysis of weak lensing maps, as the secondary non-Gaussianity is expected to be detected with much higher signal-to-noise. A simpler direct estimator will thus be useful for studying non-Gaussianity in weak-lensing maps.
TRISPECTRUM AND KURT-SPECTRA
The near all-sky weak lensing maps from surveys such as Euclid will also allow determination of non-Gaussianity statistics beyond the lowest order, e.g. the fourth-order correlator or trispectrum. The trispectrum is useful not only for constructing the covariance of the power spectrum estimator, but also as a consistency check for the lower-order estimators. In this section we extend the estimator presented above for the bispectrum to the trispectrum. The trispectrum Tℓ1ℓ2ℓ3ℓ4(L) is defined by the following expressions from the four-point correlation function of the spherical harmonic coefficients κℓm of the convergence field κ (Munshi et al. 2011a):
κℓ1m1 κℓ2m2 κℓ3m3 κℓ4m4 c = ΣLM (−1) M Tℓ1ℓ2ℓ3ℓ4(L) ( ℓ1 ℓ2 L ; m1 m2 M ) ( ℓ3 ℓ4 L ; m3 m4 −M ); (14a) Tℓ1ℓ2ℓ3ℓ4(L) = (2L + 1) ΣM Σmi ( ℓ1 ℓ2 L ; m1 m2 M ) ( ℓ3 ℓ4 L ; m3 m4 −M ) κℓ1m1 κℓ2m2 κℓ3m3 κℓ4m4 c. (14b)
Here, M is the magnetic quantum number associated with the azimuthal quantum number L. The Wigner 3j symbols above ensure that the triangle inequalities imposed by statistical isotropy and homogeneity are satisfied; the trispectrum in harmonic space is represented by a quadrilateral. The harmonics ℓ1, ℓ2, ℓ3 and ℓ4 represent the sides of the quadrilateral and the harmonic L represents one of its diagonals. The two kurt-spectra K (31) ℓ and K (22) ℓ are defined as (Munshi et al. 2011a,b):
K (31) ℓ = 1/(2ℓ + 1) Σm Real{[κ 3 ]ℓm [κ] * ℓm } = Σℓ1ℓ2ℓ3 ΣL Tℓ1ℓ2ℓ3ℓ(L) Jℓ1ℓ2L JLℓ3ℓ ; (15a) K (22) ℓ = 1/(2ℓ + 1) Σm [κ 2 ]ℓm [κ 2 ] * ℓm = Σℓ1ℓ2ℓ3ℓ4 Tℓ1ℓ2ℓ3ℓ4(ℓ) Jℓ1ℓ2ℓ Jℓ3ℓ4ℓ .(15b)
Thus the kurt-spectra described above are computed either by keeping the diagonal fixed and summing over all possible configurations (the two-to-two kurt-spectrum K (22) ℓ defined in Eq.(15b)) or by keeping one of the sides fixed and summing over all possible configurations (the three-to-one kurt-spectrum K (31) ℓ defined in Eq.(15a)). These statistics are linked to the collapsed and squeezed configurations respectively. At higher order the polyspectra are characterized by a polygon. The number of polyspectra at a given order can be high, since the number of diagonals and sides of such polygons can be quite high.
Another related point is that disconnected contributions will exist even in the absence of noise. These contributions need to be subtracted out when estimating from the data (Hu 2001; Okamoto & Hu 2002). The disconnected trispectrum in this case is given in Eq.(17) and is specified completely by the power spectrum Cℓ. The corresponding spectra are given in terms of the Gaussian trispectrum G l1l2l3l4 (L) (Munshi et al. 2011a):
G (31) = 1 2 3 L G 3 1 2 (L)J 1 2 L J L 3 ; G (22) = 1 2 3 4 G 3 4 1 2 ( )J 1 2 J 3 4 .(16)
where the Gaussian trispectrum G l 1 l 2 l 3 l 4 (L) is given by (Hu 2001;Okamoto & Hu 2002):
G l1l2l3l4 (L) = (−1) l1+l3 [(2l1 + 1)(2l3 + 1)] 1/2 C l1 C l3 δL0 δ l1l2 δ l3l4 + (2L + 1) C l1 C l2 [ (−1) l2+l3+L δ l1l3 δ l2l4 + δ l1l4 δ l2l3 ].(17)
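A sketch of this disconnected piece, implemented following the standard Gaussian-trispectrum expression of Hu (2001) (our function and argument names; C is a user-supplied callable for the power spectrum):

```python
from math import sqrt

def gaussian_trispectrum(l1, l2, l3, l4, L, C):
    """Disconnected (Gaussian) part of the trispectrum, following the standard
    form of Hu (2001); C is a callable returning the power spectrum C_ell."""
    t = 0.0
    # first term: only for L = 0 with pairwise equal multipoles
    if L == 0 and l1 == l2 and l3 == l4:
        t += (-1.0)**(l1 + l3) * sqrt((2*l1 + 1)*(2*l3 + 1)) * C(l1) * C(l3)
    # second term: Kronecker deltas pair up the remaining configurations
    t += (2*L + 1) * C(l1) * C(l2) * (
        (-1.0)**(l2 + l3 + L) * (l1 == l3) * (l2 == l4) + (l1 == l4) * (l2 == l3))
    return t
```

For l1 = l2 = l3 = l4 = 2 and L = 0 with C ≡ 1 this gives 5 + 2 = 7, while for L = 1 the two phase-weighted deltas cancel and the result is zero.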
In (Munshi et al. 2011a,b) optimal versions of the skew- and kurt-spectra estimators were developed, which require weights based on target spectra. This method was used to investigate primordial spectra, for which the signal-to-noise is rather low.
However, for investigating the gravity-induced secondary non-Gaussianity with surveys that have an expected signal-to-noise as high as Euclid's, optimization is not mandatory. The commonly used kurtosis parameter S4 (defined below) can be reconstructed from the kurt-spectra as follows (Munshi et al. 2011a):
µ4 = κ 4 ( Ω) c = 1/4π κ 4 ( Ω) d Ω = 1/4π ΣL Σℓ1ℓ2ℓ3ℓ4 hℓ1ℓ2L hℓ3ℓ4L Tℓ1ℓ2ℓ3ℓ4(L); (18a) = 1/4π Σℓ (2ℓ + 1)K (31) ℓ = 1/4π Σℓ (2ℓ + 1)K (22) ℓ .(18b)
We will use noise-free simulations, but in the case of noisy maps the Cℓs will also include the noise contribution. The commonly used kurtosis is the normalized one-point estimator S4 = (µ4 − 3µ2 2 )/µ2 3 .
Here, µ2 = 1/4π Σℓ (2ℓ + 1)Cℓ. The corresponding cumulant correlators for these spectra are defined in a manner similar to Eq.(12) (Munshi et al. 2011b):
ξ 31 (θ12) = κ 3 ( Ω1)κ( Ω2) c = 1/4π Σℓ (2ℓ + 1)K (31) ℓ Pℓ(cos θ12); (19a) ξ 22 (θ12) = κ 2 ( Ω1)κ 2 ( Ω2) c = 1/4π Σℓ (2ℓ + 1)K (22) ℓ Pℓ(cos θ12).(19b)
Next we will employ tree-level perturbative calculations.
TREE-LEVEL PERTURBATIVE RESULTS
The unsmoothed normalized higher-order cumulants SN = δ N c/ δ 2 N−1 c can be expressed in terms of the tree-level vertices, denoted νN, using the following expressions:
S3 = 3ν2; S4 = 4ν3 + 12ν 2 2 ; S5 = 5ν4 + 60ν3ν2 + 60ν 3 2 .(20)
The vertices νN are the angular averages, in the Fourier domain, of the mode-coupling kernels FN introduced in §2 (cf. Eq.(1c)), i.e. νN = N ! FN :
νN = N ! FN = N ! d Ω k 1 4π · · · d Ω k N 4π FN (k1. · · · , kN ); d Ω k = sin θ k dθ k dϕ k .(21)
The following generating function approach was introduced in Bernardeau (1992). The generating function G δ (τ ) is obtained by solving the equations of gravitational dynamics encapsulated in the Euler-continuity-Poisson system. Here τ plays the role of a dummy variable. In the perturbative regime the νN parameters can be computed for arbitrary N:
G δ (τ ) = n νN N ! τ N = −τ + 12 14 τ 2 − 29 42 τ 3 + 79 147 τ 4 − 2085 5096 τ 5 + · · ·(22)
Next, using Eq.(20), the one-point cumulants in 2D, denoted ΣN (as opposed to the SN parameters, which represent the cumulants in 3D), can be computed to arbitrary order.
The generalization of the one-point cumulants, i.e. the SN parameters, to the two-point cumulant correlators Cpq = δ p 1 δ q 2 c/ δ 2 p+q−2 c δ1δ2 c was introduced in . The lower-order normalized cumulant correlators can also be expressed in terms of the tree-level vertices νN, just as the one-point cumulants introduced in Eq.(20):
C21 = 2ν2; C31 = 3ν3 + 6ν 2 2 ; C41 = 4ν4 + 36ν3ν2 + 24ν 3 2 .(24)
To compare with observed or simulated data, smoothing of the field is necessary. The smoothed generating function G s δ can be computed from the unsmoothed generating function G δ ; the two are related by the following implicit relation (Bernardeau 1995):
G s δ (τ ) = G δ (τ [1 + G s δ ] −(2+n)/4 ).(25)
A tophat smoothing window is assumed and the power spectrum is approximated locally as a power law, P (k) ∝ k n (Bernardeau 1995). For other window functions, e.g. a Gaussian window, generic results are not possible for arbitrary N; however, an order-by-order approach can be adopted to obtain the lower-order cumulants (Matsubara 2012). Notice that the smoothed vertices depend on the spectral index, while the unsmoothed vertices depend solely on the dynamics of gravitational collapse (spherical in 3D, cylindrical in 2D). The smoothed vertices can be recovered by Taylor-expanding the smoothed generating function G s δ. Using these vertices, the 2D skewness Σ3 and kurtosis Σ4 can be computed:
Σ3 = 36 7 − 3 2 (n + 2); (26a) Σ4 = 2540 49 − 33(n + 2) + 21 4 (n + 2) 2 .(26b)
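Eq.(26a)-Eq.(26b) are simple polynomials in the local spectral index n; coded up (our naming), they read:

```python
def Sigma3(n):
    """Projected (2D, tophat-smoothed) skewness, Eq. (26a)."""
    return 36.0/7.0 - 1.5*(n + 2.0)

def Sigma4(n):
    """Projected (2D, tophat-smoothed) kurtosis, Eq. (26b)."""
    return 2540.0/49.0 - 33.0*(n + 2.0) + 5.25*(n + 2.0)**2
```

At n = −2 the smoothing corrections vanish and the unsmoothed tree-level values 36/7 and 2540/49 are recovered; steeper (larger n) local slopes reduce both parameters.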
These expressions are derived in 2D, where gravitational collapse with cylindrical symmetry is relevant, as is the case for projected surveys. The underlying statistics of the 3D density field are instead linked to spherical collapse, which we do not consider here but which may be relevant for a 3D weak lensing scenario where photometric data is used. There is a crucial difference between 2D and 3D statistics: for large separations in 3D we can factorise Cpq = Cp1Cq1, while in 2D this approximation is not valid. Thus, we will consider the family of statistics Cp1 for arbitrary p.
S (21) = R2Σ21P κ (l)σ 2 L = R2 24 7 − 1 2 (n + 2) P δ (l)σ 2 L ; σ 2 L = κ 2 ; (27a) R2 = rs 0 d r w 3 (r) d 4+2n A (r) rs 0 d r w 2 (r) d 2+n A (r) 2 . (27b)
The corresponding result at the fourth-order is given by:
K (31) = R3Σ31P κ (l)σ 4 L = R3 1473 49 − 195 14 (n + 2) + 3 2 (n + 2) 2 P δ (l)σ 4 L ; (28a) R3 = rs 0 d r w 4 (r) d 6+3n A (r) rs 0 d r w 2 (r) d 2+n A (r) 3 . (28b)
The dynamical contribution is encoded in Σp1, whereas the line-of-sight integration is represented by the prefactors Rp.
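The prefactors of Eq.(27b) and Eq.(28b) share the pattern Rp = ∫ w^{p+1}/dA^{p(2+n)} dr / [ ∫ w²/dA^{2+n} dr ]^p. A numerical sketch (ours, with user-supplied callables w(r) and d_A(r) that stand in for the actual survey kernels):

```python
import numpy as np

def R_p(p, w, d_A, n, r_s, n_steps=4096):
    """Line-of-sight prefactor R_p of Eq. (27b) / Eq. (28b):
    R_p = int w^{p+1}/d_A^{p(2+n)} dr / [ int w^2/d_A^{2+n} dr ]^p,
    with user-supplied callables w(r) and d_A(r)."""
    r = np.linspace(1e-6, r_s, n_steps)
    def integrate(y):
        # trapezoidal rule, version-independent
        return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(r)))
    num = integrate(w(r)**(p + 1) / d_A(r)**(p*(2 + n)))
    den = integrate(w(r)**2 / d_A(r)**(2 + n))
    return num / den**p
```

For the toy choices w(r) = r, d_A(r) = r and n = −2, one has R2 = (1/4)/(1/3)² = 9/4, which provides a simple check of the implementation.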
Historically, the generating function approach was developed without any reference to perturbative dynamics, and the vertices were left undetermined. Many generic predictions were developed by coupling scaling Ansätze with the generating function formalism (Balian & Schaeffer 1989). While in the quasi-linear regime the loop corrections to the tree-level results violate the scaling Ansatz, in the highly nonlinear regime the vertices are known to become shape-independent parameters, as encapsulated in Hyper Extended Perturbation Theory (Scoccimarro & Frieman 2012). In recent years some of these results were derived using the Large Deviation Principle (Bernardeau & Reimberg 2016; Uhlemann et al. 2016; Reimberg & Bernardeau 2018; Uhlemann et al. 2018).
Previously, many studies have compared observed and simulated data for the one-point cumulants (Bernardeau 1995; Munshi, Coles & Melott 1989) as well as for the two-point cumulant correlators (Munshi, Coles & Melott 1989; Bernardeau 1995). These studies focused on galaxy surveys; in this paper we extend their results to the context of weak lensing.
HIGHER-ORDER SPECTRA IN THREE DIMENSIONS
Next, we will consider higher-order statistics in three dimensions (3D). Future surveys such as Euclid will go beyond projection and, using photometric redshifts, will be able to retain radial information. In 3D we will compute the higher-order spectra, as before, in the low-ℓ limit. The results have similar characteristics to those in projection, discussed in §5, but are very different in certain aspects, as we discuss below. We will decompose the lensing field in two different sets of eigenmodes: (a) the Fourier-Bessel decomposition, typically used for radially symmetric fields, and (b) the generic Fourier-Cartesian decomposition, most commonly used for perturbative analysis. We will follow the same convention for the forward and inverse Fourier transformation introduced in Eq.(2) for the Cartesian coordinates. For an arbitrary function A(r), with r ≡ (r, Ω) = (r, θ, φ), and its Fourier transform A(k; r), we will use:
A(r; r) = dk A(k; r) exp(i k · r); A(k; r) = dr A(r; r) exp(−i k · r).(29)
In spherical-Bessel coordinates the eigenfunctions of the Laplacian operator are products of spherical harmonics Yℓm( Ω) and spherical Bessel functions jℓ(kr), i.e. jℓ(kr)Yℓm( Ω), and the transforms take the following form:
Aℓm(k) ≡ √(2/π) d 3 r A(r) k jℓ(kr) Y * ℓm ( Ω); A(r) ≡ √(2/π) k dk Σ∞ℓ=0 Σℓm=−ℓ Aℓm(k) jℓ(kr) Yℓm( Ω).(30)
Using the well-known Rayleigh expansion that expands the plane-wave in a spherical wave basis:
exp(i k · r) = 4π Σℓ Σℓm=−ℓ i ℓ jℓ(kr) Y * ℓm ( Ω k ) Yℓm( Ω); Ω k = (θ k , φ k ).(31)
we can relate the spherical harmonic coefficients A m with their Fourier counterpart A:
Aℓm(k; r) = 1/(2π) 3/2 k i ℓ d Ω k A(k; r) Y * ℓm ( Ω k ).(32)
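The Rayleigh expansion of Eq.(31) can be checked numerically: summing over m with the addition theorem gives exp(i k·r) = Σℓ (2ℓ+1) i^ℓ jℓ(kr) Pℓ(cos θ). A quick check (ours, assuming scipy is available):

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def plane_wave_partial_sum(kr, mu, lmax=60):
    """Partial sum of the Rayleigh expansion,
    exp(i kr mu) = sum_l (2l+1) i^l j_l(kr) P_l(mu),
    obtained from Eq. (31) after summing over m with the addition theorem."""
    ells = np.arange(lmax + 1)
    return np.sum((2*ells + 1) * (1j**ells) *
                  spherical_jn(ells, kr) * eval_legendre(ells, mu))
```

Because jℓ(kr) decays rapidly for ℓ much larger than kr, a modest lmax already reproduces the plane wave to high accuracy.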
The 3D power spectrum P AA (k) defined respectively in Cartesian coordinates and C AA in spherical coordinates are:
A(k)A * (k ) = (2π) 3 P AA (k) δ3D(k − k ); Aℓm(k)A * ℓ m (k ) = (2π) 2 C AA ℓ (k; r) δ1D(k − k ) δℓℓ δmm .(33)
In general, in the absence of any mask, it can be shown that C ℓ (k) = P (k), i.e. the 3D power spectrum in spherical coordinates is independent of ℓ and is actually the same as the power spectrum in Cartesian coordinates (Castro, Heavens, Kitching 2005). Next, for the construction of the higher-order 3D spectra, we define the following cross-spectra between two arbitrary 3D fields A(r) and B(r) in spherical coordinates:
A(k)B * (k ) = (2π) 3 P AB (k) δ3D(k − k ); Aℓm(k; r)B * ℓ m (k ; r) = (2π) 2 C AB ℓ (k; r) δ1D(k − k ) δℓℓ δmm .(34)
Using this identity, for the 3D density field δ we can derive the following expressions for the higher-order spectra of the density field:
P δ (k; r, r ) = δ(k; r)δ * (k ; r ) c; C δ ℓ (k; r, r ) = δℓm(k; r)δ * ℓm (k; r ) c; P δ (k; r, r ) = C δ ℓ (k; r, r ). (35a)
S 21,δ (k; r, r ) = δ 2 (k; r)δ * (k ; r ) c; S 21,δ ℓ (k; r, r ) = [δ 2 ]ℓm(k; r)δ * ℓm (k; r ) c; S 21,δ (k; r, r ) = S 21,δ ℓ (k; r, r ). (35b)
T 31,δ (k; r, r ) = δ 3 (k; r)δ * (k ; r ) c; T 31,δ ℓ (k; r, r ) = [δ 3 ]ℓm(k; r)δ * ℓm (k; r ) c; T 31,δ (k; r, r ) = T 31,δ ℓ (k; r, r ). (35c)
T 22,δ (k; r, r ) = δ 2 (k; r)δ 2 * (k ; r ) c; T 22,δ ℓ (k; r, r ) = [δ 2 ]ℓm(k; r)[δ 2 ] * ℓm (k; r ) c; T 22,δ (k; r, r ) = T 22,δ ℓ (k; r, r ). (35d)
In our notation, δ p (k) is the Fourier transform of δ p (r). Notice that these expressions are non-perturbative: they do not rely on detailed modelling and are valid to arbitrary order, i.e. when cross-correlating the p-th power of δ against the q-th power, δ p (k)δ q (k) , in spherical or Cartesian coordinates. In Cartesian coordinates the normalized cumulant correlators Cpq are defined as follows:
δ p (k)δ q * (k) c = Cpq δ 2 p+q−2 c P (k) = Cpq δ 2 p+q−2 c C (k)(36)
The second step relies on Eq.(35a). In real space, Eq.(36) is equivalent to:
δ p (r1)δ q * (r2) c = Cpq δ 2 p+q−2 c δ(r1)δ(r2) c.(37)
The results in Eq.(35a)-Eq.(35d) are non-perturbative and do not depend on any simplifying assumptions. However, in studies of galaxy clustering it is more natural to study higher-order statistics in redshift space. Similarly, for 3D weak lensing, the line-of-sight integration will need to be taken into account. Such extensions will be presented separately. The coefficients Cpq defined in Eq.(36) can be computed using perturbative calculations. In 3D the smoothed and unsmoothed vertex generating functions are related through an implicit expression analogous to Eq.(25):
G s δ (τ ) = 3 2 G δ (τ [1 + G s δ ] −(3+n)/6 ).(38)
The power spectrum is assumed to be locally approximated by a power law with index n, i.e. P (k) ∝ k n . Taylor-expanding the 3D (unsmoothed) generating function G δ (τ ), we can recover the lower-order vertices νN in 3D:
G δ (τ ) = n νN N ! τ N = −τ + 34 21 τ 2 − 682 189 τ 3 + · · ·(39)
Using these vertices it is possible to compute the normalized lower-order moments, i.e. the skewness S3 and kurtosis S4, in 3D:
S3 = 34/7 − (n + 3); S4 = 60712/1323 − 62/3 (n + 3) + 7/3 (n + 3) 2 .(40a)
The lower-order cumulant correlators have the following form:
C21 = 68/21 − (n + 3)/3 ; C31 = 11710/441 − 61/7 (n + 3) + 2/3 (n + 3) 2 .(40b)
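The 3D results of Eq.(40a)-Eq.(40b) are again simple polynomials in the spectral index; a sketch (our naming; we take the highest-order terms to be quadratic in (n + 3), consistent with the standard tree-level results):

```python
def S3_3d(n):
    """3D skewness, Eq. (40a)."""
    return 34.0/7.0 - (n + 3.0)

def S4_3d(n):
    """3D kurtosis, Eq. (40a)."""
    return 60712.0/1323.0 - (62.0/3.0)*(n + 3.0) + (7.0/3.0)*(n + 3.0)**2

def C21_3d(n):
    """Lowest-order 3D cumulant correlator, Eq. (40b)."""
    return 68.0/21.0 - (n + 3.0)/3.0

def C31_3d(n):
    """Next-order 3D cumulant correlator, Eq. (40b)."""
    return 11710.0/441.0 - (61.0/7.0)*(n + 3.0) + (2.0/3.0)*(n + 3.0)**2
```

At n = −3 the smoothing corrections vanish and S3 = 3ν2 = 34/7 while C21 = 2ν2 = 68/21, so their ratio is 3/2, a useful internal consistency check of the vertex structure.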
Detailed derivations of the construction of the one- and two-point probability distribution functions are given in . The 3D vertices defined in Eq.(39) assume different numerical values, though the formal structure remains the same. In addition, the more generic result Cpq = Cp1Cq1 provides a much-needed consistency check. The results for 3D collapse correspond to a spherical window, and the dynamics relate to 3D spherical collapse.
PSEUDO-C (PCL) ESTIMATORS
Maximum likelihood (ML) estimators or quadratic maximum likelihood (QML) estimators are the most obvious choices for analyzing cosmological data sets. However, these estimators require inverse covariance weighting, which is clearly not practical for large cosmological data sets, though various clever algorithmic techniques have been considered. This has resulted in the development of many sub-optimal estimators which use heuristic weighting schemes. The so-called pseudo-Cℓ (PCL) technique was introduced in (Hivon et al. 2012); see Szapudi et al. (2012) for a related method. These estimators are unbiased but sub-optimal. Various weighting schemes, depending on sky coverage as well as noise characteristics, and various hybridization schemes combining large angular scale (equivalently low ℓ) QML estimates with small angular scale (high ℓ) PCL estimates, were considered in (Efstathiou 2004).
Mℓℓ = (2ℓ + 1)/4π Σℓ (2ℓ + 1) wℓ ( ℓ ℓ ℓ ; 0 0 0 ) 2 ; (41a) Ŝ (21) ℓ = Σℓ M −1 ℓℓ S̃ (21) ℓ .(41b)
Here S̃ (21) ℓ denotes the skew-spectrum computed from a map in the presence of a mask w( Ω), Ŝ (21) ℓ is the all-sky estimate, and wℓ = 1/(2ℓ + 1) Σm wℓm w * ℓm is the power spectrum of the mask, constructed from the harmonic coefficients wℓm of the mask. The coupling matrix Mℓℓ represents the mode mixing due to the presence of the mask. The generalization of the PCL method to higher-order spectra was developed in (Munshi et al. 2011a,b) for spin-0 fields, for higher-spin fields in (Munshi et al. 2012), as well as in 3D in . Exactly the same result holds for the higher-order spectra, e.g. for the all-sky estimate of the kurt-spectrum K̂ (31) ℓ.
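The mode-coupling (MASTER-type) matrix of Eq.(41a) can be built from the same closed-form (ℓ ℓ' ℓ''; 0 0 0) symbol. For a full-sky (unit) mask the mask power is concentrated at ℓ = 0 and M reduces to the identity, which makes a convenient sanity check (a sketch with our names, not the authors' pipeline):

```python
import numpy as np
from math import factorial, sqrt, pi

def w3j_000(l1, l2, l3):
    """Closed form for the Wigner 3j symbol (l1 l2 l3; 0 0 0)."""
    L = l1 + l2 + l3
    if L % 2 == 1 or l3 < abs(l1 - l2) or l3 > l1 + l2:
        return 0.0
    h = L // 2
    sign = -1.0 if h % 2 else 1.0
    num = factorial(L - 2*l1) * factorial(L - 2*l2) * factorial(L - 2*l3)
    return sign * sqrt(num / factorial(L + 1)) * factorial(h) / (
        factorial(h - l1) * factorial(h - l2) * factorial(h - l3))

def coupling_matrix(lmax, w_ell):
    """Mode-coupling matrix of Eq. (41a):
    M_{l l'} = (2l'+1)/(4 pi) sum_{l''} (2l''+1) w_{l''} (l l' l''; 0 0 0)^2,
    where w_ell is the power spectrum of the mask."""
    M = np.zeros((lmax + 1, lmax + 1))
    for l in range(lmax + 1):
        for lp in range(lmax + 1):
            M[l, lp] = (2*lp + 1)/(4.0*pi) * sum(
                (2*lpp + 1) * w_ell[lpp] * w3j_000(l, lp, lpp)**2
                for lpp in range(len(w_ell)))
    return M
```

For a unit mask, wℓ = 4π δℓ0 and (ℓ ℓ' 0; 0 0 0)² = δℓℓ'/(2ℓ + 1), so every row collapses to a Kronecker delta and no mode mixing occurs.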
NUMERICAL SIMULATIONS
We use the publicly available all-sky weak-lensing maps generated by Takahashi et al. (2012) 10 using ray-tracing through N-body simulations. Multiple lens planes were used to generate convergence (κ) as well as shear (γ) maps. Many recent studies have used these maps, e.g. Namikawa et al. (2018); Munshi et al. (2019a). In these simulations, the source redshifts used were in the range zs = 0.05 − 5.30 at intervals of ∆zs = 0.05. In this study, we have used the maps with zs = 0.5, 1.0, 1.5, 2.0. The maps include post-Born corrections (Lewis & Pratten 2016), though at low source redshifts such corrections play only a negligible role; they do, in contrast, play a significant role in CMB lensing. The convergence maps were generated using an equal-area pixelisation scheme in HEALPix 11 format (Gorski et al. 2016). In this pixelisation scheme the number of pixels scales as Npix = 12N 2 side, where N side is the resolution parameter, which can take values N side = 2 N with N = 1, 2, · · · . The set of maps we use is generated at N side = 4096 and was cross-checked against higher-resolution maps constructed at N side = 8192 and 16384. These maps were found to be consistent with each other up to angular harmonics ℓ ≤ 3600. In addition, detailed tests were performed using an Electric/Magnetic (E/B) decomposition of the shear maps in the construction of the κ maps (Takahashi et al. 2012). Though we have used the high-resolution maps at N side = 4096, we have degraded them to low-resolution maps at N side = 1024, as we are primarily interested in the perturbative regime. The following set of cosmological parameters was used to generate the maps, assuming a ΛCDM background cosmology: ΩCDM = 0.233, Ω b = 0.046, ΩM = ΩCDM + Ω b , ΩΛ = 1 − ΩM and h = 0.7. The amplitude of density fluctuations is σ8 = 0.82 and the spectral index ns = 0.97.
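For reference, the HEALPix bookkeeping quoted above (Npix = 12 Nside²) in two small helper functions (ours, not part of the simulation pipeline):

```python
from math import pi

def n_pix(nside):
    """Number of equal-area HEALPix pixels, Npix = 12 Nside^2."""
    return 12 * nside**2

def pixel_area_arcmin2(nside):
    """Equal-area pixel size in square arcminutes (full sky = 4 pi steradians)."""
    full_sky_arcmin2 = 4.0*pi * (180.0*60.0/pi)**2
    return full_sky_arcmin2 / n_pix(nside)
```

At Nside = 1024 (the degraded resolution used in the analysis) each map has 12,582,912 pixels of roughly 11.8 square arcminutes.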
Examples of the κ maps used in our study are presented in Figure-1. We will focus on the large-separation or small-ℓ regime, where we do not expect baryonic feedback to play a significant role (Weiss et al. 2019). It is worth mentioning here that these maps were also used recently to analyze the bispectrum in the context of CMB lensing (Namikawa et al. 2018).
TESTS AGAINST NUMERICAL SIMULATIONS
The skew-spectrum S^(21)_ℓ is shown as a function of the harmonic ℓ in Figure-2. From top to bottom the curves represent the source redshifts zs = 0.5, 1.0, 1.5 and 2.0 respectively. The results are from maps with N_side = 1024, which we have analyzed up to ℓmax = 2N_side. The straight lines correspond to perturbative results computed using tree-level perturbation theory, Eq.(27a)-Eq.(27b). We have used an ensemble of ten realisations to compute the mean which is being plotted. We use all-sky maps without an observational mask; the effect of a mask can be incorporated using Eq.(41a)-Eq.(41b). The kurt-spectrum K^(31)_ℓ is shown as a function of the harmonic ℓ in Figure-3. From top to bottom the curves again represent the source redshifts zs = 0.5, 1.0, 1.5 and 2.0 respectively. The maps used are at N_side = 1024 and, as before, we have analyzed them up to ℓmax = 2N_side. The straight lines correspond to perturbative results computed using tree-level perturbation theory, Eq.(28a)-Eq.(28b). We have used an ensemble of ten realisations to compute the mean which is being plotted. We use all-sky maps without an observational mask.
Our results for the skew- and kurt-spectra are derived in the large-separation limit, i.e. for the cumulant correlators defined, e.g., in Eq.(12) and Eq.(19b) we assume |ξ12|/ξ2 ≪ 1. In real space this limit was seen to be reached very fast, as soon as the two neighbouring cells are not overlapping. In the harmonic domain the scale ℓ represents the separation of two beam-smoothed pixels for which the skew-spectrum is being measured. Thus, large separation in our case corresponds to low ℓ, and the typical size of the pixels corresponds to the ℓ at which the beam can no longer be approximated as unity. This is the scale where corrections to the skew-spectrum start to be non-negligible. These corrections, which are higher order in ξij/ξ2, are difficult to compute analytically, though the entire skew-spectrum can be computed with fitting functions. Clearly, such a computation will not be possible beyond third order, i.e. beyond the skew-spectrum, due to the lack of such a fitting function at fourth order. Thus, the techniques developed here are valuable as their predictions can be made at all orders.
The results we have computed are based on a spherical top-hat window. However, many previous studies have shown that the actual shape of the window is not important, and replacing a circle with a square can be a very good approximation. However, the profile of the smoothing beam or window, as opposed to its shape, can change the theoretical predictions. The predictions for a Gaussian window were worked out in detail in (Matsubara 2012). However, such results can be derived only in an order-by-order manner, and approaches based on generating functions are not applicable.
A few comments about going beyond the kurt-spectrum are in order. Extraction and interpretation of higher-order statistics can be rather complex for any cosmological data-set. Estimators of the cumulants and cumulant correlators are typically known to be biased, and elaborate schemes were developed for estimating and correcting such bias, as well as the scatter in the estimators typically used for evaluating these quantities, mainly in real space in the context of galaxy clustering. Such corrections are expected to play a more dominant role with increasing order, and their accurate calibration against simulations is lacking in the literature. Though for the lower-order statistics probed here such corrections are expected to be negligible, a better understanding of these effects is needed before we can interpret statistics beyond the kurt-spectrum (equivalently, the trispectrum).
An alternative approach considered by various authors is to use the one-point and two-point probability distribution functions, which encode cumulants and their correlators to an arbitrary order (Munshi & Jain 2000; Munshi 2000). These results are applicable in real space, which makes them useful for surveys with low sky-coverage. The results derived here will be relevant for surveys with high sky-coverage, where harmonic decomposition means less correlated measurements for individual ℓ.
By their very nature, projected or 2D surveys, unlike their 3D counterparts, mix scales, which makes it difficult to associate an exact spectral index with an angular scale or, in our case, the harmonic ℓ. We have shown how much variation we should expect for a range of feasible spectral indices n. Finally, the redshift dependence of the skew- and kurt-spectra is encoded in the coefficients R2 and R3. It is however important to point out that these pre-factors are rather sensitive to the lower limit of integration zmin, i.e. in Eq.(27a) and Eq.(27b). Numerical implementation of ray-tracing simulations to generate convergence maps may introduce a slight modification in zmin, which may lead to a bias in the theoretical predictions.
The excellent match between the theoretical predictions and simulations we have found here is encouraging for computing such corrections.
MODIFIED THEORIES OF GRAVITY: COMPUTATION OF C21
The theoretical modelling of the bispectrum in modified gravity scenarios is more challenging than the power spectrum calculation. Typically a perturbative approach is adopted in the quasilinear regime (Bernardeau & Brax 2011). In addition, a quasi-static approximation is used, i.e. metric perturbations are assumed to vary so slowly with time that their time derivatives can be ignored. Many extensions of the perturbative approach were considered in the literature in recent years (Bose & Taruya 2018; Namikawa, Bouchet & Taruya 2018). Typically, this is achieved by introducing more freedom to the kernels and validating or calibrating them using numerical simulations. Indeed, other approaches, including variants of halo-model predictions, have also been proposed that can reproduce the simulation results with varying degrees of success.
In the literature, typically, two main families of modified gravity theories are considered: (A) models with the Vainshtein screening mechanism, which include the DGP model as well as the Horndeski (Hordenski 1974) and beyond Horndeski theories (Gleyzes et al. 2015a,b; Langois & Noui 2016), and (B) models with Chameleon screening, which include the Hu-Sawicki f(R) model (Hu & Sawicki 2016). In the DGP model (Dvali, Gabadadze, Porrati 2000) the bispectrum from simulations can be reproduced using the GR expression by suitably modifying the power spectrum. The situation is somewhat more complicated for f(R) theories. The numerical modelling is more important at small scales, where analytical results start to fail.
Bernardeau & Brax Models
We first turn to different phenomenological toy models of modified gravity presented by Bernardeau & Brax (2011). In this parametrization the kernel F2 in Eq.(1b) is modified to the following form:
F2(k1, k2) = (1/2)(1 + ε) + (1/2) (k1 · k2)/(k1k2) (k1/k2 + k2/k1) + (1/2)(1 − ε) [(k1 · k2)/(k1k2)]^2.    (42)
In general the parameter ε can be a function of the scale factor a or the wavelength k. For ε = 3/7 we recover the expression given in Eq.(1c). Lagrangian perturbation theory is often used to model the quasilinear evolution of gravitational clustering; the Zel'dovich approximation is the linear order in Lagrangian perturbation theory. The bispectrum in the Zel'dovich approximation can be recovered from Eq.(42) by setting ε = 0 (Munshi, Sahni, Starobinsky 1994).
⟨F2⟩3D = ε/3 + 2/3 ; ⟨F2⟩2D = ε/4 + 3/4.    (43)
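The angular averages in Eq.(43) follow from ⟨μ⟩ = 0 with ⟨μ²⟩ = 1/3 in 3D and 1/2 in 2D, where μ = k1 · k2/(k1k2). A minimal Monte Carlo sketch (our own construction, with an arbitrary wavenumber ratio, since the μ-linear term averages to zero for any ratio) checks both averages against the kernel of Eq.(42):

```python
import numpy as np

def f2_gamma(mu, ratio, eps):
    """Second-order kernel of Eq.(42); mu = cos of the angle between k1 and
    k2, ratio = k1/k2, eps the gamma-model parameter (eps = 3/7 is GR)."""
    return (0.5 * (1 + eps)
            + 0.5 * mu * (ratio + 1.0 / ratio)
            + 0.5 * (1 - eps) * mu ** 2)

rng = np.random.default_rng(42)
eps, ratio = 0.3, 1.7
mu3d = rng.uniform(-1.0, 1.0, 2_000_000)               # isotropic 3D: mu uniform
mu2d = np.cos(rng.uniform(0.0, 2 * np.pi, 2_000_000))  # 2D: angle uniform
assert np.isclose(f2_gamma(mu3d, ratio, eps).mean(), eps / 3 + 2 / 3, atol=2e-3)
assert np.isclose(f2_gamma(mu2d, ratio, eps).mean(), eps / 4 + 3 / 4, atol=2e-3)
```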
The actual value of the parameter ε can be computed using the linearised Euler-Continuity-Poisson system, assuming a parametric form for the growth rate f = d ln D+/d ln a ≈ Ω^γ_M. A convenient fitting function can be obtained for values not too far from the General Relativistic (ΛCDM) values. This model can be considered a special case of Eq.(66b) with κs = 1 and ε = 1 − (4/7)λs. The smoothing includes a dependence on the spectral index. In 2D we have
C21 = R2 [ 4⟨F2⟩2D − (1/2)(n + 2) ].    (44)
(b) Beta (β) Model: In the β model proposed by (Bernardeau & Brax 2011) the kernel F2(k1, k2) takes the form:
F2(k1, k2) = (3νs/4 − 1/2) + (1/2) (k1 · k2)/(k1k2) (k1/k2 + k2/k1) + (3/2 − 3νs/4) [(k1 · k2)/(k1k2)]^2,    (45)
where the parameter νs can be related to the parameter of Eq.(42) via ε = (3/2)νs − 2. A parametric value for νs can be obtained in a manner similar to the γ model; however, we leave it unspecified. The angular average gives ⟨F2⟩3D = νs/2 and similarly ⟨F2⟩2D = (3νs/2 + 1)/4 in 2D. In these models νs can in general be a function of z as well as of the wave-number k. This model was also recently used in (Munshi 2017) for the computation of a related statistic known as the integrated bispectrum. The expression for C21 has the following form:
C21 = R2 [ (3/2)νs + 1 − (1/2)(n + 2) ].    (46)
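Both models reduce to the GR value C21/R2 = 24/7 − (n + 2)/2 when ε = 3/7 or, equivalently, νs = 34/21 via ε = (3/2)νs − 2. The snippet below (a minimal sketch, with R2 set to unity) encodes Eq.(44) and Eq.(46) and checks this reduction:

```python
# Tree-level C21 (per unit R2) for the gamma and beta models, Eq.(44) and
# Eq.(46). For the GR values eps = 3/7 (equivalently nu_s = 34/21, via
# eps = 3/2 nu_s - 2) both reduce to 24/7 - (n+2)/2.
def c21_gamma(eps, n):
    return 4 * (eps / 4 + 3 / 4) - 0.5 * (n + 2)   # 4 <F2>_2D - (n+2)/2

def c21_beta(nu_s, n):
    return 1.5 * nu_s + 1 - 0.5 * (n + 2)          # Eq.(46)

n = -2.0
assert abs(c21_gamma(3 / 7, n) - 24 / 7) < 1e-12
assert abs(c21_beta(34 / 21, n) - 24 / 7) < 1e-12
# the mapping eps -> nu_s makes the two models coincide term by term
assert abs(c21_gamma(3 / 7, n) - c21_beta(2 / 3 * (3 / 7 + 2), n)) < 1e-12
```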
The power spectrum too gets modified due to changes in the kernel F2(k1, k2) at one loop. The loop corrections to the linear power spectrum depend on F2(k1, k2) and thus can also be used to constrain any departure from GR.
Horndeski and Beyond Horndeski in the Perturbative Regime
Horndeski theories are scalar-tensor theories with a single propagating degree of freedom and are free from Ostrogradsky-type instabilities. They have also been extended by considering what are known as the degenerate higher-order scalar-tensor (DHOST) theories. The simplest extensions in the context of non-degenerate scenarios are also known as the Gleyzes-Langlois-Piazza-Vernizzi or GLPV theories (Gleyzes et al. 2015a,b). The second-order kernel in these scenarios includes a scale-dependent additional term which changes the bispectrum (Hirano, Kobayashi, Tashiro, Yokoyama 2017) and can be constrained using the statistics discussed here.
F2(k1, k2, z) = κs(z) αs(k1, k2) − (2/7) λs(z) γs(k1, k2); (47a)
αs(k1, k2) = 1 + (1/2) (k1 · k2)(k1^2 + k2^2)/(k1^2 k2^2); γs(k1, k2) = 1 − (k1 · k2)^2/(k1^2 k2^2). (47b)
Taking angular averages we have ⟨αs(k1, k2)⟩ = 1 and ⟨γs(k1, k2)⟩ = 1/2 in 2D (2/3 in 3D), which leads to:
⟨F2⟩2D = κs(z) − (1/7) λs(z); ⟨F2⟩3D = κs(z) − (4/21) λs(z).    (48)
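The averages in Eq.(48) can be packaged into a two-line helper (the function name is ours); the GR limit κs = λs = 1 reproduces the standard values 17/21 (3D) and 6/7 (2D):

```python
# Angular averages of the Horndeski-type kernel, Eq.(47a)-(48):
# <alpha_s> = 1 and <gamma_s> = 2/3 (3D) or 1/2 (2D).
def f2_avg(kappa_s, lambda_s, dim):
    gamma_avg = {2: 0.5, 3: 2.0 / 3.0}[dim]
    return kappa_s - (2.0 / 7.0) * lambda_s * gamma_avg

# GR limit kappa_s = lambda_s = 1 reproduces the standard values
assert abs(f2_avg(1.0, 1.0, 3) - 17.0 / 21.0) < 1e-12  # = 1 - 4/21
assert abs(f2_avg(1.0, 1.0, 2) - 6.0 / 7.0) < 1e-12    # = 1 - 1/7
```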
A similar calculation in the Effective Field Theory (EFT) of dark energy framework can be found in (Cusin, Lewandowski, Vernizzi 2018). The corresponding cumulant correlator reads:
C21 = 2 ∫0^rs dr D+^4(z) w^3(r) d_A^{−(4+2n)}(r) [ κs(z) − (1/7)λs(z) − (1/2)κs(z)(n + 2) ] / [ ∫0^rs dr D+^2(z) w^2(r) d_A^{−(2+n)}(r) ]^2.    (49)
In Crisostomi, Lewandowski, Vernizzi (2019) the following equivalent parameterization of the kernel F2 was introduced:
F2(k1, k2) = Aα(z) αs(k1, k2) + Aγ(z) γs(k1, k2).    (50)
In terms of the parameters Aα(z) and Aγ(z) we have:
C21 = 2 ∫0^rs dr D+^4(z) w^3(r) d_A^{−(4+2n)}(r) [ Aα(z) + (1/2)Aγ(z) − (1/2)Aα(z)(n + 2) ] / [ ∫0^rs dr D+^2(z) w^2(r) d_A^{−(2+n)}(r) ]^2.    (51)
In general the parameters Aα(z) = κs(z) and Aγ(z) = −(2/7)λs(z) are time-dependent. For this model we have ⟨F2⟩3D = Aα + (2/3)Aγ and ⟨F2⟩2D = Aα + (1/2)Aγ. It is important to notice that these theories differ in an important way from GR and Horndeski theories. The Horndeski theories are invariant under time-dependent spatial coordinate transformations, and the form of the F2 kernel is fixed by the existence of this symmetry; many modified gravity theories fall under this category. In beyond Horndeski theories, the fluid equations and the equations of gravity possess very different symmetry properties, and the kernel F2 is structurally different. This is related to the violation by these theories of the so-called consistency relations, which are respected in GR (Peloso & Pietroni 2008). Future surveys such as the Euclid survey will be able to probe such theories beyond the consistency relations using the statistics developed here.
A detailed study of the skew-spectrum and Minkowski functionals (Munshi et al. 2012) for these models, the integrated bispectrum (Munshi et al. 2019b), as well as the related consistency relations will be presented elsewhere (Munshi et al. 2020, in preparation).
Normal-branch of Dvali, Gabadadze, Porrati (nDGP) model
The normal branch of the (Dvali, Gabadadze, Porrati 2000) model, known also as nDGP, is a prototypical example that involves Vainshtein screening. A model that is known to accurately reproduce the bispectrum was computed by (Koyama, Taruya, Hiramatsu 2009), corresponding to the case κs = 1 in Eq.(66b):
κs(z) = 1; λs(z) = 1 − (7/2) D2(z)/D+^2(z).    (52)
Here D2(z) and D+(z) are the second-order and first-order growth factors respectively, which can be computed by numerically solving the equations governing the growth of perturbations (Bose & Taruya 2018).
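A minimal sketch of the first-order growth factor, using the approximation f = d ln D+/d ln a ≈ Ω^γ_M mentioned earlier with the GR-like value γ ≈ 0.55, is given below. This is purely illustrative: in nDGP or other modified gravity scenarios both γ and the source of the second-order growth D2 differ, and a full ODE solution as in (Bose & Taruya 2018) would be needed.

```python
import numpy as np

def omega_m(a, om0=0.279):
    """Matter density parameter as a function of scale factor (flat LCDM)."""
    return om0 / (om0 + (1.0 - om0) * a ** 3)

def growth(a_final, om0=0.279, gamma=0.55, na=20000):
    """D_+(a) up to normalisation, from dlnD/dlna ~ Omega_M(a)^gamma."""
    lna = np.linspace(np.log(1e-3), np.log(a_final), na)
    f = omega_m(np.exp(lna), om0) ** gamma
    # trapezoidal integration of dlnD/dlna over lna
    return np.exp(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lna)))

d05, d10 = growth(0.5), growth(1.0)
assert d10 > d05 > 0       # growth is monotonic
assert d10 / d05 < 2.0     # suppressed relative to EdS, where D ~ a
```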
Massive Neutrinos
A small but non-negligible fraction of the cosmological matter density is provided by massive neutrinos (Lesgourgues & Pastor 2006). Massive neutrinos are known to have a significant thermal velocity distribution and a different cosmological evolutionary history in comparison to cold dark matter. The thermal dispersion in velocity results in a damping of perturbations below a length scale known as the free-streaming scale. This will be probed by future surveys with a very high degree of accuracy. In the long run, cosmological surveys are expected to provide an upper limit to the sum of the neutrino masses, which will be very useful when jointly considered with the lower limits from neutrino-oscillation experiments.
The neutrinos decouple and free-stream with large thermal velocities. The redshift znr at which neutrinos become non-relativistic depends on their mass eigenstate mν,i: 1 + znr = 1980 [mν,i/1eV]. The fractional contribution to the total matter density is denoted fν, which can be expressed as
fν ≡ Ων/ΩM = (1/ΩM,0 h^2) Σi Mν,i/93.14 eV.    (53)
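Eq.(53) and the non-relativistic transition relation above are simple enough to encode directly; the snippet below is a sketch using an illustrative mass sum of 0.06 eV (our choice, close to the minimal normal-hierarchy value) together with the fiducial ΩM and h of the simulations:

```python
# Eq.(53): fractional neutrino contribution to the matter density, and the
# non-relativistic transition redshift 1 + z_nr = 1980 [m_nu / 1 eV].
def f_nu(sum_masses_ev, om0=0.279, h=0.7):
    return sum_masses_ev / (93.14 * om0 * h ** 2)

def z_nr(mass_ev):
    return 1980.0 * mass_ev - 1.0

f = f_nu(0.06)          # illustrative mass sum of 0.06 eV
assert 0.004 < f < 0.006   # i.e. about half a percent of the matter density
assert abs(z_nr(1.0) - 1979.0) < 1e-9
```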
The total matter distribution can thus be written in terms of the cold dark matter perturbation δc and the fluctuations in the neutrino density distribution δν: δm = fcδc + fνδν; fc + fν = 1.    (54)
The resulting matter power spectrum Pmm(k) and bispectrum Bmmm(k1, k2, k3) can be expressed as (Ruggeri 2018):
Pmm(k) = fc^2 Pcc(k) + 2fνfc Pνc(k) + fν^2 Pνν(k); (55a)
Bmmm = fc^3 Bccc + fc^2 fν Bccν + fc fν^2 Bννc + fν^3 Bννν. (55b)
Here Pcc and Pνν represent the power spectra of the cold dark matter and the neutrino components, whereas Pνc is the cross-spectrum between them. We will drop the suffix 3D to avoid cluttering. We will only consider the linear-order perturbation in δν and ignore all higher-order contributions, which implies Bννν = 0. For Bccc the expression in the squeezed limit is exactly the same as derived before:
B^{2D,sq}_ccc = [ 24/7 − (1/2) d ln(k⊥^2 Pcc(k⊥))/d ln k⊥ ] Pcc(k⊥) Pcc(q3⊥).    (56a)
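For a power-law spectrum P(k) ∝ k^n the logarithmic derivative in the squeezed-limit prefactor reduces to n + 2, so the prefactor becomes 24/7 − (n + 2)/2. A minimal numerical sketch (the function name and the finite-difference step are our own choices):

```python
import numpy as np

# The squeezed-limit prefactor of Eq.(56a), 24/7 - (1/2) dln[k^2 P(k)]/dlnk,
# evaluated numerically; for P(k) ~ k^n it reduces to 24/7 - (n+2)/2.
def squeezed_prefactor(pk, k, dlnk=1e-4):
    dlog = (np.log(pk(k * np.exp(dlnk)) * (k * np.exp(dlnk)) ** 2)
            - np.log(pk(k) * k ** 2)) / dlnk
    return 24.0 / 7.0 - 0.5 * dlog

n = -2.0
prefac = squeezed_prefactor(lambda k: k ** n, 1.0)
assert abs(prefac - (24.0 / 7.0 - 0.5 * (n + 2))) < 1e-6
```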
We will next consider the mixed terms Bccν and Bννc. These contributions in terms of δc and δν can be expressed as:
Bccν(k1, k2, k3) = ⟨δc(k1)δc(k2)δν(k3)⟩ + cyc. perm.;
Bννc(k1, k2, k3) = ⟨δν(k1)δν(k2)δc(k3)⟩ + cyc. perm.    (57)
In the above equations, cyc. perm. represents cyclic permutations of the wave vectors k1, k2 and k3.
To evaluate Bννc we expand the terms perturbatively. Employing tree-level perturbation theory, the contributions are:
Bννc = Bννc,112(k1, k2, k3) + Bννc,112(k2, k3, k1) + Bννc,112(k3, k1, k2).    (58)
In our notation, Bννc,112(k1, k2, k3) ≡ ⟨δν^(1)(k1) δν^(1)(k2) δc^(2)(k3)⟩, and similarly for the other terms. In terms of the second-order kernel F2(k1, k2) we have:
Bννc,112(k1, k2, k3) = 2 F2(k1, k2) Pνc(k1) Pνc(k2).    (59)
The other terms can be recovered by cyclic permutation of the wavenumbers. In the squeezed limit we have:
B^{2D,sq}_ννc = [ 24/7 − (1/2) d ln(k⊥^2 Pνc(k⊥))/d ln k⊥ ] Pνc(k⊥) Pνc(q3⊥).    (60)
Finally we turn to Bccν . The perturbative contributions are as follows:
Bccν = 2 [ F2(k1, k2) Pcc(k1) Pcν(k2) + cyc. perm. ].    (61)
Going through an elaborate algebraic manipulation we arrive at the squeezed limit:
B^{2D,sq}_ccν = [ 24/7 − (1/2) d ln(k⊥^2 Pcc(k⊥))/d ln k⊥ ] Pcc(k⊥) Pcc(q3⊥) + [ 24/7 − (1/2) d ln(k⊥^2 Pcν(k⊥))/d ln k⊥ ] Pcν(k⊥) Pcc(q3⊥).    (62)
In future it will be interesting to study the effect of neutrino mass on the bispectrum using simulations, when all-sky lensing maps for such cosmologies become available (Liu et al. 2018; Coulton et al. 2018).
Clustering Quintessence
Quintessence (Tsujikawa 2013) is the most popular dynamical model of dark energy, in which the potential energy of a single scalar field drives the accelerated expansion of the Universe. The quintessence model differs from the cosmological constant scenario in allowing for a different temporal dependence of the observables. The scalar field in most quintessence models is considered homogeneous and is typically minimally coupled. The sound speed of the scalar field in these models equals the speed of light, which prevents any clustering below the horizon scale. However, extensions of such models with sound speed vanishing or lower than the speed of light have also been considered; these are known as the clustering quintessence models (Sefusatti & Vernizzi 2011; Bassel et al. 2001). Future large-scale structure surveys can be used to differentiate between these two scenarios. We use our formalism to derive the changes in the bispectrum in the squeezed limit in these models, quoting the growth factor and the kernel F2 from (Sefusatti & Vernizzi 2011):
D+/a = (5/2) ΩM [ ΩM^{4/7} + (3/2)ΩM + (1/70 − ((1 + w)/4) ΩQ)(1 + ΩM/2) ]^{−1}.    (63a)
Here, ΩQ and ΩM are the density parameters related to quintessence and dark matter. The corresponding linear growth rates are denoted by DQ+ and D+. The parameters εs = (ΩQ/ΩM)(DQ,+/D+) and νs can also be expressed in terms of ΩQ and ΩM and depend on redshift z.
F2(k1, k2, η) = νs/2 + (1/2)(1 − εs) (k1 · k2)/(k1k2) (k1/k2 + k2/k1) − (1/2)(1 − εs − νs/2) [ 1 − 3 ((k1 · k2)/(k1k2))^2 ].    (64a)
Thus, two different parameters εs(z) and νs(z) are needed to describe the tree-level bispectrum in this model:
C21 = ∫0^rs dr w^3(r) d_A^{−(4+2n)}(r) D+^4(z) [ (1/4)(1 − εs) + (3/8)νs − (1/2)(1 − εs)(n + 2) ] / [ ∫0^rs dr w^2(r) d_A^{−(2+n)}(r) D+^2(z) ]^2.    (65a)
Typically at low redshift, for some values of w, the parameter εs can reach up to 10%, which can lead to roughly a 10% correction to the bispectrum; this must be accounted for in high-precision measurements from future surveys.
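The smoothing-independent coefficients of Eq.(65a), (1/4)(1 − εs) + (3/8)νs in 2D and νs/2 in 3D, are the angular averages of the kernel of Eq.(64a); a minimal Monte Carlo sketch (illustrative parameter values of our own choosing) verifies both:

```python
import numpy as np

def f2_quint(mu, ratio, nu_s, eps_s):
    """Clustering-quintessence kernel of Eq.(64a); mu = cos angle, ratio = k1/k2."""
    return (nu_s / 2
            + 0.5 * (1 - eps_s) * mu * (ratio + 1.0 / ratio)
            - 0.5 * (1 - eps_s - nu_s / 2) * (1 - 3 * mu ** 2))

rng = np.random.default_rng(1)
nu_s, eps_s, ratio = 1.5, 0.05, 1.3   # illustrative values
mu3d = rng.uniform(-1, 1, 2_000_000)
mu2d = np.cos(rng.uniform(0, 2 * np.pi, 2_000_000))
assert np.isclose(f2_quint(mu3d, ratio, nu_s, eps_s).mean(),
                  nu_s / 2, atol=2e-3)                           # <F2>_3D
assert np.isclose(f2_quint(mu2d, ratio, nu_s, eps_s).mean(),
                  0.25 * (1 - eps_s) + 0.375 * nu_s, atol=2e-3)  # <F2>_2D
```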
Bispectrum in General Scalar-tensor Theories
Next, we consider a phenomenological fitting function. The second-order perturbative analysis of general scalar-tensor theories was initially performed by (Hirano, Kobayashi, Tashiro, Yokoyama 2017), and was later extended to smaller non-perturbative scales using a fitting function (Namikawa, Bouchet & Taruya 2018). Using this fitting function we can compute C21 in a class of models represented by the following expression for F2(k1, k2, z), replacing F2(k1, k2) in Eq.(1c):
F2(k1, k2, z) = [ κs(z) − (2/7)λs(z) ] a(k1, z)a(k2, z) + (1/2)κs(z) (k1 · k2)/(k1k2) (k1/k2 + k2/k1) b(k1, z)b(k2, z) + (2/7)λs(z) [(k1 · k2)/(k1k2)]^2 c(k1, z)c(k2, z).    (66a)
The functions κs(z) and λs(z) are approximated using the functional forms of Eq.(66b), where ξκ and ξλ are free parameters that can be estimated from observational data. The functional forms for a, b and c are assumed to be the same as their ΛCDM forms (Scoccimarro & Couchman 2001; Gil-Marin et al. 2011), which interpolate between the perturbative regime and the highly nonlinear regime assumed to be described by Hyper-Extended Perturbation Theory (Scoccimarro & Frieman 2012). To be consistent with the literature we have used κs to denote one of the parameters; it should not be confused with the weak lensing convergence κ, as the meaning will be obvious from the context. For κs(z) = 1 and λs(z) = 1 or, equivalently, ξκ = 0 and ξλ = 0, we recover the case of General Relativity (GR) presented in Eq.(1c). As discussed before, the Horndeski theories are the most general class of scalar-tensor theories which are non-degenerate and lead to second-order equations of motion in 4D. In these models λs can deviate from unity, though κs = 1 still remains valid. A generalization of Horndeski (Hordenski 1974) theory leads to a class of models known as "beyond Horndeski" models (Gleyzes et al. 2015a,b; Langois & Noui 2016), in which both κs and λs can deviate from unity; the Vainshtein mechanism, which recovers GR at nonlinear scales, has also been considered in these scenarios. At high z both classes of theories converge to GR, as expected. Thus testing GR, which corresponds to λs = κs = 1, reduces to constraining the deviation of λs and κs from unity. The functional forms for κs and λs are adopted from (Namikawa et al. 2018) and converge to GR at high z as expected.
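A minimal sketch of the parameterization of Eq.(66b), κs(z) = ΩM(z)^ξκ and λs(z) = ΩM(z)^ξλ, makes the high-z convergence to GR explicit (the fiducial ΩM,0 is the simulation value; the specific ξ choices below are the beyond-Horndeski-like case ξκ = ξλ = 1 used for illustration):

```python
# Eq.(66b): kappa_s(z) = Omega_M(z)^xi_kappa, lambda_s(z) = Omega_M(z)^xi_lambda,
# which converge to GR (kappa_s = lambda_s = 1) at high z where Omega_M -> 1.
def omega_m_of_z(z, om0=0.279):
    return om0 * (1 + z) ** 3 / ((1 + z) ** 3 * om0 + (1.0 - om0))

def kappa_lambda(z, xi_kappa, xi_lambda, om0=0.279):
    om = omega_m_of_z(z, om0)
    return om ** xi_kappa, om ** xi_lambda

k_lo, l_lo = kappa_lambda(0.0, 1.0, 1.0)    # beyond-Horndeski-like choice
k_hi, l_hi = kappa_lambda(20.0, 1.0, 1.0)
assert k_lo < 1.0 and l_lo < 1.0                          # departs from GR today
assert abs(k_hi - 1.0) < 0.01 and abs(l_hi - 1.0) < 0.01  # GR at high z
```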
We will next focus on computing the second-order vertex ν2, as defined in Eq.(21), for both 3D and 2D. Unlike in the case of GR, in general these vertices have a redshift dependence. To compute them we start by noticing that in both three and two dimensions ⟨k1 · k2/(k1k2)⟩ = 0, while in 2D ⟨(k1 · k2/(k1k2))^2⟩ = 1/2. In the following we will ignore smoothing, as the correction terms involved are exactly the same as the ones presented before.
In the quasilinear regime the functions a, b and c tend to unity. In this limiting case the departure from GR is encoded only in the redshift-dependent factors, and the expression for C21 is identical to Eq.(49) with the specific forms for κs and λs given by Eq.(66b). Substituting κs(z) = 1 and λs(z) = 1 we recover the unsmoothed results for GR. The smoothing in 3D and 2D will introduce terms involving factors of (n + 3) in Eq.(40a)-Eq.(40b) and (n + 2) in Eq.(26a)-Eq.(27b). The results for specific models in 3D and 2D are shown in Figure-4 and Figure-5 respectively. While for GR the F2 is independent of redshift z, for Horndeski and beyond Horndeski theories F2 depends on redshift; at higher z they become identical to GR, as expected. In Figure-5 the ⟨F2⟩ for the 2D cylindrical collapse is plotted as a function of z, and its pattern of evolution is the same as in 3D. The effect of line-of-sight projection is encoded in the factor R2(z), which is shown in the right panel.
Although the results for higher-order spectra are known to an arbitrary order in GR, similar results for most modified gravity theories are known mostly to second order; going beyond third order in general requires an order-by-order calculation. While we have considered the statistics of the 3D density field δ and the resulting convergence κ, similar results can be obtained for the divergence of the peculiar velocity.
The tests involving bispectrum-related statistics presented here can further tighten the constraints obtained using the linear growth rate alone. This is particularly important as no strong constraints on λs and κs currently exist. Indeed, there are no upper or lower limits for κs based on theoretical expectations.
Before we conclude this section, we would like to point out that the two parameters used in defining clustering quintessence, i.e. νs and εs (or Aα and Aγ for the case of DHOST theories), can be independently constrained using 3D and 2D measurements. This is because the statistic C21 depends on νs and εs in a different manner in 3D and 2D. We have concentrated on projected or 2D surveys in this paper, but similar results will be presented for 3D surveys in a separate article.
CONCLUSIONS AND FUTURE PROSPECTS
We have computed the skew-spectrum (see Eq.(10a)) and kurt-spectrum (Eq.(15a)) at low ℓ for the analysis of weak lensing convergence or κ maps. These spectra generalize the one-point cumulants, e.g. the skewness and kurtosis defined in §3, and are often used in the literature for analyzing higher-order non-Gaussianity of cosmological maps. They capture some of the essential properties of the full bispectrum or trispectrum, which are more difficult to estimate. In real space these spectra correspond to cumulant correlators that can be computed at leading order using tree-level perturbation theory in the large-separation limit. In this limit these spectra can be computed to arbitrary order using tree-level perturbative calculations, without the need for any phenomenological fitting functions or extensions of the perturbative calculation. Using the flat-sky approximation and Eulerian perturbative results based on a generating-function approach, we show how to compute higher-order spectra to arbitrary order. We test these results for the lower-order spectra, namely the skew- and kurt-spectra, against state-of-the-art all-sky weak lensing maps, and find that our results are in good agreement. These results will be valuable in analyzing higher-order statistics from future all-sky weak lensing surveys such as the Euclid survey. The presence of a mask, inevitable for near all-sky surveys, introduces mode mixing. Unless corrected, the mode mixing introduced by a mask can be a source of confusion while analyzing the higher-order spectra, as they encode information about gravity-induced mode-coupling. We have presented a generalization of the existing method typically used in the study of the ordinary power spectrum to construct unbiased estimates of higher-order spectra, Eq.(41a)-Eq.(41b).
The parameters Cpq computed for 3D weak lensing will be important when photo-z information is available. The statistics introduced here will be useful in analyzing non-Gaussianity in such a context; we will present the results of such an analysis in future work. The results presented here can be generalized using a 3D Fourier-Bessel transform or a 3D flat-sky formalism. As noted before, the 3D analysis allows the factorization Cpq = Cp1Cq1, and the dependence on the spectral index n is different, so 2D and 3D results will provide independent information as well as much-needed consistency checks and tests for possible systematics.
Any modification of gravity leaves a detectable signature at the level of the bispectrum. Though such signatures are less prominent than modifications at the level of the power spectrum, they have recently attracted a lot of attention in the context of the CMB lensing bispectrum (Namikawa et al. 2018). Similar investigations in the context of weak lensing are currently being pursued using various statistical estimators. Various techniques were adopted to extend perturbative results derived in the context of General Relativity (GR); extensions to modified gravity scenarios were implemented by introducing more freedom to the kernels and calibrating them using numerical simulations (Bose & Taruya 2018). Expressions for the bispectrum exist for both types of modified gravity scenarios, i.e. models with the Vainshtein screening mechanism, which include the DGP model as well as the Horndeski (Hordenski 1974) and beyond Horndeski theories (Gleyzes et al. 2015a,b; Langois & Noui 2016). In the other class of models, i.e. models with Chameleon screening, which include the Hu-Sawicki f(R) model (Hu & Sawicki 2016), the bispectrum from simulations can be successfully reproduced using the GR expression but with a suitable modification of the power spectrum. We will extend the results derived here to these modified gravity scenarios as well as to scenarios involving massive neutrinos.
The position-dependent bispectrum and its higher-order generalization at the level of the trispectrum have an exact one-to-one correspondence with the statistics studied in this paper. Indeed, the expressions for the integrated bispectrum and the skew-spectrum at low ℓ are identical; however, the physical interpretation is different, and the expressions at fourth order are not the same. The integrated bispectrum, or equivalently the position-dependent power spectrum, probes the influence of large-scale modes on small-scale structure. The cumulant correlators in the large-separation limit, as well as their harmonic counterparts, namely the skew-spectrum and kurt-spectra, probe dynamics mainly at the scale of smoothing. Comparing results from these two statistics can provide useful cross-checks at each order.
Finite sky coverage can introduce bias in our estimators. The scatter and bias introduced by finite survey size have been studied in great detail for galaxy surveys and to a lesser extent for weak lensing surveys (Munshi & Coles 2003). They are less dominant in the quasi-linear regime, where the variance is small, which is the limiting case we have studied here.
In our study we have assumed that the bispectrum is of even parity. Many studies in the recent past have pointed out the possible existence of an odd-parity bispectrum (Munshi et al. 2012). Such a bispectrum does not arise from 3D density perturbations; however, signatures of such contributions can be used to test for the possible existence of systematics.
In a recent work (Barthelemy 2019), it was shown that nulling can be used effectively to improve the accuracy of perturbative calculations by reducing the cross-talk between quasilinear and nonlinear scales. These calculations were performed in real space, focusing primarily on one-point cumulants and the PDF. In contrast, our results here concern primarily two-point correlators and their associated spectra in the Fourier domain. Applying the nulling before computing the spectra is expected to improve the validity of the perturbative results.
Last but not least, the next generation of CMB Stage-IV experiments will be able to map the projected lensing potential all the way to the surface of last scattering. It is expected that the results obtained in this paper will be valuable in analyzing higher-order statistics of maps obtained from such experiments (Abajazajian et al. 2018). However, in this case the estimators described here will have to be optimized to tackle the low signal-to-noise of higher-order statistics of the CMB. The post-Born corrections (Lewis & Pratten 2016) play an important role in higher-order statistics of the CMB; for realistic comparison against observations such corrections should be included.
ACKNOWLEDGMENT
DM is supported by a grant from the Leverhulme Trust at MSSL. It is a pleasure for DM to thank F. Bouchet, T. D. Kitching, T. Namikawa, R. Takahashi, A. Taruya and F. Vernizzi for many useful discussions. We would also like to thank R. Takahashi for making the lensing maps publicly available, and R. Schoenrich for careful reading of the draft and many suggestions that greatly improved the presentation. DM would also like to thank the organizers of the Euclid Theory Working Group Meeting (8th-9th April 2019) in Oxford.
Figure 1. Examples of simulated κ maps used in our study. The left panel corresponds to zs = 0.5 while the right panel corresponds to zs = 2.0. The maps were generated at a resolution of N_side = 4096. See §8 for a more detailed discussion of the construction of the maps used in our study.
Figure 2. The skew-spectrum S^(21)_ℓ defined in Eq.(10a)-Eq.(10b) is shown as a function of the harmonic ℓ. From top to bottom the curves correspond to source redshifts zs = 0.5, 1.0, 1.5 and 2.0 respectively. A total of 10 simulations were used to compute S^(21)_ℓ. The straight lines at the left correspond to predictions from perturbation theory encapsulated in Eq.(27a)-Eq.(27b). We have assumed a power-law power spectrum P_δ(k) ∝ k^n with n = −2.0 (dot-dashed lines). See text for details.
The kurt-spectrum K^(31) computed in the presence of a mask and the all-sky estimate K̂^(31) are related through a similar expression, K̂^(31) = M^{−1} K^(31). This has also been generalized to reconstruct the Minkowski Functionals in an order-by-order manner (Munshi et al. 2012). Two equivalent techniques for flat-sky PCLs are developed in (Asgari et al. 2012) and (Hikage et al. 2012).
Figure 3. The kurt-spectrum K^(31)_ℓ defined in Eq.(15a) is shown as a function of the harmonic ℓ. From top to bottom the curves correspond to source redshifts zs = 0.5, 1.0, 1.5 and 2.0 respectively. A total of 10 simulations were used to compute K^(31)_ℓ. The straight lines at the left correspond to predictions from perturbation theory encapsulated in Eq.(28a)-Eq.(28b). We have assumed a power-law power spectrum P_δ(k) ∝ k^n with n = −2. See text for details.
Figure 4. The second-order tree-level perturbative vertex F_2(z) for 3D surveys, as given in Eq. (48), is plotted as a function of redshift z. The three cases shown correspond to Horndeski, beyond-Horndeski and GR, as indicated. The results are shown for the unsmoothed field, i.e. n = -3. We have used the parameterizations in Eq. (66b) for the various models: the Horndeski model is given by ξ_κ = 1, ξ_λ = 0; beyond-Horndeski theories by ξ_κ = 1, ξ_λ = 1; and GR by ξ_κ = 0, ξ_λ = 0. See text for more details.
(a) Gamma (γ) Model: This model is generated by modifying the Euler equation of the Euler-Continuity-Poisson system. In this model the gravitational field seen by massive particles (denoted φ_eff) differs from the gravitational potential φ that solves the Poisson equation. The two potentials are related by φ_eff = (1 + γ)φ through a parameter γ(t) on sub-horizon scales.
Figure 5. The left panel shows F_2(z) for GR, Horndeski and beyond-Horndeski as a function of redshift z in 2D. The results are plotted for n = -2, which represents the unsmoothed field. The right panel shows the corresponding C_21(z) for these models.
λ_s(z) = [Ω_M(z)]^(ξ_λ); κ_s(z) = [Ω_M(z)]^(ξ_κ); Ω_M(z) = Ω_{M,0}(1 + z)^3 / ((1 + z)^3 Ω_{M,0} + Ω_Λ).
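A minimal sketch of these parameterizations, assuming a flat background so that Ω_Λ = 1 − Ω_{M,0}; the function names λ_s, κ_s and the exponents ξ_λ, ξ_κ are taken from the expressions and figure captions above (ξ_κ = ξ_λ = 0 reproduces GR).

```python
import numpy as np

def omega_m(z, omega_m0=0.3):
    """Matter density parameter Omega_M(z) in a flat LCDM background."""
    omega_l = 1.0 - omega_m0
    return omega_m0 * (1 + z) ** 3 / ((1 + z) ** 3 * omega_m0 + omega_l)

def lambda_s(z, xi_lambda, omega_m0=0.3):
    """lambda_s(z) = [Omega_M(z)]^xi_lambda."""
    return omega_m(z, omega_m0) ** xi_lambda

def kappa_s(z, xi_kappa, omega_m0=0.3):
    """kappa_s(z) = [Omega_M(z)]^xi_kappa."""
    return omega_m(z, omega_m0) ** xi_kappa

# GR: xi_kappa = xi_lambda = 0, so both functions are identically 1.
assert lambda_s(1.0, 0.0) == 1.0 and kappa_s(1.0, 0.0) == 1.0
```

At z = 0 the expression reduces to Ω_M(0) = Ω_{M,0}, and Ω_M(z) → 1 at high redshift, as expected for a matter-dominated era.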
Σ_3 = 36/7; Σ_4 = 2540/49; Σ_5 = 793; Σ_6 = 16370.
c 0000 RAS, MNRAS 000, 000-000
http://cosmo.phys.hirosaki-u.ac.jp/takahasi/allsky raytracing/
https://healpix.jpl.nasa.gov/
K. Abajazajian et al. [arXiv/1907.04437]
T. Abbott, F. B. Abdalla, S. Allam, et al., 2016, Phys. Rev. D, 94, 022001 [arXiv/1507.0552]
Virgo, LIGO Scientific Collaboration, B. P. Abbott et al., 2017, PRL, 119, 161101 [arXiv/1710.05832]
M. Asgari, A. Taylor, B. Joachimi, T. D. Kitching [arXiv/1612.04664]
L. Amendola et al., 2013, Living Rev. Relativity, 16, 6 [arXiv/1206.1225]
R. Balian, R. Schaeffer, 1989, A&A, 220, 1
A. Barthelemy, S. Codis, C. Uhlemann, F. Bernardeau, R. Gavazzi [arXiv/1909.02615]
N. Bartolo, E. Komatsu, S. Matarrese, A. Riotto, 2004, Physics Report, 402, 103 [astro-ph/0406398]
T. Baker et al., 2017, Phys. Rev. Lett., 119, 251301 [arXiv/1510.06930]
T. Basse, O. E. Bjalde, Y. Y. Y. Wong [arXiv/1009.0010]
F. Bernardeau, 1992, ApJ, 392, 1
F. Bernardeau, 1994, A&A, 291, 697 [astro-ph/9403020]
F. Bernardeau, 1994, ApJ, 427, 51 [astro-ph/9311066]
F. Bernardeau, 1995, A&A, 301, 309 [astro-ph/9502089]
F. Bernardeau, 1994, ApJ, 433, 1 [astro-ph/9312026]
F. Bernardeau, 1996, A&A, 312, 11 [astro-ph/9602072]
F. Bernardeau, P. Brax, 2011, JCAP, 1106, 019 [arXiv/1102.1907]
F. Bernardeau, P. Reimberg, 2016, Phys. Rev. D, 94, 063520 [arXiv/1511.08641]
F. Bernardeau, Y. Mellier, L. van Waerbeke, 2002, A&A, 389, L28 [astro-ph/0201032]
F. Bernardeau, S. Colombi, E. Gaztanaga, R. Scoccimarro, 2002, Physics Report, 367, 1 [astro-ph/0112551]
F. Bernardeau, L. van Waerbeke, Y. Mellier, 2003, A&A, 397, 405 [astro-ph/0201029]
B. Bose, A. Taruya, 2018, JCAP, 1810, 019 [astro-ph/1808.01120]
E. Calabrese, J. Smidt, A. Amblard, A. Cooray, A. Melchiorri, P. Serra, A. Heavens, D. Munshi, 2010, Phys. Rev. D, 81, 3529 [arXiv/0909.1837]
P. G. Castro, A. F. Heavens, T. D. Kitching, 2005, Phys. Rev. D, 72, 023516 [astro-ph/0503479]
W. R. Coulton, J. Liu, M. S. Madhavacheril, V. Bhm, D. N. Spergel [arXiv/1810.02374]
T. Clifton, P. G. Ferreira, A. Padilla, C. Skordis, 2012, Physics Report, 513, 1 [astro-ph/1106.2476]
P. Creminelli, F. Vernizzi, 2017, Phys. Rev. Lett., 119, 251302 [arXiv/1710.05877]
M. Crisostomi, M. Lewandowski, F. Vernizzi, 2019, Phys. Rev. D, 100, 024025 [arXiv/1903.11591]
M. Crisostomi, M. Lewandowski, F. Vernizzi [arXiv/1909.07366]
A. Cooray, R. Sheth, 2002, Physics Report, 372, 1 [arXiv/0206508]
G. Cusin, M. Lewandowski, F. Vernizzi, 2018, JCAP, 04, 005 [arXiv/1712.02783]
G. Cusin, M. Lewandowski, F. Vernizzi, 2018, JCAP, 04, 005 [arXiv/1710.05877]
G. Dvali, G. Gabadadze, M. Porrati, 2000, Phys. Lett. B, 485, 208
C. Deffayet, X. Gao, D. A. Steer, G. Zahariade, 2011, Phys. Rev. D, 84, 064039 [arXiv/1103.3260]
G. Efstathiou, 2004, MNRAS, 349, 603 [astro-ph/0307515]
A. Eggemeier, R. E. Smith, 2017, MNRAS, 466, 2496 [arXiv/1611.01160]
D. J. Eisenstein, D. H. Weinberg, E. Agol, et al., 2011, AJ, 142, 72 [astro-ph/1101.1529]
E. Gaztanaga, F. Bernardeau, 1998, A&A, 331, 829 [astro-ph/9707095]
H. Gil-Marín, C. Wagner, F. Fragkoudi, R. Jimenez, L. Verde [arXiv/1111.4477]
A. Goldstein et al., 2017, ApJ, 848, L14 [arXiv/1710.05446]
K. M. Gorski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, M. Bartelman, 2005, ApJ, 622, 759 [astro-ph/0409513]
I. Harrison, P. Coles, 2011, MNRAS, 418, L20 [arXiv/1108.1358]
A. F. Heavens, S. Gupta, 2001, MNRAS, 324, 960 [astro-ph/1610.02956]
G. W. Horndeski, 1974, International Journal of Theoretical Physics, 10, 363
J. Gleyzes, D. Langlois, F. Piazza, F. Vernizzi, 2015, JCAP, 2, 018 [arXiv/1408.1952]
J. Gleyzes, D. Langlois, F. Piazza, F. Vernizzi, 2015, Phys. Rev. Lett., 114, 211101 [arXiv/1404.6495]
S. Hirano, T. Kobayashi, H. Tashiro, S. Yokoyama, 2018, Phys. Rev. D, 97, 103517 [arXiv/1801.07885]
E. Hivon, K. M. Gorski, C. B. Netterfield, B. P. Crill, S. Prunet, F. Hansen, 2002, ApJ, 567, 2 [astro-ph/0105302]
C. Hikage, M. Takada, T. Hamana, D. Spergel, 2011, MNRAS, 412, 65 [arXiv/1004.3542]
W. Hu, 2001, Phys. Rev. D, 64, 083005 [arXiv/015117]
W. Hu, I. Sawicki, 2007, Phys. Rev. D, 76, 064004 [arXiv/0705.1158]
R. J. Jurek, C. Blake, et al., 2010, MNRAS, 401, 14 [astro-ph/0911.4246]
A. Joyce, B. Jain, J. Khoury, M. Trodden, 2015, Physics Report, 568, 1 [astro-ph/1407.0059]
T. D. Kitching, A. F. Heavens, 2017, Phys. Rev. D, 95, 063522 [arXiv/1612.00770]
K. Koyama, A. Taruya, T. Hiramatsu, 2009, Phys. Rev. D, 79, 123512 [arXiv/0902.0618]
T. Kobayashi, M. Yamaguchi, J. Yokoyama, 2011, Progress of Theoretical Physics, 126, 511 [arXiv/1105.5723]
K. Kuijken, C. Heymans, H. Hildebrandt, et al., 2015, MNRAS, 454, 3500 [astro-ph/1507.00738]
D. Langlois, K. Noui, 2016, JCAP, 02, 034 [arXiv/1510.06930]
D. Langlois, K. Noui, 2016, JCAP, 07, 016 [arXiv/1512.06820]
R. Laureijs, J. Amiaux, S. Arduini, et al., 2011, ESA/SRE(2011)12
J. Lesgourgues, S. Pastor, 2006, Physics Report, 429, 307
E. L. Lokas, R. Juszkiewicz, D. H. Weinberg, F. R. Bouchet, 1995, MNRAS, 274, 3 [astro-ph/9508032]
L. Lombriser, N. A. Lima, 2017, Phys. Lett. B, 765, 382 [arXiv/1602.07670]
J. Liu, S. Bird, J. M. Z. Matilla, J. C. Hill, Z. Haiman, M. S. Madhavacheril, D. N. Spergel, A. Petri, 2018, JCAP [arXiv/1711.10524]
T. Matsubara, 2007, ApJS, 170, 1 [astro-ph/0610536]
T. Matsubara, 2003, ApJ, 584, 1 [astro-ph/0006269]
D. Munshi, V. Sahni, A. A. Starobinsky, 1994, ApJ, 436, 517 [astro-ph/9402065]
D. Munshi, F. Bernardeau, A. L. Melott, R. Schaeffer, 1999, MNRAS, 303, 433 [astro-ph/9707009]
D. Munshi, P. Coles, A. L. Melott, 1999, MNRAS, 310, 892 [astro-ph/9902215]
D. Munshi, 2000, MNRAS, 318, 145 [astro-ph/0001240]
D. Munshi, B. Jain, 2000, MNRAS, 318, 109 [astro-ph/9911502]
D. Munshi, B. Jain, 2001, MNRAS, 322, 107 [astro-ph/9912330]
D. Munshi, P. Coles, 2003, MNRAS, 338, 846, 856 [astro-ph/0003481]
D. Munshi, P. Valageas, L. van Waerbeke, A. Heavens, 2008, Physics Report, 462, 67 [astro-ph/0612667]
D. Munshi, P. Coles, A. Cooray, A. Heavens, J. Smidt, 2011, MNRAS, 410, 1295 [arXiv/1002.4998]
D. Munshi, A. Heavens, 2010, MNRAS, 401, 2406 [astro-ph/0001240]
D. Munshi, J. Smidt, A. Heavens, P. Coles, A. Cooray, 2011, MNRAS, 411, 2241 [arXiv/0910.3693]
D. Munshi, P. Valageas, A. Cooray, A. Heavens, 2011, MNRAS, 414, 3173 [arXiv/0910.3693]
D. Munshi, J. Smidt, A. Heavens, P. Coles, A. Cooray, 2011, MNRAS, 411, 2241 [arXiv/1003.5003]
D. Munshi, A. Heavens, A. Cooray, J. Smidt, P. Coles, P. Serra, 2011, MNRAS, 412, 1993 [arXiv/0910.3693]
D. Munshi, P. Coles, A. Cooray, A. Heavens, J. Smidt, 2011, MNRAS, 414, 3173 [arXiv/1002.4998]
D. Munshi, L. van Waerbeke, J. Smidt, P. Coles, 2012, MNRAS, 419, 536 [arXiv/1103.1876]
D. Munshi, J. Smidt, A. Cooray, A. Renzi, A. Heavens, P. Coles, 2013, MNRAS, 434, 2830 [arXiv/1011.5224]
D. Munshi, 2017, JCAP, 01, 049 [arXiv/1610.02956]
D. Munshi, T. Kitching, A. Heavens, P. Coles, 2011, MNRAS, 416, 629 [arXiv/1711.04767]
D. Munshi, T. Namikawa, T. D. Kitching, J. D. McEwen, R. Takahashi, F. R. Bouchet, A. Taruya, B. Bose, 2020, MNRAS, 493, 3985 [arXiv/1910.04627]
D. Munshi, J. D. McEwen, T. Kitching, P. Fosalba, R. Teyssier, J. Stadel [arXiv/1902.04877]
National Research Council, 2010, New Worlds, New Horizons in Astronomy and Astrophysics, The National Academies Press [doi:10.17226/12951]
T. Namikawa, B. Bose, F. R. Bouchet, R. Takahashi, A. Taruya, 2019, Phys. Rev. D, 99, 063511 [arXiv/1812.10635]
T. Namikawa, F. R. Bouchet, A. Taruya [astro-ph/1805.10567]
T. Okamoto, W. Hu, 2002, Phys. Rev. D, 66, 063008 [arXiv/0206155]
P. Oh, D. N. Spergel, G. Hinshaw, 1999, ApJ, 510, 551 [astro-ph/9805339]
A. Peel, C.-A. Lin, F. Lanusse, A. Leonard, J.-L. Starck, M. Kilbinger, 2017, A&A, 599, 79 [arXiv/1612.02264]
M. Peloso, M. Pietroni, 2014, JCAP, 04, 011 [astro-ph/0612667]
Planck Collaboration, 2014, A&A, 571, A16 [astro-ph/1303.5076]
Planck Collaboration, 2016, A&A, 594, A13 [astro-ph/1502.01589]
Planck Collaboration, 2016, A&A, 594, A17 [astro-ph/1502.01592]
G. Pratten, A. Lewis, 2016, JCAP, 08, 047 [arXiv/1905.1136]
M. A. Riquelme, D. N. Spergel, 2007, ApJ, 661, 672 [arXiv/1002.4998]
R. Ruggeri, E. Castorina, C. Carbone, E. Sefusatti, 2018, JCAP, 03, 003 [arXiv/1712.02334]
J. Sakstein, B. Jain, 2017, Phys. Rev. Lett., 119, 251303 [arXiv/1710.05893]
R. Scoccimarro, H. M. P. Couchman, 2001, MNRAS, 325, 4 [arXiv/0902.0618]
R. Scoccimarro, J. A. Frieman, 1999, ApJ, 520, 35 [astro-ph/9811184]
E. Sefusatti, F. Vernizzi, 2011, JCAP, 1103, 047 [arXiv/1101.1026]
J. Smidt, A. Amblard, C. T. Byrnes, A. Cooray, A. Heavens, D. Munshi, 2010, Phys. Rev. D, 81, 123007 [arXiv/0909.1837]
I. Szapudi, S. Prunet, D. Pogosyan, A. S. Szalay, J. R. Bond, 2001, ApJ, 548, 115
I. Szapudi, A. S. Szalay, 1999, ApJ, 515, L43 [astro-ph/9702015]
R. Takahashi, T. Hamana, M. Shirasaki, T. Namikawa, T. Nishimichi, K. Osato, K. Shiroyama, 2017, ApJ, 850, 24 [astro-ph/1706.01472]
S. Tsujikawa, 2013, Class. Quant. Grav., 30, 214003 [astro-ph/1304.1961]
J. A. Tyson, D. M. Wittman, J. F. Hennawi, D. N. Spergel, 2003, Nuclear Physics B Proceedings Supplements, 124, 21 [astro-ph/0209632]
P. Reimberg, F. Bernardeau, 2018, Phys. Rev. D, 97, 023524 [arXiv/1708.00252]
C. Uhlemann, S. Codis, C. Pichon, F. Bernardeau, P. Reimberg, 2016, MNRAS, 460, 1529 [arXiv/1512.05793]
C. Uhlemann, C. Pichon, S. Codis, B. L'Huillier, J. Kim, F. Bernardeau, C. Park, S. Prunet, 2018, MNRAS, 477, 2772 [arXiv/1711.04767]
A. J. Weiss, A. Schneider, R. Sgier, T. Kacprzak, A. Amara, A. Refregier [arXiv/1905.1136]
D. Munshi, P. Valageas, L. van Waerbeke, A. Heavens, 2008, Phys. Rept., 462, 67 [astro-ph/0612667]
| [] |
[
"Anatomical Data Augmentation via Fluid-based Image Registration",
"Anatomical Data Augmentation via Fluid-based Image Registration"
] | [
"Zhengyang Shen \nDepartment of Computer Science\nUNC Chapel Hill\n\n",
"Zhenlin Xu \nDepartment of Computer Science\nUNC Chapel Hill\n\n",
"Sahin Olut \nDepartment of Computer Science\nUNC Chapel Hill\n\n",
"Marc Niethammer \nDepartment of Computer Science\nUNC Chapel Hill\n\n"
] | [
"Department of Computer Science\nUNC Chapel Hill\n",
"Department of Computer Science\nUNC Chapel Hill\n",
"Department of Computer Science\nUNC Chapel Hill\n",
"Department of Computer Science\nUNC Chapel Hill\n"
] | [] | We introduce a fluid-based image augmentation method for medical image analysis. In contrast to existing methods, our framework generates anatomically meaningful images via interpolation from the geodesic subspace underlying given samples. Our approach consists of three steps: 1) given a source image and a set of target images, we construct a geodesic subspace using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model; 2) we sample transformations from the resulting geodesic subspace; 3) we obtain deformed images and segmentations via interpolation. Experiments on brain (LPBA) and knee (OAI) data illustrate the performance of our approach on two tasks: 1) data augmentation during training and testing for image segmentation; 2) one-shot learning for single atlas image segmentation. We demonstrate that our approach generates anatomically meaningful data and improves performance on these tasks over competing approaches. Code is available at https://github.com/uncbiag/easyreg. | 10.1007/978-3-030-59716-0_31 | [
"https://arxiv.org/pdf/2007.02447v1.pdf"
] | 220,364,170 | 2007.02447 | 027763a8adfeeb670ce88a1d2d6d60afbc245a5e |
Anatomical Data Augmentation via Fluid-based Image Registration
Zhengyang Shen
Department of Computer Science
UNC Chapel Hill
Zhenlin Xu
Department of Computer Science
UNC Chapel Hill
Sahin Olut
Department of Computer Science
UNC Chapel Hill
Marc Niethammer
Department of Computer Science
UNC Chapel Hill
Anatomical Data Augmentation via Fluid-based Image Registration
We introduce a fluid-based image augmentation method for medical image analysis. In contrast to existing methods, our framework generates anatomically meaningful images via interpolation from the geodesic subspace underlying given samples. Our approach consists of three steps: 1) given a source image and a set of target images, we construct a geodesic subspace using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model; 2) we sample transformations from the resulting geodesic subspace; 3) we obtain deformed images and segmentations via interpolation. Experiments on brain (LPBA) and knee (OAI) data illustrate the performance of our approach on two tasks: 1) data augmentation during training and testing for image segmentation; 2) one-shot learning for single atlas image segmentation. We demonstrate that our approach generates anatomically meaningful data and improves performance on these tasks over competing approaches. Code is available at https://github.com/uncbiag/easyreg.
Introduction
Training data-hungry deep neural networks is challenging for medical image analysis where manual annotations are more difficult and expensive to obtain than for natural images. Thus it is critical to study how to use scarce annotated data efficiently, e.g., via data-efficient models [30,11], training strategies [20] and semi-supervised learning strategies utilizing widely available unlabeled data through self-training [3,16], regularization [4], and multi-task learning [7,31,36].
An alternative approach is data augmentation. Typical methods for medical image augmentation include random cropping [12], geometric transformations [18,15,24] (e.g., rotations, translations, and free-form deformations), and photometric (i.e., color) transformations [14,21]. Data-driven data augmentation has also been proposed, to learn generative models for synthesizing images with new appearance [28,9], to estimate class/template-dependent distributions of deformations [10,19,34] or both [35,6]. Compared with these methods, our approach focuses on a geometric view and constructs a continuous geodesic subspace as an estimate of the space of anatomical variability.
Fig. 1. Illustration of our fluid-based data augmentation using a 1D (left) and a 2D (right) geodesic subspace. We assume a registration from a source to a target image in unit time. In 1D, we can sample along the geodesic path (t ∈ [0, 1]) between the source (t = 0) and the target image (t = 1). We can also extrapolate (t ∉ [0, 1]). In the 2D case, a source and two target images define a two-dimensional geodesic subspace.

Compared with the high dimensionality of medical images, anatomical variability is often assumed to lie in a much lower dimensional space [1]. Though how to directly specify this space is not obvious, we can rely on reasonable assumptions informed by the data itself. We assume there is a diffeomorphic
transformation between two images, that image pairs can be connected via a geodesic path, and that appearance variation is implicitly captured by the appearance differences of a given image population. For longitudinal image data, we can approximate images at intermediate time points by interpolation or predict via extrapolation. As long as no major appearance changes exist, diffeomorphic transformations can provide realistic intermediate images 1 . Based on these considerations, we propose a data augmentation method based on fluid registration which produces anatomically plausible deformations and retains appearance differences of a given image population. Specifically, we choose the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model as our fluid registration approach. LDDMM comes equipped with a metric and results in a geodesic path between a source and a target image which is parameterized by the LDDMM initial momentum vector field. Given two initial momenta in the tangent space of the same source image, we can define a geodesic plane, illustrated in Fig. 1; similarly, we can construct higher dimensional subspaces based on convex combinations of sets of momenta [22]. Our method includes the following steps: 1) we compute a set of initial momenta for a source image and a set of target images; 2) we generate an initial momentum via a convex combination of initial momenta; 3) we sample a transformation on the geodesic path determined by the momentum; and 4) we warp the image and its segmentation according to this transformation.
Data augmentation is often designed for the training phase. However, we show the proposed approach can be extended to the testing phase, e.g., a testing image is registered to a set of training images (with segmentations) and the deep learning (DL) segmentation model is evaluated in this warped space (where it was trained, hence ensuring consistency of the DL input); the predicted segmentations are then mapped back to their original spaces. In such a setting, using LDDMM can guarantee the existence of the inverse map whereas traditional elastic approaches cannot. Contributions: 1) We propose a general fluid-based approach for anatomically consistent medical image augmentation for both training and testing. 2) We build on LDDMM and can therefore assure well-behaved diffeomorphic transformations when interpolating and extrapolating samples with large deformations. 3) Our method easily integrates into different tasks, such as segmentation and one-shot learning for which we show general performance improvements.
LDDMM Method
LDDMM [5] is a fluid-based image registration model which estimates a spatio-temporal velocity field v(t, x) from which the spatial transformation ϕ is computed by integrating ∂_t ϕ(t, x) = v(t, ϕ(t, x)), ϕ(0, x) = x. For appropriately regularized velocity fields [8], diffeomorphic transformations can be guaranteed. The optimization problem underlying LDDMM can be written as
$$v^* = \underset{v}{\operatorname{argmin}}\ \frac{1}{2}\int_0^1 \|v(t)\|_L^2\,\mathrm{d}t + \operatorname{Sim}(I(1), I_1)\quad \text{s.t.}\quad \partial_t I + \langle \nabla I, v\rangle = 0,\ I(0) = I_0, \tag{1}$$
where ∇ denotes the gradient, ⟨·, ·⟩ the inner product, and Sim(A, B) is a similarity measure between images. We note that I(1, x) = I_0 ∘ ϕ^{-1}(1, x), where ϕ^{-1} denotes the inverse of ϕ in the target image space. The evolution of this map follows ∂_t ϕ^{-1} + Dϕ^{-1} v = 0, where D is the Jacobian. Typically, one seeks a velocity field which deforms the source to the target image in unit time. To assure smooth transformations, LDDMM penalizes non-smooth velocity fields via the norm ‖v‖²_L = ⟨Lv, Lv⟩, where L is a differential operator. At optimality the following equations hold [33] and the entire evolution can be parameterized via the initial vector-valued momentum, m = L†Lv:
$$m(0)^* = \underset{m(0)}{\operatorname{argmin}}\ \frac{1}{2}\langle m(0), v(0)\rangle + \operatorname{Sim}(I_0 \circ \phi^{-1}(1), I_1), \tag{2}$$
$$\text{s.t.}\quad \partial_t\phi^{-1} + D\phi^{-1}\, v = 0,\quad \phi^{-1}(0, x) = x, \tag{3}$$
$$\partial_t m + \operatorname{div}(v)\, m + Dv^T(m) + Dm(v) = 0,\quad m(0) = m_0,\quad v = K \star m, \tag{4}$$
where we assume (L†L)^{-1} m is computed via convolution, K ⋆ m. Eq. 4 is the Euler-Poincaré equation for diffeomorphisms (EPDiff) [33], defining the evolution of the spatio-temporal velocity field based on the initial momentum m_0. The geodesic which connects the image pair (I_0, I_0 ∘ ϕ^{-1}(1)) and approximates the path between (I_0, I_1) is specified by m_0. We can sample along the geodesic path while assuring diffeomorphic transformations. As LDDMM guarantees diffeomorphic transformations, we can also obtain the inverse transformation map ϕ (defined in source image space, whereas ϕ^{-1} is defined in target image space) by solving
$$\phi(1, x) = x + \int_0^1 v(t, \phi(t, x))\,\mathrm{d}t,\quad \phi(0, x) = x. \tag{5}$$
Computing the inverse of an arbitrary displacement field, on the other hand, requires the numerical minimization of ‖ϕ^{-1} ∘ ϕ − id‖². Existence of the inverse map cannot be guaranteed for such an arbitrary displacement field.
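The integral defining ϕ can be approximated by simple Euler stepping. The sketch below does this in 2D for a stationary velocity field, which is a simplification: LDDMM evolves a time-dependent velocity via EPDiff, so this illustrates only the integration step of Eq. (5), not the full model.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_map(v, n_steps=10):
    """Euler integration of d(phi)/dt = v(phi) for a stationary 2D velocity
    field v of shape (2, H, W); returns the map phi(1) sampled on the pixel
    grid. Positions are in voxel units."""
    H, W = v.shape[1:]
    phi = np.stack(np.meshgrid(np.arange(H), np.arange(W),
                               indexing="ij")).astype(float)
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        # Interpolate the velocity at the current map positions: v(phi).
        v_at_phi = np.stack([map_coordinates(v[d], phi, order=1, mode="nearest")
                             for d in range(2)])
        phi = phi + dt * v_at_phi
    return phi

# Sanity check: a zero velocity field must return the identity map.
v = np.zeros((2, 8, 8))
phi = integrate_map(v)
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8),
                            indexing="ij")).astype(float)
assert np.allclose(phi, grid)
```

A warped image would then be obtained by resampling the source image at the positions ϕ (e.g., again with map_coordinates), which is the interpolation step used throughout the paper.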
Geodesic Subspaces
We define a geodesic subspace constructed from a source image and a set of target images. Given a dataset of size N, I^c ∈ R^D denotes an individual image, c ∈ {1, ..., N}, where D is the number of voxels. For each source image I^c, we further denote a target set of K images as I_K^c. We define

$$M_K^c := \{ m_0^{cj} \mid m_0^{cj} = \mathcal{M}(I^c, I^j),\ \mathcal{M}: \mathbb{R}^D \times \mathbb{R}^D \to \mathbb{R}^{D\times d},\ I^j \in I_K^c \}$$

as a set of K different initial momenta, where 𝓜 maps an image pair to the corresponding initial momentum via Eqs. 2-4; d is the spatial dimension. We define convex combinations of M_K^c as

$$C(M_K^c) := \Big\{ m_0^c \,\Big|\, m_0^c = \sum_{j=1}^K \lambda_j m_0^{cj},\ m_0^{cj} \in M_K^c,\ \lambda_j \ge 0\ \forall j,\ \sum_{j=1}^K \lambda_j = 1 \Big\}. \tag{6}$$
Restricting ourselves to convex combinations, instead of using the entire space defined by arbitrary linear combinations of the momenta M_K^c, allows us to retain more control over the resulting momenta magnitudes. For our augmentation strategy we simply sample an initial momentum m̃_0^c from C(M_K^c), which, according to the EPDiff Eq. 4, determines a geodesic path starting from I^c. If we set K = 2, for example, the sampled momentum parameterizes a path from a source image toward two target images, where the λ_j weigh how much the two different images drive the overall deformation. As LDDMM registers a source to a target image in unit time, we obtain interpolations by additionally sampling t from [0, 1], resulting in the intermediate deformation ϕ^{-1}_{m̃_0^c}(t) along the geodesic path starting at I^c and determined by m̃_0^c. We can also extrapolate by sampling t from R \ [0, 1]. We then synthesize images via interpolation: I^c ∘ ϕ^{-1}_{m̃_0^c}(t).
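Sampling from C(M_K^c) amounts to drawing convex weights λ_j and a time point t. Below is a minimal sketch with synthetic momenta; a flat Dirichlet draw is one convenient way to obtain uniform convex weights (the constraint in Eq. 6 only requires λ_j ≥ 0 and Σλ_j = 1).

```python
import numpy as np

def sample_momentum(momenta, rng):
    """Sample an initial momentum from the convex hull of Eq. (6).
    `momenta` has shape (K, D, d): K initial momenta for one source image,
    D voxels, d spatial dimensions. Returns the combined momentum and the
    convex weights lambda."""
    K = momenta.shape[0]
    lam = rng.dirichlet(np.ones(K))          # lam >= 0, lam.sum() == 1
    return np.tensordot(lam, momenta, axes=1), lam

rng = np.random.default_rng(0)
momenta = rng.standard_normal((3, 100, 3))   # synthetic: K=3, D=100, d=3
m0, lam = sample_momentum(momenta, rng)

assert m0.shape == (100, 3)
assert lam.min() >= 0 and abs(lam.sum() - 1.0) < 1e-9

# A time point t in [0, 1] interpolates along the geodesic; t outside
# [0, 1] extrapolates (the experiments later use t in [-1, 2]).
t = rng.uniform(-1.0, 2.0)
```

The sampled pair (m̃_0^c, t) then determines the deformation ϕ^{-1}_{m̃_0^c}(t) used to warp the image and its segmentation.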
Segmentation
In this section, we first introduce an augmentation strategy for general image segmentation (Sec. 4.1) and then a variant for one-shot segmentation (Sec. 4.2).
Data augmentation for general image segmentation
We use a two-phase data augmentation approach consisting of (1) pre-augmentation of the training data and (2) post-augmentation of the testing data. During the training phase, for each training image, I c , we generate a set of new images by sampling from its geodesic subspace, C(M c K ). This results in a set of deformed images which are anatomically meaningful and retain the appearance of I c . We apply the same sampled spatial transformations to the segmentation of the training image, resulting in a new set of warped images and segmentations. We train a segmentation network based on this augmented dataset.
During the testing phase, for each testing image, we also create a set of new images using the same strategy described above. Specifically, we pair a testing image with a set of training images to create the geodesic subspace for sampling. This will result in samples that come from a similar subspace that has been used for augmentation during training. A final segmentation is then obtained by warping the predicted segmentations back to the original space of the image to be segmented and applying a label-fusion strategy. Consequently, we expect that the segmentation network performance will be improved as it (1) is allowed to see multiple views of the same image and (2) the set of views is consistent with the set of views that the segmentation network was trained with.
Denoting the segmentation network by H, predictions for a synthesized test image are mapped back to the space of I^c via H(I^c ∘ ϕ^{-1}_{m̃_0^c}(t)) ∘ ϕ_{m̃_0^c}(t).
The final segmentation is obtained by merging all warped predictions via a label fusion strategy.
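One concrete label-fusion strategy, the one used later in the Settings paragraph, sums the warped softmax outputs and assigns the label with the largest sum. A minimal sketch:

```python
import numpy as np

def fuse_labels(warped_softmax_maps):
    """Label fusion over predictions warped back to the original image space.
    `warped_softmax_maps` has shape (S, C, ...): S augmented views, C classes.
    The fused label at each voxel is the class with the largest softmax sum."""
    summed = np.sum(warped_softmax_maps, axis=0)
    return np.argmax(summed, axis=0)

# Two views, three classes, four voxels: per-voxel accumulated evidence wins.
p = np.array([
    [[0.7, 0.1, 0.2, 0.4], [0.2, 0.8, 0.3, 0.3], [0.1, 0.1, 0.5, 0.3]],
    [[0.6, 0.2, 0.1, 0.2], [0.3, 0.7, 0.2, 0.5], [0.1, 0.1, 0.7, 0.3]],
])
assert fuse_labels(p).tolist() == [0, 1, 2, 1]
```

Summing softmax probabilities (rather than majority voting over hard labels) lets confident views outweigh uncertain ones.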
Dataset The LONI Probabilistic Brain Atlas [25] (LPBA40) dataset contains volumes of 40 healthy patients with 56 manually annotated anatomical structures. We affinely register all images to a mean atlas [13], resample to isotropic spacing of 1 mm, crop them to 196 × 164 × 196 and intensity normalize them to [0, 1] via histogram equalization. We randomly take 25 patients for training, 10 patients for testing, and 5 patients for validation.
The Osteoarthritis Initiative [29] (OAI) provides manually labeled knee images with segmentations of femur and tibia as well as femoral and tibial cartilage [2]. We first affinely register all images to a mean atlas [13], resample them to isotropic spacing of 1 mm, and crop them to 160×200×200. We randomly take 60 patients for training, 25 patients for validation, and 52 patients for testing.
To evaluate the effect of data augmentation on training datasets with different sizes, we further sample 5, 10, 15, 20, 25 patients as the training set on LPBA40 and 10, 20, 30, 40, 60 patients as the training set for OAI.
Metric We use the average Dice score over segmentation classes for all tasks in Sec. 4.1 and Sec. 4.2.
Baselines Non-augmentation is our lower-bound method. We use a class-balanced random cropping schedule during training [32]. We use this cropping schedule for all segmentation methods that we implement. We use a U-Net [23] segmentation network. Random B-Spline Transform is a transformation locally parameterized by randomizing the location of B-spline control points. Denote (·, ·) as the number of control points distributed over a uniform mesh and the standard deviation of the normal distribution; units are in mm. The three settings we use are (10^3, 3), (10^3, 4), (20^3, 2). During data augmentation, we randomly select one of the settings to generate a new example.
Settings During the training augmentation phase (pre-aug), we randomly pick a source image and K targets, uniformly sample λ_i in Eq. 6 and then uniformly sample t. For LPBA40, we set K = 2 and t ∈ [−1, 2]; for the OAI data, we set K = 1 and t ∈ [−1, 2]. For all training sets with different sizes, for both the B-Spline and the fluid-based augmentation methods and for both datasets, we augment the training data by 1,500 cases. During the testing augmentation phase (post-aug), for both datasets, we set K = 2 and t ∈ [−1, 2] and draw 20 synthesized samples for each test image. The models trained via the augmented training set are used to predict the segmentations. To obtain the final segmentation, we sum the softmax outputs of all the segmentations warped to the original space and assign the label with the largest sum. We test using the models achieving the best performance on the validation set. We use the optimization approach in [17] and the network of [26,27] to compute the mappings M on LPBA40 and OAI, respectively.

Results Fig. 4 shows the segmentation performance on the LPBA40 and the OAI datasets. For training-phase augmentation, fluid-based augmentation improves accuracy over non-augmentation and B-Spline augmentation by a large margin on the OAI dataset and results in comparable performance on the LPBA40 dataset. This difference might be due to the larger anatomical differences in the LPBA40 dataset compared to the OAI dataset; such large differences might not be well captured by inter- and extrapolation along a few geodesics. Hence, the OAI dataset may benefit more from the anatomically plausible geodesic space. When test-phase augmentation is used in addition to training augmentation, performance is further improved. This shows that the ensemble strategy used by post-aug, where the segmentation network makes a consensus decision based on different views of the image to be segmented, is effective.
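The label fusion just described (summing the warped softmax outputs and assigning the label with the largest sum) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; it is written in Scala, the only code language appearing in this document, and the image shapes are simplified to a flat list of voxels.

```scala
object LabelFusion {
  // Fuse N warped softmax predictions: preds(n)(v)(l) is the score that
  // voxel v carries label l according to the n-th augmented view.
  def fuse(preds: Seq[Array[Array[Double]]]): Array[Int] = {
    val numVoxels = preds.head.length
    val numLabels = preds.head.head.length
    Array.tabulate(numVoxels) { v =>
      // Sum the softmax scores over all views for this voxel...
      val summed = Array.tabulate(numLabels) { l => preds.map(_(v)(l)).sum }
      // ...and assign the label with the largest summed score.
      summed.indices.maxBy(i => summed(i))
    }
  }
}
```

In the paper's setting each "view" is a prediction on a deformed copy of the test image, warped back to the original space before fusion.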
In practice, we observe that high-quality inverse transformations (that map the segmentations back to the test image space) are important to achieve good performance. These inverse transformations can efficiently be computed via Eq. 5 for our fluid-based approach.
Data augmentation for one-shot segmentation
We explore one-shot learning. Specifically, we consider single atlas medical image segmentation, where only the atlas image has a segmentation, while all other images are unlabeled. We first review Brainstorm [35], a competing data augmentation framework for one-shot segmentation. We then discuss our modifications.
In Brainstorm, the appearance of a sampled unlabeled image is first transferred to atlas-space and subsequently spatially transformed by registering to another sampled unlabeled image. Specifically, a registration network H_r is trained to predict the displacement field between the atlas A and the unlabeled images. For a given image I_c, the predicted transformation to A is ϕ_c(x) = H_r(I_c, A) + x. A set of approximated inverse transformations {ϕ^{-1}_c, c ∈ 1 . . . N} from the atlas to the image set can also be predicted by the network H_r. These inverse transformations capture the anatomical diversity of the unlabeled set and are used to deform the images. Further, an appearance network H_a is trained to capture the appearance of the unlabeled set. The network is designed to output the residue r_c between the warped image I_c • ϕ_c and the atlas, r_c = H_a(A, I_c • ϕ_c). Finally, we obtain a new set of segmented images by applying the transformations to the atlas with the new appearance:
{(A + r_i) • ϕ^{-1}_j, i, j ∈ 1 . . . N}.
We modify the Brainstorm idea as follows: 1) Instead of sampling the transformation ϕ^{-1}_c, we sample ϕ^{-1}_{m^c_0}(t) based on our fluid-based approach; 2) We remove the appearance network and instead simply use {I_c • ϕ_c, c ∈ 1 . . . N} to model appearance. I.e., we retain the appearance of individual images, but deform them by going through atlas space. By construction, this results in a realistic appearance distribution. Our synthesized images are
{(I_i • ϕ_i) • ϕ^{-1}_{m^j_0}(t), m^j_0 ∈ C(M^j_K), t ∈ R, i, j ∈ 1 . . . N}.
We refer to this approach as Fluid-Aug_real.

Dataset We use the OAI dataset with 100 manually annotated images and a segmented mean atlas [13]. We only use the atlas segmentation for our one-shot segmentation experiments.
Baseline Upper-bound is a model trained from 100 images and their manual segmentations. We use the same U-net as for the general segmentation task in Sec. 4.1. Brainstorm is our baseline. We train a registration network and an appearance network separately, using the same network structures as in [35]. We sample a new training set of size 1,500 via random compositions of the appearance and the deformation. We also compare with a variant replacing the appearance network, where the synthesized set can be written as
{(I_i • ϕ_i) • ϕ^{-1}_j, i, j ∈ 1 . . . N}.
We refer to this approach as Brainstorm_real.

Settings We set K = 2 and t ∈ [−1, 2] and draw a new training set with 1,500 pairs the same way as in Sec. 4.1. We also compare with a variant where we set t = 1 (instead of randomly sampling it), which we denote Fluid-Aug_real,t=1. Further, we compare with a variant using the appearance network, where the synthesized set is {(A + r_i) • ϕ^{-1}_{m^j_0}(t), m^j_0 ∈ C(M^j_K), t ∈ R, i, j ∈ 1 . . . N}.
We refer to this approach as Fluid-Aug.

Results Fig. 4 shows better performance for fluid-based augmentation than for Brainstorm when using either real or learnt appearance. Furthermore, directly using the appearance of the unlabeled images shows better performance than using the appearance network. Lastly, randomizing over the location on the geodesic (Fluid-Aug_real) shows small improvements over fixing t = 1 (Fluid-Aug_real,t=1).
Conclusion
We introduced a fluid-based method for medical image data augmentation. Our approach makes use of a geodesic subspace capturing anatomical variability. We explored its use for general segmentation and one-shot segmentation, achieving improvements over competing methods. Future work will focus on efficiency improvements. Specifically, computing the geodesic subspaces is costly if they are not approximated by a registration network. We will therefore explore introducing multiple atlases to reduce the number of possible registration pairs.

Fig. 7. From the first to the fourth column, we visualize the segmentation results in Sec. 4 on LPBA40 (first row) and OAI (second row); from left to right: results without augmentation (non-aug), training phase augmentation (pre-aug), testing phase augmentation (post-aug), and the manual segmentations. We observe segmentation refinement after pre-aug and post-aug. For the last two columns, we compare the learnt appearance A + r_i (fifth column) and the real appearance I_i • ϕ_i (sixth column) in Sec. 4.2, where each row is a patient. The learnt appearance is smoother and hence shows less image texture. In our case, the segmentation network trained using the learnt appearance does not match the noisy testing data well.
Fig. 2. Illustration of the training phase data augmentation. Given a source image I_c and a set of target images I^c_K, a set of momenta M^c_K is first computed. Then a momentum m^c_0 is sampled from the convex combination of these momenta, C(M^c_K), defining a geodesic path starting from the source image. Lastly, a transformation is sampled on the geodesic and used to warp the source image and its segmentation.
Fig. 2 illustrates the training phase data augmentation. We first compute M^c_K by picking an image I_c, c ∈ {1 . . . N}, from a training dataset of size N and a target set I^c_K of cardinality K, also sampled from the training set. We then sample m^c_0 ∈ C(M^c_K), defining a geodesic path from which we sample a deformation ϕ^{-1}_{m^c_0}(t) at time point t. We apply the same strategy multiple times and obtain a new deformation set for each I_c, c ∈ {1 . . . N}. The new image set {I_c • ϕ^{-1}_{m^c_0}(t)}, consisting of the chosen set of random transformations of I_c, and the corresponding segmentations can then be obtained by interpolation. Fig. 3 illustrates the testing phase data augmentation. For a test image I_c and its target set I^c_K sampled from the training set, we obtain a set of transformations {ϕ^{-1}_{m^c_0}(t)}; we can therefore efficiently obtain the corresponding inverse map ϕ_{m^c_0}(t). We denote our trained segmentation network by H : R^D → R^{D×L}, which takes an image as its input and predicts its segmentation labels. Here, L is the number of segmentation labels.
Fig. 3. Illustration of the testing phase data augmentation. Given a source image I_c and a set of target images I^c_K, a geodesic subspace is determined first. A set of transformations ϕ^{-1}_{m^c_0}(t) is then sampled from this space and, at the same time, the corresponding inverse transformations ϕ_{m^c_0}(t) are obtained. A segmentation network H is applied to each warped image and the resulting segmentations H(I_c • ϕ^{-1}_{m^c_0}(t)) are warped back to the source image space. A label fusion strategy is applied to obtain the final segmentation.
Fig. 4. Segmentation performance for the segmentation tasks. The left two plots show Dice scores for the different methods with different training set sizes on the LPBA40 and OAI datasets for general segmentation. Performance increases with training set size. Fluid-based augmentation (pre-aug and post-aug) shows the best performance. The right table compares the performance for one-shot segmentation in Sec. 4.2. Fluid-based augmentation methods perform better than their Brainstorm counterparts.
Fig. 5. Comparison between inter- and extrapolation of the displacement field (top row) and geodesic inter- and extrapolation (bottom row). We show the center slices from the sagittal view of the 3D MRI knee images. For both methods, we assume they have the same transformation ϕ^{-1}(1) at t = 1. Then we compute the displacement-field based inter- and extrapolation as ϕ^{-1}_affine(t, x) = (ϕ^{-1}(1, x) − x)t + x, whereas ϕ^{-1}_LDDMM is obtained via geodesic shooting (based on solving the EPDiff Eq. 4). For large deformations, i.e., t = −3 and t = 4, affine extrapolation results in foldings while extrapolation via the LDDMM geodesic results in diffeomorphic transformations.
Fig. 6. Ablation study on testing augmentation size (first row) and the choice of K (second row). For the first row, we evaluate segmentation accuracies for different numbers of augmentation samples in the test phase. N times denotes that a testing image is deformed by N different random transformations drawn from the geodesic subspace. Segmentation accuracies start to saturate for N ≥ 20. For the second row, we evaluate the performance for different choices of K (i.e., the dimensionality of the geodesic subspace). A larger K does not necessarily result in better segmentation accuracies.
Fig. 8. Visualization of synthesized images obtained from a source image and its K = 2 geodesic subspace. From left to right: the first and the last column show the two target images. The remaining columns show synthesized images I_c • ϕ^{-1}_{m^c_0}(t). The column index (·, ·) denotes the mixture weights (λ1, λ2) in Eq. 6, determining how much target 1 and target 2 influence the overall deformation. From top to bottom: each row refers to the time t sampled along the geodesic path. For the row t = 0, the transformation is the identity, i.e., I_c = I_c • ϕ^{-1}_{m^c_0}(0). For the row t = 1, the warped image has similar anatomical structure as target 1 when (λ1, λ2) = (1.0, 0.0) while it is similar to target 2 when (λ1, λ2) = (0.0, 1.0). Columns (1.0, 0.0) and (0.0, 1.0) show samples on two geodesic paths (K = 1) toward target 1 and target 2, respectively.
One-shot segmentation performance on the OAI dataset, average Dice (std):

Method              Dice (std)
Brainstorm          79.94 (2.22)
Fluid-Aug           80.81 (2.35)
Brainstorm_real     86.83 (2.21)
Fluid-Aug_real,t=1  87.74 (1.82)
Fluid-Aug_real      88.31 (1.56)
Upper-bound         90.01 (1.58)
In some cases, for example for lung images, sliding effects need to be considered, violating the diffeomorphic assumption.
Acknowledgements: Research reported in this publication was supported by the National Institutes of Health (NIH) and the National Science Foundation (NSF) under award numbers NSF EECS1711776 and NIH 1R01AR072013. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or the NSF.
References

[1] Aljabar, P., Wolz, R., Rueckert, D.: Manifold learning for medical image registration, segmentation, and classification. In: Machine Learning in Computer-Aided Diagnosis: Medical Imaging Intelligence and Analysis, pp. 351-372. IGI Global (2012)
[2] Ambellan, F., Tack, A., Ehlke, M., Zachow, S.: Automated segmentation of knee bone and cartilage combining statistical shape knowledge and convolutional neural networks: Data from the osteoarthritis initiative. Medical Image Analysis 52, 109-118 (2019)
[3] Bai, W., Oktay, O., Sinclair, M., Suzuki, H., Rajchl, M., Tarroni, G., Glocker, B., King, A., Matthews, P.M., Rueckert, D.: Semi-supervised learning for network-based cardiac MR image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 253-260. Springer (2017)
[4] Baur, C., Albarqouni, S., Navab, N.: Semi-supervised deep learning for fully convolutional networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 311-319. Springer (2017)
[5] Beg, M.F., Miller, M.I., Trouvé, A., Younes, L.: Computing large deformation metric mappings via geodesic flows of diffeomorphisms. IJCV 61(2), 139-157 (2005)
[6] Chaitanya, K., Karani, N., Baumgartner, C.F., Becker, A., Donati, O., Konukoglu, E.: Semi-supervised and task-driven data augmentation. In: International Conference on Information Processing in Medical Imaging. pp. 29-41. Springer (2019)
[7] Chen, S., Bortsova, G., Juárez, A.G.U., van Tulder, G., de Bruijne, M.: Multi-task attention-based semi-supervised learning for medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 457-465. Springer (2019)
[8] Dupuis, P., Grenander, U., Miller, M.I.: Variational problems on flows of diffeomorphisms for image matching. Quarterly of Applied Mathematics, pp. 587-600 (1998)
[9] Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J., Greenspan, H.: GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 321, 321-331 (2018)
[10] Hauberg, S., Freifeld, O., Larsen, A.B.L., Fisher, J., Hansen, L.: Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. In: Artificial Intelligence and Statistics. pp. 342-350 (2016)
[11] Heinrich, M.P., Oktay, O., Bouteldja, N.: OBELISK — one kernel to solve nearly everything: Unified 3D binary convolutions for image analysis (2018)
[12] Hussain, Z., Gimenez, F., Yi, D., Rubin, D.: Differential data augmentation techniques for medical imaging classification tasks. In: AMIA Annual Symposium Proceedings. vol. 2017, p. 979. American Medical Informatics Association (2017)
[13] Joshi, S., Davis, B., Jomier, M., Gerig, G.: Unbiased diffeomorphic atlas construction for computational anatomy. NeuroImage 23, S151-S160 (2004)
[14] Learned-Miller, E.G.: Data driven image models through continuous joint alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(2), 236-250 (2005)
[15] Milletari, F., Navab, N., Ahmadi, S.A.: V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: 3D Vision (3DV), 2016 Fourth International Conference on. pp. 565-571. IEEE (2016)
[16] Nie, D., Gao, Y., Wang, L., Shen, D.: ASDNet: Attention based semi-supervised deep networks for medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 370-378. Springer (2018)
[17] Niethammer, M., Kwitt, R., Vialard, F.X.: Metric learning for image registration. CVPR (2019)
[18] Oliveira, A., Pereira, S., Silva, C.A.: Augmenting data when training a CNN for retinal vessel segmentation: How to warp? In: 2017 IEEE 5th Portuguese Meeting on Bioengineering (ENBENG). pp. 1-4. IEEE (2017)
[19] Park, S., Thorpe, M.: Representing and learning high dimensional data with the optimal transport map from a probabilistic viewpoint. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 7864-7872 (2018)
[20] Paschali, M., Simson, W., Roy, A.G., Naeem, M.F., Göbl, R., Wachinger, C., Navab, N.: Data augmentation with manifold exploring geometric transformations for increased performance and robustness. arXiv preprint arXiv:1901.04420 (2019)
[21] Pereira, S., Pinto, A., Alves, V., Silva, C.A.: Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Transactions on Medical Imaging 35(5), 1240-1251 (2016)
[22] Qiu, A., Younes, L., Miller, M.I.: Principal component based diffeomorphic surface mapping. IEEE Transactions on Medical Imaging 31(2), 302-311 (2011)
[23] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: MICCAI. pp. 234-241. Springer (2015)
[24] Roth, H.R., Lee, C.T., Shin, H.C., Seff, A., Kim, L., Yao, J., Lu, L., Summers, R.M.: Anatomy-specific classification of medical images using deep convolutional nets. In: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). pp. 101-104. IEEE (2015)
[25] Shattuck, D.W., Mirza, M., Adisetiyo, V., Hojatkashani, C., Salamon, G., Narr, K.L., Poldrack, R.A., Bilder, R.M., Toga, A.W.: Construction of a 3D probabilistic atlas of human cortical structures. NeuroImage 39(3), 1064-1080 (2008)
[26] Shen, Z., Han, X., Xu, Z., Niethammer, M.: Networks for joint affine and non-parametric image registration. CVPR (2019)
[27] Shen, Z., Vialard, F.X., Niethammer, M.: Region-specific diffeomorphic metric mapping. In: Advances in Neural Information Processing Systems. pp. 1096-1106 (2019)
[28] Shin, H.C., Tenenholtz, N.A., Rogers, J.K., Schwarz, C.G., Senjem, M.L., Gunter, J.L., Andriole, K.P., Michalski, M.: Medical image synthesis for data augmentation and anonymization using generative adversarial networks. In: International Workshop on Simulation and Synthesis in Medical Imaging. pp. 1-11. Springer (2018)
[29] The Osteoarthritis Initiative: Osteoarthritis Initiative (OAI) dataset. https://nda.nih.gov/oai/
[30] Vakalopoulou, M., Chassagnon, G., Bus, N., Marini, R., Zacharaki, E.I., Revel, M.P., Paragios, N.: AtlasNet: Multi-atlas non-linear deep networks for medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 658-666. Springer (2018)
[31] Xu, Z., Niethammer, M.: DeepAtlas: Joint semi-supervised learning of image registration and segmentation. arXiv preprint arXiv:1904.08465 (2019)
[32] Xu, Z., Shen, Z., Niethammer, M.: Contextual additive networks to efficiently boost 3D image segmentations. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 92-100. Springer (2018)
[33] Younes, L., Arrate, F., Miller, M.I.: Evolutions equations in computational anatomy. NeuroImage 45(1), S40-S50 (2009)
[34] Zhang, M., Singh, N., Fletcher, P.T.: Bayesian estimation of regularization and atlas building in diffeomorphic image registration. In: International Conference on Information Processing in Medical Imaging. pp. 37-48. Springer (2013)
[35] Zhao, A., Balakrishnan, G., Durand, F., Guttag, J.V., Dalca, A.V.: Data augmentation using learned transformations for one-shot medical image segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8543-8553 (2019)
[36] Zhou, Y., He, X., Huang, L., Liu, L., Zhu, F., Cui, S., Shao, L.: Collaborative learning of semi-supervised segmentation and classification for medical images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2079-2088 (2019)
Code: https://github.com/uncbiag/easyreg
Parallelizing Stream with Future

Raphaël Jolly <[email protected]>
Databeans, Vélizy-Villacoublay, France

19 May 2013. arXiv:1305.4367

Categories and Subject Descriptors: D.3.3 [Language Constructs and Features]: Data types; G.4 [Mathematical Software]: Algorithm design and analysis
Keywords: streaming algorithm, polynomial multiplication, lazy monad
Stream is re-interpreted in terms of a Lazy monad. Future is substituted for Lazy in the obtained construct, resulting in possible parallelization of any algorithm expressible as a Stream computation. The principle is tested against two example algorithms. Performance is evaluated, and a way to improve it briefly discussed.
Introduction
Scala parallel collections [8] exploit data or SIMD parallelism, whereby a unique operation is applied in parallel to several data items, independently. There are problems, however, where sub-parts are not independent. In such cases, some sequencing must be re-introduced, to allow certain tasks to operate only after some others have ended. This is called task parallelism or pipe-lining. To achieve it, if we do not want to descend to the thread level, one alternative option is to use a message passing scheme, such as the one implemented in Scala in the form of Future [3]. We seek to assemble futures in a way that allows us to obtain parallelization of some suitable algorithms. Let us take the Stream concept of a lazily evaluated List as a model. List is implemented as a chain of elementary cells, as illustrated in Figure 1. This idea should allow us to parallelize any algorithm that can be expressed functionally and recursively as a Stream.
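The cell structure referred to above can be sketched as follows; the original listing is not reproduced in this excerpt, so the names here are illustrative, not the standard library's exact definitions. The first variant is a strict cons-list, the second defers the tail, which is the property the paper builds on.

```scala
// A minimal cons-list: each cell holds a head and a strict reference to the rest.
sealed trait MyList[+A]
case object MyNil extends MyList[Nothing]
final case class Cons[+A](head: A, tail: MyList[A]) extends MyList[A]

// The Stream variant defers the tail: it is computed only on demand.
sealed trait MyStream[+A]
case object SEmpty extends MyStream[Nothing]
final class SCons[+A](val head: A, tl: => MyStream[A]) extends MyStream[A] {
  lazy val tail: MyStream[A] = tl // memoized, evaluated at most once
}
```

Replacing the by-name tail with a Future-valued tail is the substitution explored in the rest of the paper.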
Outline
The paper is organized as follows: in Section 3, we will introduce a Lazy monad that is semantically equivalent to the pass-by-name parameter used in Stream elementary cells. In Section 4, we will explain how to rewrite Stream in terms of this monad, or any other one for that matter, namely the Future monad. In Sections 5 and 6 we will test our parallelization scheme against two example algorithms. Lastly, in Section 7 we will discuss our results and suggest some directions of improvement.
Lazy monad
We examine a construct that behaves like => A and at the same time obeys the monad rules. As an illustration, we take the example of the Traversable.filter method. In List, it is implemented in an imperative, iterative way. A purely recursive formulation is also possible, but it requires as many stack frames as there are elements in the List, resulting in stack overflows (tail call optimization is not applicable here because filter is not the last operation in the method body). In Stream, this is avoided by the pass-by-name nature of the second parameter to cons, allowing filter not to be called again immediately, and the number of stack frames to stay below a reasonable level. As a result, the computation is not performed immediately but on an on-demand basis.
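A naive recursive filter of the kind that overflows the stack, as described above, could look like this (an illustrative sketch, not the paper's elided listing):

```scala
// Recursive filter on List: one stack frame per retained element, so a long
// enough list overflows the stack (the cons branch is not in tail position).
def filterRec[A](xs: List[A])(p: A => Boolean): List[A] = xs match {
  case head :: tail =>
    if (p(head)) head :: filterRec(tail)(p) // not tail-recursive: cons happens after the call
    else filterRec(tail)(p)
  case Nil => Nil
}
```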
def filter(p: A => Boolean): Stream[A] = { ...
  if (!rest.isEmpty) cons(rest.head, rest.tail filter p)
  else Empty
}

To achieve the same behavior with a monad, we again use an extractor as in the List case, but we suppose that its second member is not forced, i.e. it is of type => A. We then suppose that we can transform this type => A through a (for now putative) method map.
def filter(p: A => Boolean): Stream[A] = { ...
  rest match {
    case head#::tail => head#::tail.map(_ filter p)
    case Empty => Empty
  }
}

Likewise, we require the second parameter of the constructor #:: to be by-name, as the laziness is to be forwarded by map. Let us now sketch the form of our Lazy monad. In order to ease later substitution with Future, let us name it the same. We find that our construct has type () => A and a method map, both as expected. We lastly endow it with a method to force its value, in a similar fashion to Future, for the reason given above.
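Concretely, such a construct can be sketched as follows. This is our own illustration under the assumptions just listed (type () => A, a map method, memoized forcing); the name Lazy is ours, whereas the paper reuses the name Future for it:

```scala
// Our own illustration of such a construct: type () => A, a map method,
// and memoized forcing. The name Lazy is ours; the paper reuses the
// name Future for it.
final class Lazy[+A](thunk: () => A) extends (() => A) {
  private lazy val value: A = thunk()   // memoized on first force
  def apply(): A = value                // force
  def map[B](f: A => B): Lazy[B] = new Lazy(() => f(value))
}
object Lazy { def apply[A](a: => A): Lazy[A] = new Lazy(() => a) }

var runs = 0
val l = Lazy { runs += 1; 21 }.map(_ * 2)
assert(runs == 0)   // nothing evaluated yet
assert(l() == 42)   // forced on demand
assert(runs == 1)   // the underlying thunk ran exactly once
assert(l() == 42); assert(runs == 1)
```

Mapping over a Lazy builds a new deferred computation, so laziness is forwarded through map exactly as required.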
Stream re-interpretation
Every method of Stream can be rewritten in the same spirit. Let us skip the details and concentrate on the implementation of the elementary Cons cells. In List, the constructor's second parameter is a normal, "flat" type. The extractor provided by the case class gives us back this value as-is. Our monad-based implementation is as follows. The second parameter of the constructor is a monad. Calls to tail force the value as above. Extractions, however, do not, and give us back the genuine monad, thus preserving the laziness. When forced, memoization of the value occurs internally and need not be done again in the Cons cell.
Example: prime sieve
To evaluate our parallel algorithm, we have first tested it against a prime sieve [7]. It is not the most efficient, as it scans every divisor of a number up to the number itself instead of just its square root, but it turns out to be parallelizable according to our technique. First we give the original algorithm with the normal Stream implementation.

val n = 20000
def primes = sieve(Stream.from(2))

The purpose of force is to wait for the computation to complete. Notice that we have defined the desired number of terms in advance, otherwise the computation will not stop since it is asynchronous (if Future is used; in the case of Lazy, the behavior is the same as with the normal Stream).
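The sieve can be run as-is against the standard library's lazy stream (LazyList since Scala 2.13; Stream.cons works the same way in the versions used in the paper):

```scala
// The same sieve, runnable against the standard library's lazy stream
// (LazyList since Scala 2.13; the paper's Stream behaves the same way).
def sieve(s: LazyList[Int]): LazyList[Int] =
  s.head #:: sieve(s.tail.filter(_ % s.head != 0))

val primes = sieve(LazyList.from(2))
assert(primes.take(10).toList == List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29))
```

The by-name right-hand side of #:: is what keeps the recursive call to sieve from running eagerly.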
Example: polynomial multiplication
The second example we have tested our scheme against is a computer algebra algorithm: sparse polynomial multiplication. Other research on, and applications of, streaming algorithms for this kind of computation can be found in [5, 6, 9]. We use multivariate polynomials, in distributive representation:
x = x_0 + x_1 + ... + x_n, where each term x_i = c_i m_i is a coefficient c_i times a monomial m_i.
The test case, detailed in [2], simply consists in computing the product of two such big polynomials:
xy

Decomposing polynomial multiplication into a sequence of multiply-by-a-term-and-add operations, it is possible to express the algorithm in terms of a stream computation. Polynomial addition is also implemented recursively. Note that the tail has to be forced in the case when one term cancels, which results in a call to Await.result. This is not considered good practice in regular use of Futures, but we have not been able to avoid it (and it does not occur all the time). Figure 2 illustrates the process.

Figure 2. Streaming multiply and add operations
xy = y_0*x + y_1*x + ... + y_m*x
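The decomposition can be made concrete with our own minimal single-variable sketch (the paper's implementation is multivariate and streaming; here a polynomial is just a list of (coefficient, exponent) terms in decreasing exponent order, and all names are illustrative):

```scala
// Our own minimal sketch of the multiply-by-a-term-and-add decomposition,
// in one variable for brevity (the paper uses multivariate polynomials).
// A polynomial is a list of (coefficient, exponent) terms kept in
// decreasing exponent order; all names here are illustrative.
type Poly = List[(Int, Int)]

def addPoly(a: Poly, b: Poly): Poly = (a, b) match {
  case (Nil, _) => b
  case (_, Nil) => a
  case ((ca, ea) :: ta, (cb, eb) :: tb) =>
    if (ea > eb) (ca, ea) :: addPoly(ta, b)
    else if (eb > ea) (cb, eb) :: addPoly(a, tb)
    else {
      val c = ca + cb
      if (c == 0) addPoly(ta, tb)        // the term cancels
      else (c, ea) :: addPoly(ta, tb)
    }
}

def mulTerm(t: (Int, Int), p: Poly): Poly =
  p.map { case (c, e) => (t._1 * c, t._2 + e) }

// xy = y_0*x + y_1*x + ... + y_m*x
def mulPoly(x: Poly, y: Poly): Poly =
  y.foldLeft(Nil: Poly)((acc, t) => addPoly(acc, mulTerm(t, x)))

val p = List((1, 1), (1, 0))                          // x + 1
assert(mulPoly(p, p) == List((1, 2), (2, 1), (1, 0))) // x^2 + 2x + 1
```

In the streamed version, the additions in the fold are what get chained through the monadic tails, so each y_i*x can be computed while the running sum is still being consumed.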
Evaluation
To evaluate our method, we have run the examples both in sequential and parallel mode (using Lazy and Future respectively). Computations were performed on a single-core Intel Atom D410 with hyperthreading and 2 GB of memory, under Linux 2.6.32-5-amd64 (Debian 6.0) with java version "1.7.0_17", OpenJDK 64-Bit Server VM (mixed mode) and scala-2.11.0-M2. The primes example was run in two versions, primes and primes x3, until number 20000 and 60000 respectively. The polynomial multiplication example was also run in two versions, stream and stream big, the latter using polynomials with bigger coefficients (bigger by a factor 10000000001), in order to increase the footprint of elementary operations. According, for instance, to [1], the expected speedup with hyperthreading should be of the order of 1.20. This is what we obtain with a control computation, list (and list big), which uses a more classical parallelization technique based on parallel collections [4]. Our results are presented in Table 1 and Figures 3 and 4. On the vertical axis, seq means sequential execution, par(1) means parallel execution with available processors set to 1, and par(2) means normal parallel execution on our platform. We make the following observations:

As a way to improve our technique, since the minimum size of elementary computations seems to be a key factor, we suppose that grouping these in bigger chunks may provide better efficiency. This will have to be tested in forthcoming research.
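The hyperthreading gains can be checked directly against the Table 1 figures:

```scala
// Checking the hyperthreading gains against the Table 1 figures:
// going from one to two hardware threads for stream big, and the
// list big control based on parallel collections.
def speedup(slower: Double, faster: Double): Double = slower / faster

assert(speedup(67.5, 49.5) > 1.2) // stream big: par(1) vs par(2)
assert(speedup(38.6, 22.7) > 1.2) // list big: seq vs par(2)
```

Both ratios exceed the ~1.20 expected from hyperthreading according to [1].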
Conclusion
We have presented a technique for parallelizing algorithms expressible as stream computations. Stream was rewritten in terms of a Lazy monad, which was then replaced by Future, enabling parallel execution of computation sub-parts. Two applications were proposed, for prime number computation and polynomial multiplication, respectively. Evaluation showed that this method has an overhead, but that it can nonetheless scale if elementary computations are big enough, even on a limited platform such as a hyperthreaded mono-processor.
trait Future[+A] extends (() => A) {
  def map[B](f: A => B): Future[B] = Future(f(apply()))
}
Figure 3. Timings for primes (seconds)
1. scaling does not occur in the primes example, probably due to too fine-grained elementary operations
2. the polynomial multiplication example does not scale either in the small-coefficient version
Figure 4. Timings for polynomial multiplication (seconds)
class Cons(hd: A, tl: List[A])
  extends List[A]

In Stream, tail is evaluated lazily, using a by-name parameter:

class Cons(hd: A, tl: => Stream[A])
  extends Stream[A]

If, instead of waiting for the moment when it is requested, tail starts to compute itself asynchronously on a new thread, we obtain a parallel computation. The elementary cell must be modified as:

class Cons(hd: A, tl: Future[Stream[A]])
  extends Stream[A]

Figure 1. Chaining of elementary cells
def filter(p: A => Boolean): List[A] = {
  val b = new ListBuffer[A]
  for (x <- this) if (p(x)) b += x
  b.result
}

Functionally, this would have to be expressed recursively:

def filter(p: A => Boolean): List[A] = {
  var rest = this
  while (!rest.isEmpty && !p(rest.head))
    rest = rest.tail
  rest match {
    case head::tail => head :: tail.filter(p)
    case Nil => Nil
  }
}
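As a quick sanity check, standalone functions standing in for the two List.filter bodies (our own transcription, not library code) agree on their results:

```scala
// Quick sanity check with standalone functions standing in for the two
// List.filter bodies: the imperative and the recursive formulations agree.
import scala.collection.mutable.ListBuffer

def filterImperative[A](xs: List[A])(p: A => Boolean): List[A] = {
  val b = new ListBuffer[A]
  for (x <- xs) if (p(x)) b += x
  b.result()
}

def filterRecursive[A](xs: List[A])(p: A => Boolean): List[A] = {
  var rest = xs
  while (rest.nonEmpty && !p(rest.head)) rest = rest.tail // skip non-matches
  rest match {
    case head :: tail => head :: filterRecursive(tail)(p)
    case Nil          => Nil
  }
}

val xs = (1 to 20).toList
assert(filterImperative(xs)(_ % 3 == 0) == filterRecursive(xs)(_ % 3 == 0))
assert(filterRecursive(xs)(_ % 3 == 0) == List(3, 6, 9, 12, 15, 18))
```

The recursive version needs one stack frame per kept element, which is exactly the overflow risk discussed in Section 3.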
case class ::[A](hd: A, tl: List[A]) extends List[A] {
  override def isEmpty: Boolean = false
  override def head: A = hd
  override def tail: List[A] = tl
}

Conversely, in Stream it is a by-name parameter. Since case classes disallow such a parameter type, the Cons cell must be a normal class, and no extractor is provided. Calls to tail force the value, which is memoized.

object Stream {
  class Cons[+A](hd: A, tl: => Stream[A]) extends Stream[A] {
    private[this] var tlVal: Stream[A] = _
    ...
    def tailDefined: Boolean = tlVal ne null
    override def tail: Stream[A] = {
      if (!tailDefined) tlVal = tl
      tlVal
    }
  }
}
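The memoization just described can be observed directly on the standard lazy stream (LazyList in Scala 2.13): the by-name tail is evaluated at most once, on first access.

```scala
// Observing the memoizing Cons behavior on the standard lazy stream
// (LazyList in Scala 2.13): the by-name tail is evaluated at most once.
var tailRuns = 0
val s = 1 #:: { tailRuns += 1; LazyList(2, 3) }
assert(tailRuns == 0)                // tail not evaluated yet
assert(s.tail.toList == List(2, 3)) // first access forces it
assert(s.tail.toList == List(2, 3)) // second access reuses the value
assert(tailRuns == 1)
```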
object Stream {
  case class Cons[+A](hd: A, tl: Future[Stream[A]]) extends Stream[A] {
    private[this] var defined: Boolean = _
    ...
    def tailDefined = defined
    override def tail: Stream[A] = {
      defined = true
      Await.result(tl, Duration.Inf)
    }
  }
}

Below we give the methods to get our modified Stream back and forth from/to the original Scala Stream. These are implemented recursively like the other modified Stream methods. Notice the call to future made in apply in order to wrap tails into their monadic containers. The reverse operation in unapply is simply done by forcing the value through calling tail.

object Stream {
  def apply[A](s: scala.Stream[A]): Stream[A] =
    if (s.isEmpty) Empty
    else cons(s.head, future(apply(s.tail)))

  def unapply[A](s: Stream[A]): Option[scala.Stream[A]] = Some(
    if (s.isEmpty) scala.Stream.Empty
    else scala.Stream.cons(s.head, unapply(s.tail).get))
}
def sieve(s: Stream[Int]): Stream[Int] =
  Stream.cons(s.head,
    sieve(s.tail.filter { _ % s.head != 0 }))

primes.take(n).force

The modified implementation of Stream entails the following modifications to the example code: use an extractor to obtain head and (wrapped) tail; call map on tail to express further operations.

def primes = sieve(Stream.range(2, n, 1))

def sieve(s: Stream[Int]): Stream[Int] =
  s match {
    case head#::tail =>
      head#::tail.map(s =>
        sieve(s.filter { _ % head != 0 }))
    case Empty => Empty
  }

primes.force
Table 1. Timings (seconds); a dash indicates no measurement reported

              seq    par(1)   par(2)
primes         3.4     -        5.9
primes x3     15.7     -       20.2
stream        14      35.1     37.7
stream big    48      67.5     49.5
list           8.2     -        5.7
list big      38.6     -       22.7
3. the streaming approach, at least in the polynomial example, seems to be sound, and performs reasonably well when no parallelization is involved (stream is no worse than half as fast as list, which is a well-optimized classical iterative/imperative implementation)
4. the overhead incurred by parallelization, well visible when available processors is set to 1, is compensated when the footprint of coefficients is big enough, as in stream big, and performance increases consistently with what we can expect of hyperthreading
[1] Y.-K. Chen, M. Holliman, E. Debes, S. Zheltov, A. Knyazev, S. Bratanov, R. Belenov, and I. Santos. Media applications on hyper-threading technology. Intel Technology Journal, 6(1):47-57, 2002. URL http://www.intel.com/technology/itj/2002/volume06issue01/vol6iss1_hyper_threading_technology.pdf.
[2] R. J. Fateman. Draft: Comparing the speed of programs for sparse polynomial multiplication. http://www.cs.berkeley.edu/~fateman/papers/fastmult.pdf, accessed Nov 2009, 2002.
[3] P. Haller, A. Prokopec, H. Miller, V. Klang, R. Kuhn, and V. Jovanovic. Futures and promises, 2011. URL http://docs.scala-lang.org/overviews/core/futures.html.
[4] R. Jolly. Straightforward parallelization of polynomial multiplication using parallel collections in Scala. In L. Barolli, F. Xhafa, M. Takizawa, T. Enokido, and H.-H. Hsu, editors, IEEE 27th International Conference on Advanced Information Networking and Applications Workshops, 25-28 March 2013, Barcelona, Catalonia, Spain, pages 1436-1438. IEEE, 2013. URL http://jscl-meditor.sourceforge.net/parallel.pdf.
[5] H. Kredel. Parallel Buchberger algorithms on virtual shared memory KSR1. 1994. URL http://krum.rz.uni-mannheim.de/kredel/parGBvsm.pdf.
[6] H. Melenk and W. Neun. Parallel polynomial operations in the large Buchberger algorithm. In Computer Algebra and Parallelism, pages 143-158. Academic Press, 1989.
[7] M. Odersky and M. Zenger. class Stream in Scala. Technical report, http://www.scala-lang.org/api/2.7.5/scala/Stream.html, 2003.
[8] A. Prokopec, P. Bagwell, T. Rompf, and M. Odersky. On a generic parallel collection framework. Technical report, http://infoscience.epfl.ch/record/165523/files/techrep.pdf, June 2011.
[9] S. A. Schwab. Extended parallelism in the Gröbner basis algorithm. International Journal of Parallel Programming, 21(1):39-66, 1992.
Cyber Mobility Mirror: A Deep Learning-based Real-World Object Perception Platform Using Roadside LiDAR

Zhengwei Bai, Student Member, IEEE, Saswat P. Nayak, Xuanpeng Zhao, Guoyuan Wu, Senior Member, IEEE, Matthew J. Barth, Fellow, IEEE, Xuewei Qi, Member, IEEE, Yongkang Liu, Emrah Akin Sisbot, and Kentaro Oguchi

Abstract — Object perception plays a fundamental role in Cooperative Driving Automation (CDA), which is regarded as a revolutionary promoter for next-generation transportation systems. However, vehicle-based perception may suffer from limited sensing range and occlusion, as well as low penetration rates in connectivity. In this paper, we propose the Cyber Mobility Mirror (CMM), a next-generation real-time traffic surveillance system for 3D object perception and reconstruction, to explore the potential of roadside sensors for enabling CDA in the real world. The CMM system consists of six main components: 1) the data pre-processor to retrieve and preprocess the raw data; 2) the roadside 3D object detector to generate 3D detection results; 3) the multi-object tracker to identify detected objects; 4) the global locator to map positioning information from the LiDAR coordinate to the geographic coordinate using coordinate transformation; 5) the cloud-based communicator to transmit perception information from roadside sensors to equipped vehicles; and 6) the onboard advisor to reconstruct and display the real-time traffic conditions via a Graphical User Interface (GUI). In this study, a field-operational system is deployed at a real-world intersection, University Avenue and Iowa Avenue in Riverside, California, to assess the feasibility and performance of our CMM system. Results from field tests demonstrate that our CMM prototype system can provide satisfactory perception performance with 96.99% precision and 83.62% recall. High-fidelity real-time traffic conditions (at the object level) can be geo-localized with an average error of 0.14 m and displayed on the GUI of the equipped vehicle with a frequency of 3-4 Hz.

DOI: 10.1109/TITS.2023.3268281 | arXiv:2202.13505
Index Terms — Field Operational System, 3D Object Detection, Multi-Object Tracking, Localization, Deep Learning, Cooperative Driving Automation
I. INTRODUCTION
With the rapid growth of travel demands, the transportation system is facing increasingly serious traffic-related challenges, such as improving traffic safety, mitigating traffic congestion, and reducing mobile source emissions. Taking advantage of recent strides in advanced sensing, wireless connectivity, and artificial intelligence, Cooperative Driving Automation (CDA) has attracted more and more attention over the past few years and is regarded as a transformative solution to the aforementioned challenges [1]. In the past few decades, several projects or programs have been conducted to explore the feasibility and potential of CDA. For instance, the California PATH program showed throughput improvement by a fully connected and automated platoon [2]. In the European DRIVE C2X project, the cooperative traffic system was assessed by large-scale field operational tests for various connected vehicle applications [3]. Recently, the U.S. Department of Transportation has been leading the CARMA Program [4] for research on CDA, leveraging emerging capabilities in both connectivity and automation to enable cooperative transportation system management and operations (TSMO) strategies. Additionally, the Autonet2030 Program led by EUCar is working on Cooperative Systems in Support of Networked Automated Driving by 2030 [5]. However, most of the aforementioned projects assume an ideal scenario, i.e., all vehicles are connected and automated. Because the presence of mixed traffic (with different types of connectivity and levels of automation) would be the norm in the long run, one of the popular ways to enhance CAVs' adaptability in such a complicated environment is to improve their situation-awareness capability. For example, vehicles are equipped with more and more high-resolution onboard sensors and upgraded with powerful onboard computers to better perceive the surroundings and make decisions by themselves, a similar path to highly automated vehicles (HAVs) [6].
However, this roadmap is facing a couple of major challenges: 1) the cost of large-scale real-world implementation is prohibitive; and 2) the detection ranges are limited for onboard sensors, which also suffer from occlusion partially due to mounting heights and positions [7].
Recently, roadside sensor-assisted perception is attracting a significant amount of attention for CAVs and is regarded as a promising way to unlock numerous opportunities for cooperative driving automation applications [8]. Current roadside sensing systems are mainly camera-based, which are cost-effective and well-developed for traffic surveillance such as turning movement counts, but can hardly provide reliable object-level high-fidelity 3D information due to lighting conditions and shadow effects [9].
Considering its capability to determine an accurate 3D location based on point cloud data, LiDAR is becoming more popular in infrastructure-based traffic surveillance. Previous studies validated the performance of roadside LiDAR for vehicle detection [10], vehicle tracking [11], lane identification [12], pedestrian near-crash warning [13], and other applications [14], [15]. These studies laid the foundation for applications with roadside LiDAR-based perception systems. However, all these systems deploy a traditional perception pipeline [16], [17], consisting of background filtering, point cloud clustering, object classification, and object tracking. Such a pipeline may generate stable results but suffers from uncertainty and limited generality [18]. With the development of computer vision, deep learning-based perception models show great potential to overcome the above issues. However, very few studies have applied deep learning-based perception algorithms to roadside LiDAR systems.
The main contributions of this paper can be summarized as follows:
1) To the best of the authors' knowledge, this paper is the first attempt to comprehensively build a deep learning-based real-world platform, called Cyber Mobility Mirror (CMM), for 3D object-level detection and tracking at a signalized intersection using a roadside LiDAR.
2) To improve the transferability of learning-based detection models, a Roadside Point-cloud Encoder and Decoder (RPEaD) is proposed.
3) A real-time 3D multi-object tracking method, called 3DSORT, is developed.
4) A geo-localization method is proposed to support the reconstruction from the detector output to geographic visualization.
The CMM platform can serve as a stepping stone to enabling various cooperative driving automation (CDA) applications.
The rest of this paper is organized as follows: related work is first reviewed in Section II. Section III presents the concept and structure of CMM, followed by a detailed description of the associated field operational system in Section IV. The results and analyses are discussed in Section V, and the last section concludes this paper with further discussion.
II. BACKGROUND
Situation awareness is one of the fundamental building blocks for Driving Automation (DA). Specifically, 3D object detection and tracking play a crucial role in perceiving the environment. Meanwhile, traffic object reconstruction helps drivers better understand the traffic conditions. Hence, in this section, related work about detection, tracking, and reconstruction of traffic objects is presented, based on a detailed literature review.
A. Traffic Object Detection
Object detection is a fundamental task of environment perception and has gone through rapid development over the past several decades. Twenty years ago, vision-based traffic detection systems were already making impressive achievements using statistical methods [19]. For instance, Aslani and Mahdavi-Nasab [20] proposed an optical flow-based moving object detection method for traffic surveillance. However, these model-based methods cannot provide high-fidelity detection results for more delicate applications, e.g., precise localization and object-level tracking. To explore highly accurate moving object detection methods, researchers started applying artificial neural networks [21].
With the tremendous progress of convolutional neural networks (CNNs) in vision-based tasks, CNN-based object detection methods have attracted a significant amount of attention in traffic surveillance [22]. For instance, You Only Look Once (YOLO) [23] and its variants have become very popular in high-resolution traffic monitoring scenarios due to their impressive performance in real-time multi-object detection. For multi-scale vehicle object detection, Mao et al. [24] added Spatial Pyramid Pooling (SPP) modules to YOLO to obtain multi-resolution information. The Single Shot MultiBox Detector (SSD) [25] is also of significance in traffic applications. Based on SSD, Wang et al. [26] proposed a novel multi-object detection model to improve the overall perception performance under a variety of traffic scenarios, based on a multi-kernel CNN. Faster R-CNN [27] is another generic, epoch-making detection method, built on the region-proposal idea. To further improve its detection performance, Li et al. [28] proposed a cross-layer fusion structure based on Faster R-CNN that achieves nearly 10% higher average accuracy in complex traffic environments, e.g., dense traffic with shadows and occlusions.
Beyond the general object detection task in traffic scenes, many studies focus on specific perception cases. For instance, considering that existing traffic surveillance systems were made up of costly equipment with complicated operational procedures, Mhalla et al. [29] designed an embedded computer-vision system for multi-object detection in traffic surveillance. For small-object detection, Lian et al. [30] proposed an attention feature fusion block that better integrates contextual information from different layers and achieves much better performance. Targeting other edge cases, i.e., highly crowded traffic scenarios, Gahlert et al. [31] proposed Visibility Guided Non-Maximum Suppression (vg-NMS) to improve detection accuracy by leveraging both pixel-based object detection and amodal perception paradigms. For situation awareness, Guindel et al. [32] proposed a deep CNN to jointly handle object detection and viewpoint estimation.
To support object-level cooperative operations, detecting objects in a 3D format is a straightforward and promising way to achieve high-fidelity situation awareness. Hence, owing to its capability of generating 3D point clouds with spatial information, 3D LiDAR is increasingly deployed for traffic environment perception. Wu et al. [10] proposed a revised Density-Based Spatial Clustering of Applications with Noise (3D-DBSCAN) method to detect vehicles based on roadside LiDAR sensors under rainy and snowy conditions. Using a roadside LiDAR, Zhang et al. proposed a three-stage inference pipeline, called GC-net [33], consisting of gridding, clustering, and classification. In this study, the raw point cloud data (PCD) was first mapped into a grid structure and then clustered by the Grid-Density-Based Spatial Clustering algorithm. Finally, a CNN-based classifier was applied to categorize the detected objects by extracting their local features. Liu et al. proposed a roadside LiDAR-based object detection approach following the conventional background filtering and clustering pipeline [34], where point correlation with KDTree [35] neighborhood searching and adaptive Euclidean clustering were applied, respectively. To distinguish moving objects from the point cloud, Song et al. proposed a hierarchical searching method based on the feature distribution of point clouds to achieve background filtering and object detection [36]. Although 3D LiDAR has innate advantages for 3D object detection, the lack of labeled roadside datasets significantly limits the potential for applying deep learning-based detectors to roadside LiDAR sensors. Hence, in this paper, a point cloud encoder-decoder method is proposed to enable the detection model to work on roadside point clouds while being trained on an onboard dataset.
B. Traffic Object Tracking
Deploying CDA in urban environments poses a series of difficult technological challenges, out of which object tracking is arguably one of the most significant, since it provides identification information for subsequent modules [37]. Object tracking can be classified into two categories according to the number of objects tracked at one time: single-object tracking (SOT) and multi-object tracking (MOT). SOT has been investigated over several decades, and Kalman filtering or particle filtering methods have been widely employed for this type of task [38], [39]. For MOT tasks, several approaches have been proposed with a focus on improving accuracy and real-time performance. For instance, Bewley et al. [40] proposed Simple Online and Realtime Tracking (SORT), which can achieve MOT at a high frame rate without much compromise in accuracy. Based on the structure of SORT, Nicolai et al. [41] proposed a multi-object tracker, DeepSORT, which is capable of tracking objects through longer periods of occlusion and effectively reduces the number of identity switches by integrating appearance features. However, DeepSORT does not apply to 3D objects.
Considering the evolution of sensor technology and perception methods, camera-based approaches have played a dominant role in traffic object tracking over the past several decades. For instance, Aslani et al. [20] applied the Optical Flow algorithm to detect and track moving objects by the intensity changes of frames. To improve MOT performance, Fernandez-Sanjurjo et al. [42] built a real-time traffic monitoring system, performing data association with the Hungarian algorithm. Based on cameras equipped on an unmanned aerial vehicle (UAV), researchers [11], [43] applied correlation filters [44] to MOT tasks. Chen et al. proposed a camera-based edge traffic flow monitoring scheme using DeepSORT [45]. Recent advances in LiDAR technology enable it to hold a place in traffic object tracking tasks by leveraging point cloud data. For instance, Cui et al. [46] provided a simple global nearest neighbor (GNN) method to track multiple vehicles based on the spatial distance between consecutive frames. Adaptive probabilistic filtering was utilized by Kampker et al. [47] to handle uncertainties due to sensing limitations of 3D LiDARs and the complexity of targets' movements. Zhang et al. [48] used an unscented Kalman filter (UKF) and a joint probabilistic data association filter for MOT, which improved the accuracy of estimated vehicle speed through an image matching process.
C. Traffic Object Reconstruction
Traffic reconstruction, traditionally, means rebuilding traffic scenarios or parameters based on recorded sensor data, such as loop detectors and surveillance cameras [49], [50]. These traffic-level reconstruction data are valuable for macroscopic traffic management. In this paper, nevertheless, object-level reconstruction means rebuilding the 3D location or shape of certain objects based on sensor data, which can provide more concrete information to support subsequent CDA applications. Several studies have been conducted in this emerging area. Cao et al. [51] developed a camera-based 3D object reconstruction method in an Internet of Vehicles (IoV) environment. Rao and Chakraborty [52] proposed a LiDAR-based monocular 3D shaping method to reconstruct the surrounding objects for onboard display, which has a similar purpose to the reconstruction work in this paper.
III. CYBER MOBILITY MIRROR (CMM)
To explore the potential of the roadside sensing system, we propose a novel infrastructure-based object-level perception system, named Cyber Mobility Mirror. In this section, the core concept of CMM and the associated platform implemented in the real world are introduced.

A. Core Concept of CMM

CMM aims to enable real-time object-level traffic perception and reconstruction to empower various cooperative driving automation (CDA) applications, such as Collision Warning [13], Eco-Approach and Departure (EAD) [53], and Cooperative Adaptive Cruise Control (CACC) [54]. In the CMM system, traffic conditions (i.e., "mobility") are detected by high-fidelity sensors and advanced perception methods, such as object detection, classification, and tracking. In the "cyber" world, digital replicas (i.e., "mirrored" objects) are built to reconstruct the traffic in real-time via high-definition 3D perception information, such as the detected objects' geodetic locations (rendered on the satellite map), 3D dimensions, speeds, and moving directions (or headings). Then, this "mirror" can act as the perception foundation for numerous CDA applications in a real-world transportation system. Specifically, Fig. 2 illustrates the system diagram for the core concept of CMM. Traffic objects can be detected by high-fidelity sensors equipped on the infrastructure side, and the sensing data is processed by an edge server to generate object-level information and enable various functions, such as detection, classification, tracking, and geodetic localization. The perception information is also transmitted to a cloud server for distribution and 3D reconstruction. The reconstructed traffic environment can be displayed on the GUI of connected road users to support various CDA applications.
B. Systematic Structure of CMM
In the real-world traffic environment, the system architecture of the CMM system is designed following the core concept. Specifically, the CMM system can be divided into two main parts: the CMM Roadside System (CRS) and the CMM Onboard System (COS). Fig. 3 illustrates the system architecture. CRS and COS are introduced in detail as follows:
1) CMM Roadside System: CRS consists of 1) roadside sensors, e.g., LiDAR in this study, to perceive traffic conditions and generate high-fidelity sensor data; 2) an edge computing-based real-time perception pipeline to perform sensor fusion (if appropriate), object detection, classification, and tracking; and 3) communication devices to receive information from other road users, infrastructure, or even "clouds", and to share perceived results with them via different kinds of protocols.
2) CMM Onboard System: For CAVs, COS can receive the object-level perception data from CRS, which then acts as the perception input to support various CDA applications, such as CACC, cooperative merging, and cooperative eco-driving. For Connected Human-driven Vehicles (CHVs), COS can also provide real-time traffic information via the human-machine interface (HMI) to improve driving performance or to avoid possible crashes due to occlusion.
In this paper, the CMM concept is implemented in the real world and a field operational system is developed for real-world testing, which will be discussed in Section IV.
IV. CMM FIELD OPERATIONAL SYSTEM

A. System Overview
The system overview of the CMM Field Operational System (FOS) is shown in Fig. 4. The FOS mainly consists of a roadside 3D LiDAR for data collection, an edge-computing system for data processing, a cloud server for data distribution, and a test vehicle equipped with connectivity and a Graphical User Interface (GUI). Specifically, the LiDAR is installed on a signal pole high enough to achieve good coverage. The edge computer retrieves 3D point cloud data from the roadside LiDAR and then generates high-definition perception information (i.e., 3D object detection, classification, and tracking results), which is transmitted to the cloud server via the cellular network. A CHV equipped with the CMM OBUs (including a GPS receiver, an onboard communication device, and a tablet) can receive the perception information, and reconstruct and display the object-level traffic condition on the GUI in real time.
B. System Initialization
As demonstrated in Fig. 5, the LiDAR is installed at the northwest corner of the intersection (marked by the red circle) of University Ave. and Iowa Ave. in Riverside, California. In this work, an OUSTER® 64-channel 3D LiDAR is used as the major roadside sensor, mounted on a signal pole at a height of 14-15 ft above the ground with appropriate pitch and yaw angles to cover the monitoring area enclosed by the orange rectangle in Fig. 5. The edge computer at the intersection receives the stream of LiDAR data in the form of UDP packets. The point cloud attributes, namely the 3D location (x, y, z) and the intensity i of each point, are bundled into an N × 4 array to be used in the inference pipeline.
C. Data Retrieving and Preprocessing
The raw point cloud data is generated by the 64-channel 3D LiDAR, and the edge computer retrieves the raw data through an Ethernet cable via UDP communication. In this paper, the detection range Ω of the roadside LiDAR is defined as a 102.4 m × 102.4 m area centered on the location of the LiDAR. The raw point cloud data can be described by:
P = {[x, y, z, i] | [x, y, z] ∈ R³, i ∈ [0.0, 1.0]}    (1)
Then, P is geo-fenced by:

P_Ω = {[x, y, z, i] | x ∈ X, y ∈ Y, z ∈ Z}    (2)

To build the system cost-effectively, we try to use an open-source dataset, e.g., nuScenes [55], to train our detection model. However, these available datasets are collected with vehicle-mounted LiDARs, whose spatial configurations differ from ours, so a model trained on these datasets may not work well on our roadside point clouds.
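As a concrete illustration of Eqs. (1)-(2), the geo-fencing step can be sketched with NumPy. The range bounds follow the values stated in the text (x, y ∈ [−51.2 m, 51.2 m], z ∈ [−5.0 m, 0 m]); the function name and the toy points are purely illustrative:

```python
import numpy as np

def geofence(points, x_rng=(-51.2, 51.2), y_rng=(-51.2, 51.2), z_rng=(-5.0, 0.0)):
    """Keep only the N x 4 points [x, y, z, i] inside the detection range (Eq. 2)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = ((x >= x_rng[0]) & (x <= x_rng[1]) &
            (y >= y_rng[0]) & (y <= y_rng[1]) &
            (z >= z_rng[0]) & (z <= z_rng[1]))
    return points[mask]

# Toy cloud: one point inside the range, one outside (x = 80 m > 51.2 m).
P = np.array([[10.0, -5.0, -2.0, 0.7],
              [80.0,  0.0, -2.0, 0.3]])
P_omega = geofence(P)
print(P_omega.shape[0])  # -> 1
```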
To empower the model with the capability of training on onboard datasets while inferencing on the roadside, we propose the Roadside Point-cloud Encoder and Decoder (RPEaD). The main purpose of RPEaD is to transform roadside point clouds into a space in which a model trained on onboard datasets can operate. The transformation process of the encoder is described in Fig. 6. To achieve the transformation, we propose a self-calibration approach for the roadside-LiDAR pose that applies Least Squares Regression (LSR) to the point clouds. The coordinate frame of the roadside point clouds is defined as the LiDAR Coordinate (L-Coor), and the coordinate frame of the point clouds after encoding is defined as the Horizontal Coordinate (H-Coor). Using LSR, the least-squares plane is generated to represent the x−y plane of the L-Coor. The 3D rotation matrix P_Cali can then be generated as:
P_Cali = [ a  b  c
           d  e  f
           g  h  i ]    (3)
where a, ..., i are the parameters generated from LSR. For translation, the vertical offset ∆z is defined as:
∆z = z_roadside − z_onboard    (4)
where z roadside and z onboard represent the heights of the roadside LiDAR and the onboard LiDAR (used in the training dataset), respectively. The whole encoding process is defined by:
P_H = P_Ω · [ P_Cali  0
              0       1 ] + [0, 0, ∆z, 0]    (5)
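The self-calibration of Eqs. (3)-(5) can be sketched as follows. This is a minimal NumPy illustration that assumes the LSR plane fit is an ordinary least-squares fit of z = a·x + b·y + c to ground points, and that P_Cali is the rotation (built via Rodrigues' formula) aligning the fitted plane's normal with the +z axis; the function names are ours, not from the paper:

```python
import numpy as np

def self_calibrate(ground_pts, dz):
    """Fit z = a*x + b*y + c to ground points (least squares), build the
    rotation mapping the fitted plane's normal to +z (P_Cali, Eq. 3), and
    return a function applying Eq. 5 with the vertical offset dz (Eq. 4)."""
    A = np.c_[ground_pts[:, 0], ground_pts[:, 1], np.ones(len(ground_pts))]
    (a, b, c), *_ = np.linalg.lstsq(A, ground_pts[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0]); n /= np.linalg.norm(n)     # fitted plane normal
    z_axis = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z_axis); s = np.linalg.norm(v); cth = n @ z_axis
    if s < 1e-12:
        R = np.eye(3)                                       # already level
    else:
        K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + K + K @ K * ((1 - cth) / s**2)      # Rodrigues' formula

    def encode(pts):
        """Apply Eq. 5 to an N x 4 array [x, y, z, i]; intensity is untouched."""
        out = pts.copy()
        out[:, :3] = pts[:, :3] @ R.T
        out[:, 2] += dz
        return out

    return R, encode
```

After encoding, ground points lie near z = ∆z, i.e., the cloud is expressed in the Horizontal Coordinate at the onboard LiDAR's height.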
2) Object Detection Network: Although the roadside point cloud is transformed into a coordinate frame suitable for a model trained on the onboard dataset, the detection model still requires a large tolerance for differences in the data. Since there is a large shift, i.e., nearly 3 m, along the z-axis, we voxelize the point cloud following the strategy applied in [56] to make the model less sensitive to z-axis data, i.e., voxelization only on the x−y plane to generate point-cloud pillars. Then data aggregation, as shown in Fig. 7, is designed to extract and compress the features, which are fed to the deep neural network. After the data aggregation, the designed feature pyramid network (FPN) shown in Fig. 8, followed by a 3D anchor-based detection head [25], generates the predicted bounding boxes.
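The x−y-only voxelization described above can be sketched as follows; the 0.4 m pillar size and the helper name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pillarize(points, x_rng=(-51.2, 51.2), y_rng=(-51.2, 51.2), pillar=0.4):
    """Group points into x-y pillars only (no z binning), in the spirit of the
    PointPillars-style strategy of [56]. Returns a dict mapping (ix, iy) grid
    indices to the N_k x 4 points falling in that pillar."""
    ix = np.floor((points[:, 0] - x_rng[0]) / pillar).astype(int)
    iy = np.floor((points[:, 1] - y_rng[0]) / pillar).astype(int)
    pillars = {}
    for k, key in enumerate(zip(ix.tolist(), iy.tolist())):
        pillars.setdefault(key, []).append(points[k])
    return {k: np.stack(v) for k, v in pillars.items()}
```

Because pillars extend over the full z range, the near-3 m vertical shift between roadside and onboard setups only moves points within a pillar rather than across voxels.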
For the loss functions, localization and classification are considered. To be specific, ground-truth (GT) boxes and anchors are defined by a 7-dimensional vector (x, y, z, w, l, h, θ). The localization regression residuals between ground truth and anchors are defined by:
∆x = (x^gt − x^a) / d^a,   ∆y = (y^gt − y^a) / d^a,   ∆z = (z^gt − z^a) / h^a,    (6)

∆w = log(w^gt / w^a),   ∆l = log(l^gt / l^a),   ∆h = log(h^gt / h^a),    (7)

∆θ = sin(θ^gt − θ^a)    (8)
where the superscripts gt and a represent the ground truth and the anchor, respectively, and d^a is defined by:
d^a = √((w^a)² + (l^a)²)    (9)
The total localization loss is:
L_loc = Σ_{b ∈ (x,y,z,w,l,h,θ)} SmoothL1(∆b)    (10)
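The anchor encoding of Eqs. (6)-(10) can be checked numerically with a small sketch; the box tuples and the SmoothL1 form (with β = 1) are illustrative assumptions:

```python
import numpy as np

def box_residuals(gt, anchor):
    """Encode a GT box against an anchor per Eqs. (6)-(8);
    boxes are (x, y, z, w, l, h, theta)."""
    xg, yg, zg, wg, lg, hg, tg = gt
    xa, ya, za, wa, la, ha, ta = anchor
    da = np.hypot(wa, la)                                  # Eq. (9)
    return np.array([(xg - xa) / da, (yg - ya) / da, (zg - za) / ha,
                     np.log(wg / wa), np.log(lg / la), np.log(hg / ha),
                     np.sin(tg - ta)])

def smooth_l1(r, beta=1.0):
    """Elementwise SmoothL1: quadratic near zero, linear in the tails."""
    r = np.abs(r)
    return np.where(r < beta, 0.5 * r**2 / beta, r - 0.5 * beta)

# Eq. (10): sum of SmoothL1 over the seven residuals.
loc_loss = smooth_l1(box_residuals((1.0, 2.0, -1.0, 1.6, 3.9, 1.5, 0.0),
                                   (1.0, 2.0, -1.0, 1.6, 3.9, 1.5, 0.0))).sum()
print(loc_loss)  # identical boxes -> 0.0
```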
Inspired by [57], a softmax classification loss, L_dir, is used to distinguish flipped boxes. The object classification is handled by the focal loss [58], which is shown as:
L_cls = −α_a (1 − p^a)^γ log p^a    (11)
where p^a is the class probability of an anchor, and α and γ are set the same as in the original paper. Hence, the total loss is:
L = (1 / N_pos) (β_loc L_loc + β_cls L_cls + β_dir L_dir)    (12)
where N_pos is the number of positive anchors, and β_loc, β_cls, and β_dir are set to 2, 1, and 0.2, respectively.
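The focal loss of Eq. (11) is straightforward to sketch; α = 0.25 and γ = 2.0 below are the defaults of the original focal-loss paper [58], used here purely for illustration:

```python
import math

def focal_loss(p, alpha=0.25, gamma=2.0):
    """Focal loss (Eq. 11) for the class probability p of a positive anchor.
    The (1 - p)^gamma factor down-weights easy, confident examples."""
    return -alpha * (1.0 - p) ** gamma * math.log(p)

# A confident correct prediction contributes far less than a hard one.
print(focal_loss(0.9) < focal_loss(0.1))  # -> True
```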
E. 3D Multi-Object Tracking
For real-time 3D MOT, we propose 3DSORT, which adds 3D object matching to DeepSORT [45]. To be specific, 2D location information is filtered from the 3D detection results and fed into the DeepSORT model to generate the 2D MOT results, i.e., a unique identification (ID) number for each object. Then, a Euclidean-distance-based 3D object matching algorithm is designed to generate the enhanced 3D MOT results. Algorithm 1 presents the details of 3DSORT.
where N_Dbbx and N_Tbbx are the numbers of detection bounding boxes and 2D tracking boxes, respectively. Additionally, id represents the tracking identification number of each unique object, and d_o is the matching distance, set to 0.2 m.
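The Euclidean ID-matching step of Algorithm 1 can be sketched as a simple nearest-center search; this is an illustrative greedy version under the stated d_o = 0.2 m threshold, not the exact implementation:

```python
import math

def match_ids(det3d, trk2d, d_o=0.2):
    """Assign 2D-tracker IDs back to 3D detections by nearest-center matching.
    det3d: list of (x, y, z, w, l, h, theta) boxes;
    trk2d: list of (x, y, w, l, id) tracks from DeepSORT.
    Returns each detection extended with an id (None if no track within d_o)."""
    out = []
    for d in det3d:
        best_id, best_dist = None, d_o
        for t in trk2d:
            dist = math.hypot(d[0] - t[0], d[1] - t[1])  # 2D center distance
            if dist <= best_dist:
                best_id, best_dist = t[4], dist
        out.append((*d, best_id))
    return out
```

Because matching happens in the x−y plane only, the 3D boxes inherit stable IDs from the mature 2D tracker while keeping their full 3D geometry.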
F. Geo-localization
To give the perception data more generality, geo-referencing of the point cloud is developed in this work. However, the output boxes T_obj from the 3D MOT are calculated in the Horizontal LiDAR Coordinate, i.e., a Cartesian coordinate system centered on a levelly installed sensor. Thus, the geo-localization input, i.e., T_obj from Algorithm 1, is fed into a multi-step transformation process that converts the object location information to the Geodetic Coordinate, i.e., latitude, longitude, and altitude. There are three steps: 1) from the Horizontal LiDAR coordinate to the real LiDAR coordinate; 2) from the real LiDAR coordinate to the geocentric Earth-centered Earth-fixed (ECEF) coordinate; and 3) from the ECEF coordinate to the geodetic coordinate (i.e., latitude, longitude, and altitude). Specifically, the World Geodetic System 1984 (WGS84) is applied for the geo-transformation. The transformation from the Horizontal LiDAR coordinate to the ECEF coordinate system is shown in Eq. 13.
[X_ecef, Y_ecef, Z_ecef, 1]^T = [X_hor, Y_hor, Z_hor, 1]^T · P_Cali^(−1) · P_ECEF    (13)
where P_Cali^(−1) ∈ R^(4×4) and P_ECEF ∈ R^(4×4) are the inverse of the LiDAR calibration matrix and the ECEF transformation matrix, respectively. X_hor, Y_hor, and Z_hor represent the coordinates of 3D points in the Horizontal LiDAR Coordinate. The P_ECEF matrix, responsible for transforming points from the LiDAR coordinate frame to the geocentric (ECEF) coordinate frame, is calculated using the Ground Control Point surveying technique [59].
The longitude (λ) is calculated from the ECEF position using Eq. 14,
λ = arctan(Y_ecef / X_ecef)    (14)
The geodetic latitude (φ) is calculated using Bowring's method by solving Eq. 15 and Eq. 16 in an iterative manner,
β = arctan( Z_ecef / ((1 − f) s) )    (15)

φ = arctan( (Z_ecef + e² ((1 − f)/(1 − e²)) R sin³β) / (s − e² R cos³β) )    (16)
where R, f, and e² = 1 − (1 − f)² are the equatorial radius, the flattening of the planet, and the square of the first eccentricity, respectively, and s is defined as s = √(X_ecef² + Y_ecef²). The altitude (h_ego, height above the ellipsoid) is given by:
h_ego = s cos φ + (Z_ecef + e² N sin φ) sin φ − N    (17)
where N, the radius of curvature in the prime vertical, is defined as

N = R / √(1 − e² sin²φ)    (18)
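Eqs. (14)-(18) can be sketched end-to-end with a short Bowring iteration; the WGS84 constants are standard, and the fixed iteration count is an illustrative choice:

```python
import math

# Standard WGS84 constants
R_EQ = 6378137.0                  # equatorial radius R (m)
F = 1.0 / 298.257223563           # flattening f
E2 = 1.0 - (1.0 - F) ** 2         # first eccentricity squared, e^2

def ecef_to_geodetic(x, y, z, iters=5):
    """ECEF -> (lat, lon, height) in degrees/meters via Bowring's method."""
    lon = math.atan2(y, x)                                    # Eq. (14)
    s = math.hypot(x, y)                                      # sqrt(X^2 + Y^2)
    beta = math.atan2(z, (1.0 - F) * s)                       # Eq. (15)
    for _ in range(iters):                                    # Eq. (16), iterated
        lat = math.atan2(z + E2 * (1 - F) / (1 - E2) * R_EQ * math.sin(beta) ** 3,
                         s - E2 * R_EQ * math.cos(beta) ** 3)
        beta = math.atan2((1.0 - F) * math.sin(lat), math.cos(lat))
    N = R_EQ / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)       # Eq. (18)
    h = s * math.cos(lat) + (z + E2 * N * math.sin(lat)) * math.sin(lat) - N  # Eq. (17)
    return math.degrees(lat), math.degrees(lon), h

# A point on the equator at the prime meridian sits on the ellipsoid surface:
print(ecef_to_geodetic(R_EQ, 0.0, 0.0))  # -> (0.0, 0.0, 0.0)
```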
Then the geo-referenced perception information (φ, λ, h_ego), along with other data, is transmitted to the cloud server for distribution, and the final data is packaged as:

Data_roadside = {M^(i)(t, id, φ, λ, h_ego, w, l, h, θ)}, i = 1, ..., N_Dbbx    (19)
G. Cloud Communication
As shown in Fig. 9, the onboard unit (OBU) retrieves traffic perception data from the cloud server and GPS location data from a GPS receiver. Then the onboard unit reconstructs the traffic conditions based on the multi-source data and displays it on the graphical user interface (GUI) in real-time (the update frequency is 3-4 Hz on average). In our field implementation, a Samsung Galaxy Tab A7 tablet serves as an OBU, running a designed application to retrieve data from the GPS receiver and displaying the reconstructed object-level traffic information on the GUI. We adopt a NETGEAR AirCard 770S mobile hotspot which is equipped with a 4G/LTE sim card and can provide Vehicle-to-Cloud (V2C) communication between the cloud server and OBU. To have accurate GPS measurements, we utilize a C102-F9R U-Blox unit with an embedded Inertial Measurement Unit (IMU) which provides an 8Hz update frequency on the GPS location and heading.
H. Multi-Object Reconstruction
An application is designed to visualize the locations of the vehicles perceived by the roadside unit (RSU) and of the ego vehicle provided by the OBU. To achieve this, we first locate the monitored area at the intersection and crop it from the Google Earth Pro satellite view, using the cropped image as a background map for visualizing the reconstructed traffic. First, we calculate the distance between two reference GPS points using the Haversine formula, as shown below.
a = sin²(∆lat/2) + cos(lat_ref1) · cos(lat_ref2) · sin²(∆lon/2)
c = 2 · atan2(√a, √(1 − a))
d = R · c    (20)
where lat ref 1 and lat ref 2 are latitudes of two reference GPS points, ∆lat is the latitude difference between two GPS points, ∆lon is the longitude difference between two GPS points, R is the radius of the earth, and d is the distance computed between two GPS points. Based on the number of pixels between their displayed pixel coordinates on the tablet, we can calculate the transfer ratio between them.
(Pix_ref1 − Pix_ref2) / (Dis_ref1 − Dis_ref2) = α    (21)
where Pix_ref1 and Pix_ref2 are the pixel coordinates of the two reference points, Dis_ref1 and Dis_ref2 are the corresponding real-world distances, and α is the transfer ratio. With this ratio, we can create an object and display it at the desired pixel coordinates based on its GPS location. The tablet and the u-blox unit are wire-connected, and data is transmitted between them via a Universal Serial Bus (USB) serial connection. With the GPS location and heading, the ego vehicle is displayed on the GUI as an orange vehicle icon. The data from the cloud server, on the other hand, contains the perception information obtained from the RSU, including GPS location, heading, and three-dimensional size. From the cloud-server data, we first separate the vehicle data from the pedestrian data based on the three-dimensional size information, then display the vehicles sensed by the RSU as blue vehicle icons and the pedestrians as top-view pedestrian icons.
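The Haversine distance of Eq. (20) and the transfer ratio of Eq. (21) can be sketched as follows; the earth radius, the reference coordinates (roughly near the test intersection), and the pixel count are purely illustrative placeholders:

```python
import math

R_EARTH = 6371000.0  # mean earth radius in meters (illustrative choice of R)

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points (Eq. 20): degrees in, meters out."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dlon / 2) ** 2)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    return R_EARTH * c

# Transfer ratio (Eq. 21): pixels between two reference points on the GUI,
# divided by their ground distance (500 px is a made-up example value).
d = haversine(33.9806, -117.3755, 33.9806, -117.3744)
alpha = 500 / d  # pixels per meter
```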
V. FIELD TESTING AND RESULTS ANALYSIS

A. Feasibility
Object-level perception information acts as the building block of CMM, which requires high-fidelity data retrieved from high-resolution sensors, such as LiDARs. Nevertheless, deploying these sensors directly in the real world can be costly, time-consuming, and, to some extent, restricted by policies and protocols. Thus, it is necessary to evaluate the feasibility of the system at an early stage of this work.
To find an efficient and cost-effective way to validate the feasibility of CMM, we emulated a CMM system in a simulation platform, i.e., a CARLA-based co-simulation system [60], before the real-world implementation. As demonstrated in Fig. 10, the basic idea is to emulate the real-world traffic environment via one CARLA simulator [61] and run the entire perception process within the emulated real-world environment.
[Fig. 10 component labels: Real-world Traffic Environment; Deep Neural Networks (ComplexYOLO).]

Then the other CARLA simulator is applied to emulate the cyber world, i.e., to reconstruct the traffic objects and then display them. Owing to CARLA's capability to model high-fidelity sensors, the evaluation results of the emulated CMM in the co-simulation platform can lay the foundation for real-world CMM implementation.

Fig. 11. Illustration of the CMM field operational test from different views: a drone, the host vehicle, the onboard GUI, and the edge server.
After the feasibility check in the simulation environment, we implemented the CMM field operational system (FOS) at a real-world intersection, University Ave. & Iowa Ave. in Riverside, California. Fig. 11 depicts the field system from different views. Multi-view videos were captured during the test, including the drone's view, in-vehicle views (driver perspective, backseat passenger perspective, and GUI), the roadside view, and the point-cloud-based bird's-eye view (BEV). A video clip with descriptive annotations shows the whole online process and is available at https://www.youtube.com/watch?v=0egpmgkzyG0. The video demonstrates the feasibility of the CMM FOS, and the following sections present the results on detection accuracy and real-time performance.
B. Detection

Fig. 12 demonstrates several frames of the CMM FOS testing results: the first column shows the drone view, the second the bird's-eye view of the LiDAR data, and the third the reconstructed view on the onboard graphical user interface (subfigures: (a) drone's view as ground truth; (b) 3D bounding-box view; (c) GUI reconstruction). The ego vehicle equipped with the CMM onboard system is marked by a red rectangle in each figure. In the GUI, the orange icons represent the GPS locations of the ego vehicle, while the blue ones denote vehicles detected by the roadside LiDAR. Additionally, pedestrians are also detected and shown in the GUI with top-view pedestrian icons (shown in the video). The detection accuracy is evaluated with the confusion matrix, a standard evaluation tool in the computer vision area [62]. Specifically, the detection results are categorized into four classes:
• True Positive (TP): the number of cases predicted as positive that are indeed positive, i.e., a vehicle detected as a vehicle.
• False Positive (FP): the number of cases predicted as positive that are indeed negative, i.e., a non-vehicle object detected as a vehicle.
• True Negative (TN): the number of cases predicted as negative that are indeed negative, i.e., a non-vehicle object detected as a non-vehicle object.
• False Negative (FN): the number of cases predicted as negative that are indeed positive, i.e., a vehicle detected as a non-vehicle object.

Precision is the ability of the detector to identify only relevant objects, i.e., vehicles and pedestrians in this paper. It is the proportion of correct positive predictions and is given by
Precision = TP / (TP + FP) = TP / (# of all detections)    (22)
Recall is a metric that measures the ability of the detector to find all the relevant cases (that is, all the ground truths). It is the proportion of true positive detected among all ground-truth (i.e., real vehicles) and is defined as
Recall = TP / (TP + FN) = TP / (# of all ground truths)    (23)
From the perspective of traffic surveillance, we define another metric, Miss, which measures the portion of vehicles that are not detected:

Miss = FN / (TP + FN) = (# of all missing vehicles) / (# of all ground truths)    (24)

To evaluate the prototype system performance, we randomly select 130 frames of testing data and manually label them based on the drone's view. A total of 1661 vehicles are labeled as the ground truth, and the detection accuracy is evaluated with the three aforementioned metrics. Table I summarizes the evaluation results.
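The three metrics of Eqs. (22)-(24) can be checked directly against the Table I numbers with a minimal sketch:

```python
def detection_metrics(tp, fp, fn):
    """Precision, Recall, and Miss per Eqs. (22)-(24)."""
    precision = tp / (tp + fp)       # TP / (# of all detections)
    recall = tp / (tp + fn)          # TP / (# of all ground truths)
    miss = fn / (tp + fn)            # missed vehicles / (# of all ground truths)
    return precision, recall, miss

# Field-test numbers from Table I: 1661 ground-truth vehicles, 1389 TPs, 43 FPs,
# so FN = 1661 - 1389 = 272.
p, r, m = detection_metrics(tp=1389, fp=43, fn=1661 - 1389)
# p ~ 0.970, r ~ 0.836, m ~ 0.164, consistent with the reported
# 96.99% / 83.62% / 16.38%; note Recall + Miss = 1 by construction.
```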
C. Localization
This section analyzes the localization performance of our CMM field operational system. To evaluate the localization accuracy, a multi-sensor-based localization system is applied to measure the ground truth location of the ego-vehicle. This multisensor system consists of a GPS receiver enabled with Real-Time Kinematic (RTK) positioning and an Inertial Measurement Unit (IMU). Since this system can achieve centimeter-level positioning, the measurement generated by this GPS-RTK-IMU positioning system is used as the ground truth to assess the CMM system.
Field tests are conducted for different driving scenarios, including 1) left turn, 2) right turn, 3) going straight, and 4) U-turn. The ego-vehicle trajectories for the four driving scenarios are extracted and visualized in Fig. 13. As the subfigures in Fig. 13 show, the trajectories generated by our CMM system (green curves) closely match the ground truth generated by the onboard GPS-RTK-IMU positioning system (red curves). From the boxplot analysis in Fig. 14(b), excluding the outliers, the minimum and maximum localization errors are −0.03 m and 0.32 m, respectively, which ensures the applicability of our system for CDA applications in the real-world traffic environment.
D. Latency
For a field operational system (FOS), it is of great significance to analyze the latency of the whole system. As depicted in Fig. 15, the latency of the whole CMM FOS pipeline can be analyzed by breaking the workflow down into three main phases:
• Phase 1 - Sensor Side: time elapsed from the start until the edge server receives the sensor data. Specifically, in the sensor processing stage, the sensor collects the raw data and processes it into a transmittable format via its embedded system. For data retrieving, the processed data is transmitted to the edge server via the Local Area Network (LAN). The time consumption is certified by the manufacturer.
• Phase 2 - Edge-Server Side: time elapsed from the moment the sensor data is received by the edge server until the perception data is encoded and sent out to the cloud server. The edge server is responsible for generating the object-level perception data, including 3D object detection, 3D multi-object tracking, and geodetic localization. Since these modules run in chronological order, the time consumption of each module is measured by the starting and ending timestamps of each function.
• Phase 3 - Cloud & Onboard: time elapsed from the moment the perception data is sent from the edge server until the reconstructed traffic environment is displayed on the onboard GUI. Since the CMM system serves all road users with connectivity, a cloud server is used for acquisition, synchronization, and distribution of the processed data (after edge computing). The onboard computer, i.e., the tablet used in this study, decodes the perception data, reconstructs the traffic environment, and displays it on the GUI. Time consumption for this phase is measured by timestamps from the onboard end to the edge-server end.

As shown in Fig. 15, the total latency is about 285 ms to 335 ms, whose variance mainly results from fluctuations in communication. However, during the field testing, we found that the time consumption of individual computational modules varies within a certain range.
For example, the object-tracking and geo-localization modules show a larger variance than the object detection model, which may be caused by changes in the number of detected objects.
There are several ways to reduce the latency of the whole system in the future. For example, several for-loops and external Python packages are used in the software for the tracking and localization parts, which mainly account for the surprisingly high computational cost at the perception end; programming optimization can therefore further reduce the computation time. Another way to speed up the whole process is to improve the computational performance of the edge-server and onboard hardware.
VI. CONCLUSION AND DISCUSSION
In this study, we introduce the concept of the Cyber Mobility Mirror (CMM) and develop a CMM Field Operational System at a real-world intersection as a prototype for enabling Cooperative Driving Automation (CDA). It leverages high-fidelity roadside sensors (e.g., LiDAR) to detect, classify, track, and reconstruct object-level traffic information in real time, which can lay a foundation of environment perception for various kinds of CDA applications in mixed traffic. Testing results prove the feasibility of the CMM concept and demonstrate satisfactory system performance in terms of real-time, high-fidelity traffic surveillance: the overall perception accuracy reaches 96.99% precision and 83.62% recall. Additionally, the average geo-localization error of the system is 0.14 m, and real-time traffic conditions can be displayed at a frequency of 3-4 Hz.
Based on this prototype CMM FOS, several future directions for improving the system performance may include:
• Perception Accuracy: since it is cost-effective to collect roadside training datasets from state-of-the-art autonomous driving simulators, e.g., CARLA [61], we will improve detection accuracy by enhancing the model's transferability, i.e., training in simulation and testing in the real world;
• Perception Range: the current CMM FOS involves only one LiDAR sensor and thus covers only a limited area of the whole intersection. To extend the perception range of the CMM system, we plan to set up several sensors, including both LiDARs and cameras, covering multiple intersections to achieve a corridor-level cooperative perception system;
• Real-time Performance: time consumption can mainly be reduced on the edge-server side, i.e., by optimizing the software in the tracking and localization parts. Besides, upgrading the hardware can also improve the real-time processing speed.

This paper provides a field operational system for a novel roadside-sensor-based high-fidelity traffic surveillance concept, named CMM, which we hope can provide foundations and inspiration for future work. By leveraging the high-fidelity roadside sensing information available from the CMM system, plenty of subsequent CDA applications (e.g., CACC, advanced intersection management, cooperative eco-driving) can be revisited for real-world implementation in the mixed traffic environment.

Kentaro Oguchi received the M.S. degree in computer science from Nagoya University. He is currently a Director at Toyota Motor North America, InfoTech Labs. Oguchi's team is responsible for creating an intelligent connected-vehicle architecture that takes advantage of novel AI technologies to provide real-time services to connected vehicles for smoother and more efficient traffic, intelligent dynamic parking navigation, and vehicle guidance to avoid risks from anomalous drivers. His team also creates technologies to form a vehicular cloud using Vehicle-to-Everything technologies.
Previously, he worked as a senior researcher at Toyota Central R&D Labs in Japan.
Fig. 2. Systematic diagram for the core concept of CMM.
Fig. 3. System structure for the CMM system in the real-world traffic environment.
Fig. 4. The architecture of the CMM field operational prototype system.
Fig. 5. Location and installation of the equipped roadside LiDAR.
D. 3D Object Detection from Roadside LiDAR

1) Roadside Point-cloud Encoder and Decoder: Considering the LiDAR's limited vertical field of view (FOV), it is installed with adjusted rotation angles, including pitch, yaw, and roll, to cover the desired surveillance area, as shown in Fig. 6.
Fig. 6. Description of the initial transformation for LiDAR point cloud data.
Fig. 7. Process for the feature extraction and compression.
Fig. 8. Deep neural network backbone for hidden feature extraction.
Algorithm 1: 3DSORT
Input: instant 3D object detection results Dobj = {D^(i)(x, y, z, w, l, h, θ) | i = 1, 2, ..., N_Dbbx}
Output: multi-object tracking results Tobj = {T^(i) | i = 1, 2, ..., N_Dbbx}
1: Dobj_2d ← {D^(i)(x, y, w, l) | i = 1, 2, ..., N_Dbbx}
2: Tobj_2d ← DeepSORT(Dobj_2d) = {T^(j)_2d(x, y, w, l, id) | j = 1, 2, ..., N_Tbbx}
3: for D^(i) ∈ Dobj do
4:   for T^(j)_2d ∈ Tobj_2d do
5:     if Euclidean distance of (D^(i), T^(j)_2d) ≤ d_o then assign the id of T^(j)_2d to T^(i)
6: return Tobj
Fig. 9. Illustration of onboard settings (structure and communications).
Fig. 10. Structure of the CMM-based co-simulation platform.
Fig. 12. Examples of the CMM FOS testing results from different perspectives (the ego-vehicle is marked by red boxes).
Fig. 14 shows the quantitative analysis results. In total, 455 frames of data are selected across the different driving scenarios, and according to Fig. 14(a), most of the errors (52.7%) fall into the interval [0.1 m, 0.2 m]. Additionally, 62.5% of the localization results have errors within [−0.2 m, 0.2 m].
Fig. 13. Trajectories of different driving scenarios (the trajectories from CMM FOS and ground truth are shown in green and red, respectively).

Fig. 14 subfigures: (a) histogram of the localization error; (b) boxplot of the localization error.
Fig. 14. Localization error analysis between CMM and ground truth.
Fig. 15. Visualization of the latency at different stages in the CMM FOS.
Zhengwei Bai (Student Member, IEEE) received the B.E. and M.S. degrees from Beijing Jiaotong University, Beijing, China, in 2017 and 2020, respectively. He is currently a Ph.D. student in electrical and computer engineering at the University of California at Riverside. His research focuses on computer vision, sensor fusion, cooperative perception, and cooperative driving automation (CDA). He serves as a Review Editor in Urban Transportation Systems and Mobility.

Saswat N. Nayak received the B.Tech degree in electrical engineering from the National Institute of Technology Rourkela, India, in 2018. He served as a Project Associate at the Department of Aerospace Engineering, Indian Institute of Technology Kanpur, India, in 2018-19. He is currently pursuing the Ph.D. degree at the Center for Environmental Research and Technology (CE-CERT), University of California Riverside, USA. His main research interests include vehicle positioning and localization in mixed traffic scenarios, multi-sensor fusion, and connected vehicle applications.

Xuanpeng Zhao received the B.E. degree in electrical engineering from Shanghai Maritime University in 2019 and the M.S. degree in electrical engineering from the University of California at Riverside. He is currently a Ph.D. student in electrical and computer engineering at the University of California at Riverside. His research focuses on cybersecurity and connected and automated vehicle technology.

Guoyuan Wu (Senior Member, IEEE) received his Ph.D. degree in mechanical engineering from the University of California, Berkeley in 2010. He currently holds an Associate Researcher and an Associate Adjunct Professor position at the Bourns College of Engineering Center for Environmental Research & Technology (CE-CERT) and the Department of Electrical & Computer Engineering at the University of California at Riverside. His research focuses on the development and evaluation of sustainable and intelligent transportation system (SITS) technologies, including connected and automated transportation systems (CATS), shared mobility, transportation electrification, optimization and control of vehicles, traffic simulation, and emissions measurement and modeling. Dr. Wu serves as an Associate Editor for several journals, including IEEE Transactions on Intelligent Transportation Systems, SAE International Journal of Connected and Automated Vehicles, and IEEE Open Journal of ITS. He is also a member of the Vehicle-Highway Automation Standing Committee (ACP30) of the Transportation Research Board (TRB), a board member of the Chinese Institute of Engineers Southern California Chapter (CIE-SOCAL), and a member of the Chinese Overseas Transportation Association (COTA). He is a recipient of the Vincent Bendix Automotive Electronics Engineering Award.

Matthew J. Barth (Fellow, IEEE) received the M.S. and Ph.D. degrees in electrical and computer engineering from the University of California at Santa Barbara, in 1985 and 1990, respectively. He is currently the Yeager Families Professor with the College of Engineering, University of California at Riverside, USA, where he also serves as Director of the Center for Environmental Research and Technology. His current research interests include ITS and the environment, transportation/emissions modeling, vehicle activity analysis, advanced navigation techniques, electric vehicle technology, and advanced sensing and control. Dr. Barth has been active in the IEEE Intelligent Transportation Systems Society for many years, serving as a Senior Editor for both the Transactions on ITS and the Transactions on Intelligent Vehicles. He served as IEEE ITSS President for 2014 and 2015 and is currently the IEEE ITSS Vice President of Education.

Xuewei Qi (Member, IEEE) received his Ph.D. degree in electrical and computer engineering from the University of California-Riverside in 2016 and his M.S. degree in engineering from the University of Georgia, USA, in 2013. He is a Principal AI Researcher with Toyota North America Research Labs (Silicon Valley). He was with General Motors as an Artificial Intelligence and Machine Learning Research Scientist, and also worked as a Lead Perception Research Engineer at Aeye.ai. His recent research interests include deep learning, autonomous vehicles, perception and sensor fusion, reinforcement learning, and decision making. He also serves as a member of several standing committees of the Transportation Research Board (TRB).

Yongkang Liu received the Ph.D. and M.S. degrees in electrical engineering from the University of Texas at Dallas in 2021 and 2017, respectively. He is currently a Research Engineer at Toyota Motor North America, InfoTech Labs. His current research interests are focused on in-vehicle systems and advancements in intelligent vehicle technologies.

Emrah Akin Sisbot (Member, IEEE) received the Ph.D. degree in robotics and artificial intelligence from Paul Sabatier University, Toulouse, France, in 2008. He was a Postdoctoral Research Fellow at LAAS-CNRS, Toulouse, France, and at the University of Washington, Seattle. He is currently a Principal Engineer with Toyota Motor North America, InfoTech Labs, Mountain View, CA. His current research interests include real-time intelligent systems, robotics, and human-machine interaction.
where P Ω represents the 3D point cloud data after geofencing; and X and Y are set as [−51.2m, 51.2m]. Considering the calibrated height of the roadside Lidar to be 4.74m, Z is set as [−5.0m, 0m].
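The geofencing crop described above amounts to an axis-aligned box filter on the raw point cloud. A minimal NumPy sketch (the array layout and function name are our assumptions, not the authors' code):

```python
import numpy as np

def geofence(points, x_rng=(-51.2, 51.2), y_rng=(-51.2, 51.2), z_rng=(-5.0, 0.0)):
    """Keep only points inside the region of interest.

    points: (N, 3) array of [x, y, z] coordinates in the sensor frame.
    The default z range reflects a roadside LiDAR mounted about
    4.74 m above the road surface.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = ((x >= x_rng[0]) & (x <= x_rng[1]) &
            (y >= y_rng[0]) & (y <= y_rng[1]) &
            (z >= z_rng[0]) & (z <= z_rng[1]))
    return points[mask]

# Example: two points inside the box, one outside in x, one above the road plane.
pts = np.array([[0.0, 0.0, -2.0],
                [60.0, 0.0, -2.0],
                [10.0, -20.0, -4.9],
                [0.0, 0.0, 1.0]])
roi = geofence(pts)
print(len(roi))  # 2 points survive the crop
```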
TABLE I
TEST PERFORMANCE OF CMM FOS

Ground Truth    TP      FP    Precision    Recall    Miss
1661            1389    43    96.99%       83.62%    16.38%
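As a quick sanity check, the figures in Table I are consistent with the standard definitions precision = TP / (TP + FP), recall = TP / GT, and miss = 1 − recall (variable names here are ours, for illustration):

```python
gt, tp, fp = 1661, 1389, 43

precision = tp / (tp + fp)   # fraction of detections that are correct
recall = tp / gt             # fraction of ground-truth objects detected
miss = 1.0 - recall          # fraction of ground-truth objects missed

print(precision, recall, miss)
```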
ACKNOWLEDGMENT

This research was funded by the Toyota Motor North America InfoTech Labs. The contents of this paper reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein. The contents do not necessarily reflect the official views of Toyota Motor North America.
Image2StyleGAN++: How to Edit the Embedded Images?

Rameen Abdal (KAUST), Yipeng Qin (Cardiff University), Peter Wonka (KAUST, [email protected])

DOI: 10.1109/cvpr42600.2020.00832 · arXiv: 1911.11544

Figure 1: (a) and (b): input images; (c): the "two-face" generated by naively copying the left half from (a) and the right half from (b); (d): the "two-face" generated by our Image2StyleGAN++ framework.

Abstract

We propose Image2StyleGAN++, a flexible image editing framework with many applications. Our framework extends the recent Image2StyleGAN [1] in three ways. First, we introduce noise optimization as a complement to the W + latent space embedding. Our noise optimization can restore high frequency features in images and thus significantly improves the quality of reconstructed images, e.g. a big increase of PSNR from 20 dB to 45 dB. Second, we extend the global W + latent space embedding to enable local embeddings. Third, we combine embedding with activation tensor manipulation to perform high quality local edits along with global semantic edits on images. Such edits motivate various high quality image editing applications, e.g. image reconstruction, image inpainting, image crossover, local style transfer, image editing using scribbles, and attribute level feature transfer. Examples of the edited images are shown across the paper for visual inspection.
Introduction
Recent GANs [18,6] demonstrated that synthetic images can be generated with very high quality. This motivates research into embedding algorithms that embed a given photograph into a GAN latent space. Such embedding algorithms can be used to analyze the limitations of GANs [5], do image inpainting [8,40,38,36], local image editing [41,16], global image transformations such as image morphing and expression transfer [1], and few-shot video generation [35,34].
In this paper, we propose to extend a very recent embedding algorithm, Image2StyleGAN [1]. In particular, we would like to improve this previous algorithm in three aspects. First, we noticed that the embedding quality can be further improved by including Noise space optimization into the embedding framework. The key insight here is that stable Noise space optimization can only be conducted if the optimization is done sequentially with W + space and not jointly. Second, we would like to improve the capabilities of the embedding algorithm to increase the local control over the embedding. One way to improve local control is to include masks with undefined content in the embedding algorithm. The goal of the embedding algorithm should be to find a plausible embedding for everything outside the mask, while filling in reasonable semantic content in the masked pixels. Similarly, we would like to provide the option of approximate embeddings, where the specified pixel colors are only a guide for the embedding. In this way, we aim to achieve high quality embeddings that can be controlled by user scribbles. In the third technical part of the paper, we investigate the combination of the embedding algorithm and direct manipulations of the activation maps (called activation tensors in our paper).
Our main contributions are:
1. We propose Noise space optimization to restore the high frequency features in an image that cannot be reproduced by other latent space optimization of GANs. The resulting images are very faithful reconstructions, reaching up to 45 dB (PSNR) compared to about 20 dB for the previous best results.
2. We propose an extended embedding algorithm into the W + space of StyleGAN that allows for local modifications such as missing regions and locally approximate embeddings.
3. We investigate the combination of embedding and activation tensor manipulation to perform high quality local edits along with global semantic edits on images.
4. We apply our novel framework to multiple image editing and manipulation applications. The results show that the method can be successfully used to develop a state-of-the-art image editing software.
Related Work
Generative Adversarial Networks (GANs) [13,29] are one of the most popular generative models that have been successfully applied to many computer vision applications, e.g. object detection [22], texture synthesis [21,37,31], image-to-image translation [15,43,28,24] and video generation [33,32,35,34]. Backing these applications are the massive improvements on GANs in terms of architecture [18,6,28,15], loss function design [25,2], and regularization [27,14]. On the bright side, such improvements significantly boost the quality of the synthesized images. To date, the two highest quality GANs are StyleGAN [18] and BigGAN [6]. Between them, StyleGAN produces excellent results for unconditional image synthesis tasks, especially on face images; BigGAN produces the best results for conditional image synthesis tasks (e.g. ImageNet [9]). While on the dark side, these improvements make the training of GANs more and more expensive that nowadays it is almost a privilege of wealthy institutions to compete for the best performance. As a result, methods built on pre-trained generators start to attract attention very recently. In the following, we would like to discuss previous work of two such approaches: embedding images into a GAN latent space and the manipulation of GAN activation tensors.
Latent Space Embedding. The embedding of an image into the latent space is a longstanding topic in both machine learning and computer vision. In general, the embedding can be implemented in two ways: i) passing the input image through an encoder neural network (e.g. the Variational Auto-Encoder [20]); ii) optimizing a random initial latent code to match the input image [42,7]. Between them, the first approach dominated for a long time. Although it has an inherent difficulty generalizing beyond the training dataset, it produces higher quality results than the naive latent code optimization methods [42,7]. Recently, however, Abdal et al. [1] obtained excellent embedding results by optimizing the latent codes in an enhanced W + latent space instead of the initial Z latent space. Their method suggests a new direction for various image editing applications and makes the second approach interesting again.
Activation Tensor Manipulation. With fixed neural network weights, the expression power of a generator can be fully utilized by manipulating its activation tensors. Based on this observation, Bau et al. [4] investigated what a GAN can and cannot generate by locating and manipulating relevant neurons in the activation tensors [4,5]. Built on the understanding of how an object is "drawn" by the generator, they further designed a semantic image editing system that can add, remove or change the appearance of an object in an input image [3]. Concurrently, Frühstück et al. [11] investigated the potential of activation tensor manipulation in image blending. Observing that boundary artifacts can be eliminated by cropping and combining activation tensors at early layers of a generator, they proposed an algorithm to create large-scale texture maps of hundreds of megapixels by combining outputs of GANs trained on a lower resolution.
Overview
Our paper is structured as follows. First, we describe an extended version of the Image2StyleGAN [1] embedding algorithm (See Sec. 4). We propose two novel modifications: 1) to enable local edits, we integrate various spatial masks into the optimization framework. Spatial masks enable embeddings of incomplete images with missing values and embeddings of images with approximate color values such as user scribbles. In addition to spatial masks, we explore layer masks that restrict the embedding into a set of selected layers. The early layers of StyleGAN [18] encode content and the later layers control the style of the image. By restricting embeddings into a subset of layers we can better control what attributes of a given image are extracted. 2) to further improve the embedding quality, we optimize for an additional group of variables n that control additive noise maps. These noise maps encode high frequency details and enable embedding with very high reconstruction quality.
Second, we explore multiple operations to directly manipulate activation tensors (see Sec. 5). We mainly explore spatial copying, channel-wise copying, and averaging. Interesting applications can be built by combining multiple embedding steps and direct manipulation steps. As a stepping stone towards building such applications, we describe in Sec. 6 common building blocks that consist of specific settings of the extended optimization algorithm.
Finally, in Sec. 7 we outline multiple applications enabled by Image2StyleGAN++: improved image reconstruction, image crossover, image inpainting, local edits using scribbles, local style transfer, and attribute level feature transfer.
An Extended Embedding Algorithm
We implement our embedding algorithm as a gradient-based optimization that iteratively updates an image starting from some initial latent code. The embedding is performed into two spaces using two groups of variables: the semantically meaningful W+ space and a Noise space N_s encoding high frequency details. The corresponding groups of variables we optimize for are w ∈ W+ and n ∈ N_s. The inputs to the embedding algorithm are target RGB images x and y (they can also be the same image) and up to three spatial masks (M_s, M_m, and M_p). Algorithm 1 is the generic embedding algorithm used in the paper.
Objective Function
Our objective function consists of three different types of loss terms, i.e. the pixel-wise MSE loss, the perceptual loss [17,10], and the style loss [12].
$$L = \lambda_s L_{style}(M_s, G(w,n), y) + \frac{\lambda_{mse1}}{N}\,\|M_m \odot (G(w,n) - x)\|_2^2 + \frac{\lambda_{mse2}}{N}\,\|(1 - M_m) \odot (G(w,n) - y)\|_2^2 + \lambda_p L_{percept}(M_p, G(w,n), x) \tag{1}$$
where M_s, M_m, M_p denote the spatial masks, ⊙ denotes the Hadamard product, G is the StyleGAN generator, n are the Noise space variables, w are the W+ space variables, L_style denotes the style loss computed from the conv3_3 layer of an ImageNet-pretrained VGG-16 network [30], and L_percept is the perceptual loss defined in Image2StyleGAN [1]. Here, we use layers conv1_1, conv1_2, conv2_2, and conv3_3 of VGG-16 for the perceptual loss. Note that the perceptual loss is computed for four layers of the VGG network; therefore, M_p needs to be downsampled to match the resolutions of the corresponding VGG-16 layers in the computation of the loss function.
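As an illustration, the two masked pixel-wise MSE terms of Eq. (1) can be sketched in a few lines of NumPy; the perceptual and style terms, which require a pretrained VGG-16, are omitted, and the function name is our own:

```python
import numpy as np

def masked_mse_terms(gen, x, y, M_m, lam_mse1=1e-5, lam_mse2=1e-5):
    """Sketch of the two masked MSE terms of Eq. (1).

    gen: generated image G(w, n); x, y: target images; M_m: spatial mask
    selecting which pixels are matched against x (inside) vs. y (outside).
    """
    N = gen.size  # number of scalars, as in the 1/N normalization of Eq. (1)
    term_x = lam_mse1 / N * np.sum((M_m * (gen - x)) ** 2)
    term_y = lam_mse2 / N * np.sum(((1 - M_m) * (gen - y)) ** 2)
    return term_x + term_y
```

If the generated image matches x inside the mask and y outside, both terms vanish, which is the fixed point the optimization drives toward.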
Optimization Strategies
Optimization of the variables w ∈ W + and n ∈ N s is not a trivial task. Since only w ∈ W + encodes semantically meaningful information, we need to ensure that as much information as possible is encoded in w, with only high frequency details left to the Noise space.
The first possible approach is the joint optimization of both groups of variables w and n. Fig. 2 (b) shows the result using the perceptual and the pixel-wise MSE loss. We can observe that many details are lost and replaced with high frequency image artifacts; this is due to the fact that the perceptual loss is incompatible with optimizing noise maps. A second approach is therefore to use the pixel-wise MSE loss only (see Fig. 2 (c)). Although the reconstruction is almost perfect, the representation (w, n) is not suitable for image editing tasks. In Fig. 2 (d), we show that too much of the image information is stored in the noise maps by resampling the noise variables n: we would expect to obtain another very good, if slightly noisy, embedding, but instead we obtain a very low quality one. We also show the result of jointly optimizing the variables using the perceptual and pixel-wise MSE loss for w and the pixel-wise MSE loss only for n. Fig. 2 (e) shows that the reconstructed image is not of high perceptual quality, and the PSNR score decreases to 33.3 dB. We also tested these optimizations on other images. Based on our results, we do not recommend using joint optimization.
The second strategy is an alternating optimization of the variables w and n. In Fig. 3, we show the result of optimizing w while keeping n fixed and subsequently optimizing n while keeping w fixed. In this way, most of the information is encoded in w, which leads to a semantically meaningful embedding. Performing another iteration of optimizing w (Fig. 3 (d)) reveals a smoothing effect on the image, and the PSNR drops from 39.5 dB to 20 dB. Subsequent Noise space optimization does not improve the PSNR of the images; hence, repeated alternating optimization does not improve the quality of the image further. In summary, we recommend alternating optimization in which each set of variables is optimized only once: first w, then n.
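The recommended two-phase schedule can be sketched with a toy stand-in for the generator; we assume G(w, n) = w + n so that gradients are available in closed form. This only illustrates the "optimize w once, then n once" idea, not the actual StyleGAN optimization:

```python
import numpy as np

def alternate_embed(x, steps=200, lr=0.1):
    """Toy sketch of the alternating schedule: optimize w with n fixed,
    then n with w fixed. G(w, n) = w + n is a stand-in generator, so the
    gradient of the MSE loss ||w + n - x||^2 is 2 * (w + n - x)."""
    rng = np.random.default_rng(0)
    w = np.zeros_like(x)                      # W+ stand-in: semantic code
    n = rng.normal(size=x.shape) * 0.01       # noise-map stand-in
    for _ in range(steps):                    # phase 1: update w only
        w -= lr * 2 * (w + n - x)
    for _ in range(steps):                    # phase 2: update n only
        n -= lr * 2 * (w + n - x)
    return w, n
```

Because phase 1 runs with a small fixed n, almost all of the reconstruction ends up in w, mirroring the behavior reported in Fig. 3.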
Algorithm 1: Semantic and spatial component embedding in StyleGAN

Input: images x, y ∈ R^{n×m×3}; masks M_s, M_m, M_p; a pre-trained generator G(·, ·); gradient-based optimizer F.
Output: the embedded code (w, n)
1: Initialize the code (w, n) = (w', n')
2: while not converged do
3:   Loss ← L(x, y, M_s, M_m, M_p)
4:   (w, n) ← (w, n) − ηF(∇_{w,n} L, w, n)
5: end while
Activation Tensor Manipulations
Due to the progressive architecture of StyleGAN, one can perform meaningful tensor operations at different layers of the network [11, 4]. We consider the following editing operations: spatial copying, averaging, and channel-wise copying. We define the activation tensor A^I_l as the output of the l-th layer of the network initialized with the variables (w, n) of the embedded image I. These tensors are stored as A^I_l ∈ R^{W_l × H_l × C_l}.

Figure 5: First and second column: input image; Third column: image generated by naively copying the left half from the first image and the right half from the second image; Fourth column: image generated by our extended embedding algorithm.
Given two such tensors A^{I_1}_l and B^{I_2}_l, copying replaces high-dimensional pixels ∈ R^{1×1×C_l} of A^{I_1}_l with the corresponding pixels of B^{I_2}_l. Averaging forms a linear combination λA^{I_1}_l + (1 − λ)B^{I_2}_l. Channel-wise copying creates a new tensor by copying selected channels from A^{I_1}_l and the remaining channels from B^{I_2}_l. In our tests, spatial copying works slightly better than averaging and channel-wise copying.
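As we understand them, the three operations reduce to simple array manipulations on H × W × C activation tensors; a NumPy sketch with our own function names:

```python
import numpy as np

def spatial_copy(A, B, mask):
    """Replace the pixels of A selected by the H x W binary mask with the
    corresponding pixels of B."""
    m = mask[..., None].astype(A.dtype)
    return m * B + (1 - m) * A

def average(A, B, lam):
    """Linear combination lam * A + (1 - lam) * B."""
    return lam * A + (1 - lam) * B

def channel_copy(A, B, channels):
    """Selected channels taken from A, the remaining channels from B."""
    out = B.copy()
    out[..., channels] = A[..., channels]
    return out
```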
Frequently Used Building Blocks
We identify four fundamental building blocks that are used in multiple applications described in Sec. 7. While terms of the loss function can be controlled by spatial masks (M s , M m , M p ), we also use binary masks w m and n m to indicate what subset of variables should be optimized during an optimization process. For example, we might set w m to only update the w variables corresponding to the first k layers. In general, w m and n m contain 1s for variables that should be updated and 0s for variables that should remain constant. In addition to the listed parameters, all building blocks need initial variable values w ini and n ini . For all experiments, we use a 32GB Nvidia V100 GPU.
Masked W+ optimization (W_l): This function optimizes w ∈ W+, leaving n constant. We use the following parameters in the loss function L of Eq. 1: λ_s = 0, λ_mse1 = 10^{-5}, λ_mse2 = 0, λ_p = 10^{-5}. We denote the function as:

$$W_l(M_p, M_m, w_m, w_{ini}, n_{ini}, x) = \arg\min_{w_m} \; \lambda_p L_{percept}(M_p, G(w,n), x) + \frac{\lambda_{mse1}}{N}\,\|M_m \odot (G(w,n) - x)\|_2^2 \tag{2}$$
where w_m is a mask for the W+ space. We either use Adam [19] with learning rate 0.01 or gradient descent with learning rate 0.8, depending on the application. Some common settings for Adam are: β_1 = 0.9, β_2 = 0.999, and ε = 10^{-8}. In Sec. 7, we use Adam unless specified otherwise.
Masked Noise Optimization (M^k_n): This function optimizes n ∈ N_s, leaving w constant. The Noise space N_s has dimensions R^{4×4}, ..., R^{1024×1024}; in total there are 18 noise maps, two for each resolution. We set the following parameters in the loss function L of Eq. 1: λ_s = 0, λ_mse1 = 10^{-5}, λ_mse2 = 10^{-5}, λ_p = 0. We denote the function as:

$$M^k_n(M, w_{ini}, n_{ini}, x, y) = \arg\min_{n} \; \frac{\lambda_{mse2}}{N}\,\|M_m \odot (G(w,n) - x)\|_2^2 + \frac{\lambda_{mse1}}{N}\,\|(1 - M_m) \odot (G(w,n) - y)\|_2^2 \tag{3}$$
For this optimization, we use Adam with learning rate 5, β_1 = 0.9, β_2 = 0.999, and ε = 10^{-8}. Note that the learning rate is very high.
Masked Style Transfer (M_st): This function optimizes w to achieve a given target style defined by a style image y. We set the following parameters in the loss function L of Eq. 1: λ_s = 5 × 10^{-7}, λ_mse1 = 0, λ_mse2 = 0, λ_p = 0. We denote the function as:

$$M_{st}(M_s, w_{ini}, n_{ini}, y) = \arg\min_{w} \; \lambda_s L_{style}(M_s, G(w,n), y) \tag{4}$$

where w is the whole W+ space. For this optimization, we use Adam with learning rate 0.01, β_1 = 0.9, β_2 = 0.999, and ε = 10^{-8}.
Masked activation tensor operation (I att ): This function describes an activation tensor operation. Here, we represent the generator G(w, n, t) as a function of W + space variable w, Noise space variable n, and input tensor t. The operation is represented by:
$$I_{att}(M_1, M_2, w, n_{ini}, l) = G(w, n, M_1 \odot A^{I_1}_l + (1 - M_2) \odot B^{I_2}_l) \tag{5}$$

where A^{I_1}_l and B^{I_2}_l are the activations corresponding to images I_1 and I_2 at layer l, and M_1 and M_2 are the masks, downsampled using nearest-neighbour interpolation to match the H_l × W_l resolution of the activation tensors.
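A minimal NumPy sketch of the mask handling in Eq. (5), assuming integer-ratio nearest-neighbour downsampling (function names are our own):

```python
import numpy as np

def nearest_resize(mask, h, w):
    """Nearest-neighbour resampling of a full-resolution mask down to the
    H_l x W_l resolution of a layer's activation tensor."""
    ys = np.arange(h) * mask.shape[0] // h
    xs = np.arange(w) * mask.shape[1] // w
    return mask[np.ix_(ys, xs)]

def blend_activations(A, B, M1, M2):
    """The masked blend fed back into the generator in Eq. (5):
    M1 (.) A + (1 - M2) (.) B, with masks matched to the layer resolution."""
    h, w = A.shape[:2]
    m1 = nearest_resize(M1, h, w)[..., None]
    m2 = nearest_resize(M2, h, w)[..., None]
    return m1 * A + (1 - m2) * B
```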
Applications
In the following we describe various applications enabled by our framework.
Algorithm 2: Improved Image Reconstruction

Input: image I_m ∈ R^{n×m×3}
Output: the embedded code (w_out, n_out)
1: (w_ini, n_ini) ← initialize()
2: w_out = W_l(1, 1, 1, w_ini, n_ini, I_m)
3: n_out = M^k_n(1, w_out, n_ini, I_m, 0)

Figure 6: First column: original image; Second column: defective image; Third column: inpainted image via partial convolutions [23]; Fourth column: inpainted image using our method.
Improved Image Reconstruction
As shown in Fig. 4, any image can be embedded by optimizing for variables w ∈ W+ and n ∈ N_s. Here we describe the details of this embedding (see Alg. 2). First, we initialize: w_ini is the mean face latent code [18] or a random code sampled from U[−1, 1], depending on whether the image to embed is a face or a non-face, and n_ini is sampled from a standard normal distribution N(0, I) [18]. Second, we apply masked W+ optimization (W_l) without using spatial masks or masking variables; that is, all masks are set to 1. I_m is the target image we try to reconstruct. Third, we perform masked noise optimization (M^k_n), again without making use of masks. The reconstructed images are of high fidelity. The PSNR score range of 39 to 45 dB provides an insight into how expressive the Noise space of StyleGAN is. Unlike the W+ space, the Noise space is used for spatial reconstruction of high frequency features. We use 5000 iterations of W_l and 3000 iterations of M^k_n to obtain PSNR scores of 44 to 45 dB. Additional iterations did not improve the results in our tests.
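The initialization step above can be sketched as follows; the zero vector standing in for the mean face latent code is our own placeholder, and the (18, 512) shape assumes the 18-layer W+ code of the 1024 × 1024 StyleGAN:

```python
import numpy as np

def init_codes(is_face, rng=np.random.default_rng(0)):
    """Sketch of the initialization: W+ starts at the mean face latent
    (zero placeholder here, an assumption) or uniform noise for non-faces;
    the 18 noise maps are standard normal, two per resolution from 4x4 up
    to 1024x1024, matching the Noise space description in Sec. 6."""
    if is_face:
        w = np.zeros((18, 512))             # stand-in for the mean face code
    else:
        w = rng.uniform(-1, 1, (18, 512))
    sizes = [4, 8, 16, 32, 64, 128, 256, 512, 1024]
    n = [rng.standard_normal((s, s)) for s in sizes for _ in range(2)]
    return w, n
```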
Algorithm 3: Image Crossover

Input: images I_1, I_2 ∈ R^{n×m×3}; mask M_blur
Output: the embedded code (w_out, n_out)
1: (w*, n_ini) ← initialize()
2: w_out = W_l(M_blur, M_blur, 1, w*, n_ini, I_1) + W_l(1 − M_blur, 1 − M_blur, 1, w*, n_ini, I_2)
3: n_out = M^k_n(M_blur, w_out, n_ini, I_1, I_2)
Image Crossover
We define the image crossover operation as copying parts from a source image y into a target image x and blending the boundaries. As initialization, we embed the target image x to obtain the W + code w * . We then perform masked W + optimization (W l ) with blurred masks M blur to embed the regions in x and y that contribute to the final image. Blurred masks are obtained by convolution of the binary mask with a Gaussian filter of suitable size. Then, we perform noise optimization. Details are provided in Alg. 3.
Other notations are the same as described in Sec. 7.1. Fig. 5 and Fig. 1 show example results; the reconstruction quality of the images is quite high. For the experiments, we use 1000 iterations of the masked W+ optimization W_l and 1000 iterations of M^k_n.
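The blurred masks M_blur can be approximated without an explicit Gaussian kernel by repeated box filtering, which feathers the boundary between the two source images; this is only a sketch of the mask softening described above, and the iteration count is our own choice:

```python
import numpy as np

def blur_mask(mask, iters=10):
    """Soften a binary mask for blending: repeated 3x3 box filtering
    approximates a Gaussian blur, producing smooth values in [0, 1]
    around the mask boundary."""
    m = mask.astype(float)
    for _ in range(iters):
        p = np.pad(m, 1, mode="edge")                    # replicate borders
        m = sum(p[i:i + m.shape[0], j:j + m.shape[1]]    # 3x3 neighborhood
                for i in range(3) for j in range(3)) / 9.0
    return m
```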
Image Inpainting

Algorithm 4: Image Inpainting

Input: image I_def ∈ R^{n×m×3}; masks M, M_blur+
Output: the embedded code (w_out, n_out)
1: (w_ini, n_ini) ← initialize()
2: w_out = W_l(1 − M, 1 − M, w_m, w_ini, n_ini, I_def)
3: n_out = M^k_n(1 − M_blur+, w_out, n_ini, I_def, G(w_out))
In order to perform a semantically meaningful inpainting, we embed into the early layers of the W+ space to predict the missing content and into the later layers to maintain color consistency. We define the image x as a defective image (I_def). Also, we use the mask w_m whose value is 1 for the first 9 layers (1 to 9) and the 17th and 18th layers of W+. As an initialization, we set w_ini to the mean face latent code [18]. We consider M as the mask describing the defective region. Using these parameters, we perform the masked W+ optimization W_l. Then we perform the masked noise optimization M^k_n using M_blur+, which is a slightly larger blurred mask used for blending; here λ_mse2 is taken to be 10^{-4}. Other notations are the same as described in Sec. 7.1. Alg. 4 shows the details of the algorithm. We perform 200 steps of the gradient descent optimizer for the masked W+ optimization W_l and 1000 iterations of the masked noise optimization M^k_n. Fig. 6 shows example inpainting results. The results are comparable with the current state of the art, partial convolution [23]. The partial convolution method frequently suffers from regular artifacts (see Fig. 6, third column); these artifacts are not present in our method. In Fig. 7 we show different inpainting solutions for the same image, achieved by using different initializations of w_ini: the mean face latent code plus an offset sampled independently from a uniform distribution U[−0.4, 0.4]. The initialization mainly affects layers 10 to 16, which are not altered during optimization. Multiple inpainting solutions cannot be computed with existing state-of-the-art methods.
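The layer mask w_m used here can be built as a simple binary matrix over the W+ variables; a sketch assuming the (18, 512) code shape:

```python
import numpy as np

def make_layer_mask(layers, n_layers=18, dim=512):
    """Binary mask over the W+ variables: 1 for layers that are updated
    during optimization, 0 for layers held fixed. Layers are 1-indexed,
    as in the text."""
    m = np.zeros((n_layers, dim))
    for l in layers:
        m[l - 1] = 1.0
    return m

# Inpainting setting from the text: layers 1-9 plus 17 and 18 are optimized.
w_m = make_layer_mask(list(range(1, 10)) + [17, 18])
```

Because layers 10 to 16 stay masked out, their initialization survives the optimization, which is what makes the multiple inpainting solutions of Fig. 7 possible.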
Local Edits using Scribbles
Another application is performing semantic local edits guided by user scribbles. We show that simple scribbles can be converted to photo-realistic edits by embedding into the first 4 to 6 layers of W+ (see Fig. 8). This enables us to do local edits without training a network. We define an image x as a scribble image (I_scr). Here, we also use the mask w_m whose value is 1 for the first 4, 5, or 6 layers of the W+ space. As initialization, we set w_ini to w*, which is the W+ code of the image without the scribble. We perform masked W+ optimization using these parameters. Then we perform masked noise optimization M^k_n using M_blur. Other notations are the same as described in Sec. 7.1. Alg. 5 shows the details of the algorithm. We perform 1000 iterations of the masked W+ optimization W_l using Adam with a learning rate of 0.1 and then 1000 steps of the masked noise optimization M^k_n to output the final image.

Algorithm 5: Local Edits using Scribble

Input: image I_scr ∈ R^{n×m×3}; masks M_blur
Output: the embedded code (w_out, n_out)
1: (w*, n_ini) ← initialize()
2: w_out = W_l(1, 1, w_m, w*, n_ini, I_scr) + λ‖w* − w_out‖_2
3: n_out = M^k_n(M_blur, w_out, n_ini, I_scr, G(w_out))

Local Style Transfer

Local style transfer modifies a region in the input image x to transform it to the style defined by a style reference image. First, we embed the image in W+ space to obtain the code w*. Then we apply the masked W+ optimization W_l along with masked style transfer M_st using the blurred mask M_blur. Finally, we perform the masked noise optimization M^k_n to output the final image. Alg. 6 shows the details of the algorithm. Results for the application are shown in Fig. 9. We perform 1000 steps of W_l along with M_st and then 1000 iterations of M^k_n.

Algorithm 6: Local Style Transfer

Input: images I_1, I_2 ∈ R^{n×m×3}; masks M_blur
Output: the embedded code (w_out, n_out)
1: (w*, n_ini) ← initialize()
2: w_out = W_l(M_blur, M_blur, 1, w*, n_ini, I_1) + M_st(1 − M_blur, w*, n_ini, I_2)
3: n_out = M^k_n(M_blur, w_out, n_ini, I_1, G(w_out))
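The style term driving M_st follows the Gram-matrix formulation of [12], computed on VGG features; a masked NumPy sketch (our own simplification, which restricts spatial locations before the Gram computation):

```python
import numpy as np

def gram(F):
    """Gram matrix of an H x W x C feature map: channel correlations
    averaged over spatial locations."""
    C = F.shape[-1]
    X = F.reshape(-1, C)
    return X.T @ X / X.shape[0]

def masked_style_loss(feat_gen, feat_style, mask):
    """Sketch of a masked style term: features would come from a VGG-16
    layer (e.g. conv3_3); the spatial mask zeroes out locations that
    should not contribute to the Gram statistics."""
    m = mask[..., None]
    return np.mean((gram(feat_gen * m) - gram(feat_style * m)) ** 2)
```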
Attribute level feature transfer
We extend our work to another application using tensor operations on images embedded in the W+ space. In this application we manipulate the tensors at the output of the 4th layer of StyleGAN. We feed the generator the latent codes (w, n) of two images I_1 and I_2 and store the output of the fourth layer as intermediate activation tensors A^{I_1}_l and B^{I_2}_l. A mask M_s specifies which values to copy from A^{I_1}_l and which to copy from B^{I_2}_l. The operation can be denoted by I_att(M_s, M_s, w, n_ini, 4). In Fig. 10, we show results of the operation. A design parameter of this application is which style code to use for the remaining layers; in the shown example, the first image provides the style. Notice that in column 2 of Fig. 10, in spite of the different alignment of the two faces and objects, the images are blended well. We also show results of blending for the LSUN-car and LSUN-bedroom datasets. Hence, unlike global edits such as image morphing, style transfer, and expression transfer [1], here different parts of the image can be edited independently and the edits are localized. Moreover, a video in the supplementary material shows that other semantic edits, e.g. masked image morphing, can be performed on such images by linear interpolation of the W+ code of one image at a time.
Conclusion
We proposed Image2StyleGAN++, a powerful image editing framework built on the recent Image2StyleGAN. Our framework is motivated by three key insights: first, high frequency image features are captured by the additive noise maps used in StyleGAN, which helps to improve the quality of reconstructed images; second, local edits are enabled by including masks in the embedding algorithm, which greatly increases the capability of the proposed framework; third, a variety of applications can be created by combining embedding with activation tensor manipulation. From the high quality results presented in this paper, it can be concluded that our Image2StyleGAN++ is a promising framework for general image editing. For future work, in addition to static images, we aim to extend our framework to process and edit videos.
Additional Results
Image Inpainting
To evaluate the results quantitatively, we use three standard metrics, SSIM, MSE loss and PSNR score to compare our method with the state-of-the-art Partial Convolution [23] and Gated Convolution [39] methods.
As different methods produce outputs at different resolutions, we bi-linearly interpolate the output images to test the methods at three resolutions 1024 × 1024, 512 × 512 and 256 × 256 respectively. We use 7 masks (Fig. 11) and 10 ground truth images (Fig. 12) to create 10 defective images (i.e. images with missing regions) for the evaluation. These masks and images are chosen to make the inpainting a challenging task: i) the masks are selected to contain very large missing regions, up to half of an image; ii) the ground truth images are selected to be of high variety that cover different genders, ages, races, etc. Table 1 shows the quantitative comparison results. It can be observed that our method outperforms both Partial Convolution [23] and Gated Convolution [39] across all the metrics. More importantly, the advantages of our method can be easily verified by visual inspection. As Fig. 13 and Fig. 14 show, although previous methods (e.g. Partial convolution) perform well when the missing region is small, both of them struggle when the missing region covers a significant area (e.g. half) of the image. Specifically, Partial Convolution fails when the mask covers half of the input image (Fig. 13); due to the relatively small resolution (256 × 256) model, Gated Convolution can fill in the details of large missing regions, but of much lower quality compared to the proposed method (Fig. 14).
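For reference, the MSE and PSNR metrics can be computed with the standard definitions for 8-bit images; this is a sketch of the textbook formulas, as the paper does not specify its exact averaging convention:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(a, b)
    return np.inf if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```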
In addition, our method is flexible and can generate different inpainting results (Fig. 15), which cannot be fulfilled by any of the above-mentioned methods. All our inpainting results are of high perceptual quality.
Limitations. Although better than the two state-of-the-art methods, our inpainting results still leave room for improvement. For example, in Fig. 13 the lighting condition (first row), age (second row), and skin color (third and last row) are not learned well. We propose to address these issues in future work.

Figure 11: Masks used in the quantitative evaluation of image inpainting methods.
Image Crossover
To further evaluate the expressibility of the Noise space, we show additional results on image crossover in Fig. 16. We show that the space is able to crossover parts of images from different races (see second and third column).
Local Edits using Scribbles
In order to evaluate the quality of the local edits using scribbles, we evaluate the face attribute scores [26] on edited images. We perform some common edits of adding baldness, adding a beard, smoothing wrinkles and adding a moustache on the face images to evaluate how photo-realistic the edited images are. Table 2 shows the average change in the confidence of the classifier after a particular edit is performed. We also show additional results of the Local edits in Fig. 17. For our method, one remaining challenge is that sometimes the edited region is overly smooth (e.g. first row).
Attribute Level Feature Transfer
We show a video in which attribute interpolation can be performed on the base image by copying the content from an attribute image. Here different attributes can be taken from different images embedded in the W + space and applied to the base image. These attributes can be independently interpolated and the results show that the blending quality of the framework is quite high. We also show additional results on LSUN Cars and LSUN Bedrooms in the video (also see Fig. 18). Notice that in the LSUN bedrooms, for instance, the style and the position of the beds can be customized without changing the room layout.
In order to evaluate the perceptual quality of attribute level feature transfer, we compute the perceptual length [18] between images produced by independently interpolated attributes (called masked interpolation). StyleGAN [18] showed that this metric evaluates how perceptually smooth the transitions are. Here, perceptual length measures the changes produced by feature transfer, which are especially affected by the boundary of the blending: the boundary may produce additional artifacts or introduce additional features, which is clearly undesirable.
We compute the perceptual length across 1000 samples using the two masks shown in Fig. 11 (first and seventh column). In Table 3 we show the perceptual length (for both masked and non-masked interpolation) on FFHQ, LSUN Cars, and LSUN Bedrooms pretrained StyleGAN. We compare these scores because non-masked interpolation gives an upper bound on the perceptual length for a model (in this case there is no constraint on what features of the face should change). Since a particular area of the image is interpolated rather than the whole image, our results on the FFHQ pretrained StyleGAN produce a lower score than non-masked interpolation. The low perceptual length score suggests a less drastic change. Hence, we conclude that the output images have perceptual quality comparable with non-masked interpolation.
LSUN Cars and LSUN Bedrooms produce relatively higher perceptual length scores. We attribute this to the fact that the images in these datasets can translate and the position of the features is not fixed; hence, two images produced at random might have different orientations, in which case the blending does not work as well.
Channel-wise feature average
We perform another operation denoted by I_att(1, 0, w_x, n_ini, 6), where w_x can be the W+ code of image I_1 or I_2. In Fig. 19, we show the result of this operation initialized with the two different W+ codes. The resulting faces contain the characteristics of both faces, and the styles are modulated by the input W+ codes.

Table 3: Perceptual length evaluation for masked and non-masked interpolation.
Figure 2: Joint optimization. (a): target image; (b): image embedded by jointly optimizing w and n using perceptual and pixel-wise MSE loss; (c): image embedded by jointly optimizing w and n using the pixel-wise MSE loss only; (d): the result of the previous column with n resampled; (e): image embedded by jointly optimizing w and n using perceptual and pixel-wise MSE loss for w and pixel-wise MSE loss for n.

Figure 3: Alternating optimization. (a): target image; (b): image embedded by optimizing w only; (c): taking w from the previous column and subsequently optimizing n only; (d): taking the result from the previous column and optimizing w only.

Figure 4: First column: original image; Second column: image embedded in W+ space (PSNR 19 to 22 dB); Third column: image embedded in W+ and Noise space (PSNR 39 to 45 dB).
Figure 7: Inpainting using different initializations w_ini.

Figure 8: Column 1 & 4: base image; Column 2 & 5: scribbled image; Column 3 & 6: result of local edits.

Figure 9: First column: base image; Second column: mask area; Third column: style image; Fourth column: local style transfer result.

Figure 10: First column: base image; Second column: attribute image; Third column: mask area; Fourth column: image generated via attribute level feature transfer.

Figure 12: Images used in the quantitative evaluation of image inpainting methods.

Figure 13: First column: original image; Second column: defective image; Third column: inpainted image via Partial Convolutions [23]; Fourth column: inpainted image using our method.

Figure 14: First column: original image; Second column: defective image; Third column: inpainted image via Gated Convolutions [39]; Fourth column: inpainted image using our method.

Figure 16: (a) and (b): input images; (c): the "two-face" generated by naively copying the left half from (a) and the right half from (b); (d): the "two-face" generated by our Image2StyleGAN++ framework.

Figure 17: Column 1 & 4: base image; Column 2 & 5: scribbled image; Column 3 & 6: result of local edits.

Figure 18: First column: base image; Second column: mask area; Third column: attribute image; Fourth to Eighth column: image generated via attribute level feature transfer and masked interpolation.

Figure 19: First column: first image; Second column: second image; Third column: feature-averaged image using the W+ code of the first image; Fourth column: feature-averaged image using the W+ code of the second image.
Table 1: Evaluation results of image inpainting methods using SSIM, MSE and PSNR score.

Method                   | 1024 × 1024           | 512 × 512             | 256 × 256
                         | SSIM   MSE     PSNR   | SSIM   MSE     PSNR   | SSIM   MSE    PSNR
Partial Convolution [23] | 0.8957 199.39  21.83  | 0.8865 98.83   21.92  | 0.8789 48.39  22.17
Gated Convolution [39]   | 0.8693 246.46  19.65  | 0.8568 121.98  19.77  | 0.8295 61.82  19.41
Ours                     | 0.9176 180.69  22.35  | 0.9104 89.25   22.48  | 0.9009 43.85  22.65
Table 2: Changes in confidence scores of the classifier after user edits.

Figure 15: Inpainting results using different w_ini initializations.

Table 3: Perceptual length evaluation for masked and non-masked interpolation.

Pretrained model | Interpolation | Perceptual length (full) | Perceptual length (end)
FFHQ             | Non-Masked    | 227.1                    | 191.1
FFHQ             | Masked        | 112.1                    | 89.8
LSUN Cars        | Non-Masked    | 12388.1                  | 6038.5
LSUN Cars        | Masked        | 4742.3                   | 3057.9
LSUN Bedrooms    | Non-Masked    | 2521.1                   | 1268.7
LSUN Bedrooms    | Masked        | 1629.8                   | 938.1
[1] R. Abdal, Y. Qin, and P. Wonka. Image2StyleGAN: How to embed images into the StyleGAN latent space? In Proceedings of the IEEE International Conference on Computer Vision, pages 4432-4441, 2019.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 214-223, 2017.
[3] D. Bau, H. Strobelt, W. Peebles, J. Wulff, B. Zhou, J. Zhu, and A. Torralba. Semantic photo manipulation with a generative image prior. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH), 38(4), 2019.
[4] D. Bau, J.-Y. Zhu, H. Strobelt, B. Zhou, J. B. Tenenbaum, W. T. Freeman, and A. Torralba. GAN dissection: Visualizing and understanding generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
[5] D. Bau, J.-Y. Zhu, J. Wulff, W. Peebles, H. Strobelt, B. Zhou, and A. Torralba. Seeing what a GAN cannot generate. In Proceedings of the International Conference on Computer Vision (ICCV), 2019.
[6] A. Brock, J. Donahue, and K. Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019.
[7] A. Creswell and A. A. Bharath. Inverting the generator of a generative adversarial network. IEEE Transactions on Neural Networks and Learning Systems, 2018.
[8] U. Demir and G. Unal. Patch-based image inpainting with generative adversarial networks. arXiv preprint arXiv:1803.07422, 2018.
[9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[10] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In Advances in Neural Information Processing Systems, pages 658-666, 2016.
[11] A. Frühstück, I. Alhashim, and P. Wonka. TileGAN. ACM Transactions on Graphics, 38(4):111, July 2019.
[12] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414-2423, 2016.
[13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
[14] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pages 5767-5777, 2017.
[15] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[16] Y. Jo and J. Park. SC-FEGAN: Face editing generative adversarial network with user's sketch and color. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
[17] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.
[18] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. arXiv preprint arXiv:1812.04948, 2018.
[19] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. 2014.
[20] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[21] C. Li and M. Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In Computer Vision - ECCV 2016, 14th European Conference, Amsterdam, The Netherlands, Proceedings, Part III, 2016.
Perceptual generative adversarial networks for small object detection. J Li, X Liang, Y Wei, T Xu, J Feng, S Yan, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). J. Li, X. Liang, Y. Wei, T. Xu, J. Feng, and S. Yan. Perceptual generative adversarial networks for small object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. 2
Image inpainting for irregular holes using partial convolutions. G Liu, F A Reda, K J Shih, T.-C Wang, A Tao, B Catanzaro, Lecture Notes in Computer Science. 1213G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using par- tial convolutions. Lecture Notes in Computer Science, page 89105, 2018. 6, 7, 11, 12, 13
Few-shot unsueprvised image-to-image translation. M.-Y Liu, X Huang, A Mallya, T Karras, T Aila, J Lehtinen, J Kautz, arxivM.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehti- nen, and J. Kautz. Few-shot unsueprvised image-to-image translation. In arxiv, 2019. 2
Least squares generative adversarial networks. X Mao, Q Li, H Xie, R Y Lau, Z Wang, S P Smolley, IEEE International Conference on Computer Vision (ICCV). X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial networks. 2017 IEEE International Conference on Computer Vision (ICCV), Oct 2017. 2
Microsoft azure face. Microsoft, Microsoft. Microsoft azure face. https: //azure.microsoft.com/en-us/services/ cognitive-services/face/. 11
Spectral normalization for generative adversarial networks. T Miyato, T Kataoka, M Koyama, Y Yoshida, International Conference on Learning Representations. T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. In Inter- national Conference on Learning Representations, 2018. 2
Semantic image synthesis with spatially-adaptive normalization. T Park, M.-Y Liu, T.-C Wang, J.-Y Zhu, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionT. Park, M.-Y. Liu, T.-C. Wang, and J.-Y. Zhu. Semantic im- age synthesis with spatially-adaptive normalization. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. 2
Unsupervised representation learning with deep convolutional generative adversarial networks. A Radford, L Metz, S Chintala, arXiv:1511.06434arXiv preprintA. Radford, L. Metz, and S. Chintala. Unsupervised repre- sentation learning with deep convolutional generative adver- sarial networks. arXiv preprint arXiv:1511.06434, 2015. 2
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. 2014. 4
High quality facial surface and texture synthesis via generative adversarial networks. R Slossberg, G Shamai, R Kimmel, European Conference on Computer Vision. SpringerR. Slossberg, G. Shamai, and R. Kimmel. High quality facial surface and texture synthesis via generative adversarial net- works. In European Conference on Computer Vision, pages 498-513. Springer, 2018. 2
Mocogan: Decomposing motion and content for video generation. S Tulyakov, M.-Y Liu, X Yang, J Kautz, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). S. Tulyakov, M.-Y. Liu, X. Yang, and J. Kautz. Moco- gan: Decomposing motion and content for video generation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 2
Generating videos with scene dynamics. C Vondrick, H Pirsiavash, A Torralba, Advances in Neural Information Processing Systems. 29C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In Advances in Neural Infor- mation Processing Systems 29. 2016. 2
Few-shot video-to-video synthesis. T.-C Wang, M.-Y Liu, A Tao, G Liu, J Kautz, B Catanzaro, arXiv:1910.127131arXiv preprintT.-C. Wang, M.-Y. Liu, A. Tao, G. Liu, J. Kautz, and B. Catanzaro. Few-shot video-to-video synthesis. arXiv preprint arXiv:1910.12713, 2019. 1, 2
Video-to-video synthesis. T.-C Wang, M.-Y Liu, J.-Y Zhu, G Liu, A Tao, J Kautz, B Catanzaro, Advances in Neural Information Processing Systems (NeurIPS). T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, G. Liu, A. Tao, J. Kautz, and B. Catanzaro. Video-to-video synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 1, 2
Detecting overfitting of deep generative networks via latent recovery. R Webster, J Rabin, L Simon, F Jurie, R. Webster, J. Rabin, L. Simon, and F. Jurie. Detecting over- fitting of deep generative networks via latent recovery. 2019. 1
Texturegan: Controlling deep image synthesis with texture patches. W Xian, P Sangkloy, V Agrawal, A Raj, J Lu, C Fang, F Yu, J Hays, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). W. Xian, P. Sangkloy, V. Agrawal, A. Raj, J. Lu, C. Fang, F. Yu, and J. Hays. Texturegan: Controlling deep image syn- thesis with texture patches. In The IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), June 2018. 2
Freeform image inpainting with gated convolution. J Yu, Z Lin, J Yang, X Shen, X Lu, T Huang, J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. Huang. Free- form image inpainting with gated convolution. 2018. 1
Free-form image inpainting with gated convolution. J Yu, Z Lin, J Yang, X Shen, X Lu, T S Huang, arXiv:1806.035891214arXiv preprintJ. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Free-form image inpainting with gated convolution. arXiv preprint arXiv:1806.03589, 2018. 11, 12, 14
Generative image inpainting with contextual attention. J Yu, Z Lin, J Yang, X Shen, X Lu, T S Huang, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionJ. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Gen- erative image inpainting with contextual attention. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5505-5514, 2018. 1
Generative visual manipulation on the natural image manifold. J.-Y Zhu, P Krähenbühl, E Shechtman, A A Efros, Proceedings of European Conference on Computer Vision (ECCV). European Conference on Computer Vision (ECCV)J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image mani- fold. In Proceedings of European Conference on Computer Vision (ECCV), 2016. 1
Generative visual manipulation on the natural image manifold. J.-Y Zhu, P Krhenbhl, E Shechtman, A A Efros, Lecture Notes in Computer Science. 2597613J.-Y. Zhu, P. Krhenbhl, E. Shechtman, and A. A. Efros. Gen- erative visual manipulation on the natural image manifold. Lecture Notes in Computer Science, page 597613, 2016. 2
Unpaired imageto-image translation using cycle-consistent adversarial networkss. J.-Y Zhu, T Park, P Isola, A A Efros, Computer Vision (ICCV. J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image- to-image translation using cycle-consistent adversarial net- workss. In Computer Vision (ICCV), 2017 IEEE Interna- tional Conference on, 2017. 2
| [] |
[
"View-Invariant Probabilistic Embedding for Human Pose",
"View-Invariant Probabilistic Embedding for Human Pose"
] | [
"Jennifer J Sun \nGoogle Research\n\n",
"Jiaping Zhao \nGoogle Research\n\n",
"Liang-Chieh Chen \nGoogle Research\n\n",
"Florian Schroff \nGoogle Research\n\n",
"Hartwig Adam \nGoogle Research\n\n",
"Ting Liu \nGoogle Research\n\n",
"Caltech \nGoogle Research\n\n"
] | [
"Google Research\n",
"Google Research\n",
"Google Research\n",
"Google Research\n",
"Google Research\n",
"Google Research\n",
"Google Research\n"
] | [] | Depictions of similar human body configurations can vary with changing viewpoints. Using only 2D information, we would like to enable vision algorithms to recognize similarity in human body poses across multiple views. This ability is useful for analyzing body movements and human behaviors in images and videos. In this paper, we propose an approach for learning a compact view-invariant embedding space from 2D joint keypoints alone, without explicitly predicting 3D poses. Since 2D poses are projected from 3D space, they have an inherent ambiguity, which is difficult to represent through a deterministic mapping. Hence, we use probabilistic embeddings to model this input uncertainty. Experimental results show that our embedding model achieves higher accuracy when retrieving similar poses across different camera views, in comparison with 2D-to-3D pose lifting models. The results also suggest that our model is able to generalize across datasets, and our embedding variance correlates with input pose ambiguity. | 10.1007/978-3-030-58558-7_4 | [
"https://arxiv.org/pdf/1912.01001v1.pdf"
] | 208,527,516 | 1912.01001 | ca573091ba2fe5f7d834b133168406d427b62342 |
View-Invariant Probabilistic Embedding for Human Pose
Jennifer J Sun
Google Research
Jiaping Zhao
Google Research
Liang-Chieh Chen
Google Research
Florian Schroff
Google Research
Hartwig Adam
Google Research
Ting Liu
Google Research
Caltech
Google Research
View-Invariant Probabilistic Embedding for Human Pose
Depictions of similar human body configurations can vary with changing viewpoints. Using only 2D information, we would like to enable vision algorithms to recognize similarity in human body poses across multiple views. This ability is useful for analyzing body movements and human behaviors in images and videos. In this paper, we propose an approach for learning a compact view-invariant embedding space from 2D joint keypoints alone, without explicitly predicting 3D poses. Since 2D poses are projected from 3D space, they have an inherent ambiguity, which is difficult to represent through a deterministic mapping. Hence, we use probabilistic embeddings to model this input uncertainty. Experimental results show that our embedding model achieves higher accuracy when retrieving similar poses across different camera views, in comparison with 2D-to-3D pose lifting models. The results also suggest that our model is able to generalize across datasets, and our embedding variance correlates with input pose ambiguity.
Introduction
When we represent three dimensional (3D) human bodies in two dimensions (2D), the same human pose can appear different across camera views. There can be significant visual variations from a change in viewpoint due to changing relative depth of body parts and self-occlusions. Despite these variations, humans have the ability to recognize similar 3D human body poses in images and videos. This ability is useful for computer vision tasks where changing viewpoints should not change the labels of the task. We explore how we can embed 2D visual information of human poses to be consistent across camera views.
Inspired by 2D-to-3D lifting models [26], we learn view-invariant embeddings directly from 2D pose keypoints. As illustrated in Figure 1, we explore whether view invariance of human bodies can be achieved from 2D poses alone, without predicting 3D pose. Typically, embedding models are trained from images using deep metric learning techniques [28,11,7]. However, images with similar human poses can appear different because of changing viewpoints, subjects, backgrounds, clothing, etc. As a result, it can be difficult to attribute errors in the embedding space to a specific factor of variation. Furthermore, multi-view image datasets for human poses are difficult to capture in the wild with 3D groundtruth annotations. In contrast, our method leverages existing 2D keypoint detectors, allowing the embedding model to focus on learning view invariance. Our 2D keypoint embeddings can be trained using datasets in lab environments, while having the model generalize to in-the-wild data. Another advantage is that we can easily augment training data by synthesizing multi-view 2D poses from 3D poses through perspective projection.
* This work was done while the author was a research intern at Google.
Using our method, we explore the question: can we achieve view invariance without predicting absolute 3D pose? This enables us to work with lower dimensional representations in the embedding space and opens up the potential to train these models without 3D pose annotations in the future.
Another aspect that we address is input uncertainty. The input to our embedding model is 2D human pose, which has an inherent ambiguity. Many valid 3D poses can be projected to the same or very similar 2D pose [1]. This input uncertainty is difficult to represent using deterministic mappings to the embedding space (point embeddings) [29,19]. Our embedding space consists of probabilistic embeddings based on multivariate Gaussians, as shown in Figure 1b. We show that the learned variance from our method correlates with input 2D ambiguities. We call our approach Pr-VIPE for Probabilistic View-Invariant Pose Embeddings. The non-probabilistic, point embedding formulation will be referred to as VIPE.
Contributions Our main contribution is the method for learning an embedding space where 2D pose embedding distances correspond to their similarities in absolute 3D pose space. We also develop a probabilistic formulation that captures 2D pose ambiguity. The view invariance property of our embeddings can be leveraged for downstream tasks. In this paper, our evaluation focuses on cross-view pose retrieval: given a single-view image, we retrieve images of the same pose from different views without using camera parameters. We develop a method to compute retrieval confidence based on [29] and show that higher confidence correlates with smaller retrieval error. Our results suggest that 2D poses are sufficient to achieve view invariance in the absence of more information from images, and we do not have to predict absolute 3D pose to achieve this property. We plan to open-source our code and experiment details.
Related Work
Metric Learning We are working to understand similarity in human poses across views. Most works that aim to capture similarity between inputs generally apply techniques from metric learning. Objectives such as contrastive loss (based on pair matching) [4,9,29] and triplet loss (based on tuple ranking) [45,40,46,10] are often used to push together/pull apart similar/dissimilar examples in embedding space.
The number of possible training tuples increases exponentially with respect to the number of samples in the tuple, and not all combinations are equally informative. To find informative training tuples, various mining strategies have been proposed [40,47,30,10]. In particular, semi-hard triplet mining has been widely used [40,47,33]. This mining method finds negative examples that are hard enough to be informative but not too hard for the model. The hardness of a negative sample is based on its embedding distance to the anchor. Commonly, this distance is the Euclidean distance [45,46,40,10], but any differentiable distance function could be applied [10]. [13,15] show that alternative distance metrics also work for image and object retrieval.
In our work, we learn a mapping from Euclidean distance in the embedding space to a probabilistic pose similarity score. This probabilistic similarity captures closeness in 3D pose space from 2D poses. Our work is inspired by the mapping used in soft contrastive loss [29] for learning from an occluded N-digit MNIST dataset.
Most of the papers discussed above involve deterministically mapping inputs to point embeddings. There are works that also map inputs to probabilistic embeddings. Probabilistic embeddings have been used to model specificity of word embeddings [44], uncertainty in graph representations [3], and input uncertainty due to occlusion [29]. We will apply probabilistic embeddings to address inherent ambiguities in 2D pose due to 3D-to-2D projection.
Human Pose Estimation 3D human poses in a global coordinate frame are view-invariant, since images across views are mapped to the same 3D pose. However, as mentioned by [26], it is difficult to infer the 3D pose in an arbitrary global frame since any changes to the frame does not change the input data. Many approaches work with poses in the camera coordinate system [26,5,34,37,50,41,38,42,6], where the pose description changes based on viewpoint.
Our approach is similar in setup to existing 2D-to-3D lifting pose estimators [26,5,34,37] in terms of using 2D pose keypoints as input. The difference is that lifting models are trained to regress to 3D pose keypoints, while our model is trained using metric learning and outputs an embedding distribution. Some recent works also use multiview datasets to predict 3D poses in the global coordinate frame [35,21,16,39,43]. Our work differs from these methods with our goal (view-invariant embeddings), task (cross-view pose retrieval), and approach (metric learning). Another work that focuses on pose retrieval [28] embeds images with similar 2D poses in the same view close together. Our method focuses on learning view-invariance, and we also differ from [28] in method (probabilistic 2D pose embeddings).
View Invariance and Object Retrieval When we capture a 3D scene in 2D as images or videos, changing the viewpoint often does not change other properties of the scene. The ability to recognize visual similarities across viewpoints is helpful for a variety of vision tasks, such as motion analysis [18,17], tracking [31], vehicle and human re-identification [7,49], object classification and retrieval [22,12,11], and action recognition [36,25,48,23].
Some of these works [11] focus on metric learning for object retrieval. Their learned embedding spaces place different views of the same object class close together. Our work on human pose retrieval differs in a few ways. Our labels are continuous 3D poses, whereas in object recognition tasks, each embedding is associated with a discrete class label. Furthermore, we embed 2D poses, while these works embed images. Our approach allows us to investigate the impact of input 2D uncertainty with probabilistic embeddings and explore different methods to measure cross-view pose retrieval confidence. We hope that our work provides a novel perspective on view invariance for human poses.
Our Approach
Our goal is to embed 2D poses such that distances in the embedding space correspond to similarities of their corresponding absolute 3D poses in Euclidean space. We achieve this view invariance property through our triplet ratio loss (Section 3.2), which pushes together/pulls apart 2D poses corresponding to similar/dissimilar 3D poses. The positive pairwise loss (Section 3.3) is applied to increase the matching probability of similar poses. Finally, the Gaussian prior loss (Section 3.4) helps regularize embedding magnitude and variance. The training and inference framework of Pr-VIPE is illustrated in Figure 2.
Matching Definition
The 3D pose space is continuous, and two 3D poses can be trivially different without being identical. We define two 3D poses to be matching if they are visually similar regardless of viewpoint. Given two sets of 3D keypoints (y i , y j ), we define a matching indicator function
$$m_{ij} = \begin{cases} 1, & \text{if NP-MPJPE}(y_i, y_j) \le \kappa \\ 0, & \text{otherwise,} \end{cases} \quad (1)$$
where κ controls visual similarity between matching poses. Here, we use mean per joint position error (MPJPE) [14] between the two sets of 3D pose keypoints as a proxy to quantify their visual similarity. Before computing MPJPE, we normalize the 3D poses and apply Procrustes alignment between them. The reason is that we want our model to be view-invariant and to disregard rotation, translation, or scale differences between 3D poses. We refer to this normalized, Procrustes aligned MPJPE as NP-MPJPE.
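As a concrete sketch, the matching indicator of Eq. (1) can be implemented in numpy as below. The normalization step here (centering plus unit Frobenius scaling) and the SVD-based rotation alignment are illustrative assumptions standing in for the paper's exact normalization procedure; only the overall NP-MPJPE-with-threshold logic follows the text.

```python
import numpy as np

def normalize(pose):
    """Center a (J, 3) pose at its mean joint and scale to unit Frobenius norm."""
    pose = pose - pose.mean(axis=0)
    return pose / np.linalg.norm(pose)

def procrustes_align(a, b):
    """Rotate pose `b` onto pose `a` (orthogonal Procrustes via SVD)."""
    u, _, vt = np.linalg.svd(a.T @ b)
    r = u @ vt
    # Avoid reflections: flip the last singular direction if det(r) < 0.
    if np.linalg.det(r) < 0:
        u[:, -1] *= -1
        r = u @ vt
    return b @ r.T

def np_mpjpe(y_i, y_j):
    """Normalized, Procrustes-aligned mean per-joint position error."""
    a, b = normalize(y_i), normalize(y_j)
    b = procrustes_align(a, b)
    return np.linalg.norm(a - b, axis=1).mean()

def is_match(y_i, y_j, kappa=0.1):
    """Matching indicator m_ij from Eq. (1)."""
    return np_mpjpe(y_i, y_j) <= kappa
```

Because both poses are normalized and rotation-aligned before the error is computed, a rotated and rescaled copy of a pose matches itself, which is exactly the view-invariance the indicator is meant to capture.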
Triplet Ratio Loss
The triplet ratio loss aims to embed 2D poses based on the matching indicator function (1). Let n be the dimension of the input 2D pose keypoints x, and d be the dimension of the output embedding. We would like to learn a mapping $f: \mathbb{R}^n \to \mathbb{R}^d$, such that $D(z_i, z_j) < D(z_{i'}, z_{j'})$ for all $m_{ij} > m_{i'j'}$, where $z = f(x)$, and $D(z_i, z_j)$ is an embedding space distance measure.
For a pair of input 2D poses (x i , x j ), we define p(m|x i , x j ) to be the probability that their corresponding 3D poses (y i , y j ) match, that is, they are visually similar. While it is difficult to define this probability directly, we propose to assign its values by estimating p(m|z i , z j ) via metric learning. We know that if two 3D poses are identical, then p(m|x i , x j ) = 1, and if two 3D poses are sufficiently different, p(m|x i , x j ) should be small. For any given input
triplet $(x_i, x_{i^+}, x_{i^-})$ with $m_{i,i^+} > m_{i,i^-}$, we want $$\frac{p(m|z_i, z_{i^+})}{p(m|z_i, z_{i^-})} \ge \beta, \quad (2)$$
where β > 1 represents the ratio of the matching probability of a similar 3D pose pair to that of a dissimilar pair.
Applying negative logarithm to both sides, we have
$$(-\log p(m|z_i, z_{i^+})) - (-\log p(m|z_i, z_{i^-})) \le -\log \beta. \quad (3)$$ Notice that the model can be trained to satisfy this with the triplet loss framework [40]. Given batch size N, we define the triplet ratio loss $L_{\text{ratio}}$ as
$$L_{\text{ratio}} = \sum_{i=1}^{N} \max(0, D_m(z_i, z_{i^+}) - D_m(z_i, z_{i^-}) + \alpha), \quad (4)$$ with distance kernel $D_m(z_i, z_j) = -\log p(m|z_i, z_j)$ and margin $\alpha = \log \beta$. To form a triplet $(x_i, x_{i^+}, x_{i^-})$,
we set the anchor x i and positive x i + to be projected from the same 3D pose and perform online semi-hard negative mining [40] to find x i − .
It remains for us to compute matching probability using our embeddings. To compute p(m|z i , z j ), we use the formulation proposed by [29]:
$$p(m|z_i, z_j) = \sigma(-a\|z_i - z_j\|_2 + b), \quad (5)$$
where σ is a sigmoid function, and the trainable scalar parameters a > 0 and b ∈ R calibrate embedding distances to probabilistic similarity.
Positive Pairwise Loss
The positive pairs in our triplets have identical poses. We would like them to have high matching probabilities, which can be encouraged by adding the positive pairwise loss
$$L_{\text{positive}} = \sum_{i=1}^{N} -\log p(m|z_i, z_{i^+}). \quad (6)$$
The combination of L ratio and L positive can be applied to training point embedding models, which we refer to as VIPE in this paper.
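The VIPE losses above can be sketched in numpy as follows. The calibration scalars `a` and `b` of Eq. (5) are trainable in the paper but are fixed constants here for illustration, and the per-batch mean reduction is an assumption.

```python
import numpy as np

def match_prob(z_i, z_j, a=1.0, b=0.0):
    """Eq. (5): sigmoid-calibrated matching probability from embedding distance."""
    d = np.linalg.norm(z_i - z_j, axis=-1)
    return 1.0 / (1.0 + np.exp(a * d - b))  # sigma(-a * d + b)

def distance_kernel(z_i, z_j, a=1.0, b=0.0):
    """D_m(z_i, z_j) = -log p(m | z_i, z_j)."""
    return -np.log(match_prob(z_i, z_j, a, b))

def triplet_ratio_loss(z_a, z_p, z_n, beta=2.0):
    """Eq. (4): hinge on the kernel difference with margin alpha = log(beta)."""
    alpha = np.log(beta)
    return np.maximum(
        0.0, distance_kernel(z_a, z_p) - distance_kernel(z_a, z_n) + alpha
    ).mean()

def positive_pairwise_loss(z_a, z_p):
    """Eq. (6): encourage high matching probability for positive pairs."""
    return -np.log(match_prob(z_a, z_p)).mean()
```

Note that with identical anchor and positive embeddings the kernel reduces to `-log(sigmoid(b))`, so the positive pairwise loss directly pushes the calibrated matching probability of positive pairs toward one.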
Probabilistic Embeddings
In this section, we discuss the extension of VIPE to the probabilistic formulation Pr-VIPE. The inputs to our model, 2D pose keypoints, are inherently ambiguous, and there are many valid 3D poses that can be projected to a similar 2D pose [1]. This input uncertainty can be difficult to model using point embeddings [19,29]. We investigate representing this uncertainty using distributions in the embedding space by mapping 2D poses to probabilistic embeddings: x → p(z|x). Similar to [29], we extend the input matching probability (5) to using probabilistic embeddings
as $$p(m|x_i, x_j) = \iint p(m|z_i, z_j)\, p(z_i|x_i)\, p(z_j|x_j)\, dz_i\, dz_j,$$
which can be approximated using Monte-Carlo sampling with K samples drawn from each distribution as
$$p(m|x_i, x_j) \approx \frac{1}{K^2} \sum_{k_1=1}^{K} \sum_{k_2=1}^{K} p(m|z_i^{(k_1)}, z_j^{(k_2)}). \quad (7)$$
We model p(z|x) as a d-dimensional Gaussian with a diagonal covariance matrix. The model outputs mean µ(x) ∈ R d and covariance Σ(x) ∈ R d with shared base network and different output layers. We use the reparameterization trick [20] during sampling.
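A numpy sketch of the sampled matching probability of Eq. (7), with reparameterized draws from the two diagonal Gaussians and K = 20 samples as in the paper; the fixed calibration scalars and the seeded default generator are assumptions made only to keep the sketch deterministic.

```python
import numpy as np

def sample_embeddings(mu, sigma, k=20, rng=None):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    mu, sigma: (d,) mean and per-dimension std of the embedding Gaussian."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal((k, mu.shape[-1]))
    return mu + sigma * eps

def sampled_match_prob(mu_i, sigma_i, mu_j, sigma_j, k=20, a=1.0, b=0.0, rng=None):
    """Eq. (7): Monte-Carlo estimate of p(m | x_i, x_j) over K x K sample pairs."""
    if rng is None:
        rng = np.random.default_rng(0)
    z_i = sample_embeddings(mu_i, sigma_i, k, rng)  # (K, d)
    z_j = sample_embeddings(mu_j, sigma_j, k, rng)  # (K, d)
    # Pairwise distances between all K^2 sample combinations.
    d = np.linalg.norm(z_i[:, None, :] - z_j[None, :, :], axis=-1)
    return (1.0 / (1.0 + np.exp(a * d - b))).mean()  # mean of sigma(-a*d + b)
```

This sampled probability is also what the paper later uses as the retrieval confidence when ranking index poses for a query.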
In order to prevent variance from collapsing to zero and to regularize embedding mean magnitudes, we place a unit Gaussian prior on our embeddings with KL divergence by adding the Gaussian prior loss
$$L_{\text{prior}} = \sum_{i=1}^{N} D_{\mathrm{KL}}\big(\mathcal{N}(\mu(x_i), \Sigma(x_i)) \,\big\|\, \mathcal{N}(0, I)\big). \quad (8)$$
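For diagonal Gaussians, the KL term in Eq. (8) has a well-known closed form, 0.5 * (var + mu^2 - log(var) - 1) summed over dimensions. A numpy sketch follows; the sum-over-dimensions, mean-over-batch reduction is our assumption.

```python
import numpy as np

def gaussian_prior_loss(mu, var):
    """KL( N(mu, diag(var)) || N(0, I) ) as in Eq. (8).
    mu, var: (N, d) embedding means and per-dimension variances."""
    kl = 0.5 * (var + mu ** 2 - np.log(var) - 1.0)
    return kl.sum(axis=-1).mean()
```

The loss is zero exactly when every embedding distribution equals the unit Gaussian prior, and it grows as the variance collapses toward zero or the mean drifts away from the origin, which is the regularization behavior described above.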
Inference At inference time, our model takes a single 2D pose (either from detection or projection) and outputs the mean and the variance of the embedding Gaussian distribution.
Keypoint Augmentation
Our triplets can be made of detected and/or projected 2D keypoints as shown in Figure 2. When we train only with detected 2D keypoints, we are constrained to the camera views in training images. To reduce overfitting to these camera views, we perform keypoint augmentation by generating triplets using detected keypoints alongside projected 2D keypoints at random views.
To form triplets using multi-view image pairs, we run 2D keypoint detection and use detected 2D keypoints from different views as anchor-positive pairs. To use projected 2D keypoints, we apply two random rotations to a normalized input 3D pose to generate two 2D poses from different views for the anchor/positive. The projection is based on a camera with unit focal length and centered at the origin. For finding negative matches, we perform online semi-hard mining [40]. Keypoint augmentation is then performed by using a mixture of detected and projected 2D keypoints as anchors, positives, and negatives. We find that training with keypoint augmentation can help our models generalize better to unseen views (Section 4.3).
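The projected-keypoint branch of this augmentation can be sketched as below. The azimuth/elevation/roll sampling ranges follow Section 3.6 and the camera has unit focal length as stated; the rotation composition order and the fixed depth used to place the pose in front of the camera are our assumptions.

```python
import numpy as np

def rotation_matrix(azimuth, elevation, roll):
    """Compose yaw (azimuth), pitch (elevation), and roll rotations (radians)."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    cr, sr = np.cos(roll), np.sin(roll)
    r_az = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])
    r_el = np.array([[1, 0, 0], [0, ce, -se], [0, se, ce]])
    r_ro = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return r_ro @ r_el @ r_az

def project_random_view(pose_3d, depth=4.0, rng=None):
    """Rotate a normalized (J, 3) pose by a random view change and project it
    with a unit-focal-length pinhole camera. Sampling ranges follow the paper;
    pushing the pose to a fixed `depth` in front of the camera is an assumption."""
    if rng is None:
        rng = np.random.default_rng(0)
    azimuth = rng.uniform(-np.pi, np.pi)
    elevation = rng.uniform(-np.pi / 6, np.pi / 6)
    roll = rng.uniform(-np.pi / 6, np.pi / 6)
    rotated = pose_3d @ rotation_matrix(azimuth, elevation, roll).T
    z = rotated[:, 2] + depth
    return rotated[:, :2] / z[:, None]  # (J, 2) projected keypoints
```

Calling this twice on the same normalized 3D pose yields an anchor/positive pair of 2D poses from two random views.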
Implementation Details
Our 3D pose normalization procedure is based on [6], and we apply instance normalization to our 2D poses. More details are provided in the appendix.
The backbone network architecture for our model is based on [26] for its simplicity. We use two residual blocks, batch normalization, 0.3 dropout, and no weight norm clipping. We use d = 16 as a good trade-off between embedding size and accuracy. To weigh different losses, we use w ratio = 1, w positive = 0.005, and w prior = 0.001. We choose β = 2 for the triplet ratio loss margin and K = 20 for the number of samples. During training, we normalize matching probabilities to within [0.05, 0.95] for numerical stability. The matching NP-MPJPE threshold is κ = 0.1 for all training and evaluation. Our approach does not rely on a particular 2D keypoint detector, and we use PersonLab [32] for our experiments. For random rotation during keypoint augmentation, we uniformly sample azimuth angle between ±180 • , elevation between ±30 • , and roll between ±30 • . We use Adagrad optimizer [8] with fixed learning rate 0.02, and batch size N = 256. Additional ablation studies on hyperparameters can be found in the appendix. Our implementation is in TensorFlow, and all the models are trained with CPUs.
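A rough numpy sketch of the backbone shape described above: a lifting-style network [26] with two residual blocks feeding separate mean and variance heads. Batch normalization, dropout, and the exact block layout are omitted, the weight shapes (26 inputs for 13 flattened 2D keypoints, 32 hidden units) are illustrative, and the softplus used to keep variances positive is our assumption.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def backbone_forward(x, params):
    """Simplified forward pass: shared trunk with residual blocks, then
    mean and variance heads for the d-dimensional embedding Gaussian.
    x: (N, n) flattened 2D keypoints; params: dict of weight matrices."""
    h = relu(x @ params["w_in"])
    for w1, w2 in params["blocks"]:
        h = h + relu(relu(h @ w1) @ w2)  # residual block (simplified)
    mu = h @ params["w_mu"]                      # (N, d) embedding mean
    var = np.log1p(np.exp(h @ params["w_var"]))  # softplus keeps variance > 0
    return mu, var
```

The point-embedding variant (VIPE) would simply drop the variance head and use `mu` directly.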
Experiments
We demonstrate the performance of our model through pose retrieval across different camera views. Given a multi-view dataset, we query using detected 2D pose keypoints from one camera view and find the nearest neighbors in the embedding space from a different camera view. We iterate through all camera pairs in the dataset as query and index. Results averaged across all camera pairs are reported.
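A minimal sketch of the retrieval step: for each query embedding, rank all index embeddings from another camera view. Here we rank by plain Euclidean embedding distance for brevity; for Pr-VIPE the paper instead ranks by the sampled matching probability of Eq. (7).

```python
import numpy as np

def retrieve(query_emb, index_embs, k=20):
    """Return indices of the k nearest index embeddings to the query.
    query_emb: (d,) embedding; index_embs: (M, d) embeddings from another view."""
    d = np.linalg.norm(index_embs - query_emb, axis=-1)
    return np.argsort(d)[:k]
```

Iterating this over every (query camera, index camera) pair and averaging the resulting metrics gives the numbers reported in the tables.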
Datasets
For all experiments, we train on a subset of the Human3.6M [14] dataset. We present quantitative and qualitative results on the Human3.6M hold-out set and another dataset (MPI-INF-3DHP [27]) unseen during training. We also present qualitative results on MPII Human Pose [2], for which 3D groundtruth is not available.
Human3.6M (H3.6M) H3.6M is a large human pose dataset containing 3.6 million image frames from 4 chest level cameras with 3D pose groundtruth. We follow the standard protocol [26]: subjects 1, 5, 6, 7, and 8 for training, and subjects 9 and 11 held out for validation. For evaluation, we downsample the frames to 10Hz and further remove near-duplicate 3D poses within 0.02 NP-MPJPE. This process is camera-consistent, meaning if a frame is selected under one camera, it is selected under all cameras, so that the perfect retrieval result is possible. This results in a total of 10910 evaluation frames per camera. We pick the best training checkpoint for all models using this dataset.
MPI-INF-3DHP (3DHP) 3DHP is a more recent human pose dataset containing over 1.3 million images from 14 diverse camera views and scenarios, covering more pose variations than H3.6M [27]. We use 11 cameras from this dataset and exclude the 3 cameras with overhead views. Similar to H3.6M, we downsample frames to 5Hz and remove near-duplicate 3D poses, resulting in 6824 frames per camera.
MPII Human Pose (2DHP) This dataset is commonly used in 2D pose estimation, containing 25K images from YouTube videos. Since groundtruth 3D poses are not available, we only show qualitative results on this dataset.
Evaluation Procedure
Metric We report Hit@k with k = 1, 5, 10, and 20 on pose retrievals. A retrieval is considered accurate if the 3D groundtruth of the retrieved pose satisfies the matching function (1). All of our results are based on κ = 0.1. The metric Hit@k is the percentage of queries for which at least one of the top-k retrieved poses is accurate. k = 1 measures model accuracy, while k > 1 measures model recall.
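Hit@k can be computed from a per-query table of retrieval correctness; a small numpy sketch (the boolean-matrix input format is an assumption for illustration):

```python
import numpy as np

def hit_at_k(correct, ks=(1, 5, 10, 20)):
    """Hit@k from a boolean matrix `correct` of shape (num_queries, max_k),
    where correct[q, r] indicates whether the r-th ranked retrieval for
    query q satisfies the matching function (1)."""
    correct = np.asarray(correct, dtype=bool)
    return {k: correct[:, :k].any(axis=1).mean() for k in ks}
```

For example, with three queries whose first accurate retrievals appear at ranks 2, 1, and never, Hit@1 is 1/3 and Hit@2 is 2/3.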
Baseline Approaches We compare Pr-VIPE with 2Dto-3D lifting models [26] and L2-VIPE. L2-VIPE outputs L2-normalized point embeddings, and is trained with the squared L2 distance kernel, similar to [40].
For fair comparison, we use the same backbone network architecture for all the models. Notably, this architecture [26] has been tuned for the lifting task on H3.6M. Since the estimated 3D poses in the camera coordinate system are not view-invariant, we apply normalization and Procrustes alignment when running pose retrieval to align the estimated 3D poses between index and query. For comparison, our embeddings are used without alignment or other post-processing during retrieval. For Pr-VIPE, we retrieve poses using nearest neighbors in the embedding space with respect to the sampled matching probability (7), which we refer to as retrieval confidence. We present the results of the VIPE models with and without keypoint augmentation. We applied similar keypoint augmentation to the lifting model, but did not see improvement in performance. We also show the results of pose retrieval using aligned 2D keypoints only. The poor performance of using input 2D keypoints for retrieval across views confirms that models must learn view invariance from their inputs for this task.
Pose Retrieval Evaluation
Human3.6M Results From Table 1, we see that Pr-VIPE outperforms all baselines. We reduce retrieval top-1 error by 23.2% compared with 2D-to-3D lifting, and 10.2% compared with L2-VIPE. It is noteworthy that Pr-VIPE is able to achieve better performance with more compact embeddings (16 dimensions) compared with the lifting model output (39 dimensions). We further investigate the effect of embedding dimensions in Section 4.5. Keypoint augmentation reduces performance on H3.6M for both the Pr-VIPE and the L2-VIPE model. This is likely because augmentation reduces overfitting to the training camera views. As we will show on the unseen 3DHP dataset, keypoint augmentation improves model generalization to new views. Our model is robust to the choice of β and the number of samples K, for which detailed analysis is provided in the appendix.
MPI-INF-3DHP Results
We test the ability of our models to generalize to new poses and views using 3DHP. We separate our evaluations using either all 11 cameras or a subset of 5 chest-level cameras. When we use all cameras from 3DHP, we evaluate on camera positions from different elevations, for example, from knee-level. When we evaluate using only the 5 chest-level cameras from 3DHP, the views are more similar to H3.6M, and generalization to new poses becomes more important. Note that all models are trained with H3.6M (4 chest-level cameras) without seeing 3DHP.
As the results show in Table 2, Pr-VIPE without keypoint augmentation is able to perform better than the baselines for chest-level cameras. These results show that Pr-VIPE is able to generalize as well as other baseline methods to new poses. However, for all cameras in 3DHP, the performance for Pr-VIPE without augmentation is notably worse compared with chest-level cameras. This observation indicates that when trained on chest-level cameras only, Pr-VIPE does not generalize as well to new views. The same results can be observed for L2-VIPE between chest-level and all cameras. In contrast, the 3D lifting models are able to generalize better to new views with the help of additional Procrustes alignment, which requires expensive SVD computation for every index-query pair.
We further apply keypoint augmentation to training the Pr-VIPE and L2-VIPE models. Note that this step does not require camera parameters or additional groundtruth. The results in Table 2 show that augmentation improves Pr-VIPE performance in all-camera retrieval by 6% to 9% on all metrics, and it also increases chest-level camera accuracy slightly. For L2-VIPE, we observe a similar increase on all views. By performing keypoint augmentation, Pr-VIPE is able to generalize better to new poses and new views.

Figure 4 shows qualitative retrieval results using Pr-VIPE. As shown in the first row, the retrieval confidence of the model is generally high for H3.6M, indicating that the retrieved poses are close to the queries in the embedding space. Errors in 2D keypoint detection can lead to retrieval errors, as shown by the rightmost pair. In the second and third rows, the retrieval confidence is lower for 3DHP. This is likely because these rows contain new poses and views unseen during training, so the nearest neighbors are slightly further away in the embedding space. We see that the model can generalize to new views, as the images are taken at camera elevations different from those in H3.6M. Interestingly, the rightmost pair in row 2 shows that the model can retrieve poses with large differences in roll angle, which is not present in the training set. The rightmost pair in row 3 shows an example of a large NP-MPJPE error due to mis-detection of the left leg in the index pose.
Qualitative Results
We show qualitative results using queries from H3.6M hold-out set to retrieve from 2DHP in the last two rows of Figure 4. The results on these in-the-wild images indicate that as long as the 2D keypoint detector works reliably, our model is able to retrieve poses across views and subjects. More qualitative results are provided in the appendix.
Ablation Study
Embedding Space Visualization We run Principal Component Analysis (PCA) on the 16-dimensional embeddings from the Pr-VIPE model and visualize the first two principal dimensions in Figure 3. To visualize more unique poses, we randomly subsample the H3.6M hold-out set.

Despite the similar retrieval accuracies, Pr-VIPE is generally more accurate than its point-embedding counterpart and, more importantly, has an additional desirable property: the variance can model 2D input ambiguity, as discussed next.

We sample 1200 poses from the H3.6M hold-out set with a minimum gap of 0.1 3D NP-MPJPE. If a 2D pose has a small 2D NP-MPJPE to its neighbors, there are many similar 2D poses corresponding to different 3D poses; in other words, the 2D pose is ambiguous. Figure 5 shows that the 2D pose with the greatest variance is ambiguous, as it has similar 2D poses in H3.6M with different 3D poses. In contrast, the closest 2D poses corresponding to the smallest-variance pose in the first row of Figure 5 are clearly different. Figure 6 shows that as the average variance increases, the 2D NP-MPJPE between similar poses generally decreases, which means that 2D poses with larger variances are more ambiguous.
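The ambiguity measurement described above (the mean 2D NP-MPJPE between a pose and its top-10 nearest neighbors) can be sketched as follows. Here we treat 2D NP-MPJPE as the MPJPE between already-normalized 2D poses; the helper names are illustrative and the paper's exact normalization may differ.

```python
import numpy as np

def np_mpjpe_2d(pose_a, pose_b):
    """MPJPE between two 2D poses of shape (num_joints, 2); the poses
    are assumed to be normalized already (hip-centered, fixed scale)."""
    diff = np.asarray(pose_a) - np.asarray(pose_b)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

def mean_neighbor_distance(poses_2d, top_n=10):
    """For each pose, the mean 2D NP-MPJPE to its top_n nearest
    neighbors; a small value indicates an ambiguous 2D pose, since it
    has many 2D look-alikes (possibly from different 3D poses)."""
    n = len(poses_2d)
    out = np.zeros(n)
    for i in range(n):
        d = np.array([np_mpjpe_2d(poses_2d[i], poses_2d[j])
                      for j in range(n) if j != i])
        out[i] = np.sort(d)[:top_n].mean()
    return out
```

Plotting this quantity against the predicted embedding variance reproduces the analysis in Figure 6.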
Embedding Dimensions Figure 7 demonstrates the effect of embedding dimensions on H3.6M and 3DHP. The lifting model lifts 13 2D keypoints to 3D and therefore has a constant output dimension of 39. We see that Pr-VIPE achieves a higher accuracy than lifting on both datasets at 16 embedding dimensions. Additionally, increasing the number of embedding dimensions to 32 raises the accuracy of Pr-VIPE to 75.5%. Our method extends to additional dimensions with no additional annotations, whereas increasing the dimension of the predicted 3D pose would require annotating more 2D and 3D keypoints.
Retrieval Confidence In order to validate the retrieval confidence values, we randomly sample 100 queries along with their top-5 retrievals (ranked by Pr-VIPE retrieval confidence) from each query-index camera pair. This procedure forms 6000 query-retrieval sample pairs for H3.6M (4 views, 12 camera pairs) and 55000 for 3DHP (11 views, 110 camera pairs), which we further bin by their retrieval confidences. Figure 8 shows the matching accuracy under κ = 0.1 for each confidence bin. We can see that the accuracy correlates positively with the confidence values, which suggests that our retrieval confidence is a valid indicator of model performance.
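The confidence-vs-accuracy analysis above can be reproduced with a simple binning helper (illustrative, not the paper's code):

```python
import numpy as np

def accuracy_by_confidence_bin(confidences, correct, n_bins=10):
    """Matching accuracy within equal-width retrieval-confidence bins.
    `correct` holds 0/1 flags for whether each retrieval matched the
    query under the NP-MPJPE threshold. Empty bins are NaN."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    acc = np.full(n_bins, np.nan)
    for i in range(n_bins):
        in_bin = (confidences >= edges[i]) & (confidences < edges[i + 1])
        if i == n_bins - 1:                # include confidence == 1.0
            in_bin |= confidences == 1.0
        if in_bin.any():
            acc[i] = correct[in_bin].mean()
    return edges, acc
```

A positive trend of `acc` across bins is what Figure 8 visualizes.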
What if 2D keypoint detectors were perfect? We repeat our experiments using groundtruth 2D keypoints to simulate a perfect 2D keypoint detector on H3.6M and 3DHP. All experiments use the 4 views from H3.6M for training following the standard protocol. For the baseline lifting model in camera frame, we achieve 55.5% Hit@1 on H3.6M, 30.6% on 3DHP (all), and 25.9% on 3DHP (chest). For Pr-VIPE, we achieve 97.5% Hit@1 on H3.6M, 44.3% on 3DHP (all), and 66.4% on 3DHP (chest). Using perfect 2D keypoints, the Pr-VIPE model is able to perform much better than the lifting model. Comparing the results with using detected keypoints, the large improvement in performance using groundtruth keypoints suggests that a considerable fraction of error in our model is due to imperfect 2D keypoint detections.
Conclusion
We introduce Pr-VIPE, an approach to learning probabilistic view-invariant embeddings from 2D pose keypoints.
Our experiments suggest that input 2D keypoints alone are sufficient to achieve view-invariant properties in the embedding space, without having to explicitly predict 3D pose. By working with 2D keypoints, we can use synthetic projection augmentation to improve model generalization to unseen camera views. We also demonstrate that our probabilistic embeddings learn to capture input ambiguity, which can be useful for measuring uncertainty in downstream tasks. Pr-VIPE is compact with a simple architecture, and in addition to cross-view retrieval, our embeddings can be applied to other human-pose-related tasks. We hope that our work can contribute towards future studies in recognizing human poses and body motions.
Acknowledgement
We are grateful to Yuxiao Wang and Liangzhe Yuan from Google Research and Xiao Zhang from University of Chicago for helpful discussions. We really appreciate the support of Pietro Perona and the Computational Vision Lab at Caltech for making this collaboration possible.

We center the 2D pose between the LHip and RHip and normalize such that the maximum distance between the shoulder and hip joints is 1/c. The maximum distance is computed over RShoulder, LShoulder, RHip, and LHip. We use c = 2 in our experiments.
Additional Ablation Studies
Effect of Number of Samples K and Margin Parameter β Table 3 shows the effect of the number of samples K and the margin parameter β (actual triplet margin α = log β) on Pr-VIPE. The number of samples controls how many points we sample from the embedding distribution to compute the matching probability, and β controls the ratio of matching probabilities between matching and non-matching pairs. Our model is robust to the choice of β in terms of retrieval accuracy, as shown by Table 3. The main effect of β is on retrieval confidence, as non-matching pairs are scaled to a smaller matching probability for larger β. Pr-VIPE performance with 10 samples is competitive with the baselines in the main paper, but we do better with 20 samples. Increasing the number of samples further yields similar performance. For our experiments, we use 20 samples and β = 2.
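As a schematic reading of the role of β: a triplet ratio loss with margin α = log β hinges on log matching probabilities, encouraging the matching probability of a positive pair to exceed that of a negative pair by a factor of β. This is our sketch of the idea, not necessarily the paper's exact formulation.

```python
import numpy as np

def triplet_ratio_loss(p_pos, p_neg, beta=2.0):
    """Hinge on log matching probabilities with margin alpha = log(beta):
    zero loss once p_pos / p_neg >= beta for a (positive, negative) pair.
    Schematic reading of the triplet ratio loss; inputs in (0, 1]."""
    alpha = np.log(beta)
    return np.maximum(0.0, alpha + np.log(p_neg) - np.log(p_pos)).mean()
```

Larger β demands a larger probability gap, which mainly rescales the confidence of non-matching pairs rather than changing retrieval accuracy.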
Effect of Keypoint Augmentation We explore the effect of different random rotations during keypoint augmentation on pose retrieval results in Table 4. All models are trained on the 4 chest-level cameras of H3.6M, but the models with keypoint augmentation also use projected 2D keypoints from randomly rotated 3D poses. For the random rotation, we always use an azimuth range of ±180°, and we test performance with different angle limits for elevation and roll. We see that the model with no augmentation does best on H3.6M, which has the same 4 camera views as training. As rotation angles increase during mixing, performance on chest-level cameras drops while performance on new camera views generally increases. The results demonstrate that mixing detected and projected keypoints reduces model overfitting to the camera views used during training. Training with randomly rotated keypoints enables our model to generalize much better to new views.
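A minimal sketch of drawing the random rotation used for keypoint augmentation; the Euler-angle axis conventions here are our assumption, not taken from the paper.

```python
import numpy as np

def random_rotation(max_elevation_deg, max_roll_deg, rng=None):
    """Draw a random rotation for keypoint augmentation: azimuth over
    the full +/-180 deg range, elevation and roll within given limits."""
    rng = np.random.default_rng(rng)
    az, el, ro = np.deg2rad(rng.uniform(
        [-180.0, -max_elevation_deg, -max_roll_deg],
        [180.0, max_elevation_deg, max_roll_deg]))
    cz, sz = np.cos(az), np.sin(az)
    cy, sy = np.cos(el), np.sin(el)
    cx, sx = np.cos(ro), np.sin(ro)
    rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return rz @ ry @ rx
```

The resulting matrix is applied to a 3D pose before normalization and projection to synthesize a 2D view from a new camera.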
Effect of NP-MPJPE threshold κ We train and evaluate with different values of the NP-MPJPE threshold κ in Table 5. κ controls the NP-MPJPE threshold for a matching pose pair, and visualizations of pose pairs with different NP-MPJPE values are in Figure 9. Table 5 shows that Pr-VIPE generally achieves the best accuracy for a given NP-MPJPE threshold when the model is trained with the same matching threshold. Additionally, when we train with a tight threshold, i.e., κ = 0.05, we do comparatively well on accuracy at looser thresholds. In contrast, when we train with a loose threshold, i.e., κ = 0.20, we do not do as well at a tighter accuracy threshold. This is because when we push non-matching poses apart using the triplet ratio loss, κ = 0.20 only pushes apart poses that are more than 0.20 NP-MPJPE apart and does not explicitly separate poses below that threshold. The closest retrieved pose will then be within 0.20 NP-MPJPE, but it is not guaranteed to be within any threshold < 0.20 NP-MPJPE. In contrast, when we use κ = 0.05 for training, poses that are more than 0.05 NP-MPJPE apart are pushed away from each other, which also satisfies the κ = 0.20 threshold.
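The matching criterion itself, NP-MPJPE < κ, can be sketched as a normalized, Procrustes-aligned MPJPE. This is a possible numpy implementation under our own assumptions: the normalization here (hip-centering plus a single hip-to-head scale) is a stand-in for the paper's exact procedure, and the joint indices are placeholders.

```python
import numpy as np

def procrustes_align(a, b):
    """Align pose b to pose a (both (num_joints, 3)) with the optimal
    similarity transform (rotation, translation, scale) via SVD."""
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    a0, b0 = a - mu_a, b - mu_b
    u, s, vt = np.linalg.svd(b0.T @ a0)
    d = np.sign(np.linalg.det(u @ vt))     # avoid reflections
    s[-1] *= d
    u[:, -1] *= d
    rot = u @ vt
    scale = s.sum() / (b0 ** 2).sum()
    return scale * b0 @ rot + mu_a

def np_mpjpe(pose_a, pose_b, hip_idx=0, head_idx=8):
    """Normalized, Procrustes-aligned MPJPE between two 3D poses."""
    def normalize(p):
        p = p - p[hip_idx]
        return p / np.linalg.norm(p[head_idx] - p[hip_idx])
    a, b = normalize(pose_a), normalize(pose_b)
    return float(np.mean(np.linalg.norm(a - procrustes_align(a, b), axis=-1)))

def is_match(pose_a, pose_b, kappa=0.1):
    return np_mpjpe(pose_a, pose_b) < kappa
```

A pose pair related by rotation, translation, and uniform scale has NP-MPJPE near zero, so it matches at any κ.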
In the main paper, we use κ = 0.1. For future applications with other matching definitions, the Pr-VIPE framework is flexible and can be trained with different κ to satisfy different accuracy requirements.
Additional Plots for Ordered Variances Similar to the main paper, we retrieve poses using 2D NP-MPJPE for the top-3 2D poses with smallest and largest variances in Figure 11. Figure 11a shows that for the poses with the top-3 smallest variances, the nearest 2D pose neighbors are visually distinct, which means that these 2D poses are less ambiguous. On the other hand, the nearest 2D pose neighbors of the poses with the largest variances in Figure 11b are visually similar, which means that these 2D poses are more ambiguous.
PCA Visualization
We run Principal Component Analysis (PCA) on the 16-dimensional embeddings using the Pr-VIPE model. Figure 12 visualizes the first two principal dimensions and this is similar to Figure 3 in the main paper with more poses. To visualize more unique poses, we randomly subsample the H3.6M hold-out set and select 3D poses at least 0.1 NP-MPJPE apart. Figure 12 demonstrates that 2D poses from similar 3D poses are close together, while non-matching poses are further apart.
Qualitative Results
We present more qualitative results for Pr-VIPE on all datasets in Figure 13. The first 2 rows show results on H3.6M, the next 3 rows are on 3DHP, and the last 3 rows show results using the hold-out set in H3.6M to retrieve from 2DHP. We are able to retrieve across camera views and subjects on all datasets.
On H3.6M, retrieval confidence is generally high and retrievals are visually accurate. NP-MPJPE is in general smaller on H3.6M compared to 3DHP, since 3DHP has more diverse poses and camera views. The model works reasonably well on 3DHP despite additional variations in pose, viewpoint, and subject. For the pairs on the right in rows 4 and 5, the subject is occluded by the chair, and the pose inferred by the 2D keypoint detector may not be accurate; our model depends on the output of the 2D keypoint detector. Interestingly, the right pair in row 4 and the middle pair in row 3 show retrievals with large rolls, which are unseen during training. The results on 3DHP demonstrate the generalization capability of our model to unseen poses and views. To test on in-the-wild images, we use the hold-out set of H3.6M to retrieve from 2DHP. The retrieval results demonstrate that Pr-VIPE embeddings can retrieve visually accurate poses from detected 2D keypoints. The left pair in the last row is particularly interesting, as the retrieval has a large change in viewpoint. For the low-confidence pairs on the right in the third-to-last row and the last row, the arms of the subjects appear to be bent slightly differently. In contrast, the higher-confidence retrieval pairs look visually similar. The results suggest that the performance of existing 2D keypoint detectors, such as [32], is sufficient to train pose embedding models that achieve the view-invariant property in diverse images.

Figure 12: Visualization of Pr-VIPE space with 2D poses in H3.6M hold-out subset using the first two PCA dimensions.
Figure 1: We embed 2D poses such that our embeddings are (a) view-invariant (2D projections of similar 3D poses from different views are embedded close together) and (b) probabilistic (embeddings are distributions that cover different 3D poses projecting to the same input 2D pose).
Figure 2: Overview of Pr-VIPE model training and inference. Our model takes keypoint input from a single 2D pose (detected from images and/or projected from 3D poses) and predicts embedding distributions. Three losses based on distributions with sampling are used for training.
Figure 3: Visualization of Pr-VIPE space with 2D poses in H3.6M hold-out subset using the first two PCA dimensions.
We randomly subsample the H3.6M hold-out set and select 3D poses at least 0.1 NP-MPJPE apart. The results show that 2D poses from different views of matching 3D poses are mapped close together, while 2D poses with non-matching 3D poses are further apart. In particular, the standing and sitting poses are well separated in the two dimensions. The transition in pose throughout the embedding space also appears smooth. Poses closer to the top of Figure 3 have arms raised, while poses closer to the bottom have arms lowered. We see leaning poses between fully standing and sitting poses. A larger visualization with more poses is presented in the appendix.

Figure 4: Visualization of pose retrieval results. The first row is from H3.6M; the second and the third rows are from 3DHP; the last two rows use queries from H3.6M to retrieve from 2DHP. On each row, we show the query pose on the left of each image pair and the top-1 retrieval using the Pr-VIPE model with keypoint augmentation on the right. We display the retrieval confidences and top-1 NP-MPJPEs (if 3D pose groundtruth is available).
A 2D pose is ambiguous if there are similar 2D poses corresponding to very different poses in 3D. To measure this, we compute the average 2D NP-MPJPE between a 2D pose and its top-10 nearest neighbors in terms of 2D NP-MPJPE. To ensure that the 3D poses are different, we sample poses with a minimum 3D NP-MPJPE gap.

Figure 5: Top retrievals by 2D NP-MPJPE from H3.6M hold-out subset for queries with largest and smallest variance. 2D poses are shown in the boxes.

Figure 6: Relationship between mean embedding variance and mean 2D NP-MPJPE to top-10 nearest 2D pose neighbors from H3.6M hold-out subset. The orange curve represents the best-fitting 5th-degree polynomial.
Figure 7: Comparison of Hit@1 on H3.6M and 3DHP with different embedding dimensions. The baseline 3D lifting model in the camera frame predicts 39 dimensions.

Figure 8: Relationship between retrieval confidence and matching accuracy on H3.6M and 3DHP.
Figure 10: Visualization of pose keypoints used in our experiments.
Figure 11: Top retrievals by 2D NP-MPJPE from H3.6M hold-out subset for queries with top-3 largest and smallest variances. 2D poses are shown in the boxes. (a) Poses with top-3 smallest variance and their nearest neighbors in terms of 2D NP-MPJPE. (b) Poses with top-3 largest variance and their nearest neighbors in terms of 2D NP-MPJPE.

Standing and sitting poses seem well separated along the two principal dimensions. Additionally, there are leaning poses between sitting and standing. Poses near the top of the figure have arms raised, and there is generally a gradual transition to the bottom of the figure, where arms are lowered. These results show that from 2D joint keypoints only, we are able to learn view-invariant properties with compact embeddings.
Table 2: Comparison of cross-view pose retrieval results on 3DHP with chest-level cameras and all cameras. * indicates that normalization and Procrustes alignment are performed on query-index pairs.
Point vs. Probabilistic Embeddings We compare the VIPE point-embedding formulation with Pr-VIPE. When trained only with detected keypoints, the Hit@1 for VIPE and Pr-VIPE are 75.36% and 76.25% on H3.6M, and 19.74% and 19.95% on 3DHP, respectively. When we add keypoint augmentation, the Hit@1 for VIPE and Pr-VIPE are 73.77% and 73.72% on H3.6M, and 26.10% and 26.45% on 3DHP, respectively.
[49] Liang Zheng, Yujia Huang, Huchuan Lu, and Yi Yang. Pose invariant embedding for deep person re-identification. IEEE Transactions on Image Processing, 2019.

[50] Xingyi Zhou, Qixing Huang, Xiao Sun, Xiangyang Xue, and Yichen Wei. Towards 3d human pose estimation in the wild: a weakly-supervised approach. In Proceedings of the IEEE International Conference on Computer Vision, pages 398-407, 2017.
(a) 17 keypoints based on H3.6M.
(b) 13 keypoints based on COCO.
Table 3: Additional ablation study results of Pr-VIPE on H3.6M with the number of samples K and margin parameter β.
Table 4: Additional ablation study results of Pr-VIPE on H3.6M and 3DHP using different rotation thresholds for keypoint augmentation. The angle threshold for azimuth is always ±180°, and the angle thresholds in the table are for elevation and roll. The row for w/o aug. corresponds to Pr-VIPE without augmentation.

                 Hit@1 with evaluation κ
Training κ    0.05     0.10     0.15     0.20
0.05          0.495    0.761    0.908    0.962
0.10          0.489    0.762    0.909    0.963
0.15          0.462    0.753    0.910    0.965
0.20          0.429    0.731    0.906    0.965

Table 5: Additional ablation study results of Pr-VIPE on H3.6M with different NP-MPJPE thresholds κ for training and evaluation.
Additional Implementation Details

Keypoint Definition Figure 10 illustrates the keypoints that we use in our experiments. The 3D poses used in our experiments are the 17 keypoints corresponding to the H3.6M [14] skeleton used in [26], shown in Figure 10a. We use this keypoint definition to compute the NP-MPJPE between 3D poses and evaluate retrieval accuracy. The Pr-VIPE training and inference process does not depend on a particular 2D keypoint detector. Here, we use PersonLab (ResNet152 single-scale) [32] in our experiments. Our 2D keypoints are selected from the keypoints in COCO [24], which is the set of keypoints detected by PersonLab [32]. We use the 12 body keypoints from COCO and select the "Nose" keypoint as the head, shown in Figure 10b.

Pose Normalization We normalize our 2D and 3D poses such that camera parameters are not needed during training and inference. When we project, we assume a centered camera with unit focal length. We adapt the normalization procedure in [6] to work with more keypoints so that the results are consistent even when the distances between some keypoints are small. For the 3D pose, we translate it so that the hip is located at (0, 0, c). We then scale the hip-to-spine-to-head distance of the 3D pose to unit scale. When we project this 3D pose, the projected 2D pose has a hip-to-head distance of approximately 1/c.

Appendix

Visualization of 3D Visual Similarity The 3D pose space is continuous, and we use the NP-MPJPE as a proxy to quantify visual similarity between pose pairs. Figure 9 shows pairs of 3D pose keypoints with their corresponding NP-MPJPE, where each row depicts a different NP-MPJPE range. This plot demonstrates the effect of choosing different κ, which controls the matching threshold between 3D poses. If we choose κ = 0.05, then only the first row in Figure 9 would be considered matching, and the rest of the rows non-matching.
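The 3D pose normalization and projection described above can be sketched as follows; the joint indices are placeholders for the H3.6M-style skeleton, and the spine-based unit length is our reading of the procedure.

```python
import numpy as np

def normalize_and_project(pose_3d, hip_idx, spine_idx, head_idx, c=2.0):
    """Normalize a 3D pose of shape (num_joints, 3): translate the hip
    to (0, 0, c), scale the hip->spine->head distance to 1, then project
    with a centered camera of unit focal length."""
    pose = np.asarray(pose_3d, dtype=float)
    pose = pose - pose[hip_idx]                     # hip at the origin
    unit = (np.linalg.norm(pose[spine_idx] - pose[hip_idx]) +
            np.linalg.norm(pose[head_idx] - pose[spine_idx]))
    pose = pose / unit                              # hip->spine->head = 1
    pose = pose + np.array([0.0, 0.0, c])           # hip at (0, 0, c)
    return pose[:, :2] / pose[:, 2:3]               # perspective divide
```

For a roughly fronto-parallel pose, the projected hip-to-head distance comes out close to 1/c, consistent with the 2D normalization above.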
Our current value of κ = 0.10 corresponds to using the first two rows as matching pairs and the rest of the rows as non-matching ones. By loosening κ, poses with greater differences will be considered as matching, as shown by different rows inFigure 9. We note that pairs in rows 3 and 4 shows significant visual differences compared with the first two rows. We further investigate the effects of different κ during training and evaluation in Section 3.
Ijaz Akhter and Michael J Black. Pose-conditioned joint angle limits for 3d human pose reconstruction. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1446-1455, 2015.

Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2d human pose estimation: New benchmark and state of the art analysis. In Proceedings of the IEEE Conference on computer Vision and Pattern Recognition, pages 3686-3693, 2014.

Aleksandar Bojchevski and Stephan Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. arXiv preprint arXiv:1707.03815, 2017.

Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. In Advances in neural information processing systems, pages 737-744, 1994.

Ching-Hang Chen and Deva Ramanan. 3d human pose estimation = 2d pose estimation + matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7035-7043, 2017.

Ching-Hang Chen, Ambrish Tyagi, Amit Agrawal, Dylan Drover, Stefan Stojanov, and James M Rehg. Unsupervised 3d pose estimation with geometric self-supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5714-5724, 2019.

Ruihang Chu, Yifan Sun, Yadong Li, Zheng Liu, Chi Zhang, and Yichen Wei. Vehicle re-identification with viewpoint-aware metric learning. arXiv preprint arXiv:1910.04104, 2019.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742. IEEE, 2006.

Alexander Hermans, Lucas Beyer, and Bastian Leibe. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737, 2017.

Chih-Hui Ho, Pedro Morgado, Amir Persekian, and Nuno Vasconcelos. Pies: Pose invariant embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12377-12386, 2019.

Wenze Hu and Song-Chun Zhu. Learning a probabilistic model mixing 3d and 2d primitives for view invariant object recognition. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2273-2280. IEEE, 2010.

Chen Huang, Chen Change Loy, and Xiaoou Tang. Local similarity-aware deep feature embedding. In Advances in neural information processing systems, pages 1262-1270, 2016.

Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7):1325-1339, 2013.

Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, and Ondřej Chum. Mining on manifolds: Metric learning without labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7642-7651, 2018.

Karim Iskakov, Egor Burkov, Victor Lempitsky, and Yury Malkov. Learnable triangulation of human pose. arXiv preprint arXiv:1905.05754, 2019.

Xiaofei Ji and Honghai Liu. Advances in view-invariant human motion analysis: a review. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(1):13-24, 2009.

Xiaofei Ji, Honghai Liu, Yibo Li, and David Brown. Visual-based view-invariant human motion analysis: A review. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, pages 741-748. Springer, 2008.

Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In Advances in neural information processing systems, pages 5574-5584, 2017.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Muhammed Kocabas, Salih Karagoz, and Emre Akbas. Self-supervised learning of 3d human pose using multi-view geometry. arXiv preprint arXiv:1903.02330, 2019.

Yann LeCun, Fu Jie Huang, Leon Bottou, et al. Learning methods for generic object recognition with invariance to pose and lighting. In CVPR (2), pages 97-104. Citeseer, 2004.

Junnan Li, Yongkang Wong, Qi Zhao, and Mohan Kankanhalli. Unsupervised learning of view-invariant action representations. In Advances in Neural Information Processing Systems, pages 1254-1264, 2018.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014.

Jian Liu, Naveed Akhtar, and Ajmal Mian. Viewpoint invariant action recognition using rgb-d videos. IEEE Access, 6:70061-70071, 2018.

Julieta Martinez, Rayat Hossain, Javier Romero, and James J Little. A simple yet effective baseline for 3d human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2640-2649, 2017.

Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt. Monocular 3d human pose estimation in the wild using improved cnn supervision. In 2017 International Conference on 3D Vision (3DV), pages 506-516. IEEE, 2017.

Greg Mori, Caroline Pantofaru, Nisarg Kothari, Thomas Leung, George Toderici, Alexander Toshev, and Weilong Yang. Pose embeddings: A deep architecture for learning to match human poses. arXiv preprint arXiv:1507.00302, 2015.

Seong Joon Oh, Kevin Murphy, Jiyan Pan, Joseph Roth, Florian Schroff, and Andrew Gallagher. Modeling uncertainty with hedged instance embedding. arXiv preprint arXiv:1810.00319, 2018.
Deep metric learning via lifted structured feature embedding. Hyun Oh Song, Yu Xiang, Stefanie Jegelka, Silvio Savarese, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionHyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured fea- ture embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4004- 4012, 2016. 2
Viewpoint invariant exemplar-based 3d human tracking. Eng-Jon Ong, Antonio S Micilotta, Richard Bowden, Adrian Hilton, Computer Vision and Image Understanding. 1042-3Eng-Jon Ong, Antonio S Micilotta, Richard Bowden, and Adrian Hilton. Viewpoint invariant exemplar-based 3d hu- man tracking. Computer Vision and Image Understanding, 104(2-3):178-189, 2006. 2
Personlab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model. George Papandreou, Tyler Zhu, Liang-Chieh Chen, Spyros Gidaris, Jonathan Tompson, Kevin Murphy, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)415George Papandreou, Tyler Zhu, Liang-Chieh Chen, Spyros Gidaris, Jonathan Tompson, and Kevin Murphy. Person- lab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model. In Pro- ceedings of the European Conference on Computer Vision (ECCV), pages 269-286, 2018. 4, 12, 15
Deep face recognition. M Omkar, Andrea Parkhi, Andrew Vedaldi, Zisserman, bmvc. 16Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, et al. Deep face recognition. In bmvc, volume 1, page 6, 2015. 2
3d human pose estimation in video with temporal convolutions and semi-supervised training. Dario Pavllo, Christoph Feichtenhofer, David Grangier, Michael Auli, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionDario Pavllo, Christoph Feichtenhofer, David Grangier, and Michael Auli. 3d human pose estimation in video with tem- poral convolutions and semi-supervised training. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7753-7762, 2019. 2
Cross view fusion for 3d human pose estimation. Haibo Qiu, Chunyu Wang, Jingdong Wang, Naiyan Wang, Wenjun Zeng, arXiv:1909.01203arXiv preprintHaibo Qiu, Chunyu Wang, Jingdong Wang, Naiyan Wang, and Wenjun Zeng. Cross view fusion for 3d human pose estimation. arXiv preprint arXiv:1909.01203, 2019. 2
View-invariance in action recognition. Cen Rao, Mubarak Shah, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR. the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR2IEEECen Rao and Mubarak Shah. View-invariance in action recognition. In Proceedings of the 2001 IEEE Computer So- ciety Conference on Computer Vision and Pattern Recogni- tion. CVPR 2001, volume 2, pages II-II. IEEE, 2001. 2
Exploiting temporal information for 3d human pose estimation. Imtiaz Mir Rayat, James J Hossain, Little, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)Mir Rayat Imtiaz Hossain and James J Little. Exploiting temporal information for 3d human pose estimation. In Pro- ceedings of the European Conference on Computer Vision (ECCV), pages 68-84, 2018. 2
Unsupervised geometry-aware representation for 3d human pose estimation. Helge Rhodin, Mathieu Salzmann, Pascal Fua, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)Helge Rhodin, Mathieu Salzmann, and Pascal Fua. Unsu- pervised geometry-aware representation for 3d human pose estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 750-767, 2018. 2
Learning monocular 3d human pose estimation from multi-view images. Helge Rhodin, Jörg Spörri, Isinsu Katircioglu, Victor Constantin, Frédéric Meyer, Erich Müller, Mathieu Salzmann, Pascal Fua, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionHelge Rhodin, Jörg Spörri, Isinsu Katircioglu, Victor Con- stantin, Frédéric Meyer, Erich Müller, Mathieu Salzmann, and Pascal Fua. Learning monocular 3d human pose estima- tion from multi-view images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8437-8446, 2018. 2
Facenet: A unified embedding for face recognition and clustering. Florian Schroff, Dmitry Kalenichenko, James Philbin, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionFlorian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clus- tering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815-823, 2015. 2, 3, 4, 5
Integral human pose regression. Xiao Sun, Bin Xiao, Fangyin Wei, Shuang Liang, Yichen Wei, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)Xiao Sun, Bin Xiao, Fangyin Wei, Shuang Liang, and Yichen Wei. Integral human pose regression. In Proceedings of the European Conference on Computer Vision (ECCV), pages 529-545, 2018. 2
Learning to fuse 2d and 3d image cues for monocular body pose estimation. Pablo Bugra Tekin, Mathieu Márquez-Neila, Pascal Salzmann, Fua, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionBugra Tekin, Pablo Márquez-Neila, Mathieu Salzmann, and Pascal Fua. Learning to fuse 2d and 3d image cues for monocular body pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3941- 3950, 2017. 2
Lourdes Agapito, and Chris Russell. Rethinking pose in 3d: Multi-stage refinement and recovery for markerless motion capture. Denis Tome, Matteo Toso, 2018 International Conference on 3D Vision (3DV). Denis Tome, Matteo Toso, Lourdes Agapito, and Chris Rus- sell. Rethinking pose in 3d: Multi-stage refinement and recovery for markerless motion capture. In 2018 Inter- national Conference on 3D Vision (3DV), pages 474-483.
. IEEE. 2IEEE, 2018. 2
Luke Vilnis, Andrew Mccallum, arXiv:1412.6623Word representations via gaussian embedding. arXiv preprintLuke Vilnis and Andrew McCallum. Word representations via gaussian embedding. arXiv preprint arXiv:1412.6623, 2014. 2
Learning fine-grained image similarity with deep ranking. Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, Ying Wu, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionJiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learn- ing fine-grained image similarity with deep ranking. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1386-1393, 2014. 2
Learning descriptors for object recognition and 3d pose estimation. Paul Wohlhart, Vincent Lepetit, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionPaul Wohlhart and Vincent Lepetit. Learning descriptors for object recognition and 3d pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3109-3118, 2015. 2
Sampling matters in deep embedding learning. Chao-Yuan, R Wu, Alexander J Manmatha, Philipp Smola, Krahenbuhl, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionChao-Yuan Wu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. Sampling matters in deep embedding learning. In Proceedings of the IEEE International Confer- ence on Computer Vision, pages 2840-2848, 2017. 2
View invariant human action recognition using histograms of 3d joints. Lu Xia, Chia-Chih Chen, Jake K Aggarwal, 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Lu Xia, Chia-Chih Chen, and Jake K Aggarwal. View invari- ant human action recognition using histograms of 3d joints. In 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pages 20-27.
. IEEE. 2IEEE, 2012. 2
Figure 13: Visualization of pose retrieval results. On each row, we show the query pose on the left for each image pair and the top-1 retrieval using the Pr-VIPE model with keypoint augmentation on the right. We display the retrieval confidences and top-1 NP-MPJPEs (if 3D pose ground truth is available).
| [] |
[
"End-to-End Learning of Visual Representations from Uncurated Instructional Videos",
"End-to-End Learning of Visual Representations from Uncurated Instructional Videos"
] | [
"Antoine Miech [email protected] \nENS/Inria\n\n",
"Jean-Baptiste Alayrac [email protected] \n3 CTU 4 Oxford\n",
"Lucas Smaira \n3 CTU 4 Oxford\n",
"Ivan Laptev \nENS/Inria\n\n",
"Josef Sivic \nENS/Inria\n\n",
"Andrew Zisserman \n3 CTU 4 Oxford\n"
] | [
"ENS/Inria\n",
"3 CTU 4 Oxford",
"3 CTU 4 Oxford",
"ENS/Inria\n",
"ENS/Inria\n",
"3 CTU 4 Oxford"
] | [] | Figure 1: We describe an efficient approach to learn visual representations from highly misaligned and noisy narrations automatically extracted from instructional videos. Our video representations are learnt from scratch without relying on any manually annotated visual dataset, yet outperform all self-supervised and many fully-supervised methods on several video recognition benchmarks. Abstract: Annotating videos is cumbersome, expensive and not scalable. Yet, many strong video models still rely on manually annotated data. With the recent introduction of the HowTo100M dataset, narrated videos now offer the possibility of learning video representations without manual supervision. In this work we propose a new learning approach, MIL-NCE, capable of addressing misalignments inherent to narrated videos. With this approach we are able to learn strong video representations from scratch, without the need for any manual annotation. We evaluate our representations on a wide range of four downstream tasks over eight datasets: action recognition (HMDB-51, UCF-101, Kinetics-700), text-to-video retrieval (YouCook2, MSR-VTT), action localization (YouTube-8M Segments, CrossTask) and action segmentation (COIN). Our method outperforms all published self-supervised approaches for these tasks as well as several fully supervised baselines. Our joint text-video pretrained model is publicly available at: https://tfhub.dev/deepmind/mil-nce/i3d/1. * Equal contribution. | 10.1109/cvpr42600.2020.00990 | [
"https://arxiv.org/pdf/1912.06430v1.pdf"
] | 209,370,497 | 1912.06430 | 9de403a58395a1b56bfceee6e009788c43db6d08 |
End-to-End Learning of Visual Representations from Uncurated Instructional Videos
Antoine Miech [email protected]
ENS/Inria
Jean-Baptiste Alayrac [email protected]
3 CTU 4 Oxford
Lucas Smaira
3 CTU 4 Oxford
Ivan Laptev
ENS/Inria
Josef Sivic
ENS/Inria
Andrew Zisserman
3 CTU 4 Oxford
End-to-End Learning of Visual Representations from Uncurated Instructional Videos
Figure 1: We describe an efficient approach to learn visual representations from highly misaligned and noisy narrations automatically extracted from instructional videos. Our video representations are learnt from scratch without relying on any manually annotated visual dataset, yet outperform all self-supervised and many fully-supervised methods on several video recognition benchmarks.

Abstract. Annotating videos is cumbersome, expensive and not scalable. Yet, many strong video models still rely on manually annotated data. With the recent introduction of the HowTo100M dataset, narrated videos now offer the possibility of learning video representations without manual supervision. In this work we propose a new learning approach, MIL-NCE, capable of addressing misalignments inherent to narrated videos. With this approach we are able to learn strong video representations from scratch, without the need for any manual annotation. We evaluate our representations on a wide range of four downstream tasks over eight datasets: action recognition (HMDB-51, UCF-101, Kinetics-700), text-to-video retrieval (YouCook2, MSR-VTT), action localization (YouTube-8M Segments, CrossTask) and action segmentation (COIN). Our method outperforms all published self-supervised approaches for these tasks as well as several fully supervised baselines. Our joint text-video pretrained model is publicly available at: https://tfhub.dev/deepmind/mil-nce/i3d/1. * Equal contribution.
Introduction
What we see changes what we know. What we know changes what we see.
Jean Piaget
Vision and language play an important role in the way humans learn to associate visual entities to abstract con-cepts and vice versa. This has also become the de facto way to successfully train computer vision models. Indeed, from classification where images are categorized based on a fixed list of words to the recent captioning tasks where images or videos are annotated with rich language descriptions, this interplay is one of the driving forces behind recent progress in the field. However, one of the main limitations of this approach is that it requires manually annotating large collections of visual data.
Manual annotation is both cumbersome and expensive. Moreover, for videos, which are the main focus of this work, annotation is also even more challenging due to the ambiguities of choosing the right vocabulary of actions and annotating action intervals in video. This significantly limits the scale at which fully supervised video dataset can be obtained and hence slows down the quest to improve visual representations. Recent work has proposed a promising alternative to this fully supervised approach: leveraging narrated videos that are readily available at scale on the web.
Of particular interest, the recent HowTo100M dataset [50] contains more than 100 million pairs of video clips and associated narrations. It was automatically collected by querying YouTube for instructional videos. Such videos usually depict someone explaining orally how to perform a complex human activity, e.g. preparing a particular meal or repairing a car. Our objective in this paper is to learn strong video representations using only this narrated material.
End-to-end learning from instructional videos is a highly challenging task. Indeed, these videos are generally made with the goal of maximizing the number of views, and with no specific intention to provide a training signal for machine learning algorithms. This means that the supervision present in the narration is only weak and noisy. Among typical sources of noise, the prominent one by far is the weak alignment between the video and the language: although for the most part the spoken words correlate with what is happening in these videos, this alignment is far from perfect. People might talk about something before actually demonstrating it, but they might also omit to talk about something that is happening because it is clear enough visually. Conversely, they might mention an action without showing it when the step is not essential or is trivial to convey with language alone. This is without even considering the irrelevant information given throughout the video (e.g. jokes or credits), as well as the general difficulty of working with spoken language obtained from a potentially erroneous speech recognition algorithm, as opposed to written text.
In this work, we propose a bespoke training loss, dubbed MIL-NCE as it inherits from Multiple Instance Learning (MIL) and Noise Contrastive Estimation (NCE). Our method is capable of addressing visually misaligned narrations from uncurated instructional videos, as illustrated in Figure 1. Equipped with this novel training scheme and a simple joint video and text embedding model, we show that we can successfully train video representations from scratch, directly from pixels, on the HowTo100M [50] dataset. To demonstrate the quality of the learnt representations, we employ an extensive set of evaluation benchmarks on a wide variety of video understanding tasks: action recognition (HMDB-51, UCF-101, Kinetics-700), text-to-video retrieval (YouCook2, MSR-VTT), action localization (YouTube-8M Segments, CrossTask) and action segmentation (COIN). Notably, our learnt video representations outperform fully supervised baselines trained on Kinetics or ImageNet for several of the tasks. We also show improvements over other self-supervised approaches on HMDB-51 and UCF-101, even without fine-tuning the learnt representations. Finally, by leveraging the joint video and text representations, our off-the-shelf trained model also reaches state-of-the-art results on YouCook2 and CrossTask, without any training on the target datasets.
Contributions. The contributions of this work are threefold. (i) We propose a method to learn a joint text-video embedding in an end-to-end fashion from unlabelled, uncurated narrated videos using the recently introduced HowTo100M [50] dataset. In particular, we introduce a specific loss, dubbed MIL-NCE for Multiple Instance Learning Noise Contrastive Estimation, that enables the learning to cope with the highly misaligned narration descriptions. (ii) We provide a thorough ablation study to quantitatively assess the importance of the different design choices of the approach. (iii) Finally, we demonstrate that the representations thus obtained are competitive with their strongly supervised counterparts on four downstream tasks over eight video datasets.
Related work
Learning visual representations from unlabeled videos. As labeling videos is cumbersome, expensive and not scalable, a significant number of prior works have studied the task of learning visual representations from unlabeled videos. Currently, the most effective approach is to collect a large amount of data from social media and use the available metadata as supervision [1,25]. However, this metadata is often in the form of keywords or tags, rather than the (spoken) natural language considered in this work. In addition, the metadata is often platform-dependent and rarely publicly available. Self-supervised approaches do not suffer from these issues, as the idea is to define a supervised proxy task using labels directly generated from videos. Some of these tasks include: temporal ordering of video clips or frames [23,44,52,82], predicting geometric transformations [36], predicting motion and appearance [74], predicting the future, the past or a portion of masked input in the feature space [29,68,72], colorizing videos [73], predicting 3D geometry from synthetic data [24], predicting the audio in a feature space [7,41] or tasks leveraging temporal cycle consistency [22,77]. In our work, we leverage, as supervision for our proxy task, the output of automatic speech recognition (ASR) run on narrated instructional videos. The nature of this supervision has the potential to also provide semantic information [50,65], which is often missing in works that only exploit pixel-wise cues. Moreover, most of the top-performing prior works only study their method on curated video datasets (e.g. Kinetics [14]) where labels have been removed. However, this is not truly learning from unlabeled data, as these videos have been carefully selected and verified to belong to classes of interest. Caron et al. [12] further explain the performance gap between training on such curated data versus uncurated data, truly available at scale.
Instead, our approach focuses on learning representations only from uncurated videos.

[Figure 2 caption] Left: the standard NCE approach would only consider the single $(x, y)$ training pair and would miss the visually grounded object description "sander" from pair $(x, y_3)$ or the action description "sanding down" from $(x, y_4)$. Right: given a video $x$ and an associated set of positive narration candidates $\mathcal{P}$ (green dots) that may or may not be correct, our MIL-NCE selects multiple correct positives (large blue areas) while downweighting incorrect positives (smaller blue areas) based on a discriminative ratio against negatives $\mathcal{N}$ (red dots). In contrast, traditional MIL considers only one positive (orange circle) while discarding the rest.

Vision, speech and language. A common alternative to training visual models using manually defined sets of labels is to exploit semantic supervision from natural language or speech. Numerous prior works [18,20,26,27,40,49,53,58,60,84,75,76,79,80] have used image / video description datasets [46,61,63,83,86] to learn embeddings in which visual and textual data are close when they are semantically similar. These methods either rely on manually annotated image / video description datasets, or leverage representations already pre-trained on manually labelled datasets (e.g. ImageNet [64] or Kinetics [14]). In contrast, in this work no manually annotated visual data is involved at any stage of our approach. To avoid labelling visual data, several approaches have leveraged audio transcripts obtained from narrated videos using automatic speech recognition (ASR) as a way to supervise video models for object detection [3,15,54], captioning [33,69], classification [2,42,47,85], summarization [57] or retrieval [50], using large-scale narrated video datasets such as How2 [65] or HowTo100M [50]. Others [10,30] have investigated learning from narrated videos by directly using the raw speech waveform instead of generating transcriptions. Most related to us is the work of Miech et al.
[50] who trained a joint video and text embedding from uncurated instructional videos [50]. However, as opposed to our work, they do not model any misalignment issue encountered when training on such videos and rely on visual representations pretrained on Kinetics-400 and ImageNet. Building on this work, Sun et al. [68] have used a contrastive bidirectional transformer (CBT) to learn long term contextual video representations from instructional videos. All these works use a visual representation pre-trained on either Kinetics or ImageNet when training on such narrated videos. In contrast, the key innovation of our work is that we demonstrate learning a generic video representation as well as a joint video-text embedding from scratch, without pre-training on manually annotated video or image datasets.
Multiple instance learning for video understanding.
Multiple instance learning methods have been employed in many weakly-supervised video understanding problems including: person recognition in movies using scripts [11,48,59], weakly supervised action classification [45,66] and localization [16,21,78], co-reference resolution of characters in TV series [62] or object tracking [8]. These methods often rely on some form of max-pooling (i.e. MIL-SVM [4]) or discriminative clustering (i.e. DIFFRAC [9]) to resolve the label ambiguities, and have used mostly linear (or shallow) models. In this work, we present MIL-NCE, a new approach marrying the noise contrastive estimation (NCE) framework [28] with multiple instance learning [19]. We show that MIL-NCE is well-suited to learn deep visual representations from scratch using weak and noisy training signals available in uncurated instructional videos.
Leveraging Uncurated Instructional Videos
This section describes the proposed approach to train joint video and text embeddings from unlabeled narrated videos in an end-to-end fashion. To start with, we are given $n$ pairs of video clips and associated narrations. In practice, a pair is composed of a short 3.2-second video clip (32 frames at 10 fps) together with a small number of words (not exceeding 16) that correspond to what the person is saying in the video. For example, someone might be sanding wood while mentioning the action "sanding down" or the object "sander", as illustrated in Figure 2a. Given this input, our goal is to learn a joint embedding space where the similarity between the narration and video embeddings is high when the text and visual content are semantically similar and low otherwise, and we wish to learn this starting from raw pixels in the video and the text descriptions. As illustrated in Figure 1, this is a very challenging problem due to the often severely misaligned visual descriptions.
In this work, we address this issue by introducing the MIL-NCE objective:
$$\max_{f,g}\ \sum_{i=1}^{n} \log\left( \frac{\sum_{(x,y)\in\mathcal{P}_i} e^{f(x)^\top g(y)}}{\sum_{(x,y)\in\mathcal{P}_i} e^{f(x)^\top g(y)} + \sum_{(x',y')\sim\mathcal{N}_i} e^{f(x')^\top g(y')}} \right) \quad (1)$$
where $x$ represents a video clip and $y$ a narration, and $f$ and $g$ are the two embedding functions that operate over video and text, respectively. For the $i$-th sample, we construct $\mathcal{P}_i$ to be a valid set of positive video/narration candidate pairs (see Figure 2), while $\mathcal{N}_i$ conversely refers to an associated set of negative video/narration pairs. This objective amounts to maximizing the ratio of the sum of the scores of the positive candidates from $\mathcal{P}_i$ to the sum of the scores of the positives and of the negatives sampled from $\mathcal{N}_i$, where the score of a pair is the exponentiated dot product of the corresponding video and language embeddings, $f(x)$ and $g(y)$.
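As a concrete (if simplified) illustration of Eq. (1), the sketch below evaluates the MIL-NCE objective for a batch in plain NumPy. The in-batch construction of the negatives — every pair not marked positive in `pos_mask` acts as a negative — and the function name are our assumptions for illustration, not necessarily the exact sampling used in the paper.

```python
import numpy as np

def mil_nce_loss(v, t, pos_mask):
    """MIL-NCE objective (Eq. 1) for a batch, returned as a loss to minimize.

    v: (B, d) video embeddings f(x); t: (N, d) narration embeddings g(y).
    pos_mask: (B, N) boolean, pos_mask[i, j] = True iff (x_i, y_j) is a
    candidate positive pair in P_i; all remaining pairs act as negatives N_i.
    """
    scores = v @ t.T                       # f(x)^T g(y) for every pair
    scores = scores - scores.max()         # stabilize the exponentials
    exp_scores = np.exp(scores)
    pos = (exp_scores * pos_mask).sum(axis=1)           # sum over P_i
    neg = (exp_scores * ~pos_mask).sum(axis=1)          # sum over N_i
    return float(-np.log(pos / (pos + neg)).mean())
```

Because the denominator already contains the positive terms, the loss is bounded below by zero, and `pos_mask` may mark several narrations per clip — which is exactly what distinguishes Eq. (1) from standard single-positive NCE.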
In the following, we describe more precisely the motivation behind the MIL-NCE objective (1). First, Section 3.1 introduces the chosen probabilistic model for joint text and video embedding. Given that model, Section 3.2 details the choices behind the training objective (1), explaining how it is specifically adapted to handle the misalignment noise inherent in narrated videos, in comparison with existing approaches.
A simple joint probabilistic model
In the following, $x \in \mathcal{X}$ stands for a video clip and $y \in \mathcal{Y}$ for a narration. Given a set of $n$ pairs of video clips and associated narrations $\{(x_i, y_i)\}_{i=1}^{n} \in (\mathcal{X} \times \mathcal{Y})^n$ sampled from the joint data distribution $P(\mathcal{X} \times \mathcal{Y})$, our goal is to learn a joint embedding space where semantically related videos and texts are close, and far apart otherwise.

Formally, we learn two parametrized mappings: $f : \mathcal{X} \to \mathbb{R}^d$ maps a video clip $x$ into a $d$-dimensional vector $f(x) \in \mathbb{R}^d$, and $g : \mathcal{Y} \to \mathbb{R}^d$ maps a narration $y$ into the same $d$-dimensional vector space, $g(y) \in \mathbb{R}^d$. We assume that we can estimate, up to a constant factor, the joint probability of a pair of video and narration $(x, y)$ by exponentiating the dot product of the two embeddings:

$$p(x, y; f, g) \propto e^{f(x)^\top g(y)}. \quad (2)$$
In this work, $f$ takes the form of a CNN that runs over a fixed-length clip. For $g$, we consider simple sentence-based models that transform a set of words into a single vector. Note that, for simplicity and with a slight abuse of notation, we refer to $f$ (or $g$) as both a function and the parameters that define it. Also, we will refer to (2) simply as $p(x, y)$, i.e. we keep the dependence on $f$ and $g$ implicit to keep the equations simple. More details about the exact architecture of the models are provided in Section 4.
Learning from uncurated data: MIL-NCE
Recall that our goal is to learn a joint video and text representation only from uncurated narrated videos. In this section, we start by detailing why this is a highly challenging endeavor due to the misalignments present in that data. Next, we explain how the introduced MIL-NCE objective (1) enables learning despite that noise. Finally, we contrast our proposed approach with similar works in the self-supervised domain.

Misalignment in narrated videos. In [50], the authors estimate that around 50% of clip-narration pairs from the HowTo100M dataset are not aligned. In fact, people are likely to describe an event after or before performing it in the video, as illustrated in Figure 1. This visual misalignment makes it more challenging to learn video representations than with manually annotated and aligned labels.

How to learn despite noisy supervision? To address the aforementioned issues, we propose to consider multiple options for matching a video and a narration instead of only comparing a single video $x$ with a single narration $y$ as done in [50]. Consider the example illustrated in Figure 2a. Given a clip $x$, the $K$ narrations $\{y_k\}_{k=1}^{K}$ that happen close in time within the same video can be considered as positive candidates. By doing so, the chance that the spoken words correlate with what is happening in the video increases. In that case, we would like to match at least one of the narrations $\{y_k\}_{k=1}^{K}$ with the video $x$. Given the probabilistic model (2), a natural way to express this is by computing the joint probability of $x$ happening with any of the $y_k$. Because we can assume that the $y_k$'s are mutually exclusive (i.e. neighbouring narrations are never repeated twice), this can be expressed via (2) as follows:

$$p\left(\bigcup_k \{(x, y_k)\}\right) = \sum_k p(x, y_k) \propto \sum_k e^{f(x)^\top g(y_k)}. \quad (3)$$
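To see what Eq. (3) buys over classical MIL, the toy sketch below (our own illustration, not the paper's code) contrasts the proposed aggregation — summing the exponentiated scores over the $K$ candidate narrations — with MIL-style max-pooling, which commits to a single candidate.

```python
import numpy as np

def mil_nce_positive_score(scores):
    """Eq. (3): unnormalized p(x matches any y_k) = sum_k exp(f(x)^T g(y_k))."""
    return float(np.exp(scores).sum())

def max_pool_mil_score(scores):
    """Classical MIL alternative: keep only the single best-matching narration."""
    return float(np.exp(scores).max())
```

For candidate scores `[2.0, 1.8, -1.0]`, two plausible narrations both contribute to the summed score, whereas max-pooling keeps only the best one; this is what lets the objective match several correct narrations to the same clip.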
This is a MIL-like extension; but note that it allows multiple $y_k$'s to be matched with a single video $x$, i.e. it does not restrict the match to only one element of $\{y_k\}_{k=1}^{K}$. More generally, and symmetrically, the case where several video clips are candidates for a given narration can also be envisioned. Hence, for generality, we assume that instead of having a single pair $(x, y)$, we have a set of candidate positive pairs $\mathcal{P} = \{(x_k, y_k)\}_{k=1}^{K}$, and we can simply repurpose (3) as $p(\mathcal{P}) \propto \sum_{(x,y)\in\mathcal{P}} e^{f(x)^\top g(y)}$. We denote by $\{\mathcal{P}_i\}_{i=1}^{n}$ the training set of candidate positive sets deduced from the original training set $\{(x_i, y_i)\}_{i=1}^{n}$. With this extension, we have the tools to address the misalignment problem. Practical details about how to construct $\mathcal{P}_i$ are given in Section 4.1.

How to train this model? MIL-NCE. We wish to learn a video representation based on the previously described probabilistic model $p(\mathcal{P})$. However, this is challenging, as one cannot directly apply standard generative techniques such as maximum likelihood due to the intractability of computing the normalization constant over all possible pairs of videos and narrations. Instead, we rely on a discriminative technique, namely the noise-contrastive estimation (NCE) approach [28,37], which has recently been shown to be effective for feature learning [31,55]. The core idea is to directly optimize the unnormalized probabilistic model (3) to discriminate between data obtained from the true joint distribution $P(\mathcal{X} \times \mathcal{Y})$ and some artificially generated noise data, a.k.a. "negatives". In this work, we use the softmax version of NCE [37]:
$$\max_{f,g} \; \sum_{i=1}^{n} \log \frac{e^{f(x_i)^\top g(y_i)}}{e^{f(x_i)^\top g(y_i)} + \sum_{(x',y')\sim\mathcal{N}_i} e^{f(x')^\top g(y')}} \qquad (4)$$
and replacing the probability of a single positive match, $e^{f(x_i)^\top g(y_i)}$, with our MIL-like extension, $\sum_{(x,y)\in\mathcal{P}_i} e^{f(x)^\top g(y)}$, gives our proposed MIL-NCE training objective (1). Given this, we can simply estimate the parameters of our model by maximizing the objective (1), where $\mathcal{N}_i$ is a specific set of negatives for the i-th sample. Next, we discuss how our approach differs from previous related work.

NCE objectives for self-supervised learning. NCE has recently been successfully applied to self-supervision. In particular, CPC [31,55] introduces the InfoNCE loss, which enforces the model to maximize the conditional probability of some targets (e.g. the bottom part of the image) conditioned on some context (e.g. the top part of the image). Unlike CPC, which creates an asymmetric set of negatives by fixing the context and sampling only negative targets, we instead use NCE to model the symmetric joint probability between text and video (2). For that reason, we construct $\mathcal{N}_i$ so that it contains negatives for both the video $x_i$ and the narration $y_i$. In Section 4, we describe precisely how $\mathcal{N}_i$ is obtained and evaluate the benefit of this symmetric approach.
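A minimal numpy sketch of the resulting per-sample MIL-NCE term may help fix ideas; the function name and the toy scores below are illustrative only (in the paper, the scores are the dot-products $f(x)^\top g(y)$ produced by the two networks):

```python
import numpy as np

def mil_nce_loss(pos_scores, neg_scores):
    """Per-sample MIL-NCE objective (to be maximized).

    pos_scores: scores f(x)^T g(y) over the bag of positive
                candidate pairs P_i -- the sum in Eq. (3).
    neg_scores: scores over the negative set N_i.
    Returns log [ sum_P e^s / (sum_P e^s + sum_N e^s) ].
    """
    all_scores = np.concatenate([pos_scores, neg_scores])
    m = all_scores.max()  # log-sum-exp trick for numerical stability
    lse_pos = m + np.log(np.exp(pos_scores - m).sum())
    lse_all = m + np.log(np.exp(all_scores - m).sum())
    return lse_pos - lse_all

# With a single positive, this reduces to the standard softmax NCE of Eq. (4).
single = mil_nce_loss(np.array([2.0]), np.array([0.0, -1.0]))
```

Note that adding a second well-matching candidate to the bag increases the objective, which is exactly the mechanism that lets at least one of the neighbouring narrations explain the clip.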
Experiments
We first describe implementation details of our method in Section 4.1. The eight datasets used in our evaluation are outlined in Section 4.2. We present a thorough ablation study emphasizing key ingredients of our approach in Section 4.3. Finally, we compare our learnt representations to previous self-supervised and fully-supervised methods in Section 4.4.
Implementation details
Model and Inputs. For the 3D CNN backbone, we use the standard I3D implementation from [14]. We use the Google News self-supervised pre-trained word2vec embedding (d = 300) from [51] for our word representation. Each training video clip contains 32 frames sampled at 10 fps (3.2 seconds) at 200×200 resolution (224×224 at test time).
For each narration, we take a maximum of 16 words. More details about the model architecture and input dimensions are provided in Table 1. A detailed illustration of the architecture is also given in Appendix B.
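Reading the dimensions off Table 1 (and the Figure 4 description in the appendix), the two embedding branches can be sketched as follows; the random weight matrices are hypothetical stand-ins for the learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for learned weights (shapes follow Table 1).
W_txt1 = rng.standard_normal((300, 2048)) * 0.01  # per-word Linear
W_txt2 = rng.standard_normal((2048, 512)) * 0.01  # projection to joint space
W_vid = rng.standard_normal((1024, 512)) * 0.01   # projection to joint space

def g_text(word_vecs):
    """word_vecs: [16, 300] word2vec embeddings (PAD rows included)."""
    h = np.maximum(word_vecs @ W_txt1, 0.0)  # Linear + ReLU, per word
    h = h.max(axis=0)                        # MaxPool over the 16 words
    return h @ W_txt2                        # [512]

def f_video(mixed5c):
    """mixed5c: [4, 6, 6, 1024] I3D Mixed5c features of a 32-frame clip."""
    h = mixed5c.mean(axis=(0, 1, 2))         # Global average pool -> [1024]
    return h @ W_vid                         # [512]

emb_t = g_text(rng.standard_normal((16, 300)))
emb_v = f_video(rng.standard_normal((4, 6, 6, 1024)))
score = emb_v @ emb_t                        # similarity f(x)^T g(y)
```

Both branches end in the same 512-dimensional joint space, so a plain dot product serves as the clip-narration similarity.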
Visual representations evaluation. We evaluate our visual representations at two different semantic levels. First, we use the output of the I3D global average pool (see Table 1) to evaluate our representation for action recognition, action segmentation and action localization. Next, the output of the last I3D Linear layer (see Table 1), which maps the video to the joint text-video semantic space, is used in conjunction with the output of the language model for the text-video retrieval tasks.

Training dataset. We train our model on the HowTo100M [50] narrated video dataset. It consists of more than 1.2M videos accompanied by automatically generated speech transcriptions. We use the provided transcriptions to create video/caption pairs defined by each caption's time stamps. Note that while the original dataset [50] consists of 136M pairs, we only use 120M pairs in order to comply with the YouTube wipe-out policy. Each video shorter than 5 seconds is extended symmetrically in time so that its duration is at least 5 seconds. At training, we then randomly sample a fixed-length clip of 3.2 seconds within each video. For each sampled clip-narration training pair (x, y), we construct the bag of positive candidate pairs P by considering the narrations nearest in time to y, as depicted in Figure 2a. For example, if we set the number of positive candidate pairs to 3, we have P = {(x, y), (x, y1), (x, y2)}, where y1 and y2 are the 2 narrations closest in time to y. For the negative set N, we first fix x and take negative narrations from other samples within the batch, and symmetrically fix the narration y and take negative videos from the other samples in the batch.

Optimization. We use the ADAM [39] optimizer with an initial learning rate of $10^{-3}$ and a linear warm-up of 5k steps. The learning rate is decayed twice by a factor of 10. We train our model on Cloud TPUs v3,¹ each Cloud TPU having a batch size of 128 videos.
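The candidate-bag construction described above can be sketched as follows; the helper name and the timestamps are illustrative, not part of any released code:

```python
import numpy as np

def build_positive_bag(clip_time, narr_times, k=5):
    """Return indices of the k narrations closest in time to the clip.

    clip_time: center timestamp (s) of the sampled 3.2 s clip.
    narr_times: center timestamps (s) of the video's narrations.
    Index 0 of the result is the clip's own narration when aligned.
    """
    order = np.argsort(np.abs(np.asarray(narr_times) - clip_time))
    return order[:k].tolist()

# Clip sampled around t = 12 s; the bag contains the narration at t = 12 s
# plus its nearest temporal neighbours.
bag = build_positive_bag(12.0, [3.0, 8.0, 12.0, 15.0, 40.0], k=3)
```

This mirrors the trade-off found in the ablations: a small k keeps the candidates temporally close, while a large k starts to include narrations too far from the clip.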
Given the high computational load required for training on HowTo100M, we run ablation studies on 4 Cloud TPUs and train our model
Downstream tasks
To show the generality of our learnt representations, we perform evaluation on five diverse downstream tasks using the eight datasets described below.

Action Recognition: HMDB-51 [43], UCF-101 [67], Kinetics-700 [13]. We evaluate our video-only representation on the traditional HMDB-51 / UCF-101 benchmarks as well as on the recent Kinetics-700 action recognition task.

Text-to-Video Retrieval: YouCook2 [86], MSR-VTT [83]. We use the YouCook2 and MSR-VTT text-to-video retrieval benchmarks to evaluate our off-the-shelf learnt joint text-video representation. We follow the same evaluation protocol as described in [50]. We report retrieval performance using the recall at K (R@K) metric (with K = 1, 5, 10), which measures the percentage of clips retrieved within the top K (the higher the better). We also report the median rank (MedR) of the videos to be retrieved (the lower the better).

Action Localization: YouTube-8M [1] Segments. We evaluate our video representation on YouTube-8M Segments,² a subset of YouTube-8M [1] with precise temporal annotations. We follow the YouTube-8M Segments challenge evaluation protocol and report the mAP metric.³

Action Step Localization: CrossTask [87]. We use the recently released CrossTask instructional video dataset to evaluate our off-the-shelf learnt joint text-video representation on the task of action step localization. We perform the same evaluation protocol as in [87] and report the average recall (CTR) metric for the localization task.

Action Segmentation: COIN [70]. We evaluate our video-only representation on the COIN action segmentation task and follow the evaluation protocol of [68], reporting the frame-wise accuracy (FA).
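The R@K and MedR retrieval metrics described above can be computed as in the minimal sketch below (which ignores tie-breaking among equal similarity scores):

```python
import numpy as np

def retrieval_metrics(sim, ks=(1, 5, 10)):
    """sim[i, j]: similarity between text query i and video j.
    The ground-truth video for query i is video i (the diagonal).
    Returns recall@K in percent and the 1-indexed median rank."""
    # Rank of the correct video among all videos, for each query.
    ranks = (sim > np.diag(sim)[:, None]).sum(axis=1) + 1
    recalls = {k: 100.0 * np.mean(ranks <= k) for k in ks}
    med_rank = float(np.median(ranks))
    return recalls, med_rank

sim = np.array([[0.9, 0.1, 0.3],
                [0.8, 0.2, 0.5],
                [0.0, 0.4, 0.6]])
recalls, medr = retrieval_metrics(sim, ks=(1,))
```

In this toy example, queries 0 and 2 rank their ground-truth video first while query 1 ranks it third, giving R@1 ≈ 66.7% and a median rank of 1.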
Ablation studies
We perform the ablation studies on the following downstream tasks: MSR-VTT R@10 (MR10), YouCook2 R@10 (YR10), HMDB-51 and UCF-101 recognition accuracy on split 1, and CrossTask average recall (CTR). This subset of downstream tasks was chosen for its simplicity of evaluation and because it covers a wide range of tasks.

Which loss is better for learning the joint embedding? In this ablation study (Table 2a), we compare different losses for matching the text and video embeddings in the standard single-instance learning setting, where we pair each video clip with its closest narration in time. We compare the NCE-based approach (ours) to the frequently used max-margin ranking loss [18,20,32,38,49,53,75,76,79,80] and to a binary classification loss (i.e. sigmoid cross-entropy loss), which has been shown to be effective for video-audio matching [6,7]. Overall, the NCE loss outperforms or matches the other losses on all five tested datasets.

The more negatives, the better. Keeping the same single-instance learning setting, we assess the quality of our representations when trained with different numbers of sampled negative examples per positive pair in Table 2b. We can see that overall performance increases with the number of negatives. For the rest of the ablation studies, we use 512 negative samples per positive.

How many positive candidate pairs to consider? We evaluate the benefit of going from the single-instance learning approach to the proposed multiple-instance based approach in Table 2c. In this experiment, we vary the number of positive candidate training pairs P for each video clip from 1 (i.e. the single-instance learning setting) up to 33 candidates. Adding candidates significantly improves performance over the single-instance learning baseline. Moreover, we observe a trade-off between having too many candidates and not having enough of them, as we reach the best results with 3 to 5 positive candidates.
We believe that adding too many contextual narrations increases the chance of including irrelevant ones, as they are sampled further in time from the considered video clip. For the rest of the paper we fix the number of positive candidate pairs to 5.

MIL-NCE vs other MIL based approaches. In Table 2d, we compare our MIL-NCE approach with methods that can also handle multiple candidate captions at training time. The max-pool based approach [4,7,56] (Max+NCE) only optimizes over the clip-caption pair with the highest similarity score among the positive candidates. On the other hand, the attention-based approach [35] (Attn+NCE) computes cross-modal attention weights between all the clip-caption pairs and performs a weighted average of the similarity scores in order to focus on the most relevant positive candidate pairs. More details about these baselines are provided in Appendix A. Finally, we also compare to a single-instance learning baseline in which all the candidate narrations are concatenated into one longer narration (Cat+NCE). Our proposed MIL-NCE method outperforms these standard approaches on five out of six tasks. Figure 3 illustrates examples of pairs selected by MIL-NCE from a held-out set of HowTo100M videos.

Symmetric or asymmetric negative sampling? Recall that given a pair of video/narration (x, y), we create N in a symmetric manner by sampling negative narrations for the video x and negative videos for the narration y. Table 2e compares this approach, denoted (x, y), to asymmetric alternatives: (i) fixing the video x and sampling only negative captions, denoted (y|x), and (ii) fixing the narration y and sampling only negative videos, denoted (x|y). Overall, the best results are achieved when sampling the negatives jointly, i.e. when we sample both video and narration negatives equally.

Which language model? Finally, we also experiment with different language models (a 1-layer LSTM [34] or GRU [17], a 1-layer Transformer with 8 attention heads [71], and NetVLAD with 32 clusters [5]) and compare them to our simple model (see Table 1) in Table 2f. Even though our language model is close to a simple bag-of-words approach, on average it performs better and is more consistent across the five tasks than the other models. In particular, it significantly outperforms the other language models on the text-to-video retrieval tasks (YR10 and MR10), where language plays the most important role. We believe that sophisticated language understanding is not key for our learning task; instead, detecting and matching the main keywords in each narration is usually enough.
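A minimal numpy sketch of this symmetric within-batch negative sampling, shown for the single-positive case for brevity (the batched loop is illustrative, not the actual TPU implementation):

```python
import numpy as np

def symmetric_nce(F, G):
    """Softmax NCE with symmetric within-batch negatives.

    F: [B, d] video embeddings, G: [B, d] narration embeddings;
    pair (F[i], G[i]) is the positive. For each i, the negative set
    contains both the other narrations for F[i] (row of the score
    matrix) and the other videos for G[i] (column)."""
    S = F @ G.T                                   # all pairwise scores
    B = S.shape[0]
    total = 0.0
    for i in range(B):
        pos = np.exp(S[i, i])
        neg = np.exp(np.delete(S[i, :], i)).sum() \
            + np.exp(np.delete(S[:, i], i)).sum()
        total += np.log(pos / (pos + neg))
    return total / B
```

Dropping either the row term or the column term in `neg` recovers the asymmetric variants (y|x) and (x|y) compared in Table 2e.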
Comparison to the state-of-the-art
Video-only representation. In Table 3, we evaluate our learnt representations on the HMDB-51 [43] and UCF-101 [67] action recognition benchmarks by extracting average-pooled Mixed5c features from the HowTo100M-pretrained I3D [14]. More specifically, we compare to self-supervised approaches which, similarly to our work, do not make use of any annotated video or image dataset when training the visual representation. For AVTS [41], we report performance with the same I3D [14] backbone as ours. Our learnt representation significantly outperforms prior work. Importantly, we achieve state-of-the-art performance among self-supervised approaches on HMDB-51 simply by training a linear classifier on top of the frozen representation. In contrast, all the other top-performing approaches require fine-tuning their representation. This result is significant, as it demonstrates the generalization of our representation to diverse sets of actions despite being trained on uncurated instructional videos. Finally, fine-tuning the I3D leads to further improvements.

Next, we evaluate our visual representation on the COIN [70] action segmentation task in Table 4a. We split videos into subsequent clips of 1.5 seconds and represent each clip by concatenating three features: the local representation from I3D, the global average-pooled representation across the entire video, and the relative positional embedding of the video clip. We train a logistic regression to predict the label for each clip. We compare our HowTo100M-pretrained I3D network to an I3D fully supervised on Kinetics-400 or Kinetics-700, as well as a ResNet-50 fully supervised on ImageNet. We also compare to the state-of-the-art approach on COIN, CBT [68], which relies on a fully supervised S3D [81] trained on Kinetics-600. Our learnt representation performs better than representations trained on Kinetics-400, Kinetics-700 or ImageNet.
Moreover, our method also significantly outperforms the state-of-the-art CBT [68], despite their use of a fully supervised representation trained on Kinetics-600 and a Transformer model.
We also report performance on the recently released YouTube-8M Segments dataset in Table 4b. Since no results have been published for this benchmark yet, we only compare classification performance using different fully supervised representations (i.e. I3D trained on Kinetics-400 / Kinetics-700, or ResNet-50 trained on ImageNet). Here again, our learnt representation outperforms all of the fully supervised counterparts, despite the domain gap between YouTube-8M and uncurated instructional videos.
Finally, in Table 4c we investigate the benefit of initializing an I3D model with our learned weights for a large-scale action recognition task (Kinetics-700). We compare to a randomly initialized I3D and to one inflated from an Inception network pretrained on ImageNet [14]. We obtain a 4% improvement over the randomly initialized I3D, indicating that our representation provides a good initialization. More interestingly, it is also a better initialization than an I3D inflated from an ImageNet-pretrained network.
Joint text-video representation. We report text-to-video retrieval results on YouCook2 (Table 5a) and MSR-VTT (Table 5b) using our off-the-shelf model trained on HowTo100M. Note that our model has not seen any annotated YouCook2 or MSR-VTT videos; hence, for a fair comparison on the MSR-VTT dataset, we only compare to prior work [50] that did not finetune on MSR-VTT. On YouCook2, our off-the-shelf model significantly outperforms all prior work. More specifically, it performs better than the prior state of the art [50], which uses a visual representation trained on Kinetics-400 + ImageNet and trains the joint text-video representation on both HowTo100M and YouCook2. On MSR-VTT, our method performs slightly better than [50], without using any manually annotated dataset. Finally, we also evaluate our off-the-shelf model trained on HowTo100M on the CrossTask [87] action step localization benchmark in Table 4d. We perform localization via a video-to-text retrieval approach, similarly to [50]. Our method outperforms state-of-the-art approaches on this benchmark, here again without using manual supervision.
Conclusion
We have addressed the challenging task of learning visual representations from scratch using uncurated instructional videos. Our approach does not rely on any manually annotated video or image dataset. To deal with highly misaligned narrations and videos, we have introduced MIL-NCE, a multiple instance learning approach derived from the noise-contrastive estimation framework. We have applied MIL-NCE to the uncurated HowTo100M dataset and obtained strong visual representations that outperform self-supervised as well as fully supervised representations on many downstream tasks. More generally, we believe MIL-NCE can be applicable to many multiple instance learning problems where representation learning is key.
Appendix overview
In Section A, we provide technical details about the baselines introduced in Table 2d (MIL strategy) of the ablation studies. Finally, Section B provides a visualization of the model architecture used in our work.
A. Max+NCE and Attn+NCE baselines
We use the same notation as in the main paper for this section.
Max+NCE. This baseline reproduces the standard max-pool based approach often used in multiple instance learning, here combined with the NCE loss. Formally, it maximizes the following objective:
$$\max_{f,g} \; \sum_{i=1}^{n} \log\left(\mathrm{MaxNCE}_i\right), \qquad (5)$$
where:
$$\mathrm{MaxNCE}_i = \frac{\max_{(x,y)\in\mathcal{P}_i} e^{f(x)^\top g(y)}}{\max_{(x,y)\in\mathcal{P}_i} e^{f(x)^\top g(y)} + \sum_{(x',y')\sim\mathcal{N}_i} e^{f(x')^\top g(y')}}. \qquad (6)$$

Intuitively, this corresponds to choosing the best positive candidate pair among all pairs $\mathcal{P}_i$ according to the model.

Attn+NCE. This other baseline selects the best candidate pairs via a cross-modal soft-attention mechanism between the clips and narrations. The cross-modal attention mechanism $a$ is defined as follows:
$$a(x, y, \mathcal{P}_i) = \frac{e^{f_a(x)^\top g_a(y)}}{\sum_{(x',y')\in\mathcal{P}_i} e^{f_a(x')^\top g_a(y')}}, \qquad (7)$$
where $f_a$ and $g_a$ are two parametrized functions. In practice, $f_a$ and $g_a$ share parameters with $f$ (respectively $g$) except for the last 'Linear' layer (see Figure 4). Given this cross-modal attention mechanism, the Attn+NCE objective is:
$$\max_{f,g,a} \; \sum_{i=1}^{n} \log\left(\mathrm{AttnNCE}_i\right), \qquad (8)$$
where:

$$\mathrm{AttnNCE}_i = \frac{e^{\sum_{(x,y)\in\mathcal{P}_i} a(x,y,\mathcal{P}_i)\, f(x)^\top g(y)}}{e^{\sum_{(x,y)\in\mathcal{P}_i} a(x,y,\mathcal{P}_i)\, f(x)^\top g(y)} + \sum_{(x',y')\sim\mathcal{N}_i} e^{f(x')^\top g(y')}}. \qquad (9)$$

The intuition behind this approach is to allow the model to have a separate selection mechanism for the positive candidate pairs.
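To contrast the three pooling strategies on a toy bag of candidate scores (all numbers below are made up): MIL-NCE log-sum-pools the bag, Max+NCE keeps only the best pair, and Attn+NCE averages the scores with the attention weights of Eq. (7):

```python
import numpy as np

# Scores f(x)^T g(y) for a bag of 3 positive candidate pairs, where only
# the first narration truly matches the clip.
bag = np.array([3.0, -2.0, -2.5])
attn_logits = np.array([2.0, -1.0, -1.0])  # hypothetical f_a / g_a scores

mil_pos = np.log(np.exp(bag).sum())        # MIL-NCE positive term (log)
max_pos = bag.max()                        # Max+NCE keeps the best pair
attn = np.exp(attn_logits) / np.exp(attn_logits).sum()  # Eq. (7) weights
attn_pos = (attn * bag).sum()              # weighted score inside Eq. (9)
```

The MIL-NCE term is always at least as large as the max-pooled one, so weak but correlated candidates still contribute gradient instead of being discarded.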
B. Model architecture

Figure 4 provides an illustration of the video model f and the text model g used in the main paper.

Figure 4: Model architecture. In this figure, we provide a visualization of the video embedding network f (left) and the text embedding network g (right). Modules displayed in blue are trained from scratch on the challenging uncurated HowTo100M dataset using the MIL-NCE loss. The word embeddings are learned in an unsupervised fashion using Word2Vec trained on GoogleNews and are kept fixed during training. Finally, the dimensions of the outputs of each layer are specified in brackets, e.g. the output of the 'Word2Vec' layer is of size [16, 300], corresponding to the 16 word embedding vectors of dimension 300 (one vector per word, also materialized by the 16 grey rectangles).
Figure 2: Left: Our MIL-NCE makes it possible to consider a set of multiple positive candidate pairs {(x, y), (x, y1), . . . , (x, y4)}. Right: the objective is to learn an embedding space where visual and textual data are close only if they are semantically similar.
Figure 3: Selected video and narration pairs from five positive candidates on HowTo100M held-out samples using MIL-NCE.
Table 1: Video (left) and text (right) model architectures.
¹ https://cloud.google.com/tpu/

(a) Training loss

Loss            YR10  MR10  CTR   HMDB  UCF
Binary-Classif  18.5  23.1  32.6  44.2  68.5
Max margin      16.3  24.1  29.3  56.2  76.6
NCE             29.1  27.0  35.6  55.4  77.5

(b) Negatives per positive

N    YR10  MR10  CTR   HMDB  UCF
64   26.0  25.5  33.1  56.1  76.0
128  27.1  26.4  33.3  57.2  76.2
256  28.7  28.7  36.5  56.5  77.5
512  28.8  29.0  35.6  55.4  77.4

(c) Number of positive candidate pairs

       NCE   MIL-NCE
P →    1     3     5     9     17    33
YR10   29.1  33.6  35.0  33.1  32.4  28.3
MR10   27.0  30.2  31.8  30.5  29.2  30.4
CTR    35.6  37.3  34.2  31.8  25.0  25.0
HMDB   55.4  57.8  56.7  55.7  54.8  51.4
UCF    77.5  79.7  80.4  79.5  78.5  77.9

(d) MIL strategy

Method    YR10  MR10  CTR   HMDB  UCF
Cat+NCE   31.9  30.8  35.2  56.3  78.9
Max+NCE   32.3  31.3  32.2  55.3  79.2
Attn+NCE  32.4  30.2  33.4  55.2  78.4
MIL-NCE   35.0  31.8  34.2  56.7  80.4

(e) Symmetric vs asymmetric negatives

Negatives  YR10  MR10  CTR   HMDB  UCF
(x|y)      34.4  29.0  33.9  55.1  78.1
(y|x)      19.3  19.4  28.2  57.1  79.2
(x, y)     35.0  31.8  34.2  56.7  80.4

(f) Language models

Text model   YR10  MR10  CTR   HMDB  UCF
LSTM         16.6  15.6  23.8  53.1  80.1
GRU          16.8  16.9  22.2  54.7  82.8
Transformer  26.7  26.5  32.7  53.4  78.4
NetVLAD      33.4  29.2  35.5  51.8  79.3
Ours         35.0  31.8  34.2  56.7  80.4

Table 2: Ablation studies.
Table 3: Comparison to self-supervised methods on HMDB/UCF. Results are reported by averaging the accuracy over the 3 splits for both datasets. *Shuffle and Learn and 3DRotNet reported numbers are reimplemented in [68] using S3D, which yields better accuracy than the original backbone.
Table 4: Evaluation on action segmentation (a), localization (b, d) and recognition (c) benchmarks. K400: Kinetics-400, K600: Kinetics-600, K700: Kinetics-700, HTM: HowTo100M, ImNet: ImageNet, YT8M-S: YouTube-8M Segments, R50: 2D ResNet-50.
(a) YouCook2

Method             Labeled dataset used     R@1↑  R@5↑  R@10↑  MedR↓
Random             None                     0.03  0.15  0.3    1675
HGLMM FV CCA [40]  ImNet + K400 + YouCook2  4.6   14.3  21.6   75
Miech et al. [50]  ImNet + K400             6.1   17.3  24.8   46
Miech et al. [50]  ImNet + K400 + YouCook2  8.2   24.5  35.3   24
Ours               None                     11.4  30.6  42.0   16

(b) MSR-VTT

Method             Labeled dataset used  R@1↑  R@5↑  R@10↑  MedR↓
Random             None                  0.01  0.05  0.1    460
Miech et al. [50]  ImNet + K400          8.6   22.6  32.0   34
Ours               None                  10.3  22.6  31.2   33

Table 5: Zero-shot evaluation on text-to-video retrieval.
² https://research.google.com/youtube8m
³ https://www.kaggle.com/c/youtube8m-2019/overview/evaluation
Acknowledgements. We would like to thank Relja Arandjelović, Pauline Luc and Gunnar Sigurdsson for helpful discussions. The project was partially supported by Antoine Miech's Google PhD fellowship, the ERC grant LEAP (No. 336845), the CIFAR Learning in Machines & Brains program, and the European Regional Development Fund under the project IMPACT (reg. no. CZ.02.1.01/0.0/0.0/15 003/0000468).
References

[1] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and S. Vijayanarasimhan. YouTube-8M: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675, 2016.
[2] J.-B. Alayrac, P. Bojanowski, N. Agrawal, I. Laptev, J. Sivic, and S. Lacoste-Julien. Unsupervised learning from narrated instruction videos. In CVPR, 2016.
[3] E. Amrani, R. Ben-Ari, T. Hakim, and A. Bronstein. Toward self-supervised object detection in unlabeled videos. arXiv preprint arXiv:1905.11137, 2019.
[4] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. In NIPS, 2003.
[5] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In CVPR, 2016.
[6] R. Arandjelovic and A. Zisserman. Look, listen and learn. In ICCV, 2017.
[7] R. Arandjelovic and A. Zisserman. Objects that sound. In ECCV, 2018.
[8] B. Babenko, M.-H. Yang, and S. Belongie. Visual tracking with online multiple instance learning. In CVPR, 2009.
[9] F. Bach and Z. Harchaoui. Diffrac: a discriminative and flexible framework for clustering. In NIPS, 2007.
[10] A. Boggust, K. Audhkhasi, D. Joshi, D. Harwath, S. Thomas, R. Feris, D. Gutfreund, Y. Zhang, A. Torralba, M. Picheny, et al. Grounding spoken words in unlabeled video. In CVPRW, 2019.
[11] P. Bojanowski, F. Bach, I. Laptev, J. Ponce, C. Schmid, and J. Sivic. Finding Actors and Actions in Movies. In ICCV, 2013.
[12] M. Caron, P. Bojanowski, J. Mairal, and A. Joulin. Unsupervised pre-training of image features on non-curated data. In ICCV, 2019.
[13] J. Carreira, E. Noland, C. Hillier, and A. Zisserman. A short note on the Kinetics-700 human action dataset. arXiv preprint arXiv:1907.06987, 2019.
[14] J. Carreira and A. Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. In CVPR, 2017.
[15] K. Chen, H. Song, C. C. Loy, and D. Lin. Discover and learn new objects from documentaries. In CVPR, 2017.
[16] G. Chéron, J.-B. Alayrac, I. Laptev, and C. Schmid. A flexible model for training action localization with varying levels of supervision. In NeurIPS, 2018.
[17] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
[18] M. Chowdhury, R. Panda, E. Papalexakis, and A. Roy-Chowdhury. Webly supervised joint embedding for cross-modal image-text retrieval. In ACM International Conference on Multimedia, 2018.
[19] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31-71, 1997.
[20] J. Dong, X. Li, C. Xu, S. Ji, Y. He, G. Yang, and X. Wang. Dual encoding for zero-example video retrieval. In CVPR, 2019.
[21] O. Duchenne, I. Laptev, J. Sivic, F. Bach, and J. Ponce. Automatic annotation of human actions in video. In ICCV, 2009.
[22] D. Dwibedi, Y. Aytar, J. Tompson, P. Sermanet, and A. Zisserman. Temporal cycle-consistency learning. In CVPR, 2019.
[23] B. Fernando, H. Bilen, E. Gavves, and S. Gould. Self-supervised video representation learning with odd-one-out networks. In CVPR, 2017.
[24] C. Gan, B. Gong, K. Liu, H. Su, and L. J. Guibas. Geometry guided convolutional neural networks for self-supervised video representation learning. In CVPR, 2019.
[25] D. Ghadiyaram, D. Tran, and D. Mahajan. Large-scale weakly-supervised pre-training for video action recognition. In CVPR, 2019.
[26] Y. Gong, Q. Ke, M. Isard, and S. Lazebnik. A multi-view embedding space for modeling internet images, tags, and their semantics. IJCV, 2014.
[27] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik. Improving image-sentence embeddings using large weakly annotated photo collections. In ECCV, 2014.
[28] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 2010.
[29] T. Han, W. Xie, and A. Zisserman. Video representation learning by dense predictive coding. arXiv preprint arXiv:1909.04656, 2019.
[30] D. Harwath, A. Recasens, D. Surís, G. Chuang, A. Torralba, and J. Glass. Jointly discovering visual objects and spoken words from raw sensory input. In ECCV, 2018.
[31] O. J. Hénaff, A. Razavi, C. Doersch, S. M. Eslami, and A. van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019.
[32] L. A. Hendricks, O. Wang, E. Shechtman, J. Sivic, T. Darrell, and B. Russell. Localizing moments in video with natural language. In ICCV, 2017.
[33] J. Hessel, B. Pang, Z. Zhu, and R. Soricut. A case study on combining ASR and visual features for generating instructional video captions. arXiv preprint arXiv:1910.02930, 2019.
[34] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[35] M. Ilse, J. M. Tomczak, and M. Welling. Attention-based deep multiple instance learning. arXiv preprint arXiv:1802.04712, 2018.
[36] L. Jing and Y. Tian. Self-supervised spatiotemporal feature learning by video geometric transformations. arXiv preprint arXiv:1811.11387, 2018.
[37] R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[38] A. Karpathy, A. Joulin, and L. Fei-Fei. Deep fragment embeddings for bidirectional image sentence mapping. In NIPS, 2014.
[39] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[40] B. Klein, G. Lev, G. Sadeh, and L. Wolf. Associating neural word embeddings with deep image representations using Fisher vectors. In CVPR, 2015.
[41] B. Korbar, D. Tran, and L. Torresani. Cooperative learning of audio and video models from self-supervised synchronization. In NeurIPS, 2018.
[42] H. Kuehne, A. Iqbal, A. Richard, and J. Gall. Mining YouTube — a dataset for learning fine-grained action concepts from webly supervised video data. arXiv preprint arXiv:1906.01012, 2019.
[43] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In ICCV, 2011.
Unsupervised representation learning by sorting sequences. Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, Ming-Hsuan Yang, ICCV. 27Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming- Hsuan Yang. Unsupervised representation learning by sort- ing sequences. In ICCV, 2017. 2, 7
Handling label noise in video classification via multiple instance learning. Thomas Leung, Yang Song, John Zhang, ICCV. Thomas Leung, Yang Song, and John Zhang. Handling label noise in video classification via multiple instance learning. In ICCV, 2011. 3
Microsoft coco: Common objects in context. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollr, C Lawrence Zitnick, ECCV. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollr, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 2
What's cookin'? interpreting cooking videos using text, speech and vision. Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, Kevin Murphy, NAACL. 3Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, and Kevin Murphy. What's cookin'? interpreting cooking videos using text, speech and vision. NAACL, 2015. 3
Learning from Video and Text via Large-Scale Discriminative Clustering. Antoine Miech, Jean-Baptiste Alayrac, Piotr Bojanowski, Ivan Laptev, Josef Sivic, ICCV. Antoine Miech, Jean-Baptiste Alayrac, Piotr Bojanowski, Ivan Laptev, and Josef Sivic. Learning from Video and Text via Large-Scale Discriminative Clustering. In ICCV, 2017. 3
Learning a Text-Video Embedding from Incomplete and Heterogeneous Data. Antoine Miech, Ivan Laptev, Josef Sivic, arXiv:1804.0251626Antoine Miech, Ivan Laptev, and Josef Sivic. Learning a Text-Video Embedding from Incomplete and Heterogeneous Data. arXiv:1804.02516, 2018. 2, 6
Howto100M: Learning a text-video embedding by watching hundred million narrated video clips. Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, Josef Sivic, ICCV. 6Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100M: Learning a text-video embedding by watching hundred million narrated video clips. In ICCV, 2019. 1, 2, 3, 4, 5, 6, 8
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, arXiv:1301.3781arXiv preprintTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. 5
Shuffle and learn: unsupervised learning using temporal order verification. Ishan Misra, Lawrence Zitnick, Martial Hebert, ECCV. 27Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuf- fle and learn: unsupervised learning using temporal order verification. In ECCV, 2016. 2, 7
Learning joint embedding with multimodal cues for cross-modal video-text retrieval. Juncheng Niluthpol Chowdhury Mithun, Florian Li, Amit K Roy-Chowdhury Metze, ICMR. ACM. 26Niluthpol Chowdhury Mithun, Juncheng Li, Florian Metze, and Amit K Roy-Chowdhury. Learning joint embedding with multimodal cues for cross-modal video-text retrieval. In ICMR. ACM, 2018. 2, 6
Yasufumi Moriya, Ramon Sanabria, Florian Metze, Gareth, Jones, arXiv:1906.06147Grounding object detections with transcriptions. arXiv preprintYasufumi Moriya, Ramon Sanabria, Florian Metze, and Gareth JF Jones. Grounding object detections with transcrip- tions. arXiv preprint arXiv:1906.06147, 2019. 3
Representation learning with contrastive predictive coding. Aaron Van Den Oord, Yazhe Li, Oriol Vinyals, arXiv:1807.03748arXiv preprintAaron van den Oord, Yazhe Li, and Oriol Vinyals. Repre- sentation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. 5
Is object localization for free? -weakly-supervised learning with convolutional neural networks. Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic, CVPR. Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. Is object localization for free? -weakly-supervised learning with convolutional neural networks. In CVPR, June 2015. 7
Multimodal abstractive summarization for how2 videos. Shruti Palaskar, Jindrich Libovickỳ, Spandana Gella, Florian Metze, arXiv:1906.07901arXiv preprintShruti Palaskar, Jindrich Libovickỳ, Spandana Gella, and Florian Metze. Multimodal abstractive summarization for how2 videos. arXiv preprint arXiv:1906.07901, 2019. 3
Jointly modeling embedding and translation to bridge video and language. Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, CVPR. Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, and Yong Rui. Jointly modeling embedding and translation to bridge video and language. In CVPR, 2016. 2
It's in the bag: Stronger supervision for automated face labelling. Omkar Parkhi, Esa Rahtu, Andrew Zisserman, ICCV Workshop. Omkar Parkhi, Esa Rahtu, and Andrew Zisserman. It's in the bag: Stronger supervision for automated face labelling. In ICCV Workshop, 2015. 3
Enhancing video summarization via vision-language embedding. A Bryan, Matthew Plummer, Svetlana Brown, Lazebnik, CVPR. Bryan A Plummer, Matthew Brown, and Svetlana Lazebnik. Enhancing video summarization via vision-language embed- ding. In CVPR, 2017. 2
Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. A Bryan, Liwei Plummer, Chris M Wang, Juan C Cervantes, Julia Caicedo, Svetlana Hockenmaier, Lazebnik, ICCV. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazeb- nik. Flickr30k entities: Collecting region-to-phrase corre- spondences for richer image-to-sentence models. In ICCV, pages 2641-2649. IEEE, 2015. 2
Linking people in videos with their names using coreference resolution. Vignesh Ramanathan, Armand Joulin, Percy Liang, Li Fei-Fei, ECCV. Vignesh Ramanathan, Armand Joulin, Percy Liang, and Li Fei-Fei. Linking people in videos with their names using coreference resolution. In ECCV, 2014. 3
Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. Movie description. IJCV. Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. Movie description. IJCV, 2017. 2
. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C Berg, Li Fei-Fei, ImageNet Large Scale Visual Recognition Challenge. IJCVOlga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San- jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Chal- lenge. IJCV, 2015. 3
How2: a large-scale dataset for multimodal language understanding. Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Loïc Barrault, Lucia Specia, Florian Metze, Proceedings of the Workshop on Visually Grounded Interaction and Language (ViGIL). the Workshop on Visually Grounded Interaction and Language (ViGIL)23Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Loïc Barrault, Lucia Specia, and Florian Metze. How2: a large-scale dataset for multimodal language un- derstanding. In Proceedings of the Workshop on Visu- ally Grounded Interaction and Language (ViGIL). NeurIPS, 2018. 2, 3
Similarity constrained latent support vector machine: An application to weakly supervised action classification. Nataliya Shapovalova, Arash Vahdat, Kevin Cannons, Tian Lan, Greg Mori, ECCV. Nataliya Shapovalova, Arash Vahdat, Kevin Cannons, Tian Lan, and Greg Mori. Similarity constrained latent support vector machine: An application to weakly supervised action classification. In ECCV, 2012. 3
UCF101: A dataset of 101 human actions classes from videos in the wild. Khurram Soomro, Mubarak Amir Roshan Zamir, Shah, arXiv:1212.040267arXiv preprintKhurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 6, 7
Chen Sun, Fabien Baradel, Kevin Murphy, Cordelia Schmid, arXiv:1906.05743Contrastive bidirectional transformer for temporal representation learning. 7arXiv preprintChen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743, 2019. 2, 3, 6, 7, 8
Videobert: A joint model for video and language representation learning. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, Cordelia Schmid, ICCV. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In ICCV, 2019. 3
Coin: A large-scale dataset for comprehensive instructional video analysis. Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, Jie Zhou, CVPR. 6Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. Coin: A large-scale dataset for comprehensive instructional video analysis. In CVPR, 2019. 6, 8
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko- reit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. 7
Anticipating visual representations from unlabeled video. Carl Vondrick, Hamed Pirsiavash, Antonio Torralba, CVPR. Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. An- ticipating visual representations from unlabeled video. In CVPR, 2016. 2
Tracking emerges by colorizing videos. Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, Kevin Murphy, ECCV. Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, and Kevin Murphy. Tracking emerges by col- orizing videos. In ECCV, 2018. 2
Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance statistics. Jiangliu Wang, Jianbo Jiao, Linchao Bao, Shengfeng He, Yunhui Liu, Wei Liu, CVPR. 27Jiangliu Wang, Jianbo Jiao, Linchao Bao, Shengfeng He, Yunhui Liu, and Wei Liu. Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance statistics. In CVPR, 2019. 2, 7
Learning two-branch neural networks for image-text matching tasks. Liwei Wang, Yin Li, Jing Huang, Svetlana Lazebnik, PAMI. 26Liwei Wang, Yin Li, Jing Huang, and Svetlana Lazebnik. Learning two-branch neural networks for image-text match- ing tasks. PAMI, 2018. 2, 6
Learning deep structure-preserving image-text embeddings. Liwei Wang, Yin Li, Svetlana Lazebnik, CVPR. 26Liwei Wang, Yin Li, and Svetlana Lazebnik. Learning deep structure-preserving image-text embeddings. In CVPR, pages 5005-5013, 2016. 2, 6
Unsupervised learning of visual representations using videos. Xiaolong Wang, Abhinav Gupta, ICCV. Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015. 2
Towards weakly-supervised action localization. Philippe Weinzaepfel, Xavier Martin, Cordelia Schmid, arXiv:1605.05197arXiv preprintPhilippe Weinzaepfel, Xavier Martin, and Cordelia Schmid. Towards weakly-supervised action localization. arXiv preprint arXiv:1605.05197, 2016. 3
Fine-grained action retrieval through multiple partsof-speech embeddings. Michael Wray, Diane Larlus, Gabriela Csurka, Dima Damen, ICCV. 26Michael Wray, Diane Larlus, Gabriela Csurka, and Dima Damen. Fine-grained action retrieval through multiple parts- of-speech embeddings. In ICCV, 2019. 2, 6
Sampling matters in deep embedding learning. Chao-Yuan, R Wu, Alexander J Manmatha, Philipp Smola, Krähenbühl, 26Chao-Yuan Wu, R Manmatha, Alexander J Smola, and Philipp Krähenbühl. Sampling matters in deep embedding learning. ICCV, 2017. 2, 6
Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, Kevin Murphy, In ECCV. 8Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In ECCV, 2018. 8
Self-supervised spatiotemporal learning via video clip order prediction. Dejing Xu, Jun Xiao, Zhou Zhao, Jian Shao, Di Xie, Yueting Zhuang, CVPR. 27Dejing Xu, Jun Xiao, Zhou Zhao, Jian Shao, Di Xie, and Yueting Zhuang. Self-supervised spatiotemporal learning via video clip order prediction. In CVPR, 2019. 2, 7
Msr-vtt: A large video description dataset for bridging video and language. Jun Xu, Tao Mei, Ting Yao, Yong Rui, CVPR. 26Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In CVPR, 2016. 2, 6
Jointly modeling deep video and compositional text to bridge vision and language in a unified framework. Ran Xu, Caiming Xiong, Wei Chen, Jason J Corso, In AAAI. 526Ran Xu, Caiming Xiong, Wei Chen, and Jason J Corso. Jointly modeling deep video and compositional text to bridge vision and language in a unified framework. In AAAI, vol- ume 5, page 6, 2015. 2
Instructional videos for unsupervised harvesting and learning of action examples. -I Shoou, Lu Yu, Alexander Jiang, Hauptmann, ACM. Shoou-I Yu, Lu Jiang, and Alexander Hauptmann. Instruc- tional videos for unsupervised harvesting and learning of ac- tion examples. In ACM, 2014. 3
Towards automatic learning of procedures from web instructional videos. Luowei Zhou, Chenliang Xu, Jason J Corso, AAAI. 26Luowei Zhou, Chenliang Xu, and Jason J Corso. Towards automatic learning of procedures from web instructional videos. In AAAI, 2018. 2, 6
Crosstask weakly supervised learning from instructional videos. Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gokberk Cinbis, David Fouhey, Ivan Laptev, Josef Sivic, CVPR. 6Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gokberk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. Cross- task weakly supervised learning from instructional videos. In CVPR, 2019. 6, 8
arXiv: 1803.04108 · DOI: 10.1109/cvpr.2018.00047
Style Aggregated Network for Facial Landmark Detection
Xuanyi Dong [email protected]
University of Technology Sydney
Yan Yan [email protected]
University of Technology Sydney
Wanli Ouyang [email protected]
University of Technology Sydney
The University of Sydney
Yi Yang [email protected]
Abstract

Recent advances in facial landmark detection achieve success by learning discriminative features from rich deformation of face shapes and poses. Besides the variance of faces themselves, the intrinsic variance of image styles, e.g., grayscale vs. color images, light vs. dark, intense vs. dull, and so on, has constantly been overlooked. This issue becomes inevitable as increasing numbers of web images are collected from various sources for training neural networks. In this work, we propose a style-aggregated approach to deal with the large intrinsic variance of image styles for facial landmark detection. Our method transforms original face images into style-aggregated images by a generative adversarial module. The proposed scheme uses the style-aggregated images to provide face appearances that are more robust to environmental changes. The original face images, together with the style-aggregated ones, then play a duet in training a landmark detector, each complementing the other. In this way, for each face, our method takes two images as input: one in its original style and the other in the aggregated style. In experiments, we observe that the large variance of image styles degenerates the performance of facial landmark detectors. Moreover, we show the robustness of our method to the large variance of image styles by comparing it to a variant of our approach in which the generative adversarial module is removed and no style-aggregated images are used. Our approach performs well when compared with state-of-the-art algorithms on the benchmark datasets AFLW and 300-W. Code is publicly available on GitHub: https://github.com/D-X-Y/SAN
Introduction
Facial landmark detection aims to detect the location of predefined facial landmarks, such as the corners of the eyes, the eyebrows, and the tip of the nose. It has drawn much attention recently as it is a prerequisite in many computer vision applications. For example, facial landmark detection can be applied to a large variety of tasks, including face recognition [74,30], head pose estimation [58], facial reenactment [53] and 3D face reconstruction [28], to name a few.

Figure 1. A face image in three different styles and the locations of the facial landmarks predicted by a facial landmark detector on them. The image styles, e.g., grayscale vs. color images, light vs. dark, intense vs. dull, can be quite distinct owing to various collection sources. The contents of the above three images are identical; the only difference is the image style. We apply a well-trained facial landmark detector to localize the facial landmarks. The zoom-in parts show the deviation among the predicted locations of the same facial landmarks on the different styled images.
Recent advances in facial landmark detection mainly focus on learning discriminative features from abundant deformation of face shapes and poses, different expressions, and partial occlusions, among others [58,73,59,20]. A very typical framework is to construct features that depict the facial appearance and shape information, using convolutional neural networks (ConvNets) or hand-crafted descriptors, and then learn a model, i.e., a regressor, that maps the features to the landmark locations [64,10,7,42,72,67,40]. Most of these methods apply a cascade strategy that concatenates prediction modules and updates the predicted locations of the landmarks progressively [67,10,73].
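As a rough illustration of the cascade strategy just described (a toy sketch, not the authors' implementation), each stage regresses an additive update to the current landmark estimate from features indexed by that estimate. The helper `extract_features` and the per-stage regressors below are hypothetical stand-ins:

```python
# Toy sketch of cascaded regression for landmark detection (illustrative only).

def extract_features(image, shape):
    # Hypothetical stand-in: sample pixel values at the current landmark guesses.
    h, w = len(image), len(image[0])
    return [image[min(max(int(y), 0), h - 1)][min(max(int(x), 0), w - 1)]
            for (x, y) in shape]

def cascade_predict(image, mean_shape, stages):
    """Start from the mean shape; each stage adds a regressed refinement."""
    shape = [list(pt) for pt in mean_shape]
    for regress in stages:  # each stage is a callable: features -> per-point deltas
        deltas = regress(extract_features(image, shape))
        shape = [[x + dx, y + dy] for (x, y), (dx, dy) in zip(shape, deltas)]
    return shape

if __name__ == "__main__":
    image = [[0] * 8 for _ in range(8)]
    mean_shape = [(2.0, 2.0), (5.0, 5.0)]
    # Two dummy stages that each shift every landmark by a fixed offset.
    stages = [lambda feats: [(1.0, 0.0)] * len(feats),
              lambda feats: [(0.0, 1.0)] * len(feats)]
    print(cascade_predict(image, mean_shape, stages))  # [[3.0, 3.0], [6.0, 6.0]]
```

In practice the stage regressors are learned (e.g., as ConvNets), but the progressive-refinement loop has this shape.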
However, the issue of image style variation has been overlooked by recent studies on facial landmark detection. In real-world applications, face images collected in the wild are usually subject to additional unconstrained variations [46,73]. Large intrinsic variance of image styles, e.g., grayscale vs. color images, light vs. dark, intense vs. dull, is introduced when face images are collected under different environments and camera settings. The variation in image style causes variation in the prediction results. For example, Figure 1 shows three different styles of a face image and the facial landmark predictions on them when applying a well-trained detector. The contents of the three images are the same, but the visual styles are quite distinct: original, grayscale and light. We can observe that the predicted locations of the same facial landmark on them can differ. The zoom-in parts show the detailed deviation among the predicted locations of the same facial landmark on the different styled images. This intrinsic variance of image styles can distort the prediction of the facial landmark detector and further degenerate its accuracy, which will be empirically demonstrated later. This problem commonly exists in face in-the-wild landmark detection datasets [23,46] (see Figure 2), and becomes inevitable for face images captured under uncontrolled conditions.

Motivated by the issue of large variance of image styles, we propose a Style-Aggregated Network (SAN) for facial landmark detection, which is insensitive to the large variance of image styles. The key idea of SAN is to first generate a pool of style-aggregated face images with a generative adversarial network (GAN) [16]. SAN then exploits the complementary information from both the original images and the style-aggregated ones. The original images contain undistorted appearance contents of faces but may vary in image style. The style-aggregated images contain stationary environments around faces, but may lack certain shape information due to the lower fidelity of GAN-generated images. Therefore, our SAN takes both the original and style-aggregated faces together as complementary input, and applies a cascade strategy to generate heatmap predictions that are robust to the large variance of image styles.
To summarize, our contributions include:
1. To the best of our knowledge, we are the first to explicitly handle the problem caused by the variation of image styles in facial landmark detection, which has been overlooked in recent studies. We further empirically verify the performance degeneration caused by the large variance of image styles.
2. To facilitate style analysis, we release two new facial landmark detection datasets, 300W-Styles (≈ 12000 images) and AFLW-Styles (≈ 80000 images), by transferring the 300-W [46] and AFLW [23] into different styles.
Related Work
Facial Landmark Detection
A growing number of researchers focus on facial landmark detection [46]. The goal of facial landmark detection is to detect key points in human faces, e.g., the tip of the nose, the eyebrows, the eye corners and the mouth. Facial landmark detection is a prerequisite for a variety of computer vision applications. For example, Zhu et al. [74] take facial landmark detection results as the input of a 3D Morphable Model. Wu et al. [58] propose a unified framework to deal with facial landmark detection, head pose estimation, and facial deformation analysis simultaneously, as these tasks are coupled with each other. Thies et al. [53] use the detection confidences of facial landmarks for feature alignment in facial reenactment. It is therefore important to predict precise and accurate locations of facial landmarks.
A common approach to the facial landmark detection problem is to learn a regression model [31,64,75,5,73,7,63]. Many of these methods leverage deep CNNs to learn facial features and regressors in an end-to-end fashion [51,31,73], with a cascade architecture to progressively update the landmark estimation [73,51,10]. Yu et al. [66] propose a deep deformation network that incorporates geometric constraints within the CNN framework. Zhu et al. [73] leverage cascaded regressors to handle extreme head poses and rich shape deformation. Zhu et al. [72] utilize a coarse search over a shape space with diverse shapes to overcome the poor initialization problem.

Another category of facial landmark detection methods takes advantage of end-to-end training of deep CNN models to learn robust heatmaps for facial landmark detection [27,57,6,4]. Wei et al. [57] and Newell et al. [34] take the location with the highest response on the heatmap as the coordinate of the corresponding landmark. Li et al. [27] enhance facial landmark detection by multi-task learning. Bulat et al. [6] propose a robust network structure utilizing state-of-the-art residual architectures.
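To make the heatmap-based decoding concrete: the landmark coordinate is simply the spatial position with the highest response in that landmark's belief map. A minimal, library-free sketch (not tied to any particular network):

```python
def heatmap_to_landmark(heatmap):
    """Return the (x, y) location of the peak response in a 2D belief map."""
    best, best_xy = float("-inf"), (0, 0)
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            if v > best:
                best, best_xy = v, (x, y)
    return best_xy

if __name__ == "__main__":
    hm = [[0.0, 0.1, 0.0],
          [0.2, 0.9, 0.1],
          [0.0, 0.3, 0.0]]
    print(heatmap_to_landmark(hm))  # (1, 1)
```

Because the heatmap is produced at a down-sampled resolution, the recovered coordinate is usually scaled back to the input image size afterwards.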
These existing facial landmark detection algorithms usually focus on facial shape information, e.g., extreme head poses [20] or rich facial deformation [73]. However, few of them take into consideration the intrinsic variance of image styles, e.g., grayscale vs. color images, light vs. dark and intense vs. dull. We empirically demonstrate the performance drop caused by such intrinsic variance of image styles. This issue has been overlooked by recent studies but becomes inevitable as increasing numbers of web images are collected from various sources. It is therefore necessary to investigate an approach to dealing with the style variance, which is the focus of this paper.
Some researchers extend landmark detection from images to video settings [22,13,40] or 3D settings [6,47]. In contrast, this work focuses on image-based landmark detection.
Generative Adversarial Networks
We leverage the generators of trained GANs to render faces in different styles, in order to combat the large variance of face image styles.
GANs were first proposed in [16] to estimate generative models via an adversarial process. Following that, many researchers have devoted great effort to this research topic, regarding both theory [2,8,25,35,54] and applications [36,41,50,71]. Some of these contributions target face applications, such as makeup-invariant face verification [26] and face aging [1]. In this work, we leverage a recently proposed technique, CycleGAN [71], to integrate a face generation model into our detection network. There are two main differences in focus between this work and the previous ones. First, we aim to group images into specific styles in an unsupervised manner, while they usually assume a stationary style within a dataset. Second, sophisticated face generation methods are not our target.
Methodology
How do we design a neural network that is insensitive to style variations for facial landmark detection? As illustrated in Figure 3, we design a network that combines two sub-modules to solve this problem: (1) The face generation module learns a neutral style of face images to combat the effect of style variations, i.e., it transforms faces with different styles into an aggregated style. (2) The facial landmark prediction module leverages the complementary information from the neutral face and the original face to jointly predict the final coordinate of each landmark.

Figure 4. The pipeline to train the style-aggregated face generation module in an unsupervised way. We first utilize PS to transfer the original dataset into C = 3 different styles. These transferred datasets, together with the original dataset, are then used to fine-tune a ResNet-152 with C + 1 classes. The fine-tuned features from the global average pooling layer can be considered style-discriminative features. We then leverage these features to cluster all images in the original dataset into k clusters, which can potentially capture the hidden styles. Lastly, we use the clustered data to train style transformation models via CycleGAN, and combine the trained models to obtain the final style-aggregated faces.
Style-Aggregated Face Generation Module
This module is motivated by recent advances in image-to-image translation [19,71] and style transfer [14,15,56]. These methods can transform face images into a different style, but they require that the styles of the images are already known during training as well as testing. However, face images in facial landmark detection datasets are usually collected from multiple sources. These images can have various styles, but we have no labels for these styles. Therefore, current facial landmark datasets do not align with the settings of image-to-image translation, and we can thus not directly apply their techniques to our problem.
We design an unsupervised approach to learn a face generation model that first transfers faces into different styles and then combines them into an aggregated style. We first transfer the original dataset into three different styles with Adobe Photoshop (PS). These three transferred datasets, together with the original dataset, are regarded as four classes to fine-tune a classification model [48,17,52,11,65,62,18]. The fine-tuned feature of the average-pooling layer thus becomes style-discriminative, because the style information is learned in the training procedure through machine-generated style supervision.
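The Photoshop step above can be mimicked programmatically. As a hedged illustration only (the paper uses Photoshop actions, not this code), here are toy per-pixel transforms producing the kinds of style variants discussed — grayscale, lighter, darker — for an RGB image stored as nested lists of (r, g, b) tuples:

```python
def to_grayscale(img):
    """Replace each RGB pixel by its luminance (Rec. 601 weights)."""
    return [[(round(0.299 * r + 0.587 * g + 0.114 * b),) * 3
             for (r, g, b) in row] for row in img]

def adjust_brightness(img, factor):
    """Scale each channel, clipping to [0, 255]; factor > 1 lightens."""
    def clip(v):
        return max(0, min(255, round(v * factor)))
    return [[tuple(clip(c) for c in px) for px in row] for row in img]

if __name__ == "__main__":
    img = [[(200, 100, 50)]]                 # a one-pixel "image"
    print(to_grayscale(img))                 # [[(124, 124, 124)]]
    print(adjust_brightness(img, 1.5))       # [[(255, 150, 75)]]
    print(adjust_brightness(img, 0.5))       # [[(100, 50, 25)]]
```

Each transform preserves the image content while changing only its style, which is exactly what makes the resulting four datasets usable as machine-generated style labels.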
To learn the stylized face generation model, we need style information. For most face in-the-wild datasets, we can observe that faces have different styles; Figure 2 illustrates some examples of faces with various styles from 300-W [46]. However, it is hard to label such datasets with style annotations for two reasons: (1) some style definitions are ambiguous, e.g., a face with a light style could also be classified by its color; (2) labeling the style information requires substantial labor. Therefore, we leverage the learned style-discriminative features to automatically cluster the whole dataset into k hidden styles by k-means.
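As a concrete (toy) illustration of this clustering step, the sketch below runs a minimal k-means on hand-made 2-D "style features"; the feature values, their dimensionality, and k here are illustrative stand-ins, not the actual ResNet-152 features:

```python
def kmeans(feats, k, iters=10):
    """Cluster style-discriminative feature vectors into k hidden-style
    groups with Lloyd's algorithm (deterministic farthest-point init)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centers = [list(feats[0])]
    while len(centers) < k:  # farthest-point initialization
        centers.append(list(max(feats, key=lambda f: min(d2(f, c) for c in centers))))
    assign = [0] * len(feats)
    for _ in range(iters):
        for i, f in enumerate(feats):           # assignment step
            assign[i] = min(range(k), key=lambda c: d2(f, centers[c]))
        for c in range(k):                      # update step: mean of members
            members = [feats[i] for i in range(len(feats)) if assign[i] == c]
            if members:
                centers[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return assign

# Toy 2-D "style features": two tight groups standing in for two hidden styles.
feats = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05),
         (5.0, 5.1), (5.1, 5.0), (4.9, 5.05)]
labels = kmeans(feats, k=2)
assert labels[0] == labels[1] == labels[2]      # first hidden style
assert labels[3] == labels[4] == labels[5]      # second hidden style
assert labels[0] != labels[3]
```

In the actual pipeline the inputs would be the fine-tuned pooled features and k = 3, as described above.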
Lastly, we regard the face images in different clusters as having different hidden styles, and train face generation models to transfer between styles via CycleGAN. CycleGAN is capable of preserving the structure of the input image because its cycle-consistency loss guarantees that the reconstructed images match the input images closely. The overall pipeline is illustrated in Figure 4. The final output is a set of face generation models that transfer face images into different styles; we average the transferred faces to obtain the style-aggregated ones.
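The cycle-consistency idea can be made concrete in a few lines. In this sketch the "generators" are toy brightness shifts standing in for the real convolutional networks, and the loss is the usual L1 reconstruction term between an image and its round-trip translation:

```python
def l1(a, b):
    """Mean absolute difference between two flat image vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, x, y):
    """CycleGAN-style cycle loss: x -> G(x) -> F(G(x)) should recover x,
    and y -> F(y) -> G(F(y)) should recover y."""
    return l1(F(G(x)), x) + l1(G(F(y)), y)

# Toy 1-D "images" and toy style mappings: G brightens by +1, F darkens by -1.
x, y = [0.2, 0.5, 0.8], [1.2, 1.5, 1.8]
G = lambda img: [v + 1.0 for v in img]
F = lambda img: [v - 1.0 for v in img]
assert cycle_consistency_loss(G, F, x, y) < 1e-9    # perfect inverses: ~zero loss

F_bad = lambda img: [v - 0.5 for v in img]          # imperfect inverse
assert cycle_consistency_loss(G, F_bad, x, y) > 0.9  # structure not recovered
```

A low cycle loss is what lets the transferred face keep the input's facial structure while changing only its style.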
Facial Landmark Prediction Module
The facial landmark prediction module leverages the mutual benefit of both the original images and the style-aggregated ones to overcome the negative effects caused by style variations. This module is illustrated in Figure 3, where the green stream indicates the style-aggregated face and the blue stream represents the face in its original style. The blue stream contains undistorted appearance content of faces but may vary in image style. The green stream contains a stationary environment around faces, but may lack certain shape information due to the lower fidelity caused by GAN. By leveraging their complementary information, we can generate more robust predictions. The architecture is inspired by CPM [57]. We use the first four convolutional blocks from VGG-16 [49], followed by two additional convolution layers, as the feature extraction part. The feature extraction part takes the face image I_o ∈ R^{h×w} in the original style and the image I_s ∈ R^{h×w} from the style-aggregated stream as input, where w and h represent the width and height of the image. In this part, each of the first three convolution blocks is followed by one pooling layer. It thus outputs features F ∈ R^{C×h'×w'}, down-sampled by a factor of eight relative to the input image I, where (h', w') = (h/8, w/8) and C is the number of channels of the last convolutional layer. The output features from the original and the style-aggregated faces are denoted F_o and F_s, respectively. Three subsequent stages are used to produce 2D belief maps [57]. Each stage has a fully-convolutional structure, and its output tensor H ∈ R^{(K+1)×h'×w'} has the same spatial size as the input tensor, where K is the number of landmarks. The first stage takes F_o and F_s as inputs and generates a belief map for each of them, H_o and H_s. The second stage g_2 takes the concatenation of F_o, F_s, H_o and H_s as input, and outputs the belief map for stage 2:
g_2(F_o, F_s, H_o, H_s) = H_2.    (1)
The last stage is similar to the second one and can be formulated as follows:
g_3(F_o, F_s, H_2) = H_3.    (2)
Following [34,57], we minimize the following loss function for each face image during training:
Loss = Σ_{i ∈ {o,s,2,3}} ||H_i − H_i*||_F^2,    (3)
where H_i* represents the ideal (ground-truth) belief map.
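A minimal sketch of this objective and of the test-time decoding (in the actual model the belief maps are network outputs, upsampled by bicubic interpolation before the argmax); the maps below are hand-written toy values:

```python
def frob_sq(A, B):
    """Squared Frobenius norm of the difference between two 2-D maps."""
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def total_loss(pred, target):
    """Eq. (3): sum of squared Frobenius distances over the four belief
    maps i in {o, s, 2, 3}."""
    return sum(frob_sq(pred[i], target[i]) for i in ("o", "s", "2", "3"))

def decode(belief):
    """Test-time decoding: the landmark coordinate is the (x, y) position
    of the maximum response in the (already upsampled) belief map."""
    h, w = len(belief), len(belief[0])
    idx = max(range(h * w), key=lambda p: belief[p // w][p % w])
    return (idx % w, idx // w)   # (x, y)

# Toy 3x3 belief map whose peak sits at column 2, row 1.
H3 = [[0.0, 0.1, 0.0],
      [0.0, 0.2, 0.9],
      [0.1, 0.0, 0.0]]
assert decode(H3) == (2, 1)

ideal = {k: H3 for k in ("o", "s", "2", "3")}
assert total_loss(ideal, ideal) == 0.0   # perfect prediction gives zero loss
```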
To generate the final landmark coordinates, we first upsample the belief map H_3 to the original image size using bicubic interpolation. We then apply the argmax function to each belief map to obtain the coordinate of each landmark.

Experiments

Datasets

300-W [46]. This dataset annotates five face datasets with 68 landmarks: LFPW [3], AFW [75], HELEN [24], XM2VTS, and IBUG. Following the common settings in [72,31], we regard all training samples from LFPW and HELEN together with the full AFW set as the training set, which contains 3148 images. The 554 testing images from LFPW and HELEN form the common testing subset, and the 135 images from IBUG form the challenging testing subset; together they constitute the full testing set.
AFLW [23]. This dataset contains 21997 real-world images with 25993 faces in total. It provides up to 21 landmark coordinates per face, excluding invisible landmarks. Faces in AFLW vary widely in pose, expression, occlusion and illumination, which makes it difficult to train a robust detector. Following the same setting as in [31,73], we do not use the landmarks of the two ears. There are two AFLW splits, AFLW-Full and AFLW-Frontal, following [73]. AFLW-Full contains 20000 training samples and 4386 testing samples. AFLW-Frontal uses the same training samples as AFLW-Full, but only the 1165 frontal-face samples as the testing set.
Experiment Settings
Training. We use PyTorch [39] for all experiments. To learn the style-discriminative feature, we regard the original dataset and the three PS-generated datasets as four different classes and use them to fine-tune a ResNet-152 model pre-trained on ImageNet, training with a learning rate of 0.01 for two epochs in total. We use k-means to cluster the whole dataset into k = 3 groups, and by default regard the group with the most elements and the group with the fewest as two different style sets. These two groups are then used to train our style-unified face generation module via CycleGAN [71]. We follow training settings similar to [71], except that we train our model with a batch size of 32 on two GPUs and set the identity loss weight from [71] to 0.1. To train the facial landmark prediction module, the first four convolutional blocks are initialized from a VGG-16 model pre-trained on ImageNet, and the other layers are initialized from a Gaussian distribution with variance 0.01. We train the facial landmark prediction model with a batch size of 8 and weight decay of 0.0005 on two GPUs. We start with a learning rate of 0.00005, halve it at the 30th/35th/40th/45th epochs, and stop training at the 50th epoch. The face bounding box is expanded by a ratio of 0.2, and we use random cropping as data augmentation during training.
Evaluation. Normalized Mean Error (NME) is commonly used to evaluate facial landmark predictions [31,43,73]. For the 300-W dataset, we use the inter-ocular distance to normalize the mean error, following the same setting as in [46,31,7,43]. For the AFLW dataset, we use the face size to normalize the mean error [31]. We also use Cumulative Error Distribution (CED) curves to compare algorithms, as provided in [45], and Area Under the Curve (AUC) @ 0.08 error is also employed for evaluation [6,55].
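The NME metric itself is straightforward; the sketch below computes it for hypothetical landmarks with a hypothetical inter-ocular distance as the normalization term:

```python
import math

def nme(pred, gt, norm):
    """Normalized Mean Error: mean point-to-point Euclidean distance
    between predicted and ground-truth landmarks, divided by a
    normalization term (inter-ocular distance on 300-W, face size on AFLW)."""
    d = sum(math.hypot(px - gx, py - gy) for (px, py), (gx, gy) in zip(pred, gt))
    return d / (len(gt) * norm)

# Toy example: two landmarks, each off by (3, 4) pixels -> distance 5 each,
# with a hypothetical inter-ocular distance of 100 pixels.
gt   = [(10.0, 10.0), (60.0, 10.0)]
pred = [(13.0, 14.0), (63.0, 14.0)]
assert abs(nme(pred, gt, norm=100.0) - 0.05) < 1e-12
```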
Comparison with State-of-the-art Methods
Results on 300-W. Table 1 shows the performance of different facial landmark detection algorithms on 300-W. We compare our approach with recently proposed state-of-the-art algorithms [31,61,20], using two types of face bounding boxes: (1) the ground-truth bounding box, denoted GT; (2) the official detector, denoted OD. SAN achieves very competitive results compared with others using the same face bounding box (OD). We improve the NME on the 300-W common set by a relative 21.8% compared to the state-of-the-art method. Our approach can be further enhanced by applying a better initialization (GT). This implies that SAN has the potential to be more robust by incorporating face alignment [31] or landmark refinement [73,55] methods. Results on AFLW. We use the training/testing splits and the bounding boxes provided by [73,72]. Table 2 shows the performance comparison on AFLW. Our SAN also achieves very competitive NME results, better than the previous state-of-the-art by more than 11% on AFLW-Full. On the AFLW-Frontal testing set, our result is also better than the state-of-the-art by more than 14%. We find that using more clusters and more generation models in the style-aggregated face generation module yields similar results to k = 3, so we use k = 3 by default.
SAN achieves new state-of-the-art results on two benchmark datasets, 300-W and AFLW. It takes two complementary images as input to generate predictions that are insensitive to style variations. The idea of using a two-stream input for facial landmark detection can be complementary to other algorithms [20,31,61,73]: they usually do not consider the effect of image style, while the style-aggregated face in the two-stream input can handle this problem.
Ablation Studies
In this section, we first verify the significance of each component in our proposed SAN. Figure 5 compares CED curves for SAN and two variants of SAN on the 300-W common and challenging testing sets. As we can observe, performance deteriorates significantly if we remove either the original face image or the generated style-aggregated face image. This observation demonstrates that taking two complementary face images as input benefits the facial landmark prediction results.

Figure 6. Qualitative results of the clustered face images from 300-W using the style-discriminative features. (Panels show the face images and the mean face for each of the three clusters.) The face images in each cluster share different hidden styles: for example, the first cluster contains many grayscale faces, the second shows dark illumination, and the last shows light illumination. The mean faces of the three clusters look very similar, while their environments differ considerably.

Figure 6 shows the results of k-means clustering on the 300-W dataset. 300-W is a face in-the-wild dataset whose images exhibit large style variations, but this style information is not available. Our style-discriminative feature is capable of distinguishing images with different hidden styles: most of the face images in one cluster share a similar style, and the mean faces generated from the three clusters have different styles. If we instead use ImageNet pre-trained features for k-means clustering, we cannot guarantee that faces are grouped into different hidden styles; in our experiments, ImageNet pre-trained features tend to group face images by gender or other attributes.
Discussions of Benchmark Datasets
Facial landmark detection datasets with constrained face images [33] usually have a similar environment for each image. There are only small style changes in these datasets, and they may not be applicable to real-world applications due to the small face variance; we thus do not discuss them in this paper. The face in-the-wild datasets [46,23] contain face images with large intrinsic variance. However, this intrinsic variance information is not available from the official datasets, yet it can affect the predictions of the detector. Therefore, we propose two new datasets, 300W-Style and AFLW-Style, to facilitate style analysis for the facial landmark detection problem.
As shown in Figure 7, 300W-Style consists of four different styles: original, sketch, light and gray. The original part is the original 300-W dataset, and the other three synthetic styles are generated using PS. Each image in 300W-Style corresponds to one image in the 300-W dataset, so we directly reuse the annotations provided by 300-W. AFLW-Style is built in the same way, transferring the AFLW dataset into the three additional styles. For the training and testing splits, we follow the common settings of the original datasets [46,23].
Can PS-generated images be realistic? Internet users usually use PS (or similar software) to change image styles and/or edit image content; thus PS-generated images are indeed realistic in many real-world applications. In addition, we have chosen three representative filters to generate images of different styles. These filters have been widely used by users to edit their photos before uploading them to the Internet. Therefore, the proposed datasets are realistic.
Effect of SAN for style variances. These two proposed datasets can be used to analyze the effect of face image styles on facial landmark detection. We consider the situation in which the testing set has a different style from the training set. For example, we train the detector on the light-style 300-W training set and evaluate it on 300-W testing sets with different styles. Table 3, Table 4 and Table 5 show the evaluation results for 16 training and testing style combinations, i.e., four training styles times four testing styles. Our SAN algorithm is specifically designed to deal with style variance in facial landmark detection. When the style variance between the training and testing sets is large (e.g., light and gray), our approach usually obtains a significant improvement. However, when the style variance between the training and testing sets is smaller (e.g., gray and sketch), the improvement of SAN is less significant. On average, SAN obtains a 7% relative improvement on the full testing set of the 300W-Style dataset when the training style differs from the testing style. Moreover, our SAN achieves consistent improvements over all 16 train-test style combinations. This demonstrates the effectiveness of our method.
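For reference, the "(↑ x%)" entries in the tables are relative NME improvements of SAN over SAN w/o GAN; the sketch below reproduces one of them:

```python
def relative_improvement(nme_base, nme_new):
    """Relative NME improvement (in percent) of a new detector over a
    baseline, as used for the "(up x%)" table entries: (base - new) / base."""
    return 100.0 * (nme_base - nme_new) / nme_base

# Example from Table 4 (train: Light, test: Gray): 8.91 NME -> 7.26 NME.
imp = relative_improvement(8.91, 7.26)
assert abs(imp - 18.5) < 0.05   # matches the reported (up 18.5%)
```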
Self-Evaluation: We compare two variants of our SAN: (1) SAN trained without GAN on the training set of AFLW-Style and evaluated on the AFLW testing set. This can be regarded as data augmentation, because the amount of training data is four times larger than the original. In this case, our SAN achieves 79.82 [email protected] on AFLW-Full using only the original AFLW training set, while the data-augmentation variant achieves a worse 78.99 [email protected]. SAN is thus better than the data-augmentation approach that uses our PS-generated images as additional training data. (2) SAN with the style-aggregated stream replaced by a PS-generated face image. If we train the detector on the original-style 300-W training set and test it on the gray-style 300-W challenging test set, our SAN achieves 6.91 NME, whereas replacing the style-aggregated stream with light-style images achieves only 7.30 NME, which is worse than ours. SAN always achieves better results than this replaced variant, except when the style-aggregated stream is replaced by the testing style itself. SAN automatically learns the hidden styles in the dataset and generates the style-aggregated face images; this automatic approach is better than providing images of a fixed style.
Error Analysis: Faces in uncontrolled conditions have large variations in image style. Detectors usually fail when the image style changes substantially, whereas our SAN is insensitive to such changes. Figure 8 shows qualitative results of our SAN and the base detector on 300-W. The first line shows the ground-truth landmarks; the second and third lines show the predictions from SAN without GAN and from SAN, respectively. In the first column, the base detector fails on the face contour, while the predictions from SAN preserve the overall structure. In the fourth column, some predictions from the base detector drift to the right, while SAN's do not.
Conclusion & Future Work
The large intrinsic variance of image styles, which stems from uncontrolled collection sources, has been overlooked by recent studies in facial landmark detection. To deal with this issue, we propose a style-aggregated network (SAN). SAN takes two complementary images for each face: one in the original style and the other in the aggregated style generated by GAN. Empirical studies verify that style variations degrade the performance of landmark detection, and that SAN is robust to the large variance of image styles. Additionally, SAN achieves state-of-the-art performance on the 300-W and AFLW datasets.
The first step of SAN is to generate the style-aggregated images. This step can be decoupled from our landmark detector and potentially used to improve other landmark detectors [7,43,72,68,37]. Moreover, the intrinsic variance of image styles also exists in other computer vision tasks, such as object detection [12,44,38,9,29] and person re-identification [60,69,70,32]. Therefore, the style-aggregation method can also be used to address style variance in other applications. In future work, we will explore how to generalize the style-aggregation method to other computer vision tasks.
Figure 3. Overview of the SAN architecture. Our network consists of two components. The first is the style-aggregated face generation module, which transforms the input image into different styles and then combines them into a style-aggregated face. The second is the facial landmark prediction module. This module takes both the original image and the style-aggregated one as input to obtain two complementary features, and then fuses the two features to generate heat-map predictions in a cascaded manner. "FC" means fully-convolution.

Lv et al. [31] present a deep regression architecture with two-stage re-initialization to explicitly deal with the initialization problem.
Figure 5. CED curves for the 300-W common and challenging testing sets. The blue line shows the performance of SAN. The green and red lines indicate SAN with only the style-aggregated face and with only the original face as input, respectively.
Figure 7. Our PS-generated datasets based on 300-W and AFLW with the original and three synthetic styles, i.e., sketch, light and gray. These datasets have different styles and can be used to facilitate style analysis.
Figure 8. Representative results on 300-W. The red points in the first line indicate the ground-truth landmarks. The blue points in the second line and the green points in the third line indicate the landmark predictions from the base detector and SAN, respectively.
[Figure 4 diagram: the original dataset and the PhotoShop-generated datasets with different styles are used to fine-tune ResNet-152 (7x7/3x3 conv layers, average pooling, fc) to yield the style-discriminative feature; clustering finds k hidden styles; style-transformation face generation models (conv/deconv/residual blocks) are trained via CycleGAN and combined to obtain the style-aggregated faces.]
Methods       SDM [64]  ERT [21]  LBF [43]  CFSS [72]  CCL [73]  Two-Stage [31]  SAN
AFLW-Full     4.05      4.35      4.25      3.92       2.72      2.17            1.91
AFLW-Front    2.94      2.75      2.74      2.68       2.17      -               1.85
Table 2. Comparisons of normalized mean errors (NME) on the AFLW dataset.
[Figure 5 plots: fraction of test faces versus NME for (a) the 300-W common testing set and (b) the 300-W challenging testing set, each comparing SAN, SAN only using GAN, and SAN w/o GAN.]
Table 3. Comparisons of NME on the 300W-Style common testing set. We use different styles for training and testing (rows: training style; columns: testing style).

SAN w/o GAN    Test: Original  Light      Gray       Sketch
Train Original       3.37      3.56       3.77       3.92
Train Light          3.61      3.41       4.01       4.13
Train Gray           3.47      3.79       3.43       3.60
Train Sketch         3.71      3.97       3.66       3.40

SAN            Test: Original  Light      Gray       Sketch
Train Original       3.34      3.44       3.46       3.54
                     (↑ 0.8%)  (↑ 3.3%)   (↑ 8.2%)   (↑ 9.7%)
Train Light          3.48      3.39       3.56       3.68
                     (↑ 3.6%)  (↑ 0.5%)   (↑ 11.2%)  (↑ 10.9%)
Train Gray           3.45      3.56       3.38       3.52
                     (↑ 0.6%)  (↑ 6.1%)   (↑ 1.4%)   (↑ 2.2%)
Train Sketch         3.53      3.62       3.55       3.35
                     (↑ 4.9%)  (↑ 8.8%)   (↑ 3.0%)   (↑ 1.4%)

Table 4. Comparisons of NME on the 300W-Style challenging testing set. We use different styles for training and testing (same layout as Table 3).

SAN w/o GAN    Test: Original  Light      Gray       Sketch
Train Original       6.88      7.82       7.84       7.74
Train Light          7.31      7.16       8.91       8.67
Train Gray           7.08      8.59       6.77       6.98
Train Sketch         7.59      8.68       7.17       6.83

SAN            Test: Original  Light      Gray       Sketch
Train Original       6.60      7.00       6.73       6.97
                     (↑ 4.1%)  (↑ 10.5%)  (↑ 14.2%)  (↑ 9.9%)
Train Light          7.15      7.08       7.26       7.15
                     (↑ 2.2%)  (↑ 1.1%)   (↑ 18.5%)  (↑ 17.5%)
Train Gray           6.91      7.18       6.69       6.97
                     (↑ 2.4%)  (↑ 16.4%)  (↑ 1.1%)   (↑ 0.2%)
Train Sketch         7.08      7.64       6.95       6.77
                     (↑ 6.7%)  (↑ 12.0%)  (↑ 3.1%)   (↑ 0.8%)

Table 5. Comparisons of NME on the 300W-Style full testing set. We use different styles for training and testing (same layout as Table 3).

SAN w/o GAN    Test: Original  Light      Gray       Sketch
Train Original       4.06      4.39       4.57       4.67
Train Light          4.33      4.14       4.97       5.02
Train Gray           4.19      4.73       4.08       4.26
Train Sketch         4.47      4.89       4.35       4.07

SAN            Test: Original  Light      Gray       Sketch
Train Original       3.98      4.14       4.10       4.21
                     (↑ 1.9%)  (↑ 5.7%)   (↑ 10.2%)  (↑ 9.9%)
Train Light          4.20      4.12       4.29       4.36
                     (↑ 3.0%)  (↑ 0.4%)   (↑ 13.7%)  (↑ 13.1%)
Train Gray           4.13      4.27       4.03       4.20
                     (↑ 1.4%)  (↑ 9.7%)   (↑ 1.2%)   (↑ 1.4%)
Train Sketch         4.23      4.41       4.21       4.02
                     (↑ 5.4%)  (↑ 6.7%)   (↑ 3.2%)   (↑ 1.2%)
1 Three styles: Light, Gray and Sketch. See details in Sec 4.5.
Face aging with conditional generative adversarial networks. G Antipov, M Baccouche, J.-L Dugelay, ICIP. G. Antipov, M. Baccouche, and J.-L. Dugelay. Face aging with conditional generative adversarial networks. In ICIP, 2017. 3
Wasserstein generative adversarial networks. M Arjovsky, S Chintala, L Bottou, ICML. M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein gener- ative adversarial networks. In ICML, 2017. 3
Localizing parts of faces using a consensus of exemplars. P N Belhumeur, D W Jacobs, D J Kriegman, N Kumar, IEEE Transactions on Pattern Analysis and Machine Intelligence. 3512P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Ku- mar. Localizing parts of faces using a consensus of exem- plars. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2930-2940, 2013. 5
Convolutional aggregation of local evidence for large pose face alignment. A Bulat, G Tzimiropoulos, BMVC. A. Bulat and G. Tzimiropoulos. Convolutional aggregation of local evidence for large pose face alignment. In BMVC, 2016. 3
Binarized convolutional landmark localizers for human pose estimation and face alignment with limited resources. A Bulat, G Tzimiropoulos, ICCV. A. Bulat and G. Tzimiropoulos. Binarized convolutional landmark localizers for human pose estimation and face alignment with limited resources. In ICCV, 2017. 2
How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3d facial landmarks). A Bulat, G Tzimiropoulos, ICCV. 35A. Bulat and G. Tzimiropoulos. How far are we from solv- ing the 2D & 3D face alignment problem? (and a dataset of 230,000 3d facial landmarks). In ICCV, 2017. 3, 5
Face alignment by explicit shape regression. X Cao, Y Wei, F Wen, J Sun, International Journal of Computer Vision. 1072X. Cao, Y. Wei, F. Wen, and J. Sun. Face alignment by ex- plicit shape regression. International Journal of Computer Vision, 107(2):177-190, 2014. 1, 2, 5, 8
InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. X Chen, Y Duan, R Houthooft, J Schulman, I Sutskever, P Abbeel, NIPS. X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learn- ing by information maximizing generative adversarial nets. In NIPS, 2016. 3
R-FCN: Object detection via region-based fully convolutional networks. J Dai, Y Li, K He, J Sun, NIPS. J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object detection via region-based fully convolutional networks. In NIPS, 2016. 8
Cascaded pose regression. P Dollár, P Welinder, P Perona, CVPR. 1P. Dollár, P. Welinder, and P. Perona. Cascaded pose regres- sion. In CVPR, 2010. 1, 2
More is less: A more complicated network with less inference complexity. X Dong, J Huang, Y Yang, S Yan, CVPR. X. Dong, J. Huang, Y. Yang, and S. Yan. More is less: A more complicated network with less inference complexity. In CVPR, 2017. 4
A dual-network progressive approach to weakly supervised object detection. X Dong, D Meng, F Ma, Y Yang, ACM Multimedia. X. Dong, D. Meng, F. Ma, and Y. Yang. A dual-network progressive approach to weakly supervised object detection. In ACM Multimedia, 2017. 8
Supervision-by-Registration: An unsupervised approach to improve the precision of facial landmark detectors. X Dong, S.-I Yu, X Weng, S.-E Wei, Y Yang, Y Sheikh, In CVPR. 3X. Dong, S.-I. Yu, X. Weng, S.-E. Wei, Y. Yang, and Y. Sheikh. Supervision-by-Registration: An unsupervised approach to improve the precision of facial landmark detec- tors. In CVPR, 2018. 3
A learned representation for artistic style. V Dumoulin, J Shlens, M Kudlur, V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. In ICLR, 2017. 4
Image style transfer using convolutional neural networks. L A Gatys, A S Ecker, M Bethge, CVPR. L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016. 4
Generative adversarial nets. I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, NIPS. 23I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Gen- erative adversarial nets. In NIPS, 2014. 2, 3
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, CVPR. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 4
Densely connected convolutional networks. G Huang, Z Liu, K Q Weinberger, L Van Der Maaten, CVPR. G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. In CVPR, 2017. 4
Image-to-image translation with conditional adversarial networks. P Isola, J.-Y Zhu, T Zhou, A A Efros, CVPR. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017. 4
Pose-invariant face alignment with a single cnn. A Jourabloo, X Liu, M Ye, L Ren, ICCV. 56A. Jourabloo, X. Liu, M. Ye, and L. Ren. Pose-invariant face alignment with a single cnn. In ICCV, 2017. 1, 3, 5, 6
One millisecond face alignment with an ensemble of regression trees. V Kazemi, J Sullivan, CVPR. V. Kazemi and J. Sullivan. One millisecond face alignment with an ensemble of regression trees. In CVPR, 2014. 6
Synergy between face alignment and tracking via discriminative global consensus optimization. M H Khan, J Mcdonagh, G Tzimiropoulos, ICCV. M. H. Khan, J. McDonagh, and G. Tzimiropoulos. Syn- ergy between face alignment and tracking via discriminative global consensus optimization. In ICCV, 2017. 3
Annotated facial landmarks in the wild: A large-scale, realworld database for facial landmark localization. M Koestinger, P Wohlhart, P M Roth, H Bischof, ICCV-W. 67M. Koestinger, P. Wohlhart, P. M. Roth, and H. Bischof. Annotated facial landmarks in the wild: A large-scale, real- world database for facial landmark localization. In ICCV-W, 2011. 2, 5, 6, 7
Interactive facial feature localization. V Le, J Brandt, Z Lin, L Bourdev, T S Huang, ECCV. V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang. Inter- active facial feature localization. In ECCV, 2012. 5
Dualing GANs. Y Li, A Schwing, K.-C Wang, R Zemel, NIPS. Y. Li, A. Schwing, K.-C. Wang, and R. Zemel. Dualing GANs. In NIPS, 2017. 3
Anti-Makeup: Learning a bi-level adversarial network for makeup-invariant face verification. Y Li, L Song, X Wu, R He, T Tan, In AAAI. 3Y. Li, L. Song, X. Wu, R. He, and T. Tan. Anti-Makeup: Learning a bi-level adversarial network for makeup-invariant face verification. In AAAI, 2018. 3
Face detection with endto-end integration of a convnet and a 3d model. Y Li, B Sun, T Wu, Y Wang, ECCV. Y. Li, B. Sun, T. Wu, and Y. Wang. Face detection with end- to-end integration of a convnet and a 3d model. In ECCV, 2016. 3
Joint face alignment and 3D face reconstruction. F Liu, D Zeng, Q Zhao, X Liu, ECCV. F. Liu, D. Zeng, Q. Zhao, and X. Liu. Joint face alignment and 3D face reconstruction. In ECCV, 2016. 1
Recurrent scale approximation for object detection in cnn. Y Liu, H Li, J Yan, F Wei, X Wang, X Tang, ICCV. Y. Liu, H. Li, J. Yan, F. Wei, X. Wang, and X. Tang. Re- current scale approximation for object detection in cnn. In ICCV, 2017. 8
Exploring disentangled feature representation beyond face identification. Y Liu, F Wei, J Shao, L Sheng, J Yan, X Wang, CVPR. Y. Liu, F. Wei, J. Shao, L. Sheng, J. Yan, and X. Wang. Exploring disentangled feature representation beyond face identification. In CVPR, 2018. 1
A deep regression architecture with two-stage reinitialization for high performance facial landmark detection. J Lv, X Shao, J Xing, C Cheng, X Zhou, CVPR. 56J. Lv, X. Shao, J. Xing, C. Cheng, and X. Zhou. A deep re- gression architecture with two-stage reinitialization for high performance facial landmark detection. In CVPR, 2017. 2, 3, 5, 6
Self-paced co-training. F Ma, D Meng, Q Xie, Z Li, X Dong, ICML. F. Ma, D. Meng, Q. Xie, Z. Li, and X. Dong. Self-paced co-training. In ICML, 2017. 8
The MUCT landmarked face database. S Milborrow, J Morkel, F Nicolls, Pattern Recognition Association of South Africa. 2010S. Milborrow, J. Morkel, and F. Nicolls. The MUCT land- marked face database. Pattern Recognition Association of South Africa, 201(0), 2010. 6
Stacked hourglass networks for human pose estimation. A Newell, K Yang, J Deng, ECCV. 35A. Newell, K. Yang, and J. Deng. Stacked hourglass net- works for human pose estimation. In ECCV, 2016. 3, 5
Training generative neural samplers using variational divergence minimization. S Nowozin, B Cseke, R Tomioka, F-Gan, NIPS. S. Nowozin, B. Cseke, and R. Tomioka. F-GAN: Training generative neural samplers using variational divergence min- imization. In NIPS, 2016. 3
GANs for biological image synthesis. A Osokin, A Chessel, R E C Salas, F Vaggi, ICCV. A. Osokin, A. Chessel, R. E. C. Salas, and F. Vaggi. GANs for biological image synthesis. In ICCV, 2017. 3
Fast Learning of Temporal Action Proposal via Dense Boundary Generator

Chuming Lin, Jian Li, Yabiao Wang, Ying Tai, Donghao Luo, Zhipeng Cui, Chengjie Wang, Jilin Li, Feiyue Huang, Rongrong Ji
Tencent Youtu Lab; Xiamen University, China
DOI: 10.1609/aaai.v34i07.6815 | arXiv: 1911.04127
Abstract

Generating temporal action proposals remains a very challenging problem, where the main issue lies in predicting precise temporal proposal boundaries and reliable action confidence in long and untrimmed real-world videos. In this paper, we propose an efficient and unified framework to generate temporal action proposals named Dense Boundary Generator (DBG), which draws inspiration from boundary-sensitive methods and implements boundary classification and action completeness regression for densely distributed proposals. In particular, DBG consists of two modules: temporal boundary classification (TBC) and action-aware completeness regression (ACR). The TBC aims to provide two temporal boundary confidence maps from low-level two-stream features, while the ACR is designed to generate an action completeness score map from high-level action-aware features. Moreover, we introduce a dual stream BaseNet (DSB) to encode RGB and optical flow information, which helps to capture discriminative boundary and actionness features. Extensive experiments on the popular benchmarks ActivityNet-1.3 and THUMOS14 demonstrate the superiority of DBG over state-of-the-art proposal generators (e.g., MGG and BMN).

(* indicates equal contributions. This work was done when Chuming Lin was an intern at Tencent Youtu Lab.)
Introduction
Generating temporal action proposals in video is a fundamental task, which serves as a crucial step for various tasks like action detection and video analysis. Ideally, such proposals should predict action intervals with precise temporal boundaries and reliable confidence in untrimmed videos. Despite extensive endeavors (Lin et al. 2018; Lin et al. 2019), temporal action proposal generation remains an open problem, especially in the face of action duration variability, activity complexity, blurred boundaries, camera motion, background clutter and viewpoint changes in real-world scenarios.
Previous works on temporal action proposals can be roughly divided into two categories: anchor based (Buch et al. 2017; Heilbron, Niebles, and Ghanem 2016; Gao et al. 2017; Shou, Wang, and Chang 2016) and boundary based (Zhao et al. 2017a; Lin et al. 2018; Lin et al. 2019). Anchor-based methods design a set of anchors at different scales for each video segment, regularly distributed over the video sequence. These candidate anchors are then evaluated by a binary classifier. However, anchor-based methods cannot predict precise boundaries and are not flexible enough to cover actions of multiple durations.

Figure 1: Overview of our proposed method. Given an untrimmed video, DBG densely evaluates all proposals by simultaneously producing three score maps: a starting confidence score map, an ending confidence score map and an action completeness score map.
Boundary-based methods evaluate each temporal location over the video sequence. Such local information helps to generate proposals with more precise boundaries and more flexible durations. As one of the pioneering works, (Zhao et al. 2017a) groups continuous high-score regions into proposals based on actionness scores. (Lin et al. 2018) adopts a two-stage strategy that first locates temporal boundaries with locally high probabilities and then evaluates the global confidence of the candidate proposals generated from these boundaries. To exploit the rich context for evaluating all proposals, (Lin et al. 2019) proposes a boundary-matching mechanism for the confidence evaluation of proposals in an end-to-end pipeline. However, it drops actionness information and only adopts boundary matching to capture low-level features, which cannot handle complex activities and cluttered backgrounds. Besides, unlike our method shown in Fig. 1, it follows (Lin et al. 2018) in generating boundary probability sequences instead of maps, which lacks a global scope for action instances with blurred boundaries and variable temporal durations. Fig. 2 illustrates the difference between local information and our global proposal information for boundary prediction.
To address the aforementioned drawbacks, we propose the dense boundary generator (DBG), which employs global proposal features to predict the boundary maps and explores action-aware features for action completeness analysis. In our framework, a dual stream BaseNet (DSB) takes the spatial and temporal video representations as input to exploit the rich local behaviors within the video sequence, and is supervised via an actionness classification loss. DSB generates two types of features: a low-level dual stream feature and a high-level actionness score feature. In addition, a proposal feature generation (PFG) layer is designed to transfer these two types of sequence features into matrix-like features. An action-aware completeness regression (ACR) module takes the actionness score feature as input to generate a reliable completeness score map. Finally, a temporal boundary classification (TBC) module produces temporal boundary score maps based on the dual stream feature. These three score maps are combined to generate proposals.
The main contributions of this paper are summarized as follows:
• We propose a fast and unified dense boundary generator (DBG) for temporal action proposals, which evaluates dense boundary confidence maps for all proposals.
• We introduce auxiliary supervision via actionness classification to effectively facilitate action-aware features for the action-aware completeness regression.
• We design an efficient proposal feature generation layer to capture global proposal features for the subsequent regression and classification modules.
• Experiments conducted on popular benchmarks such as ActivityNet-1.3 (Heilbron et al. 2015) and THUMOS14 (Idrees et al. 2017) demonstrate the superiority of our network over state-of-the-art methods.
Related Work
Action recognition. Early methods for video action recognition mainly relied on hand-crafted features such as HOF, HOG and MBH. Recent advances resort to deep convolutional networks to improve recognition accuracy. These networks follow two patterns: two-stream networks (Feichtenhofer, Pinz, and Zisserman 2016; Simonyan and Zisserman 2014; Wang et al. 2015) and 3D networks (Tran et al. 2015; Carreira and Zisserman 2017). Two-stream networks explore video appearance and motion cues by separately passing RGB images and stacked optical flow through ConvNets pretrained on ImageNet. Instead, 3D methods directly build hierarchical representations of spatio-temporal data with spatio-temporal filters.

Temporal action proposal. Temporal action proposal generation aims to detect action instances with temporal boundaries and confidence in untrimmed videos. Anchor-based methods generate proposals by designing a set of multi-scale anchors with a regular temporal interval. The work in (Shou, Wang, and Chang 2016) adopts the C3D network (Tran et al. 2015) as the binary classifier for anchor evaluation. (Heilbron, Niebles, and Ghanem 2016) proposes a sparse learning framework for scoring temporal anchors, and temporal regression has been applied to adjust the action boundaries. Boundary-based methods evaluate each temporal location in the video. (Zhao et al. 2017a) groups continuous high-score regions to generate proposals with a temporal watershed algorithm. (Lin et al. 2018) locates temporal boundaries with locally high probabilities and evaluates the global confidence of the candidate proposals generated from these boundaries. (Lin et al. 2019) proposes a boundary-matching mechanism for the confidence evaluation of densely distributed proposals in an end-to-end pipeline. MGG combines the anchor-based and boundary-based approaches to accurately generate temporal action proposals.

Temporal action detection. Temporal action detection includes generating temporal proposals and recognizing actions, and can be divided into two patterns: one-stage (Lin, Zhao, and Shou 2017; Long et al. 2019) and two-stage (Shou, Wang, and Chang 2016; Gao, Yang, and Nevatia 2017; Zhao et al. 2017b; Xu, Das, and Saenko 2017; Chao et al. 2018). Two-stage methods first generate candidate proposals and then classify them. (Chao et al. 2018) improves two-stage temporal action detection by addressing both receptive-field alignment and context feature extraction. Among one-stage methods, (Lin, Zhao, and Shou 2017) skips proposal generation by directly detecting action instances in untrimmed videos, while (Long et al. 2019) introduces Gaussian kernels to dynamically optimize the temporal scale of each action proposal.
Given an untrimmed video V, its temporal annotation is composed of a set of action instances ψ_g = {ϕ_i = (ts_i, te_i)}_{i=1}^{N_g}, where N_g is the number of ground truth action instances in video V, and ts_i, te_i are the starting and ending points of action instance ϕ_i. Temporal action proposal generation aims to predict proposals ψ_p = {ϕ_i = (ts_i, te_i, p_i)}_{i=1}^{N_p} that cover ψ_g with high recall and high overlap, where p_i is the confidence score of ϕ_i.

Pipeline of our framework

Fig. 3 illustrates the proposed pipeline. In the video representation phase, spatial and temporal networks are employed to encode the visual content of the video. The output scores of the two-stream network are used as RGB and flow features respectively, which are fed into our dense boundary generator (DBG). DBG contains three modules: a dual stream BaseNet (DSB), action-aware completeness regression (ACR) and temporal boundary classification (TBC). DSB can be regarded as the DBG backbone that exploits the rich local behaviors within the video sequence. DSB generates two types of features: a low-level dual stream feature and a high-level actionness score feature. The actionness score feature is learned under the auxiliary supervision of an actionness classification loss, while the dual stream feature is generated by late fusion of RGB and flow information. The proposal feature generation (PFG) layer transfers these two types of sequence features into matrix-like features. ACR takes the actionness score feature as input to produce an action completeness score map for dense proposals. TBC produces temporal boundary confidence maps based on the dual stream feature. ACR and TBC are trained with a completeness regression loss and a binary classification loss simultaneously. Finally, the post-processing step generates dense proposals with boundaries and confidence via score map fusion and Soft-NMS.
Video Representation
To explore video appearance and motion information separately, we encode the raw video sequence into a video representation with a two-stream network, which contains a spatial network for single RGB frames and a temporal network for stacked optical flow fields. We partition the untrimmed video frame sequence F = {f_t}_{t=1}^{l_f} into a snippet sequence S = {s_t}_{t=1}^{l_s} with a regular frame interval δ, where l_s = l_f / δ. A snippet s_t contains 1 RGB frame and 5 stacked optical flow frames. We use the output scores of the top layers of the spatial and temporal networks to form the RGB feature S_t and the flow feature T_t. Thus, a video can be represented by a two-stream feature sequence {S_t, T_t}_{t=1}^{l_s}. We set l_s = L to keep the length of the two-stream video feature sequence constant.
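The snippet partition can be sketched as follows (a hypothetical helper, not the authors' code; it only assumes one snippet anchored every δ frames):

```python
def partition_snippets(l_f, delta):
    """Center frame index of every snippet s_t, t = 1..l_s with l_s = l_f // delta."""
    return [t * delta for t in range(l_f // delta)]

centers = partition_snippets(l_f=800, delta=8)
print(len(centers))   # l_s = 100 snippets
```

Each center frame supplies the RGB input, and the surrounding frames supply the stacked optical flow.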
Dense Boundary Generator
Dual stream BaseNet. The DSB backbone receives the spatial and temporal video feature sequences as input, and outputs the actionness score feature and the dual stream feature for ACR and TBC respectively. DSB serves as the backbone of our framework and adopts several one-dimensional temporal convolutional layers to explore local semantic information for capturing discriminative boundary and actionness features. As shown in Tab. 1, we use two stacked one-dimensional convolutional layers to exploit the spatial and temporal video representations respectively, written as sf = F_conv12(F_conv11(S)) and tf = F_conv22(F_conv21(T)). Then, following (Li, Qian, and Yang 2017), we fuse sf and tf by element-wise sum to construct the low-level dual stream feature, denoted dsf = F_sum(sf, tf). Three convolutional layers are then applied to sf, tf and dsf separately to generate three actionness feature sequences P_a = (F_conv13(sf), F_conv23(tf), F_conv33(dsf)). In training, we use three auxiliary actionness binary classification losses to supervise P_a. In inference, the three actionness feature sequences are averaged to generate the high-level actionness score feature, defined as asf = F_avg(P_a).
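A minimal numpy sketch of the DSB feature flow described above (shapes and the random stand-ins for the convolutional outputs are illustrative, not the actual network):

```python
import numpy as np

L, C = 100, 256
sf = np.random.rand(L, C)      # stands in for F_conv12(F_conv11(S))
tf = np.random.rand(L, C)      # stands in for F_conv22(F_conv21(T))

dsf = sf + tf                  # dual stream feature: F_sum(sf, tf)

# three actionness probability sequences, one per branch (sf, tf, dsf)
P_a = np.random.rand(3, L)
asf = P_a.mean(axis=0)         # actionness score feature: F_avg(P_a)

print(dsf.shape, asf.shape)    # (100, 256) (100,)
```

In training, each row of P_a would be supervised by its own auxiliary binary classification loss; only the averaged asf is used downstream.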
Table 1: Layer configurations of the ACR and TBC branches.

ACR branch                          TBC branch
PFG              → L×L×32           PFG               → L×L×32×128
Conv2D_11  1×1   → L×L×256          Conv3D_21  1×1×32 → L×L×512
Conv2D_12  1×1   → L×L×256          Conv2D_22  1×1    → L×L×256
Conv2D_13  1×1   → L×L×1            Conv2D_23  1×1    → L×L×2
Proposal feature generation layer. The PFG layer is an efficient and differentiable layer that generates a temporal context feature for each proposal and makes our framework end-to-end trainable. For an arbitrary input feature f^in of shape L × C, the PFG layer produces a proposal feature tensor of shape L × L × N × C, which contains L × L proposal features f^p of size N × C. Fig. 4 shows the details of our PFG layer. First, for each candidate proposal ϕ = (t_s, t_e), we sample N_l locations from the left region r^s = [t_s − d_g/k, t_s + d_g/k], N_c locations from the center region r^a = [t_s, t_e], and N_r locations from the right region r^e = [t_e − d_g/k, t_e + d_g/k] by linear interpolation, where d_g = t_e − t_s, k = 5 and N = N_l + N_c + N_r. Then, with these sampling locations, we concatenate the corresponding temporal location features to produce the context proposal feature. Each proposal feature f^p_{ts,te} is thus generated from the input feature f^in through the following formula:
f^p_{ts,te,n,c} = w_l · f^in_{t_l,c} + w_r · f^in_{t_r,c},   (1)
where

t_l = t_s − d_g/k + [2d_g / (k(N_l − 1))] · n,                 n < N_l,
t_l = t_s + [d_g / (N_c − 1)] · (n − N_l),                     N_l ≤ n < N_l + N_c,     (2)
t_l = t_e − d_g/k + [2d_g / (k(N_r − 1))] · (n − N_l − N_c),   n ≥ N_l + N_c,

w_l = t_r − t_s + d_g/k − [2d_g / (k(N_l − 1))] · n,                 n < N_l,
w_l = t_r − t_s − [d_g / (N_c − 1)] · (n − N_l),                     N_l ≤ n < N_l + N_c,     (3)
w_l = t_r − t_e + d_g/k − [2d_g / (k(N_r − 1))] · (n − N_l − N_c),   n ≥ N_l + N_c,

t_r = 1 + ⌊t_l⌋,   w_r = 1 − w_l,   (4)

so that w_l = t_r − t_l is the linear interpolation weight.
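The sampling and interpolation of Eqs. (1)-(4) can be sketched as follows (a simplified re-implementation under our reading of the equations, with per-region relative indices and t_r = 1 + ⌊t_l⌋; not the authors' code):

```python
import numpy as np

def pfg_locations(ts, te, Nl=8, Nc=16, Nr=8, k=5):
    """Eq. (2): N = Nl + Nc + Nr sampling locations for proposal (ts, te)."""
    dg = te - ts
    locs = []
    for n in range(Nl + Nc + Nr):
        if n < Nl:                       # left boundary region
            t = ts - dg / k + 2 * dg / (k * (Nl - 1)) * n
        elif n < Nl + Nc:                # center (action) region
            t = ts + dg / (Nc - 1) * (n - Nl)
        else:                            # right boundary region
            t = te - dg / k + 2 * dg / (k * (Nr - 1)) * (n - Nl - Nc)
        locs.append(t)
    return locs

def sample_feature(f_in, locs):
    """Eq. (1): linear interpolation of f_in at each sampling location."""
    out = np.zeros((len(locs), f_in.shape[1]))
    for n, t in enumerate(locs):
        t0 = int(np.floor(t))
        t1 = t0 + 1                      # t_r = 1 + floor(t_l), Eq. (4)
        wl = t1 - t                      # w_l = t_r - t_l
        wr = 1.0 - wl                    # w_r, Eq. (4)
        t0 = int(np.clip(t0, 0, f_in.shape[0] - 1))
        t1 = int(np.clip(t1, 0, f_in.shape[0] - 1))
        out[n] = wl * f_in[t0] + wr * f_in[t1]
    return out

f_in = np.arange(100, dtype=float).reshape(100, 1)   # toy 1-channel feature
fp = sample_feature(f_in, pfg_locations(ts=20, te=60))
print(fp.shape)   # (32, 1): N = 8 + 16 + 8 sampled locations
```

On the toy ramp feature, interpolation simply recovers the sampling location itself, which makes the weights easy to verify.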
When calculating gradients for training the PFG layer, f^p_{ts,te} is differentiable with respect to f^in, and its differential formulas follow directly from Eq. (1):

∂f^p_{ts,te,n,c} / ∂f^in_{t_l,c} = w_l,   ∂f^p_{ts,te,n,c} / ∂f^in_{t_r,c} = w_r.   (5)
In our experiments, we set N_l = N_r = 8 and N_c = 16, thus N = 32. Note that if t_s ≥ t_e, the proposal feature f^p_{ts,te} is set to zero.

Action-aware completeness regression. The ACR branch receives the actionness score feature as input and outputs an action completeness map P^c that estimates the overlap between candidate proposals and ground truth action instances. In ACR, we employ the PFG layer and several two-dimensional convolutional layers to explore semantic information at the global proposal level for each proposal. As shown in Tab. 1, the PFG layer transfers the temporal actionness score feature asf into a three-dimensional proposal feature tensor, which is fed into multiple two-dimensional convolutional layers to generate the L×L action completeness map, denoted P^c = F_(Conv11,Conv12,Conv13)(F_PFG(asf)). For each location (i.e., proposal) in the action completeness map, we use a smooth L1 regression loss to supervise P^c to generate reliable action completeness scores.
Temporal boundary classification. The TBC branch receives the dual stream feature as input and outputs boundary confidence maps P^{s,e} that estimate the starting and ending probabilities of dense candidate proposals. Similar to ACR, TBC comprises the PFG layer, a three-dimensional convolutional layer and several two-dimensional convolutional layers. As shown in Tab. 1, the dual stream feature dsf from DSB is transferred by the PFG layer into a four-dimensional proposal tensor. Multiple convolutional layers are then stacked to generate the L×L×2 boundary confidence maps, written as P^{s,e} = F_(Conv21,Conv22,Conv23)(F_PFG(dsf)). For each location (i.e., proposal) in the boundary confidence maps, we use a binary classification loss to supervise P^{s,e} to predict precise temporal boundaries.
Training and Inference
To jointly learn the action completeness map and the boundary confidence maps, a unified multi-task loss is proposed. In inference, with the three score maps generated by DBG, a score fusion strategy and Soft-NMS generate dense proposals with confidence.
Label and Loss
Given the annotation ψ_g = {ϕ_i = (ts_i, te_i)}_{i=1}^{N_g} of a video V, we compose the actionness label g^a for the auxiliary DSB actionness classification loss, the boundary labels g^s, g^e for the TBC boundary classification loss, and the action completeness label g^c for the ACR completeness regression loss.
For a given ground truth action instance ϕ = (t_s, t_e), we define its action region as r^a_g = [t_s, t_e], its starting region as r^s_g = [t_s − d_t, t_s + d_t] and its ending region as r^e_g = [t_e − d_t, t_e + d_t], where d_t is the interval between two adjacent temporal locations.

DSB actionness classification. For each temporal location i within the actionness score feature sequence P_a, we denote its region as r_i = [i − d_t/2, i + d_t/2]. We then calculate the maximum overlap ratio IoR of r_i with r^a_g, where IoR is defined as the overlap with the ground truth proportional to the duration of the region itself. If this ratio is larger than an overlap threshold of 0.5, we set the actionness label g^a_i = 1; otherwise g^a_i = 0. With the three actionness probability sequences P_a, we construct the DSB actionness classification loss using binary logistic regression:
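The IoR-based label assignment above can be sketched as follows (hypothetical helper functions; only the 0.5 threshold and the IoR definition come from the text):

```python
def ior(region, gt):
    """Overlap with a ground-truth region, proportional to the region's own duration."""
    inter = max(0.0, min(region[1], gt[1]) - max(region[0], gt[0]))
    return inter / (region[1] - region[0])

def actionness_label(region, gt_regions, thresh=0.5):
    """g^a = 1 if the maximum IoR over all ground-truth regions exceeds the threshold."""
    return 1 if max(ior(region, gt) for gt in gt_regions) > thresh else 0

gts = [(10.0, 30.0)]
print(actionness_label((12.0, 14.0), gts))   # fully inside -> IoR 1.0 -> 1
print(actionness_label((29.5, 31.5), gts))   # IoR 0.25 -> 0
```

The starting and ending labels g^s, g^e are computed the same way, just against the starting and ending regions of the ground truth instances.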
L^a_DSB = (1 / 3L) Σ_{j=1}^{3} Σ_{i=1}^{L} [ g^a_i · log(p^{a_j}_i) + (1 − g^a_i) · log(1 − p^{a_j}_i) ].   (6)
TBC boundary classification. For each location (i, j) within the starting confidence map P^s or the ending confidence map P^e, we denote its starting region as r^s_{i,j} = [i − d_t/2, i + d_t/2] and its ending region as r^e_{i,j} = [j − d_t/2, j + d_t/2]. Similar to the actionness label above, we calculate the starting label g^s_{i,j} of r^s_{i,j} with r^s_g and the ending label g^e_{i,j} of r^e_{i,j} with r^e_g. We again adopt binary logistic regression to construct the TBC classification losses for starting and ending separately:
L^s_TBC = (1/L²) Σ_{i=1}^{L} Σ_{j=1}^{L} [ g^s_{i,j} · log(p^s_{i,j}) + (1 − g^s_{i,j}) · log(1 − p^s_{i,j}) ],   (7)

L^e_TBC = (1/L²) Σ_{i=1}^{L} Σ_{j=1}^{L} [ g^e_{i,j} · log(p^e_{i,j}) + (1 − g^e_{i,j}) · log(1 − p^e_{i,j}) ].   (8)

ACR completeness regression. For each location (i.e., proposal) (i, j) within the action completeness map P^c, we denote its region as r_{i,j} = [i, j]. For r_{i,j}, we calculate the maximum Intersection-over-Union (IoU) with all ground truth action regions r^a_g to generate the completeness label g^c_{i,j}. With the action completeness map P^c from ACR, we adopt a smooth L1 loss to construct the ACR loss function:
L^c_ACR = (1/L²) Σ_{i=1}^{L} Σ_{j=1}^{L} smooth_L1(p^c_{i,j} − g^c_{i,j}).   (9)
Following BSN, we balance the effect of positive and negative samples for the above two classification losses during training. For the regression loss, we randomly sample proposals to ensure that the ratio of proposals in the IoU intervals [0, 0.2], [0.2, 0.6] and [0.6, 1] satisfies 2:1:1. We use the above three-task loss function to define the training objective of our DBG as:
L_DBG = λ · L^a_DSB + L^s_TBC + L^e_TBC + L^c_ACR,   (10)
where weight term λ is set to 2 to effectively facilitate the actionness score features.
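As a toy illustration, the multi-task objective of Eqs. (6)-(10) can be evaluated on small dummy maps (a sketch using the conventional negative-log-likelihood sign for the logistic losses; the shapes and the values in the maps are illustrative, not real network outputs):

```python
import numpy as np

def bce(g, p, eps=1e-8):
    """Binary logistic loss, Eqs. (6)-(8), averaged over all entries."""
    return -np.mean(g * np.log(p + eps) + (1 - g) * np.log(1 - p + eps))

def smooth_l1(x):
    """Smooth L1, Eq. (9): quadratic near zero, linear elsewhere."""
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x ** 2, ax - 0.5)

L = 4
g_a, p_a = np.zeros((3, L)), np.full((3, L), 0.1)   # DSB actionness sequences
g_s, p_s = np.eye(L), np.full((L, L), 0.1)          # TBC starting map
g_e, p_e = np.eye(L), np.full((L, L), 0.1)          # TBC ending map
g_c, p_c = np.zeros((L, L)), np.zeros((L, L))       # ACR completeness map

loss = (2 * bce(g_a, p_a)                  # λ · L^a_DSB with λ = 2
        + bce(g_s, p_s)                    # L^s_TBC
        + bce(g_e, p_e)                    # L^e_TBC
        + np.mean(smooth_l1(p_c - g_c)))   # L^c_ACR
print(loss > 0)   # True
```

Here the ACR term is exactly zero because the dummy predictions match the dummy labels, so the total is driven by the classification terms.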
Prediction and Post-processing
In inference, unlike BSN, the three actionness probability sequences from DSB do not participate in the computation of the final proposal results. Based on the three score maps from ACR and TBC, we adopt post-processing to generate dense proposals with confidence.

Score map fusion. To make the boundaries smooth and robust, we average the boundary probabilities of proposals sharing the same starting or ending location. For the starting and ending score maps P^s, P^e from TBC, we compute the boundary probabilities P^s_{i,j} and P^e_{i,j} of each location (i.e., proposal) as:
P^s_{i,j} = (1/L) Σ_{k=1}^{L} P^s_{i,k},   P^e_{i,j} = (1/L) Σ_{k=1}^{L} P^e_{k,j}.   (11)
For each proposal (i, j) whose starting and ending locations are i and j, we fuse the boundary probabilities with the completeness score map P^c to generate the final confidence score P_{i,j}:
P_{i,j} = P^c_{i,j} × P^s_{i,j} × P^e_{i,j}.   (12)
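The fusion of Eqs. (11)-(12) amounts to row/column averaging followed by an element-wise product (a toy numpy sketch with random maps, not the authors' code):

```python
import numpy as np

L = 4
P_s = np.random.rand(L, L)        # starting confidence map from TBC
P_e = np.random.rand(L, L)        # ending confidence map from TBC
P_c = np.random.rand(L, L)        # completeness map from ACR

start = P_s.mean(axis=1, keepdims=True)   # Eq. (11): average over ending index k
end = P_e.mean(axis=0, keepdims=True)     # Eq. (11): average over starting index k
P = P_c * start * end                     # Eq. (12), by broadcasting

# only proposals with start <= end (upper-right triangle) are valid
valid = np.triu(np.ones((L, L), dtype=bool))
print(P[valid].shape)   # (10,) = L(L+1)/2 candidate proposals
```

Broadcasting the (L, 1) row averages against the (1, L) column averages reproduces the per-proposal product without any explicit loops.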
Since the starting location precedes the ending location, we consider only the upper-right part of the score map and obtain the dense candidate proposal set ψ_p = {ϕ_{i,j} = (i, j, P_{i,j})} with 1 ≤ i ≤ j ≤ L.

Proposal retrieving. The above proposal generation produces dense and redundant proposals around ground truth action instances. We therefore suppress redundant proposals with Soft-NMS, a non-maximum suppression with a score decaying function. After the Soft-NMS step, we apply a confidence threshold to obtain the final sparse candidate proposal set ψ_p = {ϕ_i = (s_i, e_i, P_i)}_{i=1}^{N}, where N is the number of retrieved proposals.
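The Soft-NMS step can be sketched as follows (the standard Gaussian-decay variant; sigma and the pruning threshold are illustrative assumptions, not the paper's settings):

```python
import math

def iou(a, b):
    """Temporal IoU between two (start, end) intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(proposals, sigma=0.5, thresh=0.001):
    """proposals: list of (start, end, score); Gaussian score decay instead of hard removal."""
    props = sorted(proposals, key=lambda p: -p[2])
    kept = []
    while props:
        best = props.pop(0)
        kept.append(best)
        # decay the score of overlapping proposals rather than discarding them
        props = [(s, e, sc * math.exp(-iou((s, e), best) ** 2 / sigma))
                 for s, e, sc in props]
        props = [p for p in props if p[2] > thresh]
        props.sort(key=lambda p: -p[2])
    return kept

out = soft_nms([(0, 10, 0.9), (1, 11, 0.8), (50, 60, 0.7)])
print([p[:2] for p in out])   # all kept; the heavily overlapped proposal is down-weighted
```

Unlike hard NMS, overlapping proposals survive with reduced scores, which is why a final confidence threshold is still needed to obtain the sparse set.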
Experiments

Evaluation Datasets
ActivityNet-1.3. It is a large-scale dataset containing 19,994 videos with 200 activity classes for action recognition, temporal proposal generation and detection. The ratio of training, validation and testing sets is 2:1:1.

THUMOS14. This dataset has 1,010 validation videos and 1,574 testing videos with 20 classes. For the action proposal and detection tasks, 200 validation videos and 212 testing videos are labeled with temporal annotations. We train our model on the validation set and evaluate on the test set.
Implementation Details
For video representation, we adopt the same two-stream network pretrained on ActivityNet-1.3 and the same parameter settings, following (Lin et al. 2019; Lin et al. 2018), to encode video features. For ActivityNet-1.3, we resize the video feature sequence by linear interpolation and set L = 100. For THUMOS14, we slide a window over the video feature sequence with overlap = 0.5 and L = 128. When training DBG, we use Adam for optimization. The batch size is set to 16. The learning rate is set to 10^-3 for the first 10 epochs and decayed to 10^-4 for another 2 epochs. For Soft-NMS, we set the threshold to 0.8 on ActivityNet-1.3 and 0.65 on THUMOS14.
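The linear-interpolation resizing of the feature sequence mentioned above can be sketched as follows. This is a hypothetical helper, not the authors' code; the (T, C) feature layout is an assumption.

```python
import numpy as np

def resize_feature_sequence(features, target_len=100):
    """Rescale a (T, C) video feature sequence to a fixed length L by
    per-channel linear interpolation over a normalized time axis
    (L = 100 for ActivityNet-1.3 in the text above)."""
    T, C = features.shape
    src = np.linspace(0.0, 1.0, num=T)
    dst = np.linspace(0.0, 1.0, num=target_len)
    out = np.empty((target_len, C), dtype=float)
    for c in range(C):
        out[:, c] = np.interp(dst, src, features[:, c])
    return out
```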
Temporal Proposal Generation
To evaluate proposal quality, we adopt different IoU thresholds to calculate the average recall (AR) at various average numbers of proposals (AN); the dataset-specific IoU threshold sets are given below. Our method achieves state-of-the-art performance and improves AUC from 67.10% to 68.23%, which demonstrates that DBG achieves an overall improvement in action proposal generation. In particular, with multiple video representation networks and multi-scale video features, our ensemble DBG achieves 73.05% AUC, ranking first in the temporal action proposal task of the ActivityNet Challenge 2019. Tab. 3 compares proposal generation methods on the testing set of THUMOS14. To ensure a fair comparison, we adopt the same video features and post-processing step. Tab. 3 shows that our method, using either C3D or two-stream video features, outperforms other methods significantly when the proposal number is set to 50, 100, 200, 500, or 1000. We conduct a more detailed comparison on the validation set of ActivityNet-1.3 to evaluate the effectiveness and efficiency of BSN, BMN, and DBG. As shown in Tab. 4, for a 3-minute video processed on an Nvidia GTX 1080Ti, our inference is substantially faster: proposal feature generation is reduced from 47 ms to 8 ms, while the total inference time decreases to 13 ms. Ablation study. We further conduct a detailed ablation study to evaluate the components of the proposed framework (DSB, ACR, and TBC) using the following variants. DBG w/o DSB: we discard DSB and feed concatenated spatial and temporal features into the BSN-like BaseNet. DBG w/o ACR: we discard the action-aware feature and the auxiliary actionness classification loss, and use the dual stream feature for action-aware completeness regression, as in TBC. DBG w/o TBC: we discard the whole temporal boundary classification module and instead predict a boundary probability sequence, like the actionness feature sequence in DSB.
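For concreteness, the AR@AN metric described above can be sketched as follows. This is a simplified illustration, not the official ActivityNet evaluation code; proposals are assumed to be (start, end) pairs sorted by descending confidence.

```python
def temporal_iou(a, b):
    """Intersection over union of two temporal segments (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def average_recall(proposals, ground_truth, an, thresholds):
    """AR@AN sketch: the fraction of ground-truth instances covered by one
    of the top `an` proposals at tIoU >= t, averaged over thresholds t.
    `proposals` must be sorted by descending confidence."""
    top = proposals[:an]
    recalls = []
    for t in thresholds:
        hits = sum(any(temporal_iou(g, p) >= t for p in top) for g in ground_truth)
        recalls.append(hits / len(ground_truth))
    return sum(recalls) / len(recalls)
```

Sweeping `an` over a range and integrating the resulting AR values gives the area under the AR vs. AN curve (AUC) reported in the tables.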
As illustrated in Fig. 5, the proposed DBG outperforms all its variants in terms of AUC at different IoU thresholds, which verifies the effectiveness of our contributions. The DBG w/o ACR results demonstrate that the action-aware feature with auxiliary supervision is more helpful than the dual stream feature for action completeness regression. The DBG w/o TBC results show the remarkable superiority of dense boundary maps over all proposals. When a strict IoU threshold of 0.9 is used for evaluation, the large AUC gap between DBG (blue line) and DBG w/o TBC (red line) shows that TBC predicts more precise boundaries. Fig. 6 shows further examples demonstrating how DBG handles actions with diverse variations. Analysis of PFG layer. To confirm the effect of the PFG layer, we conduct experiments examining how different sampling locations within proposal features affect proposal generation performance. As shown in Tab. 5, sampling 8, 16, and 8 locations from the left, center, and right regions, respectively, achieves the best performance. The 0/16/0 results indicate that context information around proposals is necessary for better proposal generation performance. The 8/0/8 experiment, which uses only the left or right local region features for TBC to predict the starting or ending boundary confidence map, shows the importance of global proposal information. Generalizability. Following BMN, we choose two different action subsets of ActivityNet-1.3 for generalizability analysis: "Sports, Exercise, and Recreation" and "Socializing, Relaxing, and Leisure" as the seen and unseen subsets, respectively. We employ an I3D network (Carreira and Zisserman 2017) pretrained on Kinetics-400 for video representation. Tab. 6 shows only a slight AUC drop when testing on the unseen subset, which indicates that DBG generalizes well and generates high-quality proposals for unseen actions.
Temporal Proposal Detection
To evaluate the proposal quality of DBG, we place the proposals in a temporal action detection framework. We adopt mean Average Precision (mAP) to evaluate the temporal action detection task, using a set of IoU thresholds {0.3, 0.4, 0.5, 0.6, 0.7} for THUMOS14.
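A single-threshold AP computation of the kind underlying mAP can be sketched as follows. This is a simplified illustration, not the official evaluation code; `detections` are assumed to be (start, end, score) triples of one class.

```python
def temporal_iou(a, b):
    """Intersection over union of two temporal segments (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, ground_truth, iou_thresh):
    """AP sketch at one tIoU threshold: walk detections in descending
    score order, mark a true positive when an unmatched ground-truth
    segment is matched at tIoU >= iou_thresh, and average the precision
    recorded at each true positive over the ground-truth count."""
    dets = sorted(detections, key=lambda d: -d[2])
    matched = [False] * len(ground_truth)
    tp = fp = 0
    precision_at_tp = []
    for s, e, _ in dets:
        best, best_iou = -1, iou_thresh
        for k, g in enumerate(ground_truth):
            iou = temporal_iou((s, e), g)
            if not matched[k] and iou >= best_iou:
                best, best_iou = k, iou
        if best >= 0:
            matched[best] = True
            tp += 1
            precision_at_tp.append(tp / (tp + fp))
        else:
            fp += 1
    return sum(precision_at_tp) / len(ground_truth) if ground_truth else 0.0
```

mAP averages this quantity over action classes, and the tables additionally report it at each tIoU threshold.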
We follow a two-stage "detection by classifying proposals" framework in evaluation, which feeds the detected proposals into the state-of-the-art action classifiers SCNN (Shou, Wang, and Chang 2016) and UntrimmedNet. For fair comparison, we use the same classifiers for the other proposal generation methods, including SST (Buch et al. 2017), TURN, CTAP (Gao, Chen, and Nevatia 2018), BSN (Lin et al. 2018), MGG, and BMN (Lin et al. 2019). The experimental results on THUMOS14 are shown in Tab. 7, which demonstrates that DBG-based detection significantly outperforms other state-of-the-art temporal action detection methods. In particular, at the IoU threshold of 0.7, our DBG-based detection achieves mAP improvements of 1.4% and 1.2% over the BMN-based method for the two classifiers, respectively.
Conclusion
This paper introduces a novel and unified temporal action proposal generator named Dense Boundary Generator (DBG). In this work, we propose a dual stream BaseNet to generate two different-level, more discriminative features. We then adopt a temporal boundary classification module to predict precise temporal boundaries, and an action-aware completeness regression module to provide reliable action completeness confidence. Comprehensive experiments are conducted on the popular benchmarks ActivityNet-1.3 and THUMOS14, demonstrating the superiority of our proposed DBG over state-of-the-art methods.
Figure 2: Boundary prediction comparison of (a) local-information-based and (b) global-proposal-information-based methods.
Figure 3: (a) Video representation: a spatial & temporal network is used to encode video visual content. (b) Dense Boundary Generator: it contains the dual stream BaseNet, the action-aware completeness regression branch, and the temporal boundary classification branch. (c) Post-processing: in this step, three score maps are fused and Soft-NMS is leveraged to generate proposals.

Approach

Suppose there is a set of untrimmed video frames F = {f_t}_{t=1}^{l_f}, where f_t is the t-th RGB frame and l_f is the number of frames in the video V. The annotation of V can be denoted by a set of action instances ψ_g
Figure 4: Details of the proposal feature generation layer. Given a feature sequence, we concatenate the sampled feature regions to construct the proposal context feature map.
Figure 5: Ablation study of the effectiveness of modules in DBG on the validation set of ActivityNet-1.3 in terms of the AR@AN curve. For Soft-NMS, the threshold is 0.8 on ActivityNet-1.3 and 0.65 on THUMOS14; σ in the Gaussian function is set to 0.75 on both temporal proposal generation datasets.
A set of IoU thresholds [0.5:0.05:0.95] is used on ActivityNet-1.3, while a set of IoU thresholds [0.5:0.05:1.0] is used on THUMOS14. For ActivityNet-1.3, the area under the AR vs. AN curve (AUC) is also used as an evaluation metric. Comparison experiments. We further compare our DBG with other methods on the validation set of ActivityNet-1.3. Tab. 2 lists a set of proposal generation methods including TCN (Dai et al. 2017), MSRA (Yao et al. 2017), Prop-SSAD (Lin, Zhao, and Shou 2017), CTAP (Gao, Chen, and Nevatia 2018), BSN (Lin et al. 2018), MGG (Liu et al. 2019), and BMN (Lin et al. 2019).
Table 1: The detailed design of the dual stream BaseNet (DSB), the action-aware completeness regression (ACR) module, and the temporal boundary classification (TBC) module.

DSB:
layer       kernel  output   layer       kernel  output
Conv1D_11   3       L×256    Conv1D_21   3       L×256
Conv1D_12   3       L×128    Conv1D_22   3       L×128
Sum         -       L×128    Conv1D_33   1       L×1
Conv1D_13   1       L×1      Conv1D_23   1       L×1
Averaging   -       L×1

ACR / TBC:
layer       kernel  output   layer       kernel  output
Table 2: Comparison between our approach and other state-of-the-art temporal action proposal generation approaches on the validation and test sets of ActivityNet-1.3 in terms of AR@AN and AUC.

Method         TCN    MSRA   Prop-SSAD  CTAP   BSN    MGG    BMN    Ours
AR@100 (val)   -      -      73.01      73.17  74.16  74.54  75.01  76.65
AUC (val)      59.58  63.12  64.40      65.72  66.17  66.43  67.10  68.23
AUC (test)     61.56  64.18  64.80      -      66.26  66.47  67.19  68.57
Table 3: Comparison between DBG and other state-of-the-art methods on THUMOS14 in terms of AR@AN.

Feature  Method      @50    @100   @200   @500   @1000
C3D      SCNN-prop   17.22  26.17  37.01  51.57  58.20
C3D      SST         19.90  28.36  37.90  51.58  60.27
C3D      TURN        19.63  27.96  38.34  53.52  60.75
C3D      MGG         29.11  36.31  44.32  54.95  60.98
C3D      BSN+NMS     27.19  35.38  43.61  53.77  59.50
C3D      BSN+SNMS    29.58  37.38  45.55  54.67  59.48
C3D      BMN+NMS     29.04  37.72  46.79  56.07  60.96
C3D      BMN+SNMS    32.73  40.68  47.86  56.42  60.44
C3D      Ours+NMS    32.55  41.07  48.83  57.58  59.55
C3D      Ours+SNMS   30.55  38.82  46.59  56.42  62.17
2Stream  TAG         18.55  29.00  39.61  -      -
Flow     TURN        21.86  31.89  43.02  57.63  64.17
2Stream  CTAP        32.49  42.61  51.97  -      -
2Stream  MGG         39.93  47.75  54.65  61.36  64.06
2Stream  BSN+NMS     35.41  43.55  52.23  61.35  65.10
2Stream  BSN+SNMS    37.46  46.06  53.21  60.64  64.52
2Stream  BMN+NMS     37.15  46.75  54.84  62.19  65.22
2Stream  BMN+SNMS    39.36  47.72  54.70  62.07  65.49
2Stream  Ours+NMS    40.89  49.24  55.76  61.43  61.95
2Stream  Ours+SNMS   37.32  46.67  54.50  62.21  66.40
Table 4: Efficiency comparison among DBG, BMN, and BSN on the validation set of ActivityNet-1.3. "e2e" means the method can be trained end-to-end.

Method  e2e  AR@100  AUC    Tpro   Tall
BSN     ×    74.16   66.17  0.624  0.629
BMN     ✓    75.01   67.10  0.047  0.052
DBG     ✓    76.65   68.23  0.008  0.013
Table 5: Performance analysis of the PFG layer.

N_l/N_c/N_r  4/8/4  6/12/6  8/16/8  10/20/10  0/16/0  8/0/8
AR@10        57.22  57.29   57.29   57.09     55.74   56.85
AR@50        71.13  71.57   71.59   71.36     70.29   71.17
AR@100       76.14  76.27   76.65   76.50     75.53   76.13
AUC          67.91  68.14   68.23   68.11     66.94   67.83
Table 6: Generalization evaluation on ActivityNet-1.3.

Training Data  Seen AR@100  Seen AUC  Unseen AR@100  Unseen AUC
Seen+Unseen    73.30        66.57     67.23          64.59
Seen           72.95        66.23     64.77          62.18
Table 7: Action detection results on the testing set of THUMOS14 in terms of mAP@tIoU.

Method  classifier  0.7   0.6   0.5   0.4   0.3
SST     SCNN-cls    -     -     23.0  -     -
TURN    SCNN-cls    7.7   14.6  25.6  33.2  44.1
BSN     SCNN-cls    15.0  22.4  29.4  36.6  43.1
MGG     SCNN-cls    15.8  23.6  29.9  37.8  44.9
BMN     SCNN-cls    17.0  24.5  32.2  40.2  45.7
Ours    SCNN-cls    18.4  25.3  32.9  40.4  45.9
SST     UNet        4.7   10.9  20.0  31.5  41.2
TURN    UNet        6.3   14.1  24.5  35.3  46.3
BSN     UNet        20.0  28.4  36.9  45.0  53.5
MGG     UNet        21.3  29.5  37.4  46.8  53.9
BMN     UNet        20.5  29.7  38.8  47.4  56.0
Ours    UNet        21.7  30.2  39.8  49.4  57.8
References

[Buch et al. 2017] Buch, S.; Escorcia, V.; Shen, C.; Ghanem, B.; and Niebles, J. C. 2017. SST: Single-stream temporal action proposals. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 6373-6382.
[Carreira and Zisserman 2017] Carreira, J., and Zisserman, A. 2017. Quo vadis, action recognition? A new model and the Kinetics dataset. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 4724-4733.
Figure 6: Visualization examples of proposals generated by DBG on the ActivityNet-1.3 dataset.
[Chao et al. 2018] Chao, Y.; Vijayanarasimhan, S.; Seybold, B.; Ross, D. A.; Deng, J.; and Sukthankar, R. 2018. Rethinking the Faster R-CNN architecture for temporal action localization. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 1130-1139.
[Dai et al. 2017] Dai, X.; Singh, B.; Zhang, G.; Davis, L. S.; and Qiu Chen, Y. 2017. Temporal context network for activity localization in videos. In Proceedings of the IEEE International Conference on Computer Vision, 5793-5802.
[Feichtenhofer, Pinz, and Zisserman 2016] Feichtenhofer, C.; Pinz, A.; and Zisserman, A. 2016. Convolutional two-stream network fusion for video action recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 1933-1941.
[Gao et al. 2017] Gao, J.; Yang, Z.; Sun, C.; Chen, K.; and Nevatia, R. 2017. TURN TAP: Temporal unit regression network for temporal action proposals. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, 3648-3656.
[Gao, Chen, and Nevatia 2018] Gao, J.; Chen, K.; and Nevatia, R. 2018. CTAP: Complementary temporal action proposal generation. In Proceedings of the European Conference on Computer Vision (ECCV), 68-83.
[Gao, Yang, and Nevatia 2017] Gao, J.; Yang, Z.; and Nevatia, R. 2017. Cascaded boundary regression for temporal action detection. In British Machine Vision Conference 2017, BMVC 2017, London, UK, September 4-7, 2017.
[Heilbron et al. 2015] Heilbron, F. C.; Escorcia, V.; Ghanem, B.; and Niebles, J. C. 2015. ActivityNet: A large-scale video benchmark for human activity understanding. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, 961-970.
[Heilbron, Niebles, and Ghanem 2016] Heilbron, F. C.; Niebles, J. C.; and Ghanem, B. 2016. Fast temporal activity proposals for efficient detection of human actions in untrimmed videos. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 1914-1923.
[Idrees et al. 2017] Idrees, H.; Zamir, A. R.; Jiang, Y.-G.; Gorban, A.; Laptev, I.; Sukthankar, R.; and Shah, M. 2017. The THUMOS challenge on action recognition for videos in the wild. Computer Vision and Image Understanding 155:1-23.
[Li, Qian, and Yang 2017] Li, J.; Qian, J.; and Yang, J. 2017. Object detection via feature fusion based single network. In 2017 IEEE International Conference on Image Processing (ICIP), 3390-3394. IEEE.
[Lin et al. 2018] Lin, T.; Zhao, X.; Su, H.; Wang, C.; and Yang, M. 2018. BSN: Boundary sensitive network for temporal action proposal generation. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part IV, 3-21.
[Lin et al. 2019] Lin, T.; Liu, X.; Li, X.; Ding, E.; and Wen, S. 2019. BMN: Boundary-matching network for temporal action proposal generation. CoRR abs/1907.09702.
[Lin, Zhao, and Shou 2017] Lin, T.; Zhao, X.; and Shou, Z. 2017. Single shot temporal action detection. In Proceedings of the 2017 ACM on Multimedia Conference, MM 2017, Mountain View, CA, USA, October 23-27, 2017, 988-996.
[Liu et al. 2019] Liu, Y.; Ma, L.; Zhang, Y.; Liu, W.; and Chang, S.-F. 2019. Multi-granularity generator for temporal action proposal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3604-3613.
[Long et al. 2019] Long, F.; Yao, T.; Qiu, Z.; Tian, X.; Luo, J.; and Mei, T. 2019. Gaussian temporal awareness networks for action localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 344-353.
[Qiu, Yao, and Mei 2017] Qiu, Z.; Yao, T.; and Mei, T. 2017. Learning spatio-temporal representation with pseudo-3D residual networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, 5534-5542.
[Shou, Wang, and Chang 2016] Shou, Z.; Wang, D.; and Chang, S. 2016. Temporal action localization in untrimmed videos via multi-stage CNNs. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 1049-1058.
[Simonyan and Zisserman 2014] Simonyan, K., and Zisserman, A. 2014. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13, 2014, Montreal, Quebec, Canada, 568-576.
[Tran et al. 2015] Tran, D.; Bourdev, L. D.; Fergus, R.; Torresani, L.; and Paluri, M. 2015. Learning spatiotemporal features with 3D convolutional networks. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, 4489-4497.
[Wang et al. 2015] Wang, L.; Xiong, Y.; Wang, Z.; and Qiao, Y. 2015. Towards good practices for very deep two-stream convnets. CoRR abs/1507.02159.
[Wang et al. 2016] Wang, L.; Xiong, Y.; Wang, Z.; Qiao, Y.; Lin, D.; Tang, X.; and Gool, L. V. 2016. Temporal segment networks: Towards good practices for deep action recognition. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VIII, 20-36.
[Wang et al. 2017] Wang, L.; Xiong, Y.; Lin, D.; and Van Gool, L. 2017. UntrimmedNets for weakly supervised action recognition and detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4325-4334.
[Xiong et al. 2016] Xiong, Y.; Wang, L.; Wang, Z.; Zhang, B.; Song, H.; Li, W.; Lin, D.; Qiao, Y.; Gool, L. V.; and Tang, X. 2016. CUHK & ETHZ & SIAT submission to ActivityNet Challenge 2016. CoRR abs/1608.00797.
[Xu, Das, and Saenko 2017] Xu, H.; Das, A.; and Saenko, K. 2017. R-C3D: Region convolutional 3D network for temporal activity detection. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, 5794-5803.
[Yao et al. 2017] Yao, T.; Li, Y.; Qiu, Z.; Long, F.; Pan, Y.; Li, D.; and Mei, T. 2017. MSR Asia MSM at ActivityNet Challenge 2017: Trimmed action recognition, temporal action proposals and dense-captioning events in videos. In CVPR ActivityNet Challenge Workshop.
[Zhao et al. 2017a] Zhao, Y.; Xiong, Y.; Wang, L.; Wu, Z.; Tang, X.; and Lin, D. 2017a. Temporal action detection with structured segment networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, 2933-2942.
[Zhao et al. 2017b] Zhao, Y.; Xiong, Y.; Wang, L.; Wu, Z.; Tang, X.; and Lin, D. 2017b. Temporal action detection with structured segment networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, 2933-2942.
doi: 10.1088/1742-6596/9/1/011
Search for X(3872) in γγ Fusion and ISR at CLEO

Peter Zweber (CLEO Collaboration)
Northwestern University, Evanston, Illinois 60208, USA

arXiv:hep-ex/0501015v1, 7 Jan 2005
Abstract. We report on a search for the X(3872) state using 15.1 fb⁻¹ of e⁺e⁻ annihilation data taken with the CLEO III detector in the √s = 9.46-11.30 GeV region. Separate searches for the production of X(3872) in untagged γγ fusion and in e⁺e⁻ annihilation following initial state radiation (ISR) are made by taking advantage of the unique correlation of J/ψ → l⁺l⁻ in X(3872) decay into π⁺π⁻J/ψ. No signals are observed in either case, and 90% confidence upper limits are established as (2J + 1)Γ_γγ(X(3872)) B(X → π⁺π⁻J/ψ) < 12.9 eV and Γ_ee(X(3872)) B(X → π⁺π⁻J/ψ) < 8.3 eV.
Introduction
The Belle Collaboration reported the observation of a narrow state, X(3872), in the decay B ± → K ± X, X → π + π − J/ψ, J/ψ → l + l − (l = e, µ) [1]. The observation was confirmed by the CDF II [2], DØ [3], and BABAR [4] Collaborations with consistent results, i.e., M(X) = 3872.0 ± 1.4 MeV/c 2 and Γ(X) ≤ 3 MeV/c 2 .
Many different theoretical interpretations of the nature of the X(3872) state and its possible quantum numbers have been proposed [5,6,7,8]. These include that (a) X(3872) is a charmonium state [5]; (b) X(3872) is a D 0D * 0 "molecular" state [6]; and (c) X(3872) is an exotic state [7].
No positive signals for X(3872) have been observed in searches for the decays X(3872) → γχ c1 [1], γχ c2 , γJ/ψ, π 0 π 0 J/ψ [9], ηJ/ψ [10], D + D − , D 0D0 , and D 0D0 π 0 [11], or for possible charged partners of X(3872) [12]. Yuan, Mo, and Wang [13] have used 22.3 pb −1 of BES data at √ s = 4.03 GeV to determine the 90% confidence upper limit of Γ ee (X(3872))B(X → π + π − J/ψ) < 10 eV for ISR production of X(3872). Belle [9] has recently reported a small enhancement in the π + π − π 0 J/ψ effective mass near the X(3872) mass.
The variety of possibilities for the structure of X(3872) suggests that it is useful to limit the J P C of X(3872) as much as possible. The present investigation is designated to provide experimental constraints for the J P C of X(3872) by studying its production in γγ fusion and ISR, and its decay into π + π − J/ψ [14]. Production of X(3872) in γγ fusion can shed light on the positive charge parity candidate states, charmonium states 2 3 P 0 , 2 3 P 2 and 1 1 D 2 [5], and the 0 −+ molecular state [6]. ISR production can address the 1 −− vector state.
Event Selection
The data consist of a 15.1 fb⁻¹ sample of e⁺e⁻ collisions at or near the energies of the Υ(nS) resonances (n = 1-5) and in the vicinity of the Λ_bΛ̄_b threshold, collected with the CLEO III detector [15]. Table 1 lists the six different initial center-of-mass energies and the e⁺e⁻ integrated luminosities at each.

Table 1. Data sample for the present X(3872) search. The average center-of-mass energies and e⁺e⁻ integrated luminosities near Υ(1S-5S) and the Λ_bΛ̄_b threshold are denoted by √s_i and L_i(e⁺e⁻), respectively.

Resonance production by untagged γγ fusion and by ISR has similar characteristics. The undetected electrons in untagged γγ fusion and the undetected radiated photons in ISR have angular distributions sharply peaked along the beam axis. Both processes have total observed energy (E_tot) much smaller than the center-of-mass energy, √s, of the original e⁺e⁻ system and have small observed transverse momentum. The detailed characteristics of γγ-fusion- and ISR-mediated X(3872) production are studied with signal Monte Carlo (MC) samples, generated using the formalism of Budnev et al. [16] for γγ fusion and the formalism of Benayoun et al. [17] for ISR. A fully reconstructed event has four charged particles and zero net charge. All charged particles are required to lie individually within the drift chamber volume, satisfy standard requirements for track quality and distance of closest approach to the interaction point, and satisfy their respective particle identification criteria. Events must also have detected E_tot < 6 GeV, total neutral energy (E_neu) < 0.4 GeV, and total transverse momentum (p_tr) < 0.3 GeV/c. The lepton-pair invariant mass must be consistent with a J/ψ decay: M(e⁺e⁻) = 2.96-3.125 GeV/c² for events with a J/ψ → e⁺e⁻ decay and M(µ⁺µ⁻) = 3.05-3.125 GeV/c² for events with a J/ψ → µ⁺µ⁻ decay. Figure 1 shows the ΔM ≡ M(π⁺π⁻l⁺l⁻) − M(l⁺l⁻) distribution for data events which pass the selection criteria.
The ψ(2S) is clearly visible, while no enhancement is apparent for X(3872), i.e., at ΔM = 0.775 GeV/c², which is indicated by the arrow in Figure 1. At √s ∼ 10 GeV, a feature unique to the ISR-mediated production of a vector resonance which decays via π⁺π⁻J/ψ, J/ψ → l⁺l⁻, is the correlation between the cos(θ) of the two leptons. Figure 2 shows the MC prediction for the two-dimensional cos(θ) distributions for leptons from X(3872) decay for ISR-mediated and γγ-fusion production. As shown in Figure 2, a parabolic cut applied to the two-dimensional cos(θ) distribution efficiently separates the events from the two production processes. With this cut, the γγ sample contains ∼86% of the γγ events and < 0.5% of the ISR events, and the ISR sample contains > 99.5% of the ISR events and ∼14% of the γγ events.
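The selection requirements described above can be summarized in a small filter sketch. This is hypothetical code, not part of the CLEO analysis software; the event dictionary layout is an assumption, with energies in GeV, momenta in GeV/c, and masses in GeV/c², and the cut values taken from the text.

```python
def passes_selection(event):
    """Sketch of the event selection described in the text. `event` is a
    hypothetical dict; this layout is an assumption, not CLEO software."""
    # Fully reconstructed event: four charged particles, zero net charge.
    if event["n_charged"] != 4 or event["net_charge"] != 0:
        return False
    # Low visible energy and transverse momentum, as expected for
    # untagged two-photon fusion or ISR production.
    if not (event["e_tot"] < 6.0 and event["e_neu"] < 0.4 and event["p_tr"] < 0.3):
        return False
    # Dilepton mass window consistent with J/psi -> l+ l-.
    lo, hi = (2.96, 3.125) if event["flavour"] == "e" else (3.05, 3.125)
    return lo < event["m_ll"] < hi
```

The parabolic cut on the two-dimensional cos(θ) distribution that separates the ISR and γγ samples is not sketched here, since its parameters are not given in the text.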
√s_i (GeV)   L_i(e⁺e⁻) (fb⁻¹)
Υ(1S)   9.46
Results
The number of observed X(3872) events (N_{γγ,ISR}(X(3872))) is determined by maximum likelihood fits of the ΔM data distributions using flat backgrounds and the appropriate detector resolution functions for the two production processes. The detector resolution functions are determined from the MC simulations, fitted with double Gaussians. The 90% confidence upper limits on the observed number of X(3872) events in γγ-fusion- and ISR-mediated production are determined to be N_{γγ,ISR}(X(3872)) < 2.36 for both processes.
Systematic uncertainty arises from possible biases in the detection efficiency and the estimated background level. These are studied by varying the event selection criteria described above. Other systematic uncertainties come from the e⁺e⁻ luminosity measurement and the J/ψ → l⁺l⁻ branching fractions. Adding these in quadrature, the total systematic uncertainties in γγ fusion and ISR are 18.5% and 13.2%, respectively. A conservative way to incorporate these systematic uncertainties is to increase the measured upper limits by these amounts. This leads to the 90% confidence upper limits (2J + 1)Γ_γγ(X(3872)) B(X → π⁺π⁻J/ψ) < 12.9 eV for X(3872) having positive C parity and Γ_ee(X(3872)) B(X → π⁺π⁻J/ψ) < 8.3 eV for X(3872) being a vector meson with J^PC = 1⁻⁻.
Summary
With 15.1 fb⁻¹ of e⁺e⁻ annihilation data taken with the CLEO III detector near √s = 10 GeV, we determine 90% confidence upper limits for untagged γγ-fusion- and ISR-mediated production of X(3872). If B(B± → K±X(3872)) ≈ B(B± → K±ψ(2S)) = (6.8 ± 0.4) × 10⁻⁴ [18] is assumed, we obtain B(X → π⁺π⁻J/ψ) ≈ 0.02 from both the Belle [1] and BABAR [4] results. This leads to the 90% confidence upper limits (2J + 1)Γ_γγ(X(3872)) < 0.65 keV and Γ_ee(X(3872)) < 0.42 keV.
The (2J + 1)Γ_γγ(X(3872)) upper limit is almost 1/4 of the corresponding values for χ_c0 and χ_c2, but it is nearly 6 times larger than the prediction for the 1¹D₂ state of charmonium [19]. The upper limit for Γ_ee(X(3872)) is comparable to the measured electron width of ψ(3770) and is about 1/2 that of ψ(4040) [20].
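The Summary arithmetic can be checked directly; the branching-fraction value is the assumption stated in the text, inferred from the Belle and BABAR results.

```python
# Dividing the 90% C.L. product limits by the assumed branching fraction
# B(X -> pi+ pi- J/psi) ~ 0.02 reproduces the quoted width limits.
B = 0.02                    # assumed branching fraction (from the text)
limit_gg_eV = 12.9 / B      # (2J+1) Gamma_gammagamma upper limit, in eV
limit_ee_eV = 8.3 / B       # Gamma_ee upper limit, in eV
print(limit_gg_eV / 1000.0, "keV")  # ~0.65 keV
print(limit_ee_eV / 1000.0, "keV")  # ~0.42 keV
```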
Figure 1: Data events as a function of ΔM ≡ M(π⁺π⁻l⁺l⁻) − M(l⁺l⁻). The ψ(2S) is clearly visible and no apparent enhancement is seen in the X(3872) region.
Figure 2: MC predictions for the two-dimensional cos(θ) distributions of the lepton pair for ISR-mediated (left) and γγ-fusion (right) X(3872) production. The lines indicate how the ISR and γγ-fusion samples are separated.
Acknowledgments

We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. This work was supported by the National Science Foundation and the U.S. Department of Energy.
Belle Collaboration, Choi S K et al. 2003 Phys. Rev. Lett. 91 262001
CDF II Collaboration, Acosta D et al. 2004 Phys. Rev. Lett. 93 072001
DØ Collaboration, Abazov V M et al. 2004 Phys. Rev. Lett. 93 162002
BABAR Collaboration, Aubert B et al. 2004 Preprint hep-ex/0406022
Barnes T and Godfrey S 2004 Phys. Rev. D 69 054008
Eichten E J, Lane K and Quigg C 2004 Phys. Rev. D 69 094019
Swanson E S 2004 Phys. Lett. B 588 189
Törnqvist N A 2004 Phys. Lett. B 590 209
Swanson E S 2004 Preprint hep-ph/0406080
Seth K K 2004 Preprint hep-ph/0411122
Close F E and Page P R 2004 Phys. Lett. B 578 119
Pakvasa S and Suzuki M 2004 Phys. Lett. B 579 67
Voloshin M B 2004 Phys. Lett. B 579 316
Wong C-Y 2004 Phys. Rev. C 69 055202
Braaten E and Kusunoki M 2004 Phys. Rev. D 69 114012
Braaten E, Kusunoki M and Nussinov S 2004 Preprint hep-ph/0404161
Ko P 2004 Preprint hep-ph/0405265
Braaten E 2004 Preprint hep-ph/0406230
Voloshin M B 2004 Preprint hep-ph/0408321
Belle Collaboration, Abe K et al. 2004 Preprint hep-ex/0408116
BABAR Collaboration, Aubert B et al. 2004 Phys. Rev. Lett. 93 041801
Belle Collaboration, Chistov R et al. 2004 Phys. Rev. Lett. 93 051803
BABAR Collaboration, Aubert B et al. 2004 Preprint hep-ex/0408083
Yuan C Z, Mo X H and Wang P 2004 Phys. Lett. B 579 74
CLEO Collaboration, Dobbs S et al. 2004 Preprint hep-ex/0410038
Kubota Y et al. 1992 Nucl. Instrum. Meth. A 320 66
Viehhauser G et al. 2001 Nucl. Instrum. Meth. A 462 146
Peterson D et al. 2002 Nucl. Instrum. Meth. A 478 142
Artuso M et al. 2002 Nucl. Instrum. Meth. A 502 91
Budnev V M et al. 1975 Phys. Reports C 15 181
Benayoun M et al. 1999 Mod. Phys. Lett. A 14 2605
Review of Particle Properties, Eidelman S et al. 2004 Phys. Lett. B 592 1
Ackleh E S and Barnes T 1992 Phys. Rev. D 45 232
Seth K K 2004 Preprint hep-ph/0405007
title: Topologies of the (M+1)SSM with a Singlino LSP at LEP2
author: U Ellwanger; C Hugonie
affiliation: Laboratoire de Physique Théorique et Hautes Energies, Université de Paris XI, Centre d'Orsay, F-91405 Orsay Cedex, France (Laboratoire associé au Centre National de la Recherche Scientifique, URA D0063)
abstract: We study the possible signals of the (M+1)SSM with a singlino LSP at LEP2. First we identify regions of the parameter space which are ruled out by negative results of sparticle searches in the context of the MSSM. In the remaining kinematically accessible regions we present total event rates for topologies which require further studies, i.e. estimations of the corresponding efficiencies: various 4 charged fermion final states with missing energy, possibly with displaced vertices due to a long lifetime of the NLSP, the second lightest neutralino. Searches for these unconventional signatures are essential in order to cover the entire kinematically accessible parameter space of the (M+1)SSM with a singlino LSP at LEP2.
doi: 10.1007/s100520050727
pdfurl: https://export.arxiv.org/pdf/hep-ph/9812427v2.pdf
corpusid: 13472740
arxivid: hep-ph/9812427
pdfsha: f4adc30dcf187e159466ab4aea6a4e1b4c2e1b67
Topologies of the (M+1)SSM with a Singlino LSP at LEP2

U Ellwanger, C Hugonie

Laboratoire de Physique Théorique et Hautes Energies, Université de Paris XI, Centre d'Orsay, F-91405 Orsay Cedex, France
(Laboratoire associé au Centre National de la Recherche Scientifique, URA D0063)

20 May 1999, Orsay LPTHE-98-79, hep-ph/9812427
We study the possible signals of the (M+1)SSM with a singlino LSP at LEP2. First we identify regions of the parameter space which are ruled out by negative results of sparticle searches in the context of the MSSM. In the remaining kinematically accessible regions we present total event rates for topologies which require further studies, i.e. estimations of the corresponding efficiencies: various 4 charged fermion final states with missing energy, possibly with displaced vertices due to a long lifetime of the NLSP, the second lightest neutralino. Searches for these unconventional signatures are essential in order to cover the entire kinematically accessible parameter space of the (M+1)SSM with a singlino LSP at LEP2.
1 Introduction
The supersymmetric extension of the standard model with an additional gauge singlet superfield, the so-called (M+1)SSM [1, 2, 4-10], naturally solves the µ-problem of the MSSM: even for a scale invariant superpotential - with a coupling λSH₁H₂ among the Higgs superfields and the singlet superfield S - an effective µ-term µ = λ⟨S⟩ is generated, if the scalar component of S has a non-vanishing vev. Such a vev of the order of the weak scale can be generated through the standard soft supersymmetry breaking terms; thus the weak scale appears exclusively in the form of the supersymmetry breaking scale. Moreover, assuming universal soft terms at a large (GUT) scale, the (M+1)SSM has the same number of free parameters as the MSSM. Previous analyses of the parameter space of the model [5-7] have shown that, as in the case of the MSSM, a large region is consistent with the present experimental bounds on sparticle and Higgs masses.
The particle content of the (M+1)SSM differs from the MSSM by additional gauge singlet states in the Higgs sector (1 neutral CP-even and 1 CP-odd state) and in the neutralino sector (a two component Weyl fermion). These states mix with the corresponding ones of the MSSM, with a mixing angle which is proportional to the coupling λ above. Accordingly, the phenomenology of the (M+1)SSM depends to a large extent on the magnitude of λ:
For λ > ∼ O(10 −2 ) the masses and couplings, notably in the CP-even Higgs sector, can deviate visibly from the ones of the MSSM [4]; however, in this region of the parameter space of the (M+1)SSM some fine-tuning among the parameters is required in order to meet all the phenomenological constraints [6].
For λ < ∼ O(10 −2 ) the mixing angles involving the singlet states are quite small. Therefore, the Higgs and sparticle masses and couplings of the (M+1)SSM are very close to the MSSM ones (for corresponding values of µ and B [5,6]), with additional quasi singlet states which have small couplings to the gauge bosons and the MSSM sparticles. Accordingly, they have small production cross sections, and they will not appear in sparticle decays unless they represent the only kinematically allowed decay channel.
Assuming R parity conservation, this latter situation is realized if the quasi singlet Weyl fermion (the singlino) is the LSP. Then the singlino will appear in the final state of each sparticle decay, and the phenomenology of the (M+1)SSM with a singlino LSP differs considerably from the one of the MSSM.
In a previous paper [7] we have shown that this situation appears naturally in the case of a gaugino dominated supersymmetry breaking: M_1/2 ≫ A_0, m_0. Then, within the parameter space accessible at LEP2, the NLSP is mostly a bino-like state. Hence all the processes involving sparticle productions and decays will end up with a bino to singlino transition, and we have studied the corresponding decay widths in [7]. An important result was that, for small values of λ or for singlino masses close to the bino mass, the bino lifetime can be so large that the bino to singlino cascade appears at macroscopic distances from the production point, or even outside the detector.
In the present paper we study the possible signals of the (M+1)SSM with a singlino LSP at LEP2 in the various regions of the parameter space. First we identify those regions which are ruled out by negative results of sparticle searches in the context of the MSSM. In the remaining kinematically allowed regions we present total event rates for various topologies, like 4 charged fermion final states and missing energy, with or without displaced vertices. Such topologies, with microscopic vertices, have been looked for at LEP2 in the context of the MSSM or models with R parity violation. However the corresponding efficiencies do not apply to the (M+1)SSM with a singlino LSP. With estimated efficiencies, we find that considerable kinematically allowed regions of the parameter space have not been tested at present, especially in the case of macroscopically displaced vertices. The main purpose of the present paper is to identify those topologies, for which further studies -i.e. estimation of efficiencies -are required in order to interpret the available or incoming data from LEP2 in the context of the (M+1)SSM with a singlino LSP.
It is a priori not clear whether negative results of sparticle searches would constrain the (M+1)SSM with a singlino LSP more or less than the MSSM: The final states associated with the pair production of a given sparticle (like the selectron or chargino) will often be more involved in the (M+1)SSM as compared to the MSSM, and the corresponding constraints on the cross sections are often much weaker. On the other hand, the (M+1)SSM with a singlino LSP allows for a process to be observable, which is invisible within the MSSM: the production of a pair of binos. If the binos decay into singlinos plus additional observable particles, LEP2 is sensitive to light binos, which would, however, escape detection within the associated MSSM. (Here and below the associated MSSM denotes the MSSM obtained after "freezing" the singlet vev, which generates effective µ and B terms, and after dropping the gauge singlet states in the neutralino and Higgs sectors.) Thus the application of the LEP2 results to the (M+1)SSM requires a case by case analysis, depending on the different regions of the parameter space, which will be performed below.
In order to scan the complete parameter space of the (M+1)SSM we proceed as in [6,7]: First we assume universal scalar masses m_0, gaugino masses M_1/2 and trilinear couplings A_0 at the GUT scale. Thus we scan over the ratios m_0/M_1/2, A_0/M_1/2 and the Yukawa couplings at the GUT scale, the absolute scale being determined at the end by requiring the correct value of M_Z. For each point in the parameter space we integrate the renormalization group equations [2] down to the weak scale, and minimize the low energy effective potential including the full one loop radiative corrections [4]. We check that squarks and sleptons do not acquire vevs, diagonalize numerically the mass matrices and verify whether the applicable bounds on sparticle and Higgs masses are satisfied.
In contrast to [6,7], however, we have included as matter Yukawa couplings not just the top Yukawa coupling h t , but all the couplings of the third generation h t , h b and h τ . First, this makes our results more reliable in the large tan(β) regime, and second this reveals a new phenomenon: Within the (M+1)SSM with a singlino LSP and sparticle masses in the reach of LEP2, the NLSP could possibly be the lightest stau τ 1 . (In the associated MSSM the lightest stau τ 1 would then be the true LSP, i.e. a stable charged particle; this situation has been discussed in [11].)
The paper is organized as follows: In the next section we present the lagrangian and discuss the different regions in the parameter space which are relevant for the present investigations. In section three we study the sparticle production processes which are kinematically allowed at LEP2, the topologies relevant for searches in the context of the (M+1)SSM with a singlino LSP, and the constraints on its parameters which could be already infered from available data. The total number of events expected in those regions of parameter space is given, for which the efficiencies still remain to be determined. Conclusions are presented in section four.
2 Parameter space of the (M+1)SSM with a singlino LSP
The superpotential of the (M+1)SSM is given by
$$W = \lambda S H_1 H_2 + \tfrac{1}{3}\,\kappa S^3 + h_t\, Q_3 H_1 U_{3R}^c + h_b\, Q_3 H_2 D_{3R}^c + h_\tau\, L_3 H_2 E_{3R}^c + \dots \tag{1}$$
where Q 3 denotes the left handed doublet of quarks of the third generation, U c 3R and D c 3R the (charge conjugate) right handed top and bottom quarks, L 3 the left handed doublet of leptons of the third generation, E c 3R the (charge conjugate) right handed tau. The ellipses in (1) denote Yukawa couplings involving quarks and leptons of the first two generations. The only dimensionful parameters of the model are the supersymmetry breaking parameters (for simplicity, we do not display the terms involving squarks and sleptons):
$$\mathcal{L}_{\rm soft} = \tfrac{1}{2}\left(M_3\,\lambda_3^a\lambda_3^a + M_2\,\lambda_2^i\lambda_2^i + M_1\,\lambda_1\lambda_1 + \mathrm{h.c.}\right) - m_1^2\,|H_1|^2 - m_2^2\,|H_2|^2 - m_S^2\,|S|^2 - \left(\lambda A_\lambda\, S H_1 H_2 + \tfrac{1}{3}\,\kappa A_\kappa\, S^3 + \mathrm{h.c.}\right) \tag{2}$$
where λ 3 , λ 2 and λ 1 (the 'bino') are the gauginos of the SU(3) c , SU(2) L and U(1) Y gauge groups respectively. The scalar components of the Higgs in (2) are denoted by the same letters as the corresponding chiral superfields. These supersymmetry breaking terms are constrained in the present version of the model by universality at the scale M GU T ∼ 10 16 GeV. Thus, the independent parameters are: Universal gaugino masses M 1/2 (always positive in our convention); universal masses m 2 0 for the scalars; universal trilinear couplings A 0 (either positive or negative); the Yukawa couplings h t0 , h b0 , h τ 0 , λ 0 , κ 0 of the superpotential (1) at the scale M GU T . The parameters at the weak scale are obtained by integrating numerically the one loop renormalization group equations [2]. The Coleman-Weinberg radiative corrections to the effective potential involving top/stop, bottom/sbottom and tau/stau loops (beyond the leading log approximation) [4] are taken into account. The results for the mass matrices, after minimization of the effective potential, can be found in [1,2,[4][5][6][7] and will not be repeated here. Mixing terms are considered in the stop, sbottom and stau mass matrices.
Let us now discuss the parameter space of the (M+1)SSM with a singlino LSP which is relevant for sparticle searches at LEP2. Since here the Yukawa couplings λ and κ are quite small (λ, κ ≲ O(10⁻²)) and hence the singlet sector mixes only weakly with the non singlet sector, it is possible to understand the gross features of the parameter space with the help of analytic approximations to the integrated renormalization group equations, the minimization of the effective potential and the mass matrices [3, 5-7]. (The results in section 3, on the other hand, are based on 'exact' numerical computations for ∼10⁴ points in the parameter space.)
First, we consider the neutralino sector. In our convention, the (symmetric) neutralino mass matrix is given by [12]
$$\mathcal{M}_0 = \begin{pmatrix}
M_2 & 0 & -\frac{g_2 h_1}{\sqrt 2} & \frac{g_2 h_2}{\sqrt 2} & 0 \\
\cdot & M_1 & \frac{g_1 h_1}{\sqrt 2} & -\frac{g_1 h_2}{\sqrt 2} & 0 \\
\cdot & \cdot & 0 & -\mu & -\lambda h_2 \\
\cdot & \cdot & \cdot & 0 & -\lambda h_1 \\
\cdot & \cdot & \cdot & \cdot & 2\kappa s
\end{pmatrix} \tag{3}$$
(only the upper triangle of the symmetric matrix is displayed).
For small λ, the singlino is thus an almost pure state of mass
$$M_S \simeq 2\kappa s. \tag{4}$$
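The near decoupling of the singlino for small λ can be checked by diagonalizing the matrix (3) numerically. The sketch below uses one illustrative parameter point (all values are assumptions, not taken from the paper's scan, chosen roughly in line with Eqs. (9) and (17)) and verifies that one eigenstate is an almost pure singlino with mass close to 2κs:

```python
import numpy as np

# Illustrative parameters (assumptions): M_1/2 ~ 160 GeV would give
# M1 ~ 66 GeV, M2 ~ 131 GeV; small Yukawas and a large singlet vev
# give mu = lambda*s = 250 GeV and a singlino mass 2*kappa*s = 40 GeV.
g1, g2 = 0.36, 0.65
M1, M2 = 66.0, 131.0
lam, kappa, s = 2e-3, 1.6e-4, 1.25e5
mu = lam * s
tan_beta = 10.0
v = 174.0  # GeV
h1 = v * np.sin(np.arctan(tan_beta))
h2 = v * np.cos(np.arctan(tan_beta))

r2 = np.sqrt(2.0)
M0 = np.array([
    [M2,          0.0,         -g2*h1/r2,  g2*h2/r2,  0.0],
    [0.0,         M1,           g1*h1/r2, -g1*h2/r2,  0.0],
    [-g2*h1/r2,   g1*h1/r2,     0.0,      -mu,       -lam*h2],
    [ g2*h2/r2,  -g1*h2/r2,    -mu,        0.0,      -lam*h1],
    [0.0,         0.0,         -lam*h2,   -lam*h1,    2*kappa*s],
])

vals, vecs = np.linalg.eigh(M0)              # real symmetric matrix
i = int(np.argmax(np.abs(vecs[4, :])))       # state with largest singlino component
print(abs(vals[i]), 2 * kappa * s)           # eigenvalue very close to 2*kappa*s
print(abs(vecs[4, i]))                       # singlino purity close to 1
```

Since the singlino couples to the rest of the matrix only through the O(λ) entries, the induced mass shift and the admixture into the eigenvector are tiny for λ of O(10⁻³).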
and the vev s of the scalar singlet can be estimated from the tree level scalar potential:
$$s \simeq -\frac{A_\kappa}{4\kappa}\left(1+\sqrt{1-\frac{8\,m_S^2}{A_\kappa^2}}\right). \tag{5}$$
Since A κ and m S are only slightly renormalized between M GU T and the weak scale for small λ and κ, M S can be written in terms of the universal soft terms at M GU T :
$$M_S \simeq -\frac{A_0}{2}\left(1+\sqrt{1-\frac{8\,m_0^2}{A_0^2}}\right). \tag{6}$$
The condition for the minimum (5) to be deeper than the trivial one reads at tree level
$$|A_0| > 3\,m_0 \tag{7}$$
so that
$$\tfrac{2}{3}\,|A_0| \lesssim |M_S| \lesssim |A_0|. \tag{8}$$
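Relations (6)-(8) can be verified directly; a minimal sketch with arbitrary sample values of (A₀, m₀) obeying Eq. (7):

```python
import numpy as np

def singlino_mass(A0, m0):
    # Eq. (6): M_S ~ -(A0/2) * (1 + sqrt(1 - 8 m0^2 / A0^2))
    return -0.5 * A0 * (1.0 + np.sqrt(1.0 - 8.0 * m0**2 / A0**2))

# sample soft terms in GeV (illustrative values), all obeying Eq. (7):
for A0, m0 in [(-100.0, 0.0), (-100.0, 30.0), (150.0, 40.0)]:
    assert abs(A0) > 3.0 * m0
    MS = singlino_mass(A0, m0)
    # Eq. (8): (2/3)|A0| <~ |M_S| <~ |A0|
    assert (2.0 / 3.0) * abs(A0) <= abs(MS) <= abs(A0)
print("Eq. (8) holds for all samples")
```

At m₀ = 0 the square root equals 1 and |M_S| = |A₀|; at the Eq.-(7) boundary m₀ = |A₀|/3 the square root equals 1/3 and |M_S| = (2/3)|A₀|, reproducing the two ends of Eq. (8).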
Since the effective µ parameter turns out to be quite large,
$$\mu^2 = \lambda^2 s^2 \simeq 2.5\,M_{1/2}^2 - 0.5\,M_Z^2, \tag{9}$$
the lightest non singlet neutralino is the (nearly pure) bino B with mass M B . From the approximate analytic diagonalization of (3) for tan(β) > ∼ 5 (which, from our numerical results, is always the case for a singlino LSP), one obtains M B in terms of the universal gaugino mass M 1/2 as
$$M_B \simeq M_1 + \sin^2\theta_W\,\frac{M_Z^2\,M_1}{M_1^2-\mu^2} \simeq 0.41\,M_{1/2} - 4\cdot10^{-2}\,\frac{M_Z^2\,M_{1/2}}{M_{1/2}^2 - 0.2\,M_Z^2} \tag{10}$$
where we have used (9) and M 1 = .41M 1/2 . The second term in (10) is due to the bino/higgsino mixing. From (6) and (10) one finds that the necessary (resp. sufficient) conditions on the universal terms for a singlino LSP are
$$|A_0| \lesssim 0.6\,M_{1/2} \quad (\text{resp. } |A_0| \lesssim 0.4\,M_{1/2}). \tag{11}$$
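How (11) follows from (6), (8) and (10) can be checked numerically; in the sketch below M_1/2 = 200 GeV is an arbitrary sample value:

```python
import numpy as np

MZ = 91.19  # GeV

def bino_mass(M12):
    # Eq. (10), including the small bino/higgsino mixing term
    return 0.41 * M12 - 4e-2 * MZ**2 * M12 / (M12**2 - 0.2 * MZ**2)

def singlino_mass(A0, m0):
    # Eq. (6)
    return -0.5 * A0 * (1.0 + np.sqrt(1.0 - 8.0 * m0**2 / A0**2))

M12 = 200.0
# |A0| = 0.3 M_1/2, well inside the sufficient bound of Eq. (11):
assert abs(singlino_mass(-0.3 * M12, 0.0)) < bino_mass(M12)  # singlino LSP
# |A0| = 0.8 M_1/2 violates even the necessary bound: by Eq. (8) the
# singlino mass is at least (2/3)|A0|, already above the bino mass.
assert (2.0 / 3.0) * (0.8 * M12) > bino_mass(M12)
print("singlino LSP for small |A0|/M_1/2, bino LSP for large |A0|/M_1/2")
```

The two thresholds in Eq. (11) are just the crossings of the bounds in Eq. (8) with M_B ≈ 0.41 M_1/2: the necessary condition (2/3)|A₀| < 0.41 M_1/2 gives |A₀| ≲ 0.6 M_1/2, the sufficient one |A₀| < 0.41 M_1/2 gives |A₀| ≲ 0.4 M_1/2.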
The Yukawa couplings λ and κ of the (M+1)SSM are, in general, constrained by the ratio A 0 /M 1/2 . From the absence of a deeper unphysical minimum of the Higgs potential with h 2 = s = 0 the following inequality can be derived:
$$\kappa \lesssim 4\cdot10^{-2}\,\frac{A_0^2}{M_{1/2}^2}. \tag{12}$$
Since the singlet vev s increases with decreasing κ (cf. (5)), but the effective µ term should be of the order of the weak scale, λ and κ should be of the same order of magnitude. From our numerical analysis we find that the bare parameters A 0 , M 1/2 and λ 0 satisfy the (not very stringent) relation
$$\frac{|A_0|}{M_{1/2}} \sim 4\,\lambda_0^{\,0.5\pm0.3}\,; \tag{13}$$
thus light singlinos are generally related to small values of λ and κ. Since the mixing angle of the singlino with the non singlet sector is proportional to λ, all decay widths of sparticles into a singlino LSP are at least proportional to λ². Furthermore, λ can be extremely small; then the NLSP lifetime is very large. This phenomenon, already investigated in [7], will play an important role in the next section.

Now we turn to the slepton sector. The lightest states are the 'right handed' charged sleptons l_R and the sneutrinos ν. Since the bare scalar mass m_0 is quite small (cf. (7) and (11)), the corresponding mass terms at the weak scale are determined, from the integrated renormalization group equations, by M_1/2. Neglecting the mixing between the right handed and the left handed sleptons, and using the known numerical values of the electroweak gauge couplings appearing in the D terms, their masses are (for medium or large tan(β))
$$m_{l_R}^2 = m_E^2 - \sin^2\theta_W\,M_Z^2\cos 2\beta \simeq 0.15\,M_{1/2}^2 + 0.23\,M_Z^2, \tag{14}$$
$$m_{\nu}^2 = m_L^2 + \tfrac{1}{2}\,M_Z^2\cos 2\beta \simeq 0.52\,M_{1/2}^2 - 0.5\,M_Z^2. \tag{15}$$
The limit on the sneutrino mass obtained from the Z width, m ν > ∼ M Z /2 [14], combined with (15) gives a lower limit on M 1/2 :
$$M_{1/2} \gtrsim 100\ \mathrm{GeV}. \tag{16}$$
From (14) together with (10) it follows that the sleptons l R are heavier than the bino for M 1/2 < ∼ 320 GeV. However, this result holds only for the charged sleptons of the first two generations. For the third generation, the soft masses at low energy can be smaller than the ones given in (14) and (15) (depending on h τ ). Furthermore, the off-diagonal term in the stau mass matrix is given by h τ (µh 1 − A τ h 2 ), which is not necessarily negligible compared to the smallest diagonal term. Thus, the lightest eigenstate τ 1 will be lighter than the right handed sleptons of the first two generations l R and can well be lighter than the bino even for M 1/2 < ∼ 320 GeV (hence for sparticle masses within the reach of LEP2).
In the chargino sector, within the present region of the parameter space, the lightest eigenstate is essentially a wino of mass M 2 given in terms of M 1/2 by
$$M_2 = 0.82\,M_{1/2}. \tag{17}$$
In the Higgs sector we can again make use of the fact that the non singlet and singlet sectors are quasi decoupled. The direct search for Higgs scalars thus proceeds as in the MSSM, and the present negative results do not impose more stringent constraints on M 1/2 than (16). (For large values of λ, without singlino LSP, the Higgs phenomenology of the (M+1)SSM could, however, differ substantially from the one of the MSSM [4].)
Since the scalar Higgs quasi singlet state can possibly be produced in bino decays in the (M+1)SSM, its mass M S will be of interest. From the tree level part of the Higgs potential one finds for small Yukawa couplings
$$M_S^2 \simeq \tfrac{1}{4}\,\sqrt{A_0^2-8m_0^2}\,\left(|A_0| + \sqrt{A_0^2-8m_0^2}\right), \tag{18}$$
hence
$$M_S \lesssim \frac{|A_0|}{\sqrt 2}. \tag{19}$$
For later use we note that the coupling Higgs singlet -bino -singlino is proportional to λ 2 , thus the production of the Higgs singlet state in bino decays will only occur for λ not too small.
To summarize, the parameter space of the (M+1)SSM with a singlino LSP is characterized by the universal gaugino mass M 1/2 being the dominant soft supersymmetry breaking term. Both A 0 and, consequently, m 0 are bounded from above in terms of M 1/2 by (11) and (7), respectively. The Yukawa couplings κ and λ also have upper limits of O(10 −2 ), and are possibly tiny.
The non singlet sparticles (with sizeable production cross sections) within the reach of LEP2 are: The second lightest neutralino, essentially a bino B; the right handed sleptons l R with masses given by (14) and the lightest stau τ 1 which could be substantially lighter; sneutrinos with masses given by (15); and the lightest chargino with a mass given by (17). Note that, for a value of M 1/2 corresponding to a bino in the reach of LEP2, the bino is always lighter than these sparticles, with the possible exception of the lightest stau τ 1 .
In the next section, we will discuss the different decays of these particles, and compare the respective final states to sparticle searches at LEP2. This will allow us to find out which parameter ranges of the (M+1)SSM have been already ruled out, and which require further study.
3 Topologies for sparticle searches at LEP2
3.1 Bino decays with a singlino LSP
Sparticle searches in the (M+1)SSM with a singlino LSP differ in several respects from sparticle searches in the MSSM: First, the presence of the singlino LSP usually gives rise to additional cascades in sparticle decays. For instance, pair production of binos is usually an observable process, whereas for an equivalent MSSM (with comparable soft supersymmetry breaking terms), the bino would correspond to the LSP, and this process would be invisible. Thus, areas in the soft SUSY breaking parameter space accessible at LEP2 are larger in the (M+1)SSM than in the MSSM, provided an adapted experimental analysis is done. Second, the decay of the NLSP (the bino or the lightest stau) into the singlino LSP is always proportional to a power of λ, which can be tiny. In this case (or if the singlino LSP happens to be close in mass to the NLSP, which is feasible in the (M+1)SSM with universal soft terms in contrast to the MSSM) the NLSP to LSP transition can be rather slow, leading to macroscopically displaced vertices.
In the following we can make use of the fact that the masses of most sparticles in the (M+1)SSM with a singlino LSP depend essentially on just one parameter, the universal gaugino mass M 1/2 : For M 1/2 not too large (M 1/2 < ∼ 180 GeV) B, l R , ν and χ ± 1 can be light enough for pair production being kinematically allowed at LEP2, cf. the dependence of their mass on M 1/2 in section 2. On the other hand, for 180 GeV < ∼ M 1/2 < ∼ 220 GeV, only B pair production is kinematically feasible (with the possible exception of staus).
Since all the sparticle decays in the (M+1)SSM with a singlino LSP proceed via the decay of the bino B into the singlino S (with the possible exception of the stau τ₁, see below), we will briefly discuss the possible final states of this transition, using the results of [7]:

a) B → Sνν: This invisible process is mediated dominantly by sneutrino exchange. Since the sneutrino mass, as the mass of B, is essentially fixed by M_1/2 (cf. (15)), the associated branching ratio varies in a predictable way with M_B: it can become up to 90% for M_B ∼ 30 GeV, but decreases with M_B and is maximally 10% for M_B ≳ 65 GeV.

b) B → Sl⁺l⁻: This process is mediated dominantly by the exchange of a charged slepton in the s-channel. If the lightest stau τ₁ is considerably lighter than the sleptons of the first two generations, the percentage of taus among the charged leptons can well exceed 1/3. If τ₁ is lighter than B, it is produced on-shell, and the process becomes B → τ₁τ → Sτ⁺τ⁻. Hence we can have up to 100% taus among the charged leptons, and the branching ratio of this channel can become up to 100%.

c) B → SS: This two-body decay of the bino into the singlino plus the scalar singlet S is kinematically allowed if both are sufficiently light. (A light S is not excluded by Higgs searches at LEP1 [15,16], if its coupling to the Z is too small [4].) However, the coupling bino-singlino-singlet is proportional to λ², whereas the couplings appearing in the decays a) and b) are only of O(λ). Thus this decay can only be important for λ not too small. In [7], we found that its branching ratio can become up to 100% in a window 10⁻³ ≲ λ ≲ 10⁻²; hence, in that case, the bino's length of flight is never macroscopic. Of course, S will decay immediately into bb or τ⁺τ⁻, depending on its mass. (If the branching ratio Br(B → SS) is substantial, S is never lighter than ∼5 GeV.) If the singlet is heavy enough, its bb decay gives rise to 2 jets with B mesons, which are easily detected with b-tagging. (However, if the singlet mass is just above the bb threshold - typically, if m_Υ < M_S ≲ 15 GeV - S could decay hadronically without B mesons.) In any case, the hadronic system - or the τ⁺τ⁻ system - would have an invariant mass peaked at M_S, making this signature easy to search for.

d) B → Sγ: This branching ratio can be important if the mass difference ∆M ≡ M_B − M_S is small (≲ 5 GeV).
Further possible final states like B → Sqq via Z exchange always have branching ratios below 10% and will not be considered here.
3.2 Constraints from MSSM-like selectron searches
Let us first consider the region in the parameter space where the invisible decay a) of B dominates, which occurs for M_1/2 ≲ 140 GeV. Then, right handed selectrons e_R are light enough to be pair produced, and they decay as in the MSSM into an electron and a bino, which is invisible regardless of its lifetime. Results of searches for selectrons with MSSM-like decays have been published by Aleph [17], Delphi [18], L3 [19] and Opal [20].² Here, however, the analysis of the results differs from the situation in the MSSM in two respects:
First, for a given mass of the selectron, the mass difference m e R −M B is essentially known: for m e R = 65 GeV, e.g., we have m e R − M B ∼ 20 − 30 GeV. It turns out that for the mass differences given in the present model, the experimental efficiencies are always > ∼ 50%.
On the other hand, the branching ratio associated with the invisible decay of B is never 100%. (Even for M 1/2 < ∼ 140 GeV, B could still decay dominantly into SS, if λ happens to be in the window 10 −3 < ∼ λ < ∼ 10 −2 .) Thus, for each point in the parameter space, we have to calculate the expected number of MSSM-like events (2 electrons and missing energy) taking the corresponding branching ratio into account.
The most detailed information on the efficiencies and on the numbers of background and observed events, as a function of m_{e_R} and M_B, is given by Opal [20]. From these results, we find that points in the parameter space leading to N ≳ 10 expected events with 2 acoplanar electrons in the final state are excluded. This occurs in the region
$$M_{1/2} \lesssim 125\ \mathrm{GeV} \quad\text{or}\quad M_B \lesssim 43\ \mathrm{GeV}. \tag{20}$$
However, this region is not totally excluded by acoplanar electron searches: As mentioned above, B could still decay dominantly into SS, if λ happens to be in the window 10 −3 < ∼ λ < ∼ 10 −2 . Further MSSM-like processes associated with 2 leptons and missing energy in the final state do not lead to additional constraints on the parameter space.
3.3 Higher multiplicity final states without displaced vertices
Next, we have to take into account visible B cascade decays, leading to events with higher multiplicity. First we treat the case where all sparticle decays take place within at most 1 cm around the primary vertex, i.e. λ and ∆M not too small. (² In this paper, we use the results from the LEP2 run at √s = 181-184 GeV. For recent updates at √s = 189 GeV, see Refs. [32].) The following pair production processes have to be considered:
$$\begin{aligned}
&\text{p.1:}\quad e^+e^- \to \tilde B\,\tilde B,\\
&\text{p.2:}\quad e^+e^- \to \tilde l_R\,\tilde l_R^{\,*} \to l^+\tilde B\, l^-\tilde B,\\
&\text{p.3:}\quad e^+e^- \to \tilde\nu\,\tilde\nu^{\,*} \to \nu\tilde B\,\bar\nu\tilde B,\\
&\text{p.4:}\quad e^+e^- \to \chi_1^+\chi_1^- \to l^+\tilde\nu\, l'^-\tilde\nu^{\,*} \to l^+\nu\tilde B\, l'^-\bar\nu\tilde B.
\end{aligned} \tag{21}$$
Taking the bino decays a)-c) in sect. 3.1 into account, the possible final states are those listed in Table 1: one bino decaying invisibly through channel a) B → Sνν, the other decaying into l⁺l⁻ or bb and missing energy through channels b) or c) for p.2 and p.4 (the final states (i.2) and (i.4), i = 2, 3, 4, in Table 1). In the case of the process p.4 we have used the fact that sneutrinos are always lighter than the lightest chargino in the (M+1)SSM with a singlino LSP, so that the latter decays exclusively into an on-shell sneutrino and a charged lepton.
According to the discussion of the decay channel b) above, the charged leptons l ± in the final state can be the leptons of any generation. In the case of light staus, the percentage of taus among the charged leptons can become up to 100%. If the lightest stau τ 1 is the NLSP, p.2 and p.4 give 6 charged leptons plus missing energy in the final state. Only p.1 and p.3 lead to 4 charged leptons (taus) plus missing energy, since, in this case, the only decay channel for the bino is
$$B \to \tau\,\tau_1 \to S\,\tau^+\tau^-. $$
Thus, the final states of interest are l⁺l⁻l⁺l⁻, l⁺l⁻bb and bbbb plus missing energy. Since the b quarks can arise solely from the decay c) B → SS → Sbb, the invariant mass of a bb system would always be peaked at M_S, cf. the discussion above. However, for a given value of M_1/2, we cannot predict the different branching ratios of B (λ may or may not be in the window where the decay into SS is dominant), hence we cannot predict the ratios of the different final states associated with a given process in (21). On the other hand, for a given value of M_1/2 we know, with small errors, the masses M_B, m_{l_R}, m_ν and M_{χ₁±} and the corresponding production cross sections.

For each point in the parameter space obtained from the scanning described in the previous section, we have calculated numerically the production cross sections of the processes p.1-4, taking into account possible interference terms between s-, t- and u-channels [13], for e⁺e⁻ collisions at 183 GeV c.m. energy. In Fig. 1 we show, for each point in the parameter space, the total number of events with 4 charged fermions plus missing energy in the final state as a function of M_1/2, assuming an integrated luminosity of 55 pb⁻¹. We have already removed those points in the parameter space where B decays dominantly invisibly through channel a), and which are excluded by the negative results of selectron searches in the MSSM, see the discussion above. Moreover, we have not shown the points where B decays dominantly into channel d) B → Sγ, which will be discussed separately below.
In Fig. 1 we observe a large number of events for M 1/2 < ∼ 150 GeV, which are due to the process p.3: If kinematically allowed, its cross section is typically larger than the ones of p.1, p.2 or p.4. For M 1/2 > ∼ 150 GeV, on the other hand, the number of events is essentially given by the number of B being pair produced (p.1).
Events with 4 charged fermions plus missing energy in the final state have been searched for at LEP2. The underlying processes were assumed to be: t 1 pair production with t 1 → bl ν [18,21] and heavy neutralinos decaying via the Multi Lepton channel [22] in the MSSM; lightest neutralino pair production in models with gauge mediated supersymmetry breaking (i.e. a gravitino LSP) and a stau NLSP [24]; or any sparticle pair production process in the context of models with R parity violation [26].
Standard backgrounds with 4 charged fermions and missing energy are small; typically, after imposing appropriate cuts, the number of background events in a given channel varies from 0 to 4, with a comparable number of observed events. No excess has been observed. The given efficiencies vary roughly between 20% and 60%, depending, e.g. in R-parity violating models, on the mass of the unstable (intermediate) neutralino.
Of course we cannot apply these efficiencies to the processes listed in (21). The kinematics of these processes is often very different from the kinematics of the assumed underlying processes, and also various branching ratios into different final states would have to be considered. (In particular in the case of small mass difference ∆M the efficiencies for the processes p.1-p.4 could be quite low.)
From Fig. 1 we can only deduce which range of values for M 1/2 could be excluded. For instance, assuming a minimal efficiency of 20% for all processes listed in (21), and assuming that a total of 4 expected events is excluded, we would conclude that the total number of actual events has to be smaller than 20, implying a lower limit on M 1/2 or M B of
M 1/2 > ∼ 190 GeV or M B > ∼ 75 GeV.   (22)
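The counting behind limit (22) is simple: if at most N_excl efficiency-corrected events can be excluded and the minimal efficiency is ε, the number of actual events must stay below N_excl/ε. A minimal sketch of this arithmetic (the function name is ours):

```python
def max_actual_events(n_excluded: float, efficiency: float) -> float:
    """Upper bound on the number of produced events, given the number of
    detected events that can be excluded and a minimal detection efficiency."""
    return n_excluded / efficiency

# Numbers used in the text: 4 expected events excluded, 20% minimal efficiency.
print(max_actual_events(4, 0.20))  # -> 20.0
```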
(In Fig. 1 we have indicated this example by a horizontal line.) Events with 6 charged fermions in the final state can also appear in slepton or chargino pair production (processes p.2 and p.4, the final states (i.2) and (i.4), i = 5 . . . 9, in Table 1). However, the bino is always lighter than these sparticles (with the possible exception of the stau, see below), and the regime in the parameter space covered by B pair production (and 4 charged fermions in the final state) is always larger.
Next, we comment briefly on case d), where B decays dominantly into Sγ. Note that this branching ratio can only be important for a small mass difference ∆M = M B − M S < ∼ 5 GeV. This decay could lead to final states with just 2 isolated photons and missing energy (via p.1 and p.3) or 2 leptons plus 2 isolated photons and missing energy (via p.2 and p.4). In the first case, however, detection efficiencies are always very small due to the small mass difference ∆M [27][28][29][30]. Final states of the form l + l − γγ + / E T have been searched for in [17,23,25], where gauge mediated supersymmetry breaking (i.e. a gravitino LSP) was assumed. Again, however, the efficiencies corresponding to the assumed underlying process do not apply to the present case due to the small value of ∆M. On the other hand, if the photons are soft enough to be accepted as low energy neutral clusters in acoplanar lepton searches, the MSSM constraint on the selectron mass m e R < ∼ 80 GeV [17][18][19][20] applies, leading to a lower limit on M 1/2 (M B ) of
M 1/2 > ∼ 175 GeV or M B > ∼ 67 GeV.   (23)
Clearly this case requires a dedicated analysis depending on the various detectors.
Final states with neutral displaced vertices
Up to now, we have considered the case of a microscopic lifetime of B. For a small Yukawa coupling λ or a small ∆M, however, the length of flight of B can become large, leading to macroscopically displaced vertices [7]. Let us first remark that, in this case, the decay channel c) B → SS is impossible: If the decay length of B is large, either λ is very small and thus outside the window 10 −3 < ∼ λ < ∼ 10 −2 , or ∆M is small such that the quasi-singlet Higgs scalar S can no longer be produced on shell. Furthermore, the region of the parameter space where the invisible decay channel a) B → Sνν dominates has already been treated above, regardless of the B lifetime: In this case, selectron pair production (p.2 in (21)) looks like in the MSSM. Taking into account the dependence of this branching ratio on M 1/2 , the corresponding efficiencies and numbers of background/observed events, one finds that the region (20) can be completely excluded. (As a matter of fact, since the decay channel c) plays no role for displaced vertices, the bino always decays invisibly in this region of the parameter space.)

Therefore, the remaining decay channels for a bino with M B > ∼ 43 GeV are: b) B → Sl + l − and d) B → Sγ. In the situation of a macroscopic length of flight, the cases of a B decay inside or outside the detector have to be treated separately. If B decays inside the detector ('mesoscopic' decay length: 1 cm < ∼ l B < ∼ 3 m, where l B denotes the decay length in the lab. system), the following topologies are possible:
• The processes p.2 (charged slepton pair production) and p.4 (chargino pair production) give rise to 2 acoplanar leptons from the primary vertex plus neutral displaced clusters (lepton pairs or photons) due to the delayed B decay. Searches for events with neutral clusters have not been published up to now, due to vetoes against such clusters in order to remove the background from radiative events [17][18][19][20]. However, for small values of ∆M (mainly when B decays dominantly into Sγ) such neutral clusters could be soft enough not to be vetoed (cf. the discussion above on photons in the final state). In this case, the limit on the selectron mass in the MSSM leads to the lower limit (23) on M 1/2 (M B ).
• The process p.1 (bino pair production) leads to events with just neutral displaced vertices and no activity at the primary vertex. Since, in this case, B is the lightest visible particle of the model, this process would allow one to test a larger region in the parameter space than the processes p.2 and p.4 discussed above. The expected event rates are as in Fig. 1 for M 1/2 > ∼ 150 GeV. If ∆M is not too small, the decay products of B would be charged leptons with at least 33% taus. Clearly this topology is the most difficult one to detect (since triggers around the primary vertex will not be active 3 ), and no constraints on such processes have been published. On the other hand, for small ∆M, the photonic decay channel d) dominates. Searches have been performed within the MSSM for χ 0 2 pair production followed by a delayed χ 0 2 → χ 0 1 γ decay [28]. However, the efficiency for small mass differences is tiny and this channel cannot be used. In the region of the parameter space where this decay channel dominates, the relevant topology is 2 acoplanar electrons arising from selectron pair production, the photons being soft enough to be accepted as extra neutral clusters in this search (cf. above).
If B decays outside the detector ('macroscopic' decay length: l B >3 m), the situation in the (M+1)SSM with a singlino LSP is clearly the same as in the corresponding MSSM with B being the true LSP. In particular, the MSSM constraint on the selectron mass can be applied directly with the additional benefit that m e R − M B is known in terms of m e R . Hence, the lower limit on M 1/2 (M B ) is given by (23).
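The three regimes used above (microscopic, 'mesoscopic' 1 cm < ∼ l B < ∼ 3 m, macroscopic l B > 3 m) are governed by the lab-frame decay length l B = γβcτ B. The sketch below only illustrates this classification; the lifetime and energy values are placeholders for illustration, not predictions of the (M+1)SSM:

```python
import math

C_MM_PER_S = 2.998e11  # speed of light in mm/s

def lab_decay_length_mm(tau_s: float, e_gev: float, m_gev: float) -> float:
    """Lab-frame decay length l = gamma*beta*c*tau,
    with gamma*beta = p/m = sqrt(E^2 - m^2)/m."""
    gamma_beta = math.sqrt(e_gev**2 - m_gev**2) / m_gev
    return gamma_beta * C_MM_PER_S * tau_s

def vertex_class(l_mm: float) -> str:
    """Classify the decay vertex as in the text."""
    if l_mm < 10.0:
        return "microscopic"
    if l_mm < 3000.0:
        return "mesoscopic (displaced vertex inside the detector)"
    return "macroscopic (decay outside the detector)"

# Placeholder: a 60 GeV bino at E = 91.5 GeV (half of sqrt(s) = 183 GeV)
# with an assumed lifetime of 1e-10 s:
l_mm = lab_decay_length_mm(1e-10, 91.5, 60.0)
print(round(l_mm, 1), vertex_class(l_mm))
```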
The present constraints for the various ranges of M 1/2 (or M B ) and the various B lifetimes are summarized in Fig. 2. On the bottom horizontal line of Fig. 2, we plot M 1/2 in the range of interest, and on the top horizontal line we indicate the corresponding values of M B (with < ∼ 5 GeV accuracy). On the vertical axis, we plot the B decay length in the laboratory system. In this plane we have indicated in grey those regions (for l B > 1 cm) which are excluded by negative results from acoplanar electron searches. For l B < 1 cm, the total number of events with 4 charged fermions and missing energy in the final state exceeds 20 in the striped region.
As mentioned above, in the (M+1)SSM with a singlino LSP, the NLSP could possibly be a stau. Then, limits from MSSM stau searches can be applied. Again, if λ (or m τ 1 − M S ) is sufficiently small, the τ 1 lifetime can become large and give rise to displaced vertices. Medium or long-lived charged scalars have been searched for at LEP2 [17,24,31], and the corresponding constraints can also be applied here. However, the lower limit on stau masses does not correspond to a definite region in the (l B , M 1/2 ) plane of Fig. 2 which is or is not excluded, since even for large values of M 1/2 , m τ 1 can still be relatively small. (Of course, B pair production can still be used, where now the B always decays through the cascade B → τ 1 τ → Sτ τ . Hence, the B lifetime is always very small. If, in addition, the stau lifetime is also small, processes p.1 and p.3 in (21) give rise to the same topology as in the case of a bino NLSP: 4 charged leptons (taus) plus missing energy. As discussed before, this case is included in Fig. 1.)
Summary and outlook
We have seen that the final state topologies of the (M+1)SSM with a singlino LSP can differ considerably from the MSSM, due to the additional B → SX cascade. Since these topologies can be the first sign of sparticle production at LEP2, it is very important to identify and to look carefully for them.
In the present paper we have identified these topologies, and studied the parameter space of the model in order to check whether there are regions not excluded by negative results from MSSM-like sparticle searches, though accessible at LEP2 (i.e. with a reasonable expected number of events).
Indeed we found several such regions, and the associated topologies are listed in Table 1: First, we can have 4 charged fermions of various kinds and missing energy in the final state. Such final states have been looked for in the context of the MSSM, e.g. in stop and neutralino searches, or in models with R parity violation. However, the corresponding efficiencies within the present model are not known up to now.
In Fig. 1 we have shown the total number of events which can be expected within the present model as a function of M 1/2 (which can be translated into M B using (10)). Clearly, assuming a small but non-vanishing efficiency for the topologies of the present model, the region M 1/2 < ∼ 140 GeV, corresponding to > O(10 2 ) total events, could already be excluded from searches for 4 charged fermions. Of particular interest is, however, the region M 1/2 > ∼ 150 GeV, where only B pair production contributes to this topology; this process allows one to test the largest region in the parameter space. With the corresponding efficiencies at hand one could expect, e.g., a sensitivity to a total number N > 20 of events with 4 charged fermions plus missing energy, which would allow one to test the region up to M 1/2 < ∼ 190 GeV (or M B < ∼ 75 GeV), as indicated by the horizontal line in Fig. 1, or the striped region in Fig. 2. Note that final states with 6 charged fermions can only appear after slepton or chargino pair production (processes p.2 and p.4 in (21)). The accessible parameter space is thus smaller than the one covered by B pair production.
If the decay length of B is mesoscopic (1 cm < ∼ l B < ∼ 3 m) and B decays visibly, new topologies appear: Either two leptons at the primary vertex (from slepton or chargino pair production) plus neutral displaced clusters due to the delayed B decay, or just neutral displaced clusters from B pair production. The latter process is even more promising since it allows one to test a larger region in the parameter space, although it is certainly the most difficult to trigger on. Again, the total number of expected events, as a function of M 1/2 (or M B ), can be deduced from Fig. 1. Now, however, the estimation of the corresponding efficiencies is much more delicate. On the other hand, the decay channel c) B → SS never appears in this range of the decay length l B , and the number of possible final states is reduced. (Now, the region M 1/2 < ∼ 125 GeV can already be excluded: Here the bino nearly always decays invisibly, and the negative results from searches for acoplanar leptons plus missing energy -- associated with the process p.2 in (21) -- can be applied. This is indicated in Fig. 2 in form of the grey region for 1 cm < ∼ l B < ∼ 3 m.)
Herewith we would like to encourage searches for these unconventional topologies, in order to cover the entire parameter space of the (M+1)SSM with a singlino LSP. If no excesses are observed at LEP2, we will have to turn to larger c.m. energies at the Tevatron (Run II), the LHC or, hopefully, the NLC. Again, the (M+1)SSM with a singlino LSP predicts unconventional signals for these machines, like additional decay cascades (as compared to the MSSM) or displaced vertices. The details of these topologies and the expected event rates as a function of the parameters of the (M+1)SSM will have to be considered in the near future.

[Table 1 rows (5.1)-(9.4), flattened by extraction: final states with missing energy for the production processes p.1-p.4, with both binos decaying visibly via B → Sl + l − and/or B → SS → Sbb, Sτ + τ − ; see the Table 1 caption for details.]
Figure Captions
[Table 1 rows (1.1)-(4.4) and (8.1)-(8.4), flattened by extraction: the column headings give the decay chains of the produced sparticles into binos; the rows list the final states when at least one bino decays via B → Sνν (rows 1-4) or via the combination B → Sl + l − / B → SS → Sτ + τ − (row 8); see the Table 1 caption for details.]
(The radiative decay B → Sγ will be discussed below.) Let us first consider processes with 4 visible fermions and missing energy. The appropriate cascade decays of the binos leading to 4 charged fermions in the final state are: visible decays b) B → Sl + l − or c) B → SS → Sbb or Sτ + τ − for the 2 binos in p.1 and p.3 (the final states (i.1) and (i.3), i = 5 . . . 9, in Table 1).
Figure 1: Number of 4 charged fermion events expected from B, e R , τ 1 , χ ± 1 pair production as a function of M 1/2 .
Figure 2: Regions in the plane l B (in the laboratory system) vs. M 1/2 , which are excluded due to negative results from searches for acoplanar electrons at LEP2, are indicated in grey. (On the top horizontal line we indicate the corresponding values of M B , with < ∼ 5 GeV accuracy.) The remaining regions still have to be explored. In the striped region, for l B < 1 cm, the total number of events with 4 charged fermions and missing energy in the final state exceeds 20. The vertical dashed line indicates the kinematic limit for B pair production at LEP2 with √ s = 183 GeV.
Table 1: Visible final states after sparticle production in the case of microscopic vertices. (The B decay B → SS does not appear in the case of displaced vertices, see sect. 3.4.) The leptons ℓ + ℓ − can be leptons of any generation, including taus (which are possibly dominant). We have omitted photons from the decay B → Sγ, since these are always soft, see sect. 3.3.
3) One could use, however, an initial state radiative photon to trigger the event.
Acknowledgments

It is a pleasure to thank L. Duflot for helpful comments. Many useful discussions in the framework of the French workshop "GDR Supersymétrie" are also acknowledged.
References

[1] P. Fayet, Nucl. Phys. B 90 (1975) 104; H.P. Nilles, M. Srednicki, D. Wyler, Phys. Lett. B 120 (1983) 346; J. Ellis, J.F. Gunion, H.E. Haber, L. Roszkowski, F. Zwirner, Phys. Rev. D 39 (1989) 844; L. Durand and J.L. Lopez, Phys. Lett. B 217 (1989) 463; M. Drees, Int. J. Mod. Phys. A 4 (1989) 3635.

[2] J.P. Derendinger, C.A. Savoy, Nucl. Phys. B 237 (1984) 307.

[3] L.E. Ibañez, C. Lopez, Nucl. Phys. B 233 (1984) 511; C. Kounnas, A.B. Lahanas, D.V. Nanopoulos, M. Quirós, Nucl. Phys. B 236 (1984) 438; A. Bouquet, J. Kaplan, C.A. Savoy, Nucl. Phys. B 262 (1985) 299; P. Brax, C.A. Savoy, Nucl. Phys. B 447 (1995) 227.

[4] U. Ellwanger, Phys. Lett. B 303 (1993) 271; U. Ellwanger, M. Rausch de Traubenberg, C.A. Savoy, Z. Phys. C 67 (1995) 665; T. Elliott, S.F. King, P.L. White, Phys. Rev. D 49 (1994) 2435.

[5] U. Ellwanger, M. Rausch de Traubenberg, C.A. Savoy, Phys. Lett. B 315 (1993) 331; S.F. King, P.L. White, Phys. Rev. D 52 (1995) 4183.

[6] U. Ellwanger, M. Rausch de Traubenberg, C.A. Savoy, Nucl. Phys. B 492 (1997) 21.

[7] U. Ellwanger, C. Hugonie, Eur. Phys. J. C 5 (1998) 723.

[8] S.A. Abel, S. Sarkar, P.L. White, Nucl. Phys. B 454 (1995) 663; S.A. Abel, Nucl. Phys. B 480 (1996) 55-72; hep-ph/9603301.

[9] F. Franke, H. Fraas, Z. Phys. C 72 (1996) 309.

[10] A. Stephan, Phys. Rev. D 58 (1998); Phys. Lett. B 411 (1997) 97.

[11] A. de Gouvea, A. Friedland, H. Murayama, Phys. Rev. D 59 (1999) 095008.

[12] H.E. Haber, G.L. Kane, Phys. Rep. 117 (1985) 75.

[13] H. Baer, A. Bartl, D. Karatas, W. Majerotto, X. Tata, Int. J. Mod. Phys. A 4 (1989) 4111, and references therein.

[14] J.F. Grivaz, hep-ph/9709505 (to appear in Perspectives on Supersymmetry, World Scientific Publishing Co., Gordon L. Kane, Ed.).

[15] Aleph collab., Phys. Lett. B 313 (1993) 312; Delphi collab., Nucl. Phys. B 373 (1992) 3; L3 collab., Z. Phys. C 57 (1993) 355; Opal collab., Z. Phys. C 64 (1994) 1.

[16] Aleph collab., CERN-EP/98-145 (submitted to Phys. Lett. B); Delphi collab., DELPHI/98-95; L3 collab., Phys. Lett. B 436 (1998) 389; Opal collab., CERN-EP/98-173 (to appear in Eur. Phys. J. C).

[17] Aleph collab., Phys. Lett. B 433 (1998) 176.

[18] Delphi collab., DELPHI/98-92.

[19] Opal collab., CERN-EP/98-122 (submitted to Eur. Phys. J. C).

[20] Aleph collab., CERN-EP/98-076 (submitted to Phys. Lett. B); L3 collab., CERN-EP/98-135 (submitted to Phys. Lett. B); Opal collab., CERN-EP/98-107 (to appear in Eur. Phys. J. C).

[21] Aleph collab., ALEPH/98-071.

[22] Delphi collab., CERN-EP/98-176 (to appear in Phys. Lett. B).

[23] Delphi collab., CERN-EP/98-170 (to appear in Eur. Phys. J. C).

[24] Opal collab., OPAL-PN/332.

[25] Aleph collab., ALEPH/98-027; Delphi collab., DELPHI/98-138; Opal collab., OPAL-PN/356 and OPAL-PN/359 (submitted to Eur. Phys. J. C).

[26] Aleph collab., Phys. Lett. B 429 (1998) 201.

[27] Delphi collab., CERN-EP/98-142 (to appear in Eur. Phys. J. C).

[28] L3 collab., CERN-EP/98-150 (to appear in Phys. Lett. B).

[29] Opal collab., CERN EP/98-143 (to appear in Eur. Phys. J. C).

[30] Aleph collab., Phys. Lett. B 405 (1997) 379; Delphi collab., CERN-EP/98-171 (to appear in Phys. Lett. B); Opal collab., Phys. Lett. B 433 (1998) 195.

[31] Aleph collab., ALEPH/98-072; Delphi collab., DELPHI/98-137; Opal collab., OPAL-PN/362.
| [] |
[
"Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation",
"Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation"
] | [
"Minglun Han [email protected] \nInstitute of Automation\nChinese Academy of Sciences\n\n\nSchool of Artificial Intelligence\nUniversity of Chinese Academy of Sciences\n\n",
"Feilong Chen [email protected] \nInstitute of Automation\nChinese Academy of Sciences\n\n\nSchool of Future Technology\nUniversity of Chinese Academy of Sciences\n\n",
"Jing Shi \nInstitute of Automation\nChinese Academy of Sciences\n\n",
"Shuang Xu \nInstitute of Automation\nChinese Academy of Sciences\n\n",
"Bo Xu \nInstitute of Automation\nChinese Academy of Sciences\n\n\nSchool of Artificial Intelligence\nUniversity of Chinese Academy of Sciences\n\n\nSchool of Future Technology\nUniversity of Chinese Academy of Sciences\n\n"
] | [
"Institute of Automation\nChinese Academy of Sciences\n",
"School of Artificial Intelligence\nUniversity of Chinese Academy of Sciences\n",
"Institute of Automation\nChinese Academy of Sciences\n",
"School of Future Technology\nUniversity of Chinese Academy of Sciences\n",
"Institute of Automation\nChinese Academy of Sciences\n",
"Institute of Automation\nChinese Academy of Sciences\n",
"Institute of Automation\nChinese Academy of Sciences\n",
"School of Artificial Intelligence\nUniversity of Chinese Academy of Sciences\n",
"School of Future Technology\nUniversity of Chinese Academy of Sciences\n"
] | [] | Large-scale pre-trained language models (PLMs) have shown great potential in natural language processing tasks. Leveraging the capabilities of PLMs to enhance automatic speech recognition (ASR) systems has also emerged as a promising research direction. However, previous works may be limited by the inflexible structures of PLMs and the insufficient utilization of PLMs. To alleviate these problems, we propose the hierarchical knowledge distillation (HKD) on the continuous integrate-and-fire (CIF) based ASR models. To transfer knowledge from PLMs to the ASR models, HKD employs crossmodal knowledge distillation with contrastive loss at the acoustic level and knowledge distillation with regression loss at the linguistic level. Compared with the original CIF-based model, our method achieves 15% and 9% relative error rate reduction on the AISHELL-1 and LibriSpeech datasets, respectively. | 10.48550/arxiv.2301.13003 | [
"https://export.arxiv.org/pdf/2301.13003v2.pdf"
] | 256,389,599 | 2301.13003 | 57eb41c7b5cbffb134cdcf67e455c9c852024cbd |
Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation
Minglun Han [email protected]
Institute of Automation
Chinese Academy of Sciences
School of Artificial Intelligence
University of Chinese Academy of Sciences
Feilong Chen [email protected]
Institute of Automation
Chinese Academy of Sciences
School of Future Technology
University of Chinese Academy of Sciences
Jing Shi
Institute of Automation
Chinese Academy of Sciences
Shuang Xu
Institute of Automation
Chinese Academy of Sciences
Bo Xu
Institute of Automation
Chinese Academy of Sciences
School of Artificial Intelligence
University of Chinese Academy of Sciences
School of Future Technology
University of Chinese Academy of Sciences
Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation
Index Terms: continuous integrate-and-fire, knowledge distillation, contrastive learning, pre-trained language models
Large-scale pre-trained language models (PLMs) have shown great potential in natural language processing tasks. Leveraging the capabilities of PLMs to enhance automatic speech recognition (ASR) systems has also emerged as a promising research direction. However, previous works may be limited by the inflexible structures of PLMs and the insufficient utilization of PLMs. To alleviate these problems, we propose the hierarchical knowledge distillation (HKD) on the continuous integrate-and-fire (CIF) based ASR models. To transfer knowledge from PLMs to the ASR models, HKD employs crossmodal knowledge distillation with contrastive loss at the acoustic level and knowledge distillation with regression loss at the linguistic level. Compared with the original CIF-based model, our method achieves 15% and 9% relative error rate reduction on the AISHELL-1 and LibriSpeech datasets, respectively.
Introduction
End-to-end (E2E) models have recently made remarkable progress on automatic speech recognition (ASR) tasks. Compared with hybrid models, E2E models are optimized in a unified structure. However, the tight integration in this unified structure hinders the infusion of linguistic knowledge and limits the use of large-scale textual corpora.
Currently, there are two popular approaches widely used to leverage unpaired text for E2E ASR models: language model (LM) fusion [1][2][3][4] and re-scoring [5]. Apart from them, utilizing large-scale pre-trained language models (PLMs) to improve language modeling of ASR models [6][7][8] is also a practical approach to make use of unpaired text dataset. PLMs possess powerful language modeling abilities, and their outputs contain rich linguistic information that can improve ASR language modeling [6,9]. Therefore, employing PLMs to improve speech recognition has gradually become an important research direction. Until now, the methods used to improve ASR with PLMs can be categorized into three classes: re-scorer based method, model-based method, and knowledge distillation based method. The re-scorer based methods [10][11][12][13][14] convert PLMs into rescorers and use them to re-score the N -best lists or lattices from the first-pass decoding, while not changing the ASR model. Unlike the re-scorer based method, the model-based method and KD-based method focus on improving the ASR model itself. The model-based method refers to using PLM as part of the ASR model. For example, Huang et al. [7] fine-tune PLM as an ASR model with acoustics as cues. Yi et al. [15] use the CIF mechanism [16] to combine pre-trained acoustic and language models in a unified structure. Following [15], Zheng et al. [17] and Deng et al. [18] integrate pre-trained acoustic and language models for low-resource ASR and non-autoregressive (NAR) ASR, respectively. However, directly deploying model-based methods may be challenging due to the large size and different structures of PLMs. The KD-based methods transfer knowledge from PLMs to ASR models via knowledge distillation [19]. Futami et al. [6] distill knowledge from the BERT output distribution to the output distribution of the ASR model. 
Unlike the probability-based KD, the representation-based KD, which optimizes the similarity between teacher and student representations, has been used to transfer knowledge from PLMs to NAR ASR models [20]. Furthermore, representation-based KD has been applied to various ASR models [9,21]. However, most KD-based methods transfer the knowledge to only one of the acoustic or linguistic levels, and thus cannot fully leverage PLMs.
In this paper, to explore effective schemes of using PLMs in ASR, we propose a knowledge transfer strategy called hierarchical knowledge distillation (HKD). HKD transfers linguistic knowledge from PLMs to different levels of the ASR model, including the acoustic level. However, it is not easy to directly transfer linguistic knowledge to the acoustic level of E2E models. Unlike other E2E schemes, the continuous integrate-and-fire mechanism (CIF) [16], which generates token-level acoustic representations aligned with the text, provides a natural option for the KD at the acoustic level. Thus, we develop the HKD based on the CIF-based ASR model. Inspired by contrastive knowledge distillation (CKD) [22,23], we leverage contrastive loss to transfer the knowledge to the high-level acoustics of CIF-based ASR models. By pulling positive pairs together and pushing negative pairs apart, the contrastive loss encourages the model to capture semantic alignment, giving CKD an advantage over losses that optimize similarity when distilling knowledge across different modalities and structures. At the linguistic level, we apply regression loss to transfer knowledge from the PLM to the linguistic representations. Unlike model-based methods, HKD does not require adapting the ASR model for PLMs. Compared with other representation-based KD methods, HKD transfers the knowledge into the ASR model at multiple levels and applies contrastive distillation to effectively bridge the semantic gap between acoustics and linguistics. Experiments show that HKD achieves 15% and 9% relative error rate reduction over the original CIF-based model on AISHELL-1 and LibriSpeech, respectively. The implementation is available on GitHub 1 .
Continuous Integrate-and-Fire based ASR model
Continuous Integrate-and-Fire (CIF) [16], a soft monotonic alignment mechanism, has been successfully applied to various ASR tasks [24,25]. As shown in Figure 1, the CIF-based ASR model in this work consists of an acoustic encoder, a CIF module, and a decoder. The acoustic encoder has a convolution front-end and a conformer [26] module. The CIF module has a 1-dimensional convolution layer and a fully-connected (FC) layer. The decoder, composed of FC layers and a transformer [27] module, is an autoregressive decoder.

The input feature sequence X = (x1, ..., xt, ..., xT ) is first fed to the convolution front-end of the encoder. Then, the conformer module takes the outputs of the convolution front-end as inputs and outputs the low-level acoustic sequence H = (h1, ..., hu, ..., hU ). Note that the convolution front-end down-samples the inputs by 2, and the conformer module down-samples the inputs by 4 with two max-pooling layers. Next, H is delivered to the CIF module. In the CIF module, H is first passed through the 1-dimensional convolution layer, and then one FC layer with one output unit followed by a sigmoid activation is used to generate weights a = (a1, ..., au, ..., aU ) from the outputs of the convolution layer. After that, the CIF module accumulates the weight au along the time axis. When the accumulated weight exceeds a threshold β, a firing representing the acoustic boundary between adjacent tokens occurs. The weight of the firing time-step will be split into two parts: 1) the first part is used for the weight accumulation of the token before the boundary to make its accumulated weight reach β; 2) the second part is left for the accumulation of the token after the boundary. Further, the CIF module summarizes hu between adjacent acoustic boundaries via a weighted sum with the generated weights as weighting factors, and outputs the high-level acoustic sequence C = (c1, ..., ci, ..., cI ). Finally, the decoder takes the high-level acoustic sequence C = (c1, ..., ci, ..., cI ) as input, and gives the final linguistic sequence S = (s1, ..., si, ..., sI ).
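The firing rule described above can be sketched in a few lines of plain Python (a simplification of the CIF computation: scalar frame "features" instead of vectors, threshold β = 1, and any residual weight at the end of the utterance is dropped rather than rounded, which differs from the tail handling used in training):

```python
def cif(h, a, beta=1.0):
    """Continuous integrate-and-fire over frame features h with weights a.
    Accumulate a_u along time; when the accumulated weight reaches beta,
    split the weight of the firing frame between the token before and
    after the boundary, and emit the weighted sum as one token-level
    acoustic representation."""
    out, acc, token = [], 0.0, 0.0
    for h_u, a_u in zip(h, a):
        if acc + a_u < beta:        # no boundary inside this frame
            acc += a_u
            token += a_u * h_u
        else:                       # boundary: split the frame's weight
            first = beta - acc      # part completing the current token
            out.append(token + first * h_u)
            acc = a_u - first       # leftover starts the next token
            token = acc * h_u
    return out

# 3 frames with weights summing to 2 -> two firings
print(cif([2.0, 4.0, 8.0], [0.5, 0.75, 0.75]))  # -> [3.0, 7.0]
```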
The input feature sequence X = (x1, ..., xt, ..., xT ) is first fed to the convolution front-end of the encoder. Then, the conformer module takes the outputs of the convolution front-end as inputs and outputs low-level acoustic sequence H = (h1, ..., hu, ..., hU ). Note that the convolution front-end down-samples the inputs by 2, and the conformer module downsamples the inputs by 4 with two max-pooling layers. Next, H is delivered to the CIF module. In the CIF module, H are first passed through the 1-dimensional convolution layer, and then one FC layer with one output unit and a followed sigmoid activation is used to generate weights a = (a1, ..., au, ..., aU ) from outputs of the convolution layer. After that, The CIF module accumulates the weight au along the time axis. When the accumulated weight exceeds a threshold β, a firing representing the acoustic boundary between adjacent tokens occurs. The weight of the firing time-step will be split into two parts: 1) the first part is used for the weight accumulation of the token before the boundary to make its accumulated weight reach β; 2) the second part is left for the accumulation of the token after the boundary. Further, the CIF module summarizes hu between adjacent acoustic boundaries via weighted sum with generated weights as weighting factors, and outputs high-level acoustic sequence C = (c1, ..., ci, ..., cI ). Finally, the decoder takes the high-level acoustic sequence C = (c1, ..., ci, ..., cI ) as inputs, and gives the final linguistic sequence S = (s1, ..., si, ..., sI ). Figure 2: Hierarchical knowledge distillation. LRD denotes linguistic regression distillation, and ACD denotes acoustic contrastive distillation. P denotes projection, and N denotes L2 normalization.
Large-scale pre-trained language models
PLMs trained on large-scale datasets, such as BERT [28] and GPT-2 [29], have been widely used in natural language processing tasks. PLMs possess strong modeling power and contain rich linguistic information, which helps compensate for the limited language modeling of the ASR model. This work focuses on transferring knowledge from BERT-like PLM teachers to ASR students via knowledge distillation. Given a text sequence (T1, ..., Ti, ..., TI−1, <EOS>) with length I, the input for the PLM is ([CLS], T1, ..., Ti, ..., TI−1, [SEP]) with length (I + 1). As shown in Figure 2, to keep a strict alignment between the student ASR outputs and the teacher PLM outputs, we ignore the PLM output corresponding to [CLS].
The final output sequence of teacher PLM is denoted as E = (e1, e2, ..., ei, ..., eI ).
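The [CLS]-dropping convention can be shown with a toy snippet (the helper name is ours, and the vectors are stand-ins for PLM hidden states):

```python
def strip_cls(plm_outputs):
    """Teacher outputs for ([CLS], T1, ..., T_{I-1}, [SEP]) have length I + 1.
    Dropping the [CLS] output leaves I vectors aligned one-to-one with the
    student tokens, with the [SEP] output paired with the student's <EOS>."""
    return plm_outputs[1:]
```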
Hierarchical knowledge distillation
We propose hierarchical knowledge distillation (HKD) to transfer knowledge from the PLM to the CIF-based ASR model, as shown in Figure 2. "Hierarchical" 1) describes the bottom-up ASR hierarchy: speech input is first transformed into low-level acoustic features H, then into high-level acoustic features C, and finally into linguistic representations S; and 2) describes the behavior of the distillations, which happen simultaneously at the acoustic level C and the higher linguistic level S. Such hierarchical distillation may better utilize the PLMs to enhance different aspects of ASR. The total loss of the ASR model with HKD is the sum of 1) the ASR loss and 2) the multi-level distillation losses. The total loss is written as
L_{Total} = L_{ASR} + \lambda_{AD} \cdot L_{AD} + \lambda_{LD} \cdot L_{LD}    (1)
where L_{ASR} is the ASR loss of the CIF-based model [16], and L_{AD} and L_{LD} are the acoustic distillation (AD) loss and the linguistic distillation (LD) loss, respectively. λ denotes the corresponding loss weight.
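Eq. (1) is a plain weighted sum of the three objectives; as a minimal sketch (the function name is ours):

```python
def hkd_total_loss(l_asr, l_ad, l_ld, lam_ad, lam_ld):
    """Eq. (1): ASR loss plus weighted acoustic (AD) and linguistic (LD)
    distillation losses; the lambda weights are tuned on the dev set."""
    return l_asr + lam_ad * l_ad + lam_ld * l_ld
```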
Acoustic contrastive distillation
Considering that the CIF acoustic sequence C is strictly aligned with the text sequence during training [16], we can transfer knowledge from PLMs to these high-level acoustic representations. However, there are two potential obstacles in this distillation process: 1) modal gap: although the CIF output C is aligned with the text sequence, it is still closer to the acoustics (it lacks linguistic contextual modeling); 2) structure gap: the acoustic encoder of the CIF-based model, which uses a conformer structure and a weight accumulation mechanism, usually differs from the transformer structures of PLMs. Inspired by contrastive distillation [23], we use a contrastive loss for knowledge distillation across different modalities and structures. Compared with distillation losses that directly optimize similarity metrics, the contrastive loss forces the model to pull together positive pairs and push apart negative pairs. Thus, the model can capture the high-level semantic alignment between student and teacher, and better model semantics. More specifically, we use a contrastive loss (based on InfoNCE [30]) as the objective function for acoustic contrastive distillation (ACD). We project the original student outputs in C to match the dimension of the teacher output representations, and then L2-normalize them. We denote the projected student outputs, final (normalized) student outputs, and final teacher outputs as Ĉ = (ĉ_1, ĉ_2, ..., ĉ_i, ..., ĉ_I), C̄ = (c̄_1, c̄_2, ..., c̄_i, ..., c̄_I), and Ē = (ē_1, ē_2, ..., ē_i, ..., ē_I), respectively. The contrastive loss is defined as
L^{cont}_{AD} = -\frac{1}{N} \sum_{n=1}^{N} \frac{1}{I^n} \sum_{i=1}^{I^n} \log \frac{s(\bar{c}_i, \bar{e}_i)}{\sum_{k=1}^{K} s(\bar{c}_i, \bar{e}^{-}_{n,i,k}) + s(\bar{c}_i, \bar{e}_i)},    (2)
where s(x, y) is equal to exp(⟨x, y⟩/τ), and ⟨x, y⟩ denotes the inner product of x and y. N and I^n denote the batch size and the text length of the n-th audio sample, respectively. τ and K denote the temperature and the number of negative samples for the contrastive loss. c̄_i represents the i-th student token query of the n-th sample, ē_i represents the positive teacher token representation that matches c̄_i, and ē^{-}_{n,i,k} represents the k-th negative teacher token representation, sampled from all teacher token representations (except the positive one) in the current batch.
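A minimal numpy sketch of Eq. (2) for batch size N = 1, where the other teacher tokens of the same utterance serve as the negatives (in the paper, the K negatives are sampled across the whole batch; the function name is ours):

```python
import numpy as np

def acd_contrastive_loss(c_bar, e_bar, tau=0.02):
    """InfoNCE-style loss: rows of c_bar / e_bar are L2-normalized student /
    teacher token representations of shape (I, D). For query c_bar[i] the
    positive is e_bar[i] and every other teacher token is a negative, so the
    softmax denominator is the full row sum."""
    logits = c_bar @ e_bar.T / tau   # logits[i, j] = <c_i, e_j> / tau
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

When student and teacher tokens line up, the loss is near zero; permuting the student rows makes it large.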
Apart from the contrastive loss, we also try to conduct distillation with the mean square error (MSE) loss or the cosine embedding (COS) loss for comparison. They can be written as
L^{mse}_{AD} = \alpha_{mse} \cdot \frac{1}{N} \sum_{n=1}^{N} \frac{1}{I^n} \sum_{i=1}^{I^n} \sum_{d=1}^{D} (\hat{c}^{\,n}_{i,d} - e^{n}_{i,d})^2,    (3)

L^{cos}_{AD} = \alpha_{cos} \cdot \frac{1}{N} \sum_{n=1}^{N} \frac{1}{I^n} \sum_{i=1}^{I^n} \big(1 - \mathrm{cosine}(\hat{c}_i, e_i)\big),    (4)
where D is the dimension of the teacher representations. The coefficients α_{mse} and α_{cos} scale the losses to keep them balanced against the ASR loss.
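These two objectives can be sketched in numpy for a single utterance (N = 1; the function names are ours):

```python
import numpy as np

def mse_distill_loss(c_hat, e, alpha=0.01):
    """Eq. (3) form: alpha-scaled mean over tokens of the squared L2 distance
    between projected student outputs c_hat and teacher outputs e, both of
    shape (I, D)."""
    return float(alpha * np.mean(np.sum((c_hat - e) ** 2, axis=1)))

def cos_distill_loss(c_hat, e, alpha=10.0):
    """Eq. (4) form: alpha-scaled mean of (1 - cosine similarity) per token."""
    cos = np.sum(c_hat * e, axis=1) / (np.linalg.norm(c_hat, axis=1)
                                       * np.linalg.norm(e, axis=1))
    return float(alpha * np.mean(1.0 - cos))
```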
Linguistic regression distillation
We use a regression loss to distill knowledge from PLMs into the final linguistic representations of the CIF-based model. Using a regression loss to transfer knowledge to ASR models has been proven effective [20]; however, it was previously uncertain whether this method works for CIF-based ASR models. Specifically, we use the MSE loss as the objective function for linguistic regression distillation (LRD). Given the projected final states of the decoder Ŝ = (ŝ_1, ŝ_2, ..., ŝ_i, ..., ŝ_I) as student outputs and the PLM outputs E as teacher outputs, the MSE loss is defined as
L^{mse}_{LD} = \alpha_{mse} \cdot \frac{1}{N} \sum_{n=1}^{N} \frac{1}{I^n} \sum_{i=1}^{I^n} \sum_{d=1}^{D} (\hat{s}^{\,n}_{i,d} - e^{n}_{i,d})^2.    (5)
Experimental setup
Datasets and metrics
We evaluate our method on a Mandarin Chinese dataset, AISHELL-1 [31], and an English dataset, LibriSpeech [32]. We extract 80-channel filterbank features computed from a 25 ms window with a stride of 10 ms. For AISHELL-1, the output vocabulary contains 4230 characters and four special tokens: <PAD>, <EOS>, <BOS>, and <UNK>. For LibriSpeech, because the PLM and the English ASR model would otherwise use different output vocabularies, we directly adopt the vocabulary of the PLM for the ASR model, for convenience of distillation. We use the character error rate (CER) and word error rate (WER) to measure ASR performance for Chinese and English, respectively.
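Both metrics are normalized edit distances; a minimal sketch (the helper names are ours):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution / match
        prev = cur
    return prev[-1]

def error_rate(refs, hyps):
    """CER/WER in percent: total edit distance over total reference length.
    Pass character sequences for CER and word sequences for WER."""
    dist = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    return 100.0 * dist / sum(len(r) for r in refs)
```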
Configurations
For Chinese, the encoder of the ASR model consists of a convolution front-end and a conformer module. The convolution front-end is a 2-dimensional convolution layer with output channels 128, kernel size 3, and stride 2. During training, we apply dropout to the conformer blocks (0.1), the transformer blocks (0.2), and the convolution layer (0.2) in the CIF module. In addition, we apply SpecAugment [33] with F = 27, mF = 2, T = 50, mT = 2 and p = 1.0. We apply label smoothing with ϵ = 0.1. We train the models with the Adam optimizer [34] with β1 = 0.9, β2 = 0.98, lr = 3e-4, and a weight decay of 0.01. The weights of the cross-entropy loss, the connectionist temporal classification loss, and the quantity loss are set to 1.0, 0.5, and 1.0, respectively. The threshold β of the CIF mechanism is 1.0. The scaling strategy and tail handling in [16] are applied. The weights of the distillation losses are tuned on the dev set and chosen from {0.01, 0.1, 0.2, 0.5, 1.0}. α_{mse} and α_{cos} are set to 0.01 and 10, respectively. The PLMs used for distillation are bert-base-chinese 2 for Chinese and bert-base-uncased 3 for English. Note that all PLMs are fixed during training. During inference, we use beam search with beam size 10. For Chinese, we use a 16-layer Transformer LM (trained on the text of all training data) via shallow fusion [1], with the interpolation weight tuned on the dev set.
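The SpecAugment policy with the hyperparameters above can be sketched as follows. This is a simplified zero-fill variant with our own function name; the original policy [33] differs in details (e.g., time warping is omitted here):

```python
import random

def spec_augment(feats, F=27, mF=2, T=50, mT=2, p=1.0, rng=random):
    """Apply mF frequency masks of width f ~ U[0, F) and mT time masks of
    width t ~ U[0, min(T, p * num_frames)], zeroing the masked regions.
    feats is a list of frames, each a list of filterbank bins (in place)."""
    n_frames, n_bins = len(feats), len(feats[0])
    for _ in range(mF):                         # frequency masks
        f = rng.randrange(F)
        f0 = rng.randrange(max(1, n_bins - f))
        for frame in feats:
            for j in range(f0, min(f0 + f, n_bins)):
                frame[j] = 0.0
    for _ in range(mT):                         # time masks
        t = rng.randrange(min(T, int(p * n_frames)) + 1)
        t0 = rng.randrange(max(1, n_frames - t))
        for i in range(t0, t0 + t):
            feats[i] = [0.0] * n_bins
    return feats
```

Passing a seeded random.Random instance makes the masking reproducible.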
Results
Results on AISHELL-1
The experiments are conducted on the AISHELL-1 dataset. As depicted in Table 1, we first compare the CIF-based ASR model with models from the literature. With comparable model parameters, the CIF-based ASR model achieves performance comparable to the ESPnet conformer [35], with or without LM. Then, we use the CIF-based ASR model with LM as the baseline to evaluate the effectiveness of our method. We experiment with three settings: ACD only, LRD only, and HKD, which combines ACD and LRD. The results reported in the last three rows show that ACD, LRD, and HKD achieve about 4%, 8%, and 15% relative error rate reduction, respectively. With the help of PLMs, the CIF-based model achieves performance comparable to the strong baseline [38]. We can conclude that 1) both ACD and LRD can improve ASR performance; 2) HKD further enhances the ASR model, which demonstrates the complementary nature of LRD and ACD. Note that our method brings no additional inference cost.

We compare the contrastive loss with other losses that optimize the similarity metrics directly. We conduct experiments under two settings: a CIF-based ASR baseline, and a CIF-based ASR baseline with LRD. As shown in Table 2, the contrastive loss outperforms the MSE loss and the COS loss. This may result from the fact that the contrastive loss encourages the model to learn semantic alignments rather than strictly optimize the similarity metrics; thus, the contrastive loss performs better under cross-modal distillation settings. The weight of the MSE loss is set to 1.0. The weight of the COS loss is set to 0.2. The weight of the contrastive loss, the temperature τ, and the number of negative samples K are set to 1.0, 0.02, and 700, respectively.
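As a quick sanity check on the quoted reductions, using the with-LM test CERs from Table 1 (CIF 4.8 vs. +ACD 4.6, +LRD 4.4, +HKD 4.1):

```python
def rel_reduction(baseline, improved):
    """Relative error-rate reduction, in percent."""
    return 100.0 * (baseline - improved) / baseline

# 4.8 -> 4.6, 4.4, 4.1 gives roughly 4%, 8%, and 15%
reductions = [round(rel_reduction(4.8, x)) for x in (4.6, 4.4, 4.1)]
```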
We explore the effects of τ and K on ACD with a CIF-based ASR model and a CIF-based ASR model with LRD. Figure 3 shows the trend of CER as the temperature increases. Obviously, increasing τ leads to degradation. With τ chosen from {0.01, 0.02, 0.05}, ACD makes the CER fluctuate around 4.2% and provides stable improvements. We set τ to 0.02 to report the best results. Figure 3 also shows the trend of CER as K increases. Generally speaking, more negative samples lead to better performance. We report the best results with K = 700. When K is chosen from {100, 200, 300, 400, 500, 600}, the ASR model with HKD achieves almost comparable performance (around 4.2%). However, when we remove LRD, a severe deterioration occurs for settings with small K, which implies that LRD helps to stabilize the training of ACD. Since increasing K leads to higher training memory cost, it is necessary to choose a compromise value of K to achieve comparable performance in practical usage.

Figure 3: Effects of the temperature and the number of negative samples.
Results on LibriSpeech
As shown in Table 3, our methods consistently improve performance on dev sets and test sets, which demonstrates the efficacy of our methods on the English dataset. Using ACD and LRD simultaneously, we can achieve a relative WER reduction of 9% on both test-clean and test-other. Our methods yield a lower relative performance gain on the English dataset than on the Chinese dataset. We hypothesize that the difference in the property of output modeling units may result in this discrepancy. In contrast to the Chinese modeling units (characters), the English modeling units (especially some intra-word subwords) may lack clear acoustic boundaries. Therefore, it is difficult for English to learn a proper cross-modal alignment between acoustics and linguistics via contrastive knowledge distillation.
Conclusion
In this work, we introduce a hierarchical knowledge distillation strategy to transfer PLM knowledge to different levels of the CIF-based ASR model. Specifically, we use acoustic contrastive distillation at the acoustic level and linguistic regression distillation at the linguistic level. Compared to the CIF-based ASR baseline, our method brings a 15% relative CER reduction on AISHELL-1 and a 9% relative WER reduction on LibriSpeech. We will explore our methods with larger-scale language models in the future.
† This work was supported by the National Key Research and Development Program of China (2018AAA0100400), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA27030300), and the National Natural Science Foundation of China (62206294).
‡ Corresponding author.
Figure 1: The CIF-based ASR model.
The conformer module consists of 15 conformer blocks with d_model = 256, d_ffn = 2048, and h = 4, kernel size 15 (for the depth-wise convolution), and 2 max-pooling layers after the 5th and the 10th blocks. The CIF module contains a 1-dimensional convolution layer with output channels 256, kernel size 3 and stride 1, and an FC layer followed by the sigmoid activation. The decoder consists of several FC layers and a transformer module, which consists of 2 transformer blocks with d_model = 256, d_ffn = 2048, and h = 4. For English, the hidden size and the number of attention heads are set to 512 and 8, respectively. The number of output channels of the convolution layer in the CIF module is 512.
Table 1: Main results on AISHELL-1 (CER %). (LM marks for the literature rows were lost in extraction where the cell is blank.)

Model                  | LM | # Param | dev (%) | test (%)
-----------------------|----|---------|---------|---------
ESPnet Conformer [35]  | ✗  | 46 M    | 4.5     | 4.9
ESPnet Conformer [35]  | ✓  | 46 M    | 4.4     | 4.7
Branchformer [36]      |    | 45 M    | 4.2     | 4.4
WeNet [37]             |    | 46 M    | -       | 4.4
Icefall                |    | -       | -       | 4.3
Neural Transducer [38] |    | 90 M    | 3.8     | 4.1
CIF                    | ✗  | 47 M    | 4.5     | 4.9
 + ACD                 | ✗  | 47 M    | 4.2     | 4.7
 + LRD                 | ✗  | 47 M    | 4.0     | 4.5
 + HKD                 | ✗  | 47 M    | 3.8     | 4.2
CIF                    | ✓  | 47 M    | 4.4     | 4.8
 + ACD                 | ✓  | 47 M    | 4.2     | 4.6
 + LRD                 | ✓  | 47 M    | 4.0     | 4.4
 + HKD                 | ✓  | 47 M    | 3.8     | 4.1
Table 2: Comparison between contrastive loss and other distillation losses (CER %). AD represents acoustic distillation. MSE, COS, and CONT represent mean square error loss, cosine embedding loss, and contrastive loss, respectively.

Model | LRD | AD | AD Loss | dev / test (w/o LM) | dev / test (w/ LM)
------|-----|----|---------|---------------------|-------------------
CIF   | ✗   | ✗  | -       | 4.5 / 4.9           | 4.4 / 4.8
CIF   | ✗   | ✓  | MSE     | 4.4 / 4.9           | 4.4 / 4.8
CIF   | ✗   | ✓  | COS     | 4.5 / 4.9           | 4.4 / 4.8
CIF   | ✗   | ✓  | CONT    | 4.2 / 4.7           | 4.2 / 4.6
CIF   | ✓   | ✗  | -       | 4.0 / 4.5           | 4.0 / 4.4
CIF   | ✓   | ✓  | MSE     | 4.0 / 4.5           | 4.0 / 4.5
CIF   | ✓   | ✓  | COS     | 4.1 / 4.5           | 4.0 / 4.4
CIF   | ✓   | ✓  | CONT    | 3.8 / 4.2           | 3.8 / 4.1
Table 3: Main results on LibriSpeech (WER %).

Model  | dev-clean | dev-other | test-clean | test-other
-------|-----------|-----------|------------|-----------
CIF    | 3.0       | 7.3       | 3.3        | 7.7
 + ACD | 3.0       | 7.2       | 3.2        | 7.3
 + LRD | 2.8       | 6.9       | 3.1        | 7.1
 + HKD | 2.7       | 6.9       | 3.0        | 7.0
https://github.com/MingLunHan/CIF-HieraDist
2 https://huggingface.co/bert-base-chinese
3 https://huggingface.co/bert-base-uncased
References

[1] C. Gulcehre et al., "On using monolingual corpora in neural machine translation," arXiv preprint arXiv:1503.03535, 2015.
[2] A. Sriram et al., "Cold fusion: Training seq2seq models together with language models," in INTERSPEECH, 2018.
[3] S. Toshniwal et al., "A comparison of techniques for language model integration in encoder-decoder speech recognition," in SLT, 2018.
[4] C. Shan, C. Weng, G. Wang et al., "Component fusion: Learning replaceable language model component for end-to-end speech recognition system," in ICASSP, 2019.
[5] W. Chan et al., "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in ICASSP, 2016.
[6] H. Futami et al., "Distilling the knowledge of BERT for sequence-to-sequence ASR," in INTERSPEECH, 2020.
[7] W.-C. Huang et al., "Speech recognition by simply fine-tuning BERT," in ICASSP, 2021.
[8] F. Chen, M. Han, H. Zhao, Q. Zhang, J. Shi, S. Xu, and B. Xu, "X-LLM: Bootstrapping advanced large language models by treating multi-modalities as foreign languages," arXiv preprint arXiv:2305.04160, 2023.
[9] Y. Kubo et al., "Knowledge transfer from large-scale pretrained language models to end-to-end speech recognizers," in ICASSP, 2022.
[10] J. Shin, Y. Lee, and K. Jung, "Effective sentence scoring method using BERT for speech recognition," in ACML, 2019.
[11] J. Salazar et al., "Masked language model scoring," in ACL, 2020.
[12] S.-H. Chiu and B. Chen, "Innovative BERT-based reranking language models for speech recognition," in SLT, 2021.
[13] H. Futami et al., "ASR rescoring and confidence estimation with ELECTRA," in ASRU, 2021.
[14] L. Xu et al., "RescoreBERT: Discriminative speech recognition rescoring with BERT," in ICASSP, 2022.
[15] C. Yi et al., "Efficiently fusing pretrained acoustic and linguistic encoders for low-resource speech recognition," IEEE Signal Processing Letters, vol. 28, pp. 788-792, 2021.
[16] L. Dong et al., "CIF: Continuous integrate-and-fire for end-to-end speech recognition," in ICASSP, 2020.
[17] G. Zheng et al., "Wav-BERT: Cooperative acoustic and linguistic representation learning for low-resource speech recognition," in EMNLP (Findings), 2021.
[18] K. Deng et al., "Improving non-autoregressive end-to-end speech recognition with pre-trained acoustic and language models," in ICASSP, 2022.
[19] G. Hinton et al., "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
[20] Y. Bai et al., "Fast end-to-end speech recognition via non-autoregressive models and cross-modal knowledge transferring from BERT," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 1897-1911, 2021.
[21] K. Deng et al., "Improving CTC-based speech recognition via knowledge transferring from pre-trained language models," in ICASSP, 2022.
[22] Y. Tian et al., "Contrastive representation distillation," in ICLR, 2020.
[23] H. Fu et al., "LRC-BERT: Latent-representation contrastive knowledge distillation for natural language understanding," in AAAI, 2021.
[24] M. Han et al., "CIF-based collaborative decoding for end-to-end contextual speech recognition," in ICASSP, 2021.
[25] M. Han, L. Dong, Z. Liang et al., "Improving end-to-end contextual speech recognition with fine-grained contextual knowledge selection," in ICASSP, 2022.
[26] A. Gulati et al., "Conformer: Convolution-augmented transformer for speech recognition," in INTERSPEECH, 2020.
[27] A. Vaswani et al., "Attention is all you need," in NeurIPS, 2017.
[28] J. Devlin et al., "BERT: Pre-training of deep bidirectional transformers for language understanding," in NAACL-HLT (1), 2019.
[29] A. Radford et al., "Language models are unsupervised multitask learners," 2019.
[30] A. v. d. Oord et al., "Representation learning with contrastive predictive coding," arXiv preprint arXiv:1807.03748, 2018.
[31] H. Bu et al., "AISHELL-1: An open-source Mandarin speech corpus and a speech recognition baseline," in O-COCOSDA, 2017.
[32] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in ICASSP, 2015.
[33] D. S. Park et al., "SpecAugment: A simple data augmentation method for automatic speech recognition," in INTERSPEECH, 2019.
[34] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in ICLR, 2015.
[35] S. Watanabe et al., "ESPnet: End-to-end speech processing toolkit," in INTERSPEECH, 2018.
[36] Y. Peng et al., "Branchformer: Parallel MLP-attention architectures to capture local and global context for speech recognition and understanding," in ICML, 2022.
[37] Z. Yao et al., "WeNet: Production oriented streaming and non-streaming end-to-end speech recognition toolkit," in INTERSPEECH, 2021.
[38] J. Tian et al., "Integrating lattice-free MMI into end-to-end speech recognition," IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.
| [
"https://github.com/MingLunHan/CIF-HieraDist"
] |
Mathematical Effects of Linear Visco-elasticity in Quasi-static Biot Models

Lorena Bociu, Boris Muha, Justin T. Webster

4 February 2023 (arXiv:2208.11653; doi: 10.1016/j.jmaa.2023.127462; pdf: https://export.arxiv.org/pdf/2208.11653v2.pdf)

Keywords: poroelasticity, implicit evolution equations, strong damping, viscoelasticity

Abstract. We investigate and clarify the mathematical properties of linear poro-elastic systems in the presence of classical (linear, Kelvin-Voigt) visco-elasticity. In particular, we quantify the time-regularizing and dissipative effects of visco-elasticity in the context of the quasi-static Biot equations. The full, coupled pressure-displacement presentation of the system is utilized, as well as the framework of implicit, degenerate evolution equations, to demonstrate such effects and characterize linear poro-visco-elastic systems. We consider a simple presentation of the dynamics (with convenient boundary conditions, etc.) for clarity in exposition across several relevant parameter ranges. Clear well-posedness results are provided, with associated a priori estimates on the solutions. In addition, precise statements of admissible initial conditions in each scenario are given.
Introduction
In the past 10 years, there has been an intense growth of work in theoretical and numerical studies invoking equations of poro-elasticity [3,6,12,16,17,23,26,28,33,42,43,51] (to name just a few). While the initial development of the mathematical theory of poro-elasticity was driven by geophysical applications [4,21,38,44,49,50], some of the recent interest in this field seems due to the fact that deformable porous media models describe biological tissues; these include organs, cartilages, bones and engineered tissue scaffolds [9-11, 13, 31, 39, 40, 45]. The mechanics of biological tissues typically exhibit both elastic and visco-elastic behaviors, resulting from the combined action of elastin and collagen [36,39,40]. These effects can change over time, and the loss of tissue visco-elasticity is relevant to the study of several age-related diseases such as: glaucoma, atherosclerosis, and Alzheimer's disease [45]. Several mathematically-oriented studies have invoked or utilized visco-elastic effects in the dynamics, owing to their analytically and numerically regularizing and dissipative properties, e.g., [8,10,43]. Thus, there is both mathematically-driven and application-driven motivation to consider a comprehensive mathematical investigation of poro-visco-elastic systems. Poro-visco-elastic solids were considered by Biot himself in [5]; the seminal poro-elasticity reference [20] contains a discussion of the modeling of poro-visco-elastic solids, and we also point to the references [22,46] in this regard.
The field of visco-elastic solids is vast, but we note that for purely hyperbolic-like dynamics (bulk, plate/shell, or beam elasticity), the effects of classical, linear visco-elasticity are well-understood and fully characterized at the abstract level [2,18,30]. And, for classical Biot-type systems of porous-elastic dynamics, and the homogenized pore pressure p. The pressure equation, resulting from mass balance, reads:
ζ t − ∇ · [K∇p] = S.
(2.1)
The quantity ζ is the fluid-content, and in the standard Biot model of poro-elasticity it is given by
ζ = c 0 p + α∇ · u. (2.2)
The constant c 0 ≥ 0 represents compressibility of the constituents, and will be considered in two regimes here: c 0 = 0 (incompressible constituents) and c 0 > 0 (compressible constituents). 1 The coupling constant α > 0 is referred to as the Biot-Willis constant, and, in the case of incompressible constituents c 0 = 0, we have α = 1 [21]. The quantity K(x, t) is the permeability tensor of the porous-elastic structure. We present it generally here (as in [12]), but for the analysis below, we will take K = kI with k > 0 constant. This will provide clarity as we demonstrate the mathematical structures of poro-visco-elastic systems, and is in line with the central mathematical references for the PDE analysis of Biot's equations, [1,47,49]. The fluid source function S is permitted to depend on x and t. The momentum equation for the fluid-solid mixture is given as an elliptic (µ, λ)-Lamé system, as driven by the pressure gradient and a source F:
− µ∆u − (λ + µ)∇∇ · u + α∇p = F. (2.3)
Below, we will consider the body force F to be spatially and temporally dependent. For reference, we recall that the primal, inertial form of the elasticity equation is
ρu tt − µ∆u − (λ + µ)∇∇ · u + α∇p = F. (2.4)
It is instructive to remember that the Biot dynamics begin here, and then take ρu tt ≈ 0 to obtain the standard quasi-static equations of poro-elasticity [20,49].
2.1 Inclusion of Visco-elasticity: δ 1 > 0
In the most straight-forward manner, the incorporation of visco-elasticity may be achieved through the momentum equation. We shall refer to this here as full, linear visco-elasticity, and follow the Kelvin-Voigt approach of including strong (or "structural") damping [2,18,19,30]. This entails adding a strain rate term to the global stress, including a "strength" coefficient, δ 1 ≥ 0, which captures the viscous structural effects. Denoting the symmetric gradient (linearized strain) as ε and the standard linear elastic stress as σ, we have

ε(v) = 1/2 [∇v + ∇v T ], σ(v) = 2µε(v) + λ(∇ · v)I. (2.5)

This yields the structural term

− div σ(u + δ 1 u t ) = −[µ∆ + (λ + µ)∇div](u + δ 1 u t ) (2.6)

in the system, where δ 1 > 0 indicates the presence of visco-elasticity (i.e., δ 1 = 0 represents the standard, elastic Biot dynamics). We note that the inclusion of this term is referred to as visco-elasticity since, with δ 1 > 0, the full inertial Biot dynamics in (2.4) would constitute a linear, visco-elastic system of elasticity (a strongly damped structural equation). This straight-forward inclusion of viscous effects in the solid can be obtained as a limiting case of the poro-visco-elastic modelling in [20].
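The vector-calculus identity behind (2.6), div σ(v) = µ∆v + (λ + µ)∇(∇ · v), can be checked mechanically. The following standalone sketch (not part of the paper's analysis; the field v and the values of µ, λ are arbitrary choices) implements exact partial derivatives on bivariate polynomials and compares both sides componentwise:

```python
# Verify div sigma(v) = mu*Lap(v) + (lam+mu)*grad(div v) for a polynomial field v,
# where sigma(v) = 2*mu*eps(v) + lam*(div v)*I and eps is the symmetric gradient.
# Polynomials in (x, y) are dicts {(i, j): coeff} representing sum c * x^i * y^j.

def dx(p):
    return {(i - 1, j): c * i for (i, j), c in p.items() if i > 0}

def dy(p):
    return {(i, j - 1): c * j for (i, j), c in p.items() if j > 0}

def add(*ps):
    out = {}
    for p in ps:
        for k, c in p.items():
            out[k] = out.get(k, 0.0) + c
    return {k: c for k, c in out.items() if c != 0.0}

def scale(p, s):
    return {k: s * c for k, c in p.items()}

mu, lam = 2.0, 3.0
# Arbitrary smooth (polynomial) displacement field v = (x^2*y, x*y^2).
v1, v2 = {(2, 1): 1.0}, {(1, 2): 1.0}

# Linearized strain eps(v) and stress sigma(v).
e11, e22 = dx(v1), dy(v2)
e12 = scale(add(dy(v1), dx(v2)), 0.5)
tr = add(e11, e22)
s11 = add(scale(e11, 2 * mu), scale(tr, lam))
s22 = add(scale(e22, 2 * mu), scale(tr, lam))
s12 = scale(e12, 2 * mu)

# Left side: div sigma(v), row by row.
lhs1 = add(dx(s11), dy(s12))
lhs2 = add(dx(s12), dy(s22))

# Right side: mu*Laplacian(v) + (lam+mu)*grad(div v).
div_v = add(dx(v1), dy(v2))
rhs1 = add(scale(add(dx(dx(v1)), dy(dy(v1))), mu), scale(dx(div_v), lam + mu))
rhs2 = add(scale(add(dx(dx(v2)), dy(dy(v2))), mu), scale(dy(div_v), lam + mu))

assert lhs1 == rhs1 and lhs2 == rhs2
```

The same check passes for any polynomial field, reflecting that the identity is exact for smooth v.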
Remark 2.1. There are many ways to incorporate visco-elastic effects into the modeling of poro-elastic systems. See, for example, [7], where a visco-elastic strain is considered in the case of compressible constituents; the other components of the system are also updated there, including the formula for the fluid content. In general, the field of visco-elasticity is broad, and we do not claim to be exhaustive. Here, our focus is on the specific case of linear, quasi-static Biot in the presence of linear, Kelvin-Voigt structural viscosity. Other pertinent references for poro-visco-elastic systems are [22,34,35,46,53]. The aforementioned references include aspects of the homogenization theory, detailing when and how visco-elasticity can arise in solids, and, in some cases, existence results for weak solutions.
A central point here is that we permit the presence of visco-elasticity to affect the definition of the fluid content ζ. In the established reference [20], compressible constituents and viscous effects are considered, with modified fluid content. There are two choices for ζ considered here, encapsulated by δ 2 = 0 or δ 2 > 0:
ζ = c 0 p + α∇ · u + δ 2 ∇ · u t .
(2.7)
When δ 2 = 0, this represents the standard Biot definition of the fluid content, which prevalently appears in the literature for linearized poro-elastic systems with and without visco-elastic effects, e.g., [10,14,49]. A derivation of the model, obtained by a heterogeneous mixture approach, can be found in [45, Section 12.2], for instance. In the present note, we take the approach of classifying the system and its solutions in two regimes: δ 2 = 0 and δ 2 > 0, noting that the application of interest can inform which selection is made.
To conclude this section, we re-state the linear poro-visco-elastic system as it is studied herein. We fix α, λ, µ > 0, and consider the following with δ 1 > 0:
−div σ(u + δ 1 u t ) + α∇p = F
[c 0 p + α∇ · u + δ 2 ∇ · u t ] t − ∇ · [k∇p] = S, (2.8)
where we accommodate all possible regimes dependent on c 0 ≥ 0 and δ 2 ≥ 0. To (2.8) we associate the following boundary conditions:

u = 0 and k∇p · n = 0 on Γ ≡ ∂Ω, (2.9)

namely, homogeneous Dirichlet conditions for the displacement and homogeneous Neumann conditions for the pressure. At this juncture, we suggest that the natural initial conditions to be specified are those quantities appearing under the time derivatives above, namely:
δ 1 u(0) = u 0 and [c 0 p + α∇ · u + δ 2 ∇ · u t ](0) = d 0 .
It is immediately clear that the regime of interest and the parameter values affect the relative independence of these quantities. In what follows, we will precisely specify the initial quantities, their relationships, and their spatial regularities, as each depend on the regime under consideration and the type of solution sought.
Lastly, we mention a model of partial visco-elasticity, known as secondary consolidation (for certain soils, such as clays [24,38,49]), which has appeared in the literature. In this case, for λ * > 0, the displacement equation reads:
− (µ + λ)∇∇ · u − µ∆u − λ * ∇∇ · u t + α∇p = F (2.10)
We remark on secondary consolidation briefly in Section 4 at the end of this note.
Notation and Conventions
For the remainder of the paper, we consider α, µ, λ > 0 to be fixed and do not explicitly name them in theorems and subsequent discussions. The Sobolev space of order s defined on a domain D will be denoted by H s (D), with H s 0 (D) denoting the closure of test functions C ∞ 0 (D) := D(D) in the standard H s (D) norm (which we denote by || · || H s (D) or || · || s,D ). When s = 0 we may further abbreviate the notation to || · ||, denoting || · || L 2 (D) . Vector-valued spaces will be denoted as L 2 (Ω) ≡ [L 2 (Ω)] n and H s (Ω) = [H s (Ω)] n . We make use of the standard notation for the trace of a function w as γ[w], namely γ[·] is the map from H 1 (D) to H 1/2 (∂D). We will make use of the spaces L 2 (0, T ; U ) and H s (0, T ; U ), when U is a Hilbert space. Associated norms (and inner products) will be denoted with the appropriate subscript, e.g., || · || L 2 (0,T ;U ) , though we will simply denote L 2 -inner products by (·, ·) when the context is clear. For estimates on solutions, we utilize the notation A ≲ B to mean that there exists a constant c > 0, not depending on critical constants (made clear by context), so that A ≤ cB.
Operators, Spaces, and Solutions
Let V = H 1 0 (Ω) (the displacement space) and V = H 1 (Ω) ∩ L 2 0 (Ω) (the pressure space). 2 We will topologize V through the inner-product
a(p, q) = (k∇p, ∇q) L 2 (Ω) ,
which gives rise to the gradient norm on V ; by the Poincaré-Wirtinger inequality, the norm
|| · || V = ||k 1/2 ∇ · ||,
is equivalent to the standard H 1 (Ω). Through Korn's inequality, as well as Poincaré's inequality, we may topologize V through the bilinear form:
e(u, v) = (σ(u), ε(v)) L 2 (Ω) ,
with σ, ε defined as above, leading to the H 1 (Ω)-equivalent norm || · || V = e(·, ·) 1/2 .
We define two linear differential operators, associated with the bilinear forms e(·, ·) and a(·, ·), with (respectively) actions given by
Eu = −µ∆u − (λ + µ)∇∇ · u; Ap = −∇ · [k∇p] = −k∆p. (2.11)
Invoking the smoothness of the domain Ω (and standard elliptic regularity), we characterize the domain
D(E) ≡ H 2 (Ω) ∩ V,
which corresponds to homogeneous Dirichlet conditions for the elastic displacements. Similarly, we take
D(A) ≡ {p ∈ H 2 (Ω) ∩ L 2 0 (Ω) : γ[∇p · n] = 0}.
Here, E realizes an isomorphism in two contexts: D(E) → L 2 (Ω) and V → V ′ , where E −1 is interpreted respectively (i.e., through its natural coercive bilinear form e(·, ·)) [49]. Similarly, A : D(A) → L 2 (Ω) or V → V ′ is an isomorphism. See [12] for more discussion. Lastly, we define the nonlocal, zeroth order pressure-to-dilation mapping as follows:
B ≡ −∇ · E −1 ∇. (2.12)
As a mapping on L 2 (Ω), B is central to many abstract analyses of Biot dynamics [10,15,16]. We state its relevant properties as a lemma coming from [12] in the specific context of L 2 0 (Ω) and V = H 1 (Ω) ∩ L 2 0 (Ω):
Lemma 2.1. The operator B ∈ L(L 2 0 (Ω)) ∩ L(V ). B is an isomorphism on L 2 0 (Ω) and is injective on V . Finally, we have that B is a self-adjoint, monotone operator when considered on L 2 0 (Ω).
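The properties in Lemma 2.1 can be illustrated with a small 1-D finite-difference caricature (an illustration only, not the paper's proof): take E as the Dirichlet operator −(λ + 2µ)∂ xx on interior grid nodes and use the central-difference matrix C for both ∇ and ∇·. Since C T = −C and E is symmetric positive definite, the discrete B = −CE −1 C = C T E −1 C is automatically symmetric and positive semi-definite, mirroring self-adjointness and monotonicity; injectivity on V is a property of the continuous operator (central differencing has a well-known checkerboard kernel, so it is not checked here). All grid sizes and coefficient values below are arbitrary:

```python
# 1-D finite-difference caricature of B = -div E^{-1} grad on (0,1):
# E ~ -(lam+2*mu) d^2/dx^2 with Dirichlet BCs, grad/div ~ central differences.
# Lemma 2.1 predicts B is self-adjoint and monotone (here: symmetric, PSD).
import random

n, h = 12, 1.0 / 13
mu, lam = 1.0, 1.0

def zeros(r, c):
    return [[0.0] * c for _ in range(r)]

# E: tridiagonal (lam+2*mu)/h^2 * [-1, 2, -1] on interior nodes.
E = zeros(n, n)
for i in range(n):
    E[i][i] = 2.0 * (lam + 2 * mu) / h**2
    if i > 0:
        E[i][i - 1] = -(lam + 2 * mu) / h**2
    if i < n - 1:
        E[i][i + 1] = -(lam + 2 * mu) / h**2

# C: central-difference matrix (skew-symmetric), used for both grad and div.
C = zeros(n, n)
for i in range(n):
    if i < n - 1:
        C[i][i + 1] = 1.0 / (2 * h)
    if i > 0:
        C[i][i - 1] = -1.0 / (2 * h)

def solve(A, b):
    # Gaussian elimination with partial pivoting; returns x with A x = b.
    m = [row[:] + [b[i]] for i, row in enumerate(A)]
    k = len(b)
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, k):
            f = m[r][col] / m[col][col]
            for c in range(col, k + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        x[r] = (m[r][k] - sum(m[r][c] * x[c] for c in range(r + 1, k))) / m[r][r]
    return x

# Assemble B column by column: B = -C E^{-1} C.
Bmat = zeros(n, n)
for j in range(n):
    col_j = [C[i][j] for i in range(n)]        # C e_j
    w = solve(E, col_j)                        # E^{-1} C e_j
    Cw = [sum(C[i][k] * w[k] for k in range(n)) for i in range(n)]
    for i in range(n):
        Bmat[i][j] = -Cw[i]

# Symmetry: B^T = B (since C^T = -C and E is symmetric positive definite).
sym_err = max(abs(Bmat[i][j] - Bmat[j][i]) for i in range(n) for j in range(n))
assert sym_err < 1e-9

# Monotonicity: (B v, v) >= 0 for random test vectors.
random.seed(0)
for _ in range(20):
    v = [random.uniform(-1, 1) for _ in range(n)]
    Bv = [sum(Bmat[i][k] * v[k] for k in range(n)) for i in range(n)]
    assert sum(Bv[i] * v[i] for i in range(n)) >= -1e-9
```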
Finally, we conclude with a definition of weak solutions for (2.8) which will be valid in all parameter regimes, and is consistent with the abstract definition given in the Appendix for (6.1). We note that such a definition encompassing all regimes does not seem to have appeared in the literature to date. Recall that the fluid content is given by ζ ≡ c 0 p + α∇ · u + δ 2 ∇ · u t , which depends on the nonnegative parameters c 0 and δ 2 .
2 L 2 0 (Ω) ≡ {f ∈ L 2 (Ω) : ∫ Ω f = 0}, which is topologically isomorphic to L 2 (Ω)/R.

Definition 1. Let c 0 , δ 1 , δ 2 ≥ 0. We say that (u, p) ∈ L 2 (0, T ; V × V ), such that δ 1 u ∈ H 1 (0, T ; V) and ζ t ∈ L 2 (0, T ; V ′ ), is a weak solution to problem (2.8) if:

• For every pair of test functions (v, q) ∈ V × V , the following equality holds in the sense of D ′ (0, T ):

e(u, v) + δ 1 (d/dt) e(u, v) + (α∇p, v) L 2 (Ω) + (d/dt) (ζ, q) L 2 (Ω) + a(p, q) = ⟨F, v⟩ V ′ ×V + ⟨S, q⟩ V ′ ×V . (2.13)

There are many notions of "stronger" solutions to Biot-type systems in the literature (see [49], for instance). To avoid confusion with notions of strong or classical solutions coming from other references, any notion of a stronger solution will be discussed here in the sense of weak solutions with additional regularity. Depending on the regularity of the sources in given cases, we will comment on when PDEs hold in a point-wise sense.
2.4 Review of Biot Solutions: δ 1 = δ 2 = 0
We begin with a discussion of classical Biot dynamics in order to establish a baseline for comparison with our results below on poro-visco-elastic dynamics. Consider now the quasi-static Biot dynamics (in the absence of visco-elastic effects), given in operator-theoretic form by
Eu + α∇p = F ∈ H 1 (0, T ; V ′ )
[c 0 p + α∇ · u] t + Ap = S ∈ L 2 (0, T ; V ′ )
[c 0 p + α∇ · u](0) = d 0 ∈ V ′ .
(2.14)
Although there are established references (such as [48,49]) that discuss linear Biot solutions, as well as more recent papers (such as [12]), we provide here a direct discussion of Biot solutions. Namely, we present the regularity of solutions as a function of the data, and clearly state the associated a priori estimates. While Theorems 2.2 and 2.3 are not novel results, it is valuable to present them in this way to provide context with our visco-elastic results for δ 1 > 0 below; we also provide brief proof sketches for δ 1 = 0 for completeness. From the established theory for implicit, degenerate equations (discussed in the Appendix), one seeks weak solutions in the class (u, p) ∈ L 2 (0, T ; V × V ).
Formally solving the elasticity equation a.e. t as u = −αE −1 ∇p + E −1 F, and relabeling the source
S → S − ∇ · E −1 F t ≡ S,
we obtain the reduced, implicit equation:
[Bp] t + Ap = S ∈ L 2 (0, T ; V ′ ), [Bp](0) = d 0 ∈ V ′ , where B = (c 0 I + α 2 B).
(2.15)
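The parabolic character of the reduced problem (2.15) can be seen mode by mode: formally replacing A and B by positive scalars a and β turns (2.15) into the scalar ODE (c 0 + α 2 β) p ′ + a p = s(t), for which implicit Euler is unconditionally stable and reproduces the exponential decay underlying estimates like (2.17). A per-mode caricature (all parameter values arbitrary):

```python
# Per-mode caricature of [Bp]_t + A p = S from (2.15): with A -> a > 0 and
# B -> beta > 0 on a formal eigenmode, the ODE is (c0 + alpha^2*beta) p' + a p = s(t).
import math

a, beta, c0, alpha = 1.7, 0.8, 0.5, 1.1
b = c0 + alpha**2 * beta          # scalar version of B = c0*I + alpha^2*B

def implicit_euler(p0, dt, nsteps, s=lambda t: 0.0):
    # b*(p_{n+1} - p_n)/dt + a*p_{n+1} = s(t_{n+1})  =>  solve for p_{n+1}.
    p, t = p0, 0.0
    for _ in range(nsteps):
        t += dt
        p = (b * p + dt * s(t)) / (b + dt * a)
    return p

# Source-free dynamics decay like exp(-a*t/b) (parabolic behavior, cf. (2.17)).
T, p0 = 2.0, 1.0
approx = implicit_euler(p0, dt=1e-4, nsteps=int(T / 1e-4))
exact = p0 * math.exp(-a * T / b)
assert abs(approx - exact) < 1e-3

# Unconditional stability: even a very large step keeps |p| bounded by |p0|.
assert abs(implicit_euler(p0, dt=100.0, nsteps=3)) <= abs(p0)
```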
The above system can, in principle, degenerate if c 0 = 0 and B has a non-trivial kernel [12]; this will not be the case here, however. Indeed, in this work, B is invertible on L 2 0 (Ω) independent of c 0 ≥ 0.

Remark 2.2. We note that the temporal regularity of F is directly invoked in the reduction step. Namely, to consider S as a given RHS for the abstract degenerate equation, we must require that
∇ · E −1 F t ∈ L 2 (0, T ; V ′ );
this provides consistency with the original source, S. Additionally, to "solve" the elasticity equation for u (given p and F) we will require F ∈ V ′ a.e. t to obtain u ∈ V. Smoother considerations below will require additional spatial regularity for F and F t . Regularity of F t is at issue for the analysis of Biot's dynamics [12] and the analysis below.
To obtain the results below, one applies the general theory developed in [1] and [49] for Biot's dynamics, and adapted recently in [12,15].
Theorem 2.2. Let d 0 ∈ L 2 0 (Ω), F ∈ H 1 (0, T ; V ′ ), S ∈ L 2 (0, T ; V ′ )
, and c 0 ≥ 0. Then there exists a unique weak solution with (u, p) ∈ C([0, T ]; V) × L 2 (0, T ; V ) to (2.14). Moreover, any weak solution satisfies the following energy estimate:
||u|| 2 L ∞ (0,T ;V) + c 0 ||p|| 2 L ∞ (0,T ;L 2 (Ω)) + ||p|| 2 L 2 (0,T ;V ) ≲ ||d 0 || 2 L 2 (Ω) + ||S|| 2 L 2 (0,T ;V ′ ) + C(T )||F|| 2 H 1 (0,T ;V ′ ) . (2.16)
In all cases (c 0 ≥ 0) the dynamics are parabolic in the sense that if S ≡ 0 and F ≡ 0, we have:
||Ap|| L ∞ (0,T ;L 2 (Ω)) + ||Eu|| L ∞ (0,T ;L 2 (Ω)) + ||∇ · [Eu]|| L ∞ (0,T ;L 2 (Ω)) ≲ C(T )||d 0 || L 2 (Ω) , (2.17)
to which elliptic regularity for A and E can then be applied (as in [49]).
Proof Sketch of Theorem 2.2. The theorem above can be obtained in two steps: First, weak solutions can be constructed directly (for instance through Galerkin method or by the theory in the Appendix, e.g., [12,15,16,49].) The particular constructed solution will satisfy the energy identity [12,48]. Then, using a classical argument involving the test function t p ds in the reduced pressure equation (2.15) (see [48, pp.116-117]), one concludes that weak solutions are unique.
We mention that the issue of uniqueness is much more subtle in the case of time-dependent coefficients. See the detailed discussion in [12].

Remark 2.3. Above, we have the ability to specify only the quantity d 0 ∈ L 2 0 (Ω), rather than a pair (u 0 , d 0 ) or (p 0 , d 0 ) with c 0 p 0 + α∇ · u 0 = d 0 . Indeed, given d 0 ∈ L 2 0 (Ω) in this framework and recalling that

Bp(0) = [(c 0 I + α 2 B)p](0) = c 0 p 0 + α∇ · u 0 ,

we have that

d 0 ∈ L 2 0 (Ω) =⇒ Bp(0) ∈ L 2 0 (Ω) =⇒ p(0) ∈ L 2 0 (Ω) =⇒ Eu(0) ∈ V ′ =⇒ u(0) ∈ V.
In the case when c 0 = 0 and the operator B is not invertible on a chosen state space (e.g., L 2 (Ω)), the issue can be more subtle. (See [12] for more discussion, as well as the original papers [1,49].) The equivalence of specifying p 0 and d 0 will not necessarily be available when δ 1 > 0 and an additional time derivative is present in the equations.
We now briefly describe the notion of a smooth solution for the classical Biot dynamics above, when the data are smooth. These results can be obtained through elliptic regularity for E and A on L 2 (Ω), and formal a priori estimates via the weak form, or via the implicit semigroup formulation as applied to Biot's dynamics in [49, Theorems 3.1 and 4.1]. We first invoke the properties of B in the context of (2.14) to obtain the chain:
d 0 ∈ V =⇒ Bp(0) ∈ V =⇒ p(0) ∈ V =⇒ Eu(0) ∈ L 2 (Ω) =⇒ u(0) ∈ D(E).
Via the standard methodology for parabolic dynamics, choosing stronger initial data yields additional regularity.
Theorem 2.3. If d 0 ∈ V , with S ∈ H 1 (0, T ; V ′ ) and F ∈ H 2 (0, T ; V ′ ),
there exists a unique weak solution with the regularity:
p ∈ H 1 (0, T ; L 2 0 (Ω)) ∩ L ∞ (0, T ; V ) and u ∈ H 1 (0, T ; V).
If, in addition, S ∈ L 2 (0, T ; L 2 (Ω)) and F ∈ H 1 (0, T ; L 2 (Ω)), the above solution also satisfies (2.14) a.e. t and a.e. x, and we obtain the additional regularity:
u ∈ L 2 (0, T ; D(E)), p ∈ L ∞ (0, T ; D(A)).
Lastly, if we also assume F ∈ L ∞ (0, T ; L 2 (Ω)), then u ∈ L ∞ (0, T ; D(E)).
We note that solutions of higher regularity (for instance, taking d 0 or p 0 ∈ D(A)) can be considered; however, one must address commutators associated to boundary conditions encoded in A and B. This can be seen, for instance, in attempting to test (2.15) with Ap t .
For completeness, we provide the formal identities which give rise to the smooth solutions above.
Proof Sketch of Theorem 2.3. Consider the reduced form of the Biot equation,
[Bp] t + Ap = S ∈ H 1 (0, T ; V ′ ), with B = [c 0 I + α 2 B]
, as defined in Section 2.3 and
S ≡ S − ∇ · E −1 F t ∈ H 1 (0, T ; V ′ ).
Consider a smooth solution (as, for instance, for finite dimensional approximants) and test with p t to obtain:
k/2 ||∇p(T )|| 2 + ∫ 0 T (Bp t , p t ) dt = k/2 ||∇p(0)|| 2 + ⟨ S(T ), p(T )⟩ V ′ ×V (2.18)
− ⟨ S(0), p(0)⟩ V ′ ×V − ∫ 0 T ⟨ S t , p⟩ V ′ ×V dt.
Alternatively, if we assume that S ∈ L 2 (0, T ; L 2 0 (Ω)) (which follows from the additional assumptions above), then the identity is similar:
k/2 ||∇p(T )|| 2 + ∫ 0 T (Bp t , p t ) dt = k/2 ||∇p(0)|| 2 + ∫ 0 T ( S, p t ) dt. (2.19)
In both situations, the assumed regularity of the data is sufficient to estimate the RHS and obtain an estimate on p.
With regularity of the pressure p in hand, we consider the full system in (2.14) and formally differentiate the elasticity equation (2.14) 1 . This yields:
Eu t + α∇p t = F t ∈ L 2 (0, T ; V ′ )
[c 0 p + α∇ · u] t + Ap = S ∈ L 2 (0, T ; L 2 0 (Ω)) or H 1 (0, T ; V ′ ) (2.20)
We can test the first equation by u t and the second by p t and add to obtain the identity:
e(u t , u t ) + c 0 ||p t || 2 + k/2 (d/dt)||∇p|| 2 = ⟨F t , u t ⟩ V ′ ×V + (S, p t ) L 2 (Ω) , (2.21)
where we have assumed the case S ∈ L 2 (0, T ; L 2 0 (Ω)); the appropriate modifications for the other case are clear, as in (2.18) and (2.19) above. The RHS can be estimated with the assumed regularities of F t and S. The additional regularities stated in the theorem are then read off from the individual equations in (2.14).
2.5 Visco-elastic Cases of Interest
In considering full, linear poro-visco-elasticity, we will take δ 1 > 0, and retain the parameter to track terms which depend on it. We consider the independent cases:
• compressible constituents c 0 > 0 and incompressible constituents c 0 = 0;
• standard fluid content δ 2 = 0 and adjusted fluid content δ 2 > 0.
This yields four cases of interest for:
Eu + δ 1 Eu t + α∇p = F ∈ H 1 (0, T ; V ′ )
[c 0 p + α∇ · u + δ 2 ∇ · u t ] t + Ap = S ∈ H 1 (0, T ; V ′ ). (2.22)

In the analysis in the sequel, we will often require additional temporal regularity on the volumetric source S and additional regularity for F beyond what is specified above. It is natural, and analogous to (2.14), to take initial conditions of the form
[c 0 p + α∇ · u + δ 2 ∇ · u t ](0) = d 0 ∈ V ′ , and δ 1 E(u(0)) ∈ V ′ .
(2.23)
However, we will discuss initial conditions more precisely on a case by case basis. In fact, a main feature of our subsequent analysis is in addressing this point. Which initial quantities can be specified depends on the specific parameter regime (δ 1 , δ 2 , c 0 ≥ 0), of course being mindful of possible over-specification. Owing to the time-derivatives present in both equations-in contrast to Biot's traditional equations (2.14)-we are unable to circumvent the need to specify two initial quantities; however the relationship between them will be an interesting question to be addressed.
Summary of Initial Conditions:
We now provide a summary of proper specifications of initial conditions, with justifications to follow in the appropriate sections. Of course, there are questions of scaling and regularity of these conditions. Such matters will translate into the sought-after notion of solution. Though a natural quantity is the fluid content, we restrict our summary to the two primal variables (p, u), with possible initial conditions (p(0), u(0)). Of course these are possibly related through the quantity
d 0 = [c 0 p + α∇ · u + δ 2 ∇ · u t ](0).
The proper initial conditions for (2.22) are given in the table below. We take δ 1 > 0 and consider c 0 ≥ 0, δ 2 ≥ 0.
          | c 0 = 0               | c 0 > 0
δ 2 = 0   | p(0) or (p(0), u(0))  | (p(0), u(0)) or (p(0), p t (0))
δ 2 > 0   | p(0) or (p(0), u(0))  | p(0) or (p(0), u(0))
Remark 2.4. There may be physical restrictions on the permissibility of certain parameter combinations. For instance, in the application to biological tissues, when it is standard to take α = 1 and c 0 = 0 [45], the parameter δ 2 should be nullified [11,52]. This is to say, the combination δ 1 , δ 2 > 0, c 0 = 0 may not be physically relevant; however, in this mathematically-oriented work we accommodate all parameter combinations and describe the features of the resulting dynamics.
3 Poro-Visco-elastic System, Reduction, and Solutions
Section 3 constitutes the main thrust of the paper. We will now consider the linear poro-visco-elastic system, as presented in (2.22), with δ 1 > 0. We note that the boundary conditions are embedded in the operators A and E.
3.1 Outline and Discussion of Main Results
Section 3 is divided as follows: First, we consider the traditional fluid content in the model (taking δ 2 = 0) in Section 3.2 and conditioning on the values of the storage coefficient c 0 ≥ 0 in the contained subsections. Subsequently, in Section 3.3, we consider the model with modified fluid content (δ 2 > 0). In addition to analyzing the full system, we will reformulate the model abstractly to apply established results and obtain well-posedness of the dynamics in a variety of functional frameworks. In each case described in Section 3, we provide:
• a discussion of the dynamics,
• a state reduction,
• a discussion of initial conditions,
• and a well-posedness theorem with estimates.

When it is instructive, we provide corresponding estimates on solutions and describe the resulting constructions of solutions. While the abstract results and main techniques that we employ are not novel to this paper, what is novel here is (i) the abstract problem formulation, as well as (ii) the application of these abstract results to the linear poro-visco-elastic model. Such results on poro-visco-elastic systems have not appeared in the literature to the knowledge of the authors.
The novel results we obtain here are now briefly described.
• Theorem 3.1 utilizes the full formulation of the linear poro-visco-elastic dynamics and provides clear a priori estimates on solutions, not distinguishing between cases with the storage coefficient c 0 ≥ 0.
• Theorem 3.2 gives a well-posedness result when c 0 > 0 that is obtained through a priori estimates without the use of the semigroup framework. Theorem 3.3 obtains a well-posedness result and utilizes the semigroup framework. Different assumptions on initial conditions provide different outcomes in these aforementioned results. Subsequently, a semigroup decay result is presented in Theorem 3.4.
• A similar sequence is repeated, on different spaces, when c 0 = 0; this case is referred to as the ODE case, for reasons explained below. Theorem 3.6 provides well-posedness of solutions, Theorem 3.7 provides a detailed description of the regularity of solutions, and Theorem 3.8 gives explicit exponential decay, even in this ODE setting, obtained through direct estimates.
• In the case where δ 2 > 0 (modified fluid content), we have only one new theorem, Theorem 3.9. This is owing to the fact that the abstract structure of this problem reduces to that of the traditional elastic Biot system, to which the previous results are then applied. The novel contribution here, then, is to demonstrate how the system is reduced in this fashion.
3.2 Case 1: Traditional Fluid Content, δ 2 = 0
Take δ 2 = 0 in (2.22). We begin with a formal discussion of weak solutions for the full system, and then proceed to the abstract system reduction. After these discussions, we proceed to rigorous statements concerning solutions.
3.2.1 Motivating Discussion
Let us begin by describing the energy identity for the full dynamics, before any system reduction is made.
Recall the full system dynamics under consideration:

Eu + δ 1 Eu t + α∇p = F ∈ H 1 (0, T ; V ′ )
[c 0 p + α∇ · u] t + Ap = S ∈ H 1 (0, T ; V ′ ). (3.1)
From this, we will have (following from [10]) the a priori estimate on solutions:
||u|| 2 L ∞ (0,T ;V) + c 0 ||p|| 2 L ∞ (0,T ;L 2 (Ω)) + δ 1 ||u t || 2 L 2 (0,T ;V) + ||p|| 2 L 2 (0,T ;V ) ≲ c 0 ||p 0 || 2 + ||u 0 || 2 V + ||S|| 2 L 2 (0,T ;V ′ ) + ||F|| 2 L 2 (0,T ;L 2 (Ω)) . (3.2)
Several comments are in order:
• We need not invoke any additional regularity of the sources S or F beyond those appearing in (3.1) and (3.2); we will invoke additional regularity later.
• Even in the estimate above, one can replace the norm on F as follows: ||F|| 2 L 2 (0,T ;L 2 (Ω)) → ||F|| 2 H 1 (0,T ;V ′ ) . In this case, the corresponding constant on the RHS becomes dependent on time, i.e., LHS ≤ C(T ) · RHS in (3.2).
• Note, from the RHS, that d 0 = [c 0 p + α∇ · u](0) does not explicitly appear; on the other hand, any two of the three quantities u(0) = u 0 , p(0) = p 0 , d 0 may be specified, with the third obtained immediately thereafter.

Now, we proceed to obtain an abstract reduction of (3.1), in line with what was done for (2.15) for the purely elastic dynamics (δ 1 = δ 2 = 0). This is the central insight in the analysis of these poro-visco-elastic dynamics.
From the displacement equation (3.1) 1 , we have:
δ 1 u t + u = E −1 F − αE −1 (∇p). (3.3)
This equation can be explicitly solved for u as an ODE in t for a.e. x (see Lemma 3.5 below). We may then differentiate the pressure equation in time:
c 0 p tt + α∇ · u tt + Ap t = S t ∈ L 2 (0, T ; V ′ ).
We then rewrite ∇ · u tt through the time derivative of (2.22) 1 :
δ 1 ∇ · u tt = −∇ · u t + ∇ · E −1 (F t ) + αBp t = α −1 [c 0 p t + Ap] − α −1 S + ∇ · E −1 (F t ) + αBp t .
Recalling the definition B = −∇ · E −1 ∇, and taking
S = δ −1 1 [S − α∇ · E −1 (F t )] + S t ,(3.4)
we have obtained a reduced pressure equation in this case:
c 0 p tt + [A + δ −1 1 (c 0 I + α 2 B)] p t + δ −1 1 Ap = S. (3.5)
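The operator algebra leading from (3.1) to the reduced equation (3.5) can be verified mechanically in a single-Fourier-mode caricature: replace ∇ and ∇· by iξ and E by a positive scalar e (so B becomes ξ 2 /e), pick arbitrary polynomials p(t) and u(t), and define F and S so that (3.1) holds exactly; then (3.5) with the source (3.4) must hold identically. A standalone sketch (all numerical values arbitrary), using exact polynomial arithmetic over complex coefficients:

```python
# Fourier-mode check of the reduction (3.1) -> (3.5): grad, div -> i*xi,
# E -> scalar e > 0, so B = -div E^{-1} grad -> xi**2 / e.
# Functions of t are polynomials, stored as coefficient lists [c0, c1, ...].

def deriv(p):
    return [k * p[k] for k in range(1, len(p))]

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0) for k in range(n)]

def pscale(p, s):
    return [s * c for c in p]

def peval(p, t):
    return sum(c * t**k for k, c in enumerate(p))

xi, e, a, d1, c0, al = 1.3, 2.0, 1.7, 0.6, 0.9, 1.1
g = dv = 1j * xi                  # symbols for grad and div
b = xi**2 / e                     # B = -(i*xi)*(1/e)*(i*xi)

# Arbitrary smooth "solution" mode: p(t), u(t) cubic polynomials.
p = [1.0, 2.0, -1.0, 0.5]
u = [0.5, -1.0, 3.0, 2.0]

# Define F and S so that (3.1) holds exactly in this caricature:
#   e*(u + d1*u') + al*g*p = F,   (c0*p + al*dv*u)' + a*p = S.
F = padd(pscale(padd(u, pscale(deriv(u), d1)), e), pscale(p, al * g))
S = padd(deriv(padd(pscale(p, c0), pscale(u, al * dv))), pscale(p, a))

# Reduced source (3.4): Stil = d1^{-1}[S - al*div*E^{-1}*F'] + S'.
Stil = padd(pscale(padd(S, pscale(deriv(F), -al * dv / e)), 1 / d1), deriv(S))

# LHS of (3.5): c0*p'' + [a + d1^{-1}*(c0 + al^2*b)]*p' + d1^{-1}*a*p.
lhs = padd(padd(pscale(deriv(deriv(p)), c0),
                pscale(deriv(p), a + (c0 + al**2 * b) / d1)),
           pscale(p, a / d1))

for t in (0.3, 1.7, -2.2):
    assert abs(peval(lhs, t) - peval(Stil, t)) < 1e-9
```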
We are now in a position to make several salient observations about linear poro-visco-elastic dynamics:
• When c 0 > 0, we observe a strongly damped hyperbolic-type equation [25,30] (and references therein). The damping operator D is given by
D ≡ A + δ −1 1 (c 0 I + α 2 B) = A + δ −1 1 B. (3.6)
It can be interpreted as D : V → V ′ or from D(A) → L 2 (Ω). This operator is nonlocal but has the property of being A-bounded in the sense of [18, p.17] (see also [30]). Roughly, D being A-bounded means that D acts as A does in its dissipative properties.
• We provide an explicit definition for weak solutions below for the reduced problem (3.5).
• There is clear singular behavior in the equation as the visco-elastic parameter δ 1 ց 0.
• To obtain the reduced equation, we have differentiated the fluid source, S. We then inherit the requirement that S ∈ H 1 (0, T ; V ′ ) to make use of the reduced formulation. Consequently, we again need F ∈ H 1 (0, T ; V ′ ) when invoking the abstract reduced form in (3.5).
• It is not obvious at this stage what the connection is between the primally specified initial quantities, u(0) and d 0 , and those standard ones for the strongly damped wave equation, p(0) and p t (0); we resolve this below, with distinct theorems for these two different frameworks.
• The formal energy identity for the reduced dynamics on t ∈ [0, T ] can be written:
1/2 c 0 ||p t (T )|| 2 + 1/(2δ 1 ) a(p(T ), p(T )) + ∫ 0 T [ a(p t , p t ) + δ −1 1 ( Bp t , p t ) L 2 (Ω) ] dt
= 1/2 c 0 ||p t (0)|| 2 + 1/(2δ 1 ) a(p(0), p(0)) + ∫ 0 T ⟨ S, p t ⟩ V ′ ×V dt. (3.7)
We shall reference this later.
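The structure of the energy identity for the reduced dynamics can be illustrated on a single mode of (3.5): with A → a > 0, B → b̄ > 0 and vanishing source, the natural energy 1/2 c 0 |p t | 2 + (a/(2δ 1 ))|p| 2 decreases exactly by the accumulated dissipation ∫ (a + b̄/δ 1 )|p t | 2 dt. A scalar sketch (hand-rolled RK4, arbitrary parameter values) checking this balance numerically:

```python
# Scalar-mode energy balance for (3.5) with zero source:
#   c0*p'' + (a + bbar/d1)*p' + (a/d1)*p = 0,
# energy En(t) = 0.5*c0*p'^2 + (a/(2*d1))*p^2, dissipation D = (a + bbar/d1)*p'^2,
# so En(T) + integral_0^T D dt = En(0)  (cf. the identity (3.7)).

a, bbar, c0, d1 = 1.7, 1.4, 0.9, 0.6
damp = a + bbar / d1

def rhs(p, q):
    # First-order system: p' = q, q' = -(damp*q + (a/d1)*p)/c0.
    return q, -(damp * q + (a / d1) * p) / c0

def energy(p, q):
    return 0.5 * c0 * q * q + (a / (2 * d1)) * p * p

p, q, t, dt, T = 1.0, 0.0, 0.0, 2e-4, 2.0
E0, dissipated = energy(p, q), 0.0
while t < T - 1e-12:
    d_old = damp * q * q
    # Classical RK4 step for (p, q).
    k1 = rhs(p, q)
    k2 = rhs(p + 0.5 * dt * k1[0], q + 0.5 * dt * k1[1])
    k3 = rhs(p + 0.5 * dt * k2[0], q + 0.5 * dt * k2[1])
    k4 = rhs(p + dt * k3[0], q + dt * k3[1])
    p += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    q += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    t += dt
    dissipated += 0.5 * dt * (d_old + damp * q * q)   # trapezoid rule

balance_error = abs(energy(p, q) + dissipated - E0)
assert balance_error < 1e-5
assert energy(p, q) < E0          # strict decay of the energy
```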
In the next subsection, we present a well-posedness and regularity theorem (in two parts) which is independent of c 0 ≥ 0. In Theorem 3.1 we approach the problem through the full formulation and provide a priori estimates and various regularity assumptions on the data. Secondly, in the subsections that follow, we will consider c 0 > 0 and c 0 = 0 separately. For the full system with c 0 > 0, we have Theorem 3.2. For the reduced formulation, we provide Theorem 3.3, which is achieved through the second-order semigroup theory, and requires specification of both (p(0), p t (0)); from the obtained solution, we infer regularity about the "natural" initial quantities. Following these theorems, we may compare the resulting regularity of the produced solutions. From the above facts, we also infer:
General Well-posedness Result: c 0 ≥ 0

Theorem 3.1. Consider c 0 ≥ 0 in (3.1). Let S ∈ L 2 (0, T ; V ′ ) and F ∈ H 1 (0, T ; V ′ ) ∪ L 2 (0, T ; L 2 (Ω)). Take u 0 ∈ V and c 0 p 0 ∈ L 2 0 (Ω).
[Part 1] Then there exists a unique weak solution to (3.1), with
• u ∈ C([0, T ]; V),
• c 0 p t ∈ L 2 (0, T ; V ′ ),
• c 0 p ∈ C([0, T ]; L 2 0 (Ω)).
[ Part 2] In addition to the previous assumptions, take p 0 ∈ V and u 0 ∈ D(E), and assume that F ∈ H 1 (0, T ; L 2 (Ω)) and S ∈ H 1 (0, T ; V ′ ) ∩ L 2 (0, T ; L 2 0 (Ω)). Then the above weak solution has the additional regularity that
p ∈ H 1 (0, T ; L 2 (Ω)) ∩ L ∞ (0, T ; V ) ∩ L 2 (0, T ; D(A)) and u t ∈ L ∞ (0, T ; V).
The resulting solutions satisfy system (3.1) a.e. x a.e. t. Furthermore, if F ∈ L 2 (0, T ; V) also, then we have Eu ∈ H 1 (0, T ; V) and u ∈ H 2 (0, T ; V) in addition.
In Part 1 of the Theorem above, there is not enough regularity to infer any spatial regularity of p t (0) from the equation. In Part 2, however, we can read-off the regularity of p t (0) ∈ V ′ (when c 0 > 0) from the pressure equation a posteriori as follows: since S(0) ∈ V ′ is defined for S ∈ H 1 (0, T ; V ′ ):
c 0 p t (0) = S(0) − Ap(0) − α∇ · u t (0) ∈ V ′ .
We also obtain an initial condition for u t (0) ∈ D(E) in Part 2 of the theorem via the elasticity equation and the regularity of p 0 , u 0 , and F. Proof of Theorem 3.1. The construction of solutions in Part 1 of the theorem follows a standard approach via the a priori estimate (the baseline energy inequality) in (3.2). (For instance, see the construction through full spatio-temporal discretization given in [10].) Uniqueness is obtained straightforwardly: for a weak solution, the function u t ∈ L 2 (0, T ; V) is a permissible test function in the elasticity equation (3.1) 1 (see the analysis in [10]), which implies that any weak solution satisfies the energy inequality (3.2). As the problem is linear, uniqueness then follows.
To obtain Part 2, we point to the requisite a priori estimate for higher regularity. This estimate can be obtained in the discrete or semi-discrete framework (i.e., on Galerkin approximants, as in [15]), and the constructed solution satisfies the resulting estimate. Uniqueness at the level of weak solutions remains. To obtain the a priori estimate, differentiate (3.1) 1 in time and test with u t , then test (3.1) 2 with p t . This produces the formal identities
δ 1 2 d dt e(u t , u t ) + e(u t , u t ) + α(∇p t , u t ) = (F t , u t ) (3.8) c 0 ||p t || 2 + α(∇ · u t , p t ) + k 2 d dt ||∇p|| 2 = (S, p t ). (3.9)
We note that, from the second identity above, we can treat the term (S, p t ) as an inner product (and absorb p t on the LHS) when c 0 > 0. However, to have a result which is independent of c 0 ≥ 0, we relax the regularity below. Doing so, adding the two identities, and integrating in time, we obtain:
\[
\frac{\delta_1}{2}e(u_t(T),u_t(T)) + \frac{k}{2}\|\nabla p(T)\|^2 + \int_0^T\big[e(u_t,u_t) + c_0\|p_t\|^2\big]\,dt = \frac{\delta_1}{2}e(u_t(0),u_t(0)) + \frac{k}{2}\|\nabla p_0\|^2 + \int_0^T(F_t,u_t)\,dt - \int_0^T\langle S_t, p\rangle_{V'\times V}\,dt + \Big[\langle S, p\rangle_{V'\times V}\Big]_{t=0}^{t=T}.
\]
The RHS and the data S, F, and p 0 have appropriate regularity to control the LHS. We note, of course, that at t = 0 we have:
δ 1 E(u t (0)) = −α∇p 0 − E(u 0 ) + F(0), which is bounded in V ′ . Since E : V → V ′ is identified via the Riesz isomorphism, this gives that ||u t (0)|| 2 V ≲ ||p 0 || 2 V + ||u 0 || 2 V + ||F|| 2 H 1 (0,T ;V ′ ) .
From the above, we obtain that solutions are bounded in the sense of u t ∈ L ∞ (0, T ; V) and p ∈ L ∞ (0, T ; V )∩ H 1 (0, T ; L 2 0 (Ω)). The remaining statements on regularity are read-off from the equations (3.1) for the data as prescribed respectively in the statement of Theorem 3.1.
Remark 3.1. An alternative way to obtain the same result for more regular solutions is to invoke the test function E(u t ) in the displacement equation (3.1) 1 . Noting that the divergence operator commutes with the Laplacian, we observe u ∈ H 1 (0, T ; D(E)) and p ∈ L ∞ (0, T ; V ) ∩ L 2 (0, T ; D(A)), having specified initial conditions p 0 ∈ V , u 0 ∈ D(E).
Compressible Constituents, c 0 > 0
The equation (3.5) above, with c 0 > 0, is hyperbolic-like with strong damping; this renders the entire system parabolic [18,30], with associated parabolic estimates. The damping operator D defined in (3.6) is A-bounded in the sense of [18]. Equations of such type can also arise in acoustics-see e.g. [25]. We elaborate below through several additional theorems. Before doing so, let us provide a clear definition of weak solutions to the reduced wave equation (3.5). Namely, when c 0 > 0, we define p to be a weak solution to
\[
c_0 p_{tt} + \big[A + \delta_1^{-1}\widetilde B\big]p_t + \delta_1^{-1}Ap = \mathcal S, \quad\text{with}\quad \mathcal S = \delta_1^{-1}\big[S - \alpha\nabla\cdot E^{-1}(F_t)\big] + S_t \quad\text{and}\quad \widetilde B = c_0 I + \alpha^2 B,
\]
to mean: Definition 2 (Weak Solution of Reduced, Strongly-Damped Wave). Let \(\mathcal S\) ∈ L 2 (0, T ; V ′ ). A weak solution of (3.5) is a function p ∈ H 1 (0, T ; V ) ∩ H 2 (0, T ; V ′ ) such that for a.e. t > 0 and all q ∈ V , one has
c 0 p tt , q V ′ ×V + Dp t , q V ′ ×V + δ −1 1 a(p, q) = S, q V ′ ×V ,(3.10)
where we interpret D = A + δ −1 1 (c 0 I + α 2 B) : V → V ′ through the properties of the operators A and B given in Section 2.3.
We point out that the requirements that S ∈ H 1 (0, T ; V ′ ) and F ∈ H 1 (0, T ; V ′ ) are sufficient to guarantee that S ∈ L 2 (0, T ; V ′ ).
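As a short justification (spelling out the mapping properties used here), one may estimate:

```latex
\|\mathcal S\|_{L^2(0,T;V')} \;\le\; \delta_1^{-1}\Big(\|S\|_{L^2(0,T;V')} + \alpha\|\nabla\cdot E^{-1}(F_t)\|_{L^2(0,T;L^2(\Omega))}\Big) + \|S_t\|_{L^2(0,T;V')},
```

since E⁻¹ : V′ → V and ∇· : V → L²₀(Ω) are bounded, so that F ∈ H¹(0,T;V′) gives ∇·E⁻¹(F_t) ∈ L²(0,T;L²₀(Ω)) ⊂ L²(0,T;V′).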
In anticipation of the use of semigroup theory applied to the abstract presentation of the dynamics in (3.5) as a wave-type equation, we now address the following question:
What regularity can be obtained from the system when specifying, as an initial state, p t (0) = p 1 ?
The next theorem addresses this through a priori estimates on the full system (3.1), before we move to the semigroup theory for (3.5) in the later Theorem 3.3.
Theorem 3.2. Let S ∈ H 1 (0, T ; L 2 0 (Ω)) and F ∈ H 1 (0, T ; V ′ ). Consider initial data of the form p(0) = p 0 ∈ V and p t (0) = p 1 ∈ L 2 0 (Ω).
Then there exists a unique, finite-energy weak solution p, i.e., a solution in the sense of Definition 2 applied to (3.5), with the identity (3.7) holding in D ′ (0, T ).
Assuming, in addition, that u 0 ∈ D(E) and F ∈ L 2 (0, T ; L 2 (Ω)), one obtains a unique weak solution to the full system (3.1) in the sense of Definition 1. The following additional statements hold:
• The unique solution u to (3.3) has u ∈ H 1 (0, T ; D(E)).
• p ∈ L ∞ (0, T ; D(A)).
• S ∈ L 2 (0, T ; V ) =⇒ Ap ∈ L 2 (0, T ; V ). Proof of Theorem 3.2. Since (3.5) has the form of the strongly damped wave equation, the construction of solutions is standard and we omit those details. Suffice it to say, solutions can be constructed via the Galerkin method, and approximants satisfy the energy identity (3.7). Therefore, one obtains a weak solution as weak/weak- * limits of the approximations. Moreover, uniqueness follows directly from the energy identity and the linearity of the system; the regularity of p t is sufficient for it to be used as a test function for an arbitrary weak solution (unlike the case of the undamped wave equation).
The displacement u is recovered by using the obtained regularity of p and solving the ODE (3.3) in time, in the space D(E) (see Lemma 3.5 below). Additional regularity of p is obtained by using equation (3.1) 2 :
Ap = S − c 0 p t − α∇ · u t ∈ L ∞ (0, T ; L 2 (Ω)).
The last statement of the theorem also follows by noticing p t ∈ L 2 (0, T ; V ) and ∇ · u t ∈ L 2 (0, T ; V ).
Here we emphasize that, in the original formulation of our main poro-visco-elastic equations (2.22), it is not necessary to specify p t (0). However, p t (0) can be formally obtained from u 0 and d 0 , as we now describe. Let us consider
d 0 = c 0 p(0) + ∇ · u(0) ∈ V.
And, correspondingly, assume that F ∈ H 1 (0, T ; L 2 (Ω)) and take Eu(0) ∈ L 2 (Ω) to be fully specified. As E : D(E) → L 2 (Ω) is an isomorphism, this provides u(0) ∈ H 2 (Ω) ∩ V, and we can back-solve from d 0 ∈ V to obtain p(0) ∈ V , providing ∇p(0) ∈ L 2 (Ω). Then, again from (2.22) 1 , we read-off E[u + δ 1 u t ](0) ∈ L 2 (Ω) and infer that u t (0) ∈ H 2 (Ω) ∩ V since
δ 1 Eu t (0) = F(0) − α∇p(0) − Eu(0) ∈ L 2 (Ω).
Finally, p t (0) can be read-off from the pressure equation, when the time trace S(0) ∈ L 2 (Ω) is well-defined:
c 0 p t (0) = S(0) − Ap(0) − α∇ · u t (0) ∈ V ′ . (3.11)
Thus, we observe that the energy methods (and standard solutions, as in Theorem 3.1, Part 2) applied to the original system give a different (lower) regularity of solutions than that obtained in Theorem 3.2. Thus, by prescribing p t (0) ∈ L 2 0 (Ω) (instead of prescribing u 0 ) and invoking the wave structure, we obtain an improved regularity result in Theorem 3.2. Now, we proceed to invoke the semigroup theory for second-order abstract equations with strong damping. Our primary semigroup reference will be [41], and for the strongly damped wave equation, [2,18]. In this case, our damper D is A-bounded. This is typically written as A ≲ D ≲ A, by which we mean: there exist appropriate constants such that
\[
(Aq, q)_{L^2(\Omega)} \;\lesssim\; (Dq, q)_{L^2(\Omega)} \;\lesssim\; (Aq, q)_{L^2(\Omega)}, \qquad \forall\, q \in D(A).
\]
We do not provide an in-depth discussion of the correspondence between the existence of a C 0 -semigroup on a given state space and the associated solutions, instead referring to [41, Chapter 4].
We now provide the framework for Theorem 3.3.
• The strongly damped wave equation in (3.5) has a first-order formulation in the state y = [p, p t ] T ∈ Y ≡ V × L 2 0 (Ω), written as
\[
\dot y = \begin{pmatrix} 0 & I \\ -[\delta_1 c_0]^{-1}A & -c_0^{-1}D \end{pmatrix} y + F, \qquad y(0) = [p_0, p_1]^T. \tag{3.12}
\]
• The operator
\[
A \equiv \begin{pmatrix} 0 & I \\ -[c_0\delta_1]^{-1}A & -c_0^{-1}D \end{pmatrix} \quad\text{is taken with domain}\quad D(A) \equiv D(A)\times D(A), \tag{3.13}
\]
with D as given in (3.6) and F ≡ [0, c 0 −1 \(\mathcal S\)] T .
The theorem below will provide the existence of a semigroup e At ∈ L (Y ). In this context, we will obtain a solution y(t) = e At y 0 to the first order formulation in two standard contexts:
• When y 0 ∈ D(A), the resulting solution lies in C 1 ((0, T ]; Y ) ∩ C 0 ([0, T ]; D(A)) and satisfies (3.12) pointwise.
• When y 0 ∈ Y , the resulting solution is C 0 ([0, T ]; Y ) and satisfies a time-integrated version of (3.12); these solutions are sometimes called generalized or mild [41], and are, in fact, C 0 ([0, T ]; Y )-limits of solutions from the previous bullet. Namely, we can approximate the data y 0 ∈ Y by y n 0 ∈ D(A), and obtain the generalized solution as a C 0 ([0, T ]; Y )-limit of the solutions emanating from the y n 0 .
• It is standard (in this linear context) to obtain weak solutions (in the sense of Definition 2) by considering smooth solutions emanating from y n 0 ∈ D(A) as approximants. Lastly, one may select the state space Z = L 2 0 (Ω)×L 2 0 (Ω) with D(A) the same as before. In this context, A again generates a C 0 -semigroup e At ∈ L (Z). This semigroup is again analytic, though it is not a contraction semigroup. In this case, with [p 0 , p 1 ] T ∈ (L 2 0 (Ω)) 2 we obtain solutions in the sense of C 0 ([0, T ]; Z).
Theorem 3.3. The operator A, with domain as in (3.13), generates an analytic C 0 -semigroup e At on Y = V × L 2 0 (Ω), as well as on Z = L 2 0 (Ω) × L 2 0 (Ω).
Proof of Theorem 3.3. The proof of this theorem follows immediately from the application of [30, pp.292-293] to the present framework to obtain the semigroup. For the case of taking the state space Z, see also [2]. In passing from the semigroup to solutions, taking into account the inhomogeneity F , we invoke [41, Chapter 4.2].
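In either case, with the inhomogeneity F present, the solution admits the standard variation-of-parameters (Duhamel) representation:

```latex
y(t) = e^{At}y_0 + \int_0^t e^{A(t-s)}F(s)\,ds,
```

interpreted in Y (or Z), with the pointwise regularity of the first bullet above when y₀ ∈ D(A) and F is sufficiently smooth in time.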
We note that S ∈ H 1 (0, T ; L 2 0 (Ω)) and F ∈ H 1 (0, T ; V ′ ) imply that the reduced source \(\mathcal S\) lies in L 2 (0, T ; L 2 0 (Ω)). Moreover, the stronger assumptions S ∈ H 2 (0, T ; L 2 0 (Ω)) and F ∈ H 2 (0, T ; V ′ ) imply \(\mathcal S\) ∈ H 1 (0, T ; L 2 0 (Ω)). We can say more, since the strongly damped wave equation is known to be exponentially stable.
Theorem 3.4 (Exponential Decay). There exist constants M k , γ > 0 so that
\[
\|e^{At}\|_{\mathcal L} \le M_0 e^{-\gamma t}, \quad\text{and, more generally,}\quad \|A^k e^{At}\|_{\mathcal L} \le M_k\, t^{-k} e^{-\gamma t}, \qquad t>0,\ k\in\mathbb N. \tag{3.14}
\]
Proof of Theorem 3.4. As the above semigroup is analytic, and the requisite spectral criteria are satisfied by the operator A, exponential decay is inferred immediately from the [2, Theorem 1.1(b), pp.20-21].
In the above, we have constructed solutions in the variable p via the semigroup approach. We now describe how to pass from the variable p to u via the ODE in (3.1) 1 . This observation is central to subsequent sections: when a given pressure function p is obtained in a regularity class for which the ODE
δ 1 u t + u = E −1 F − αE −1 (∇p)
can be readily interpreted, we obtain a mapping ∇p → u. When one has decay estimates as above in Theorem 3.4 these can be pushed from p to u in the solution to the full system (3.1).
Lemma 3.5. Consider the ODE in (3.3). Letting
\[
Q = \delta_1^{-1}\big[-\alpha E^{-1}(\nabla p) + E^{-1}(F)\big],
\]
we have u t + δ 1 −1 u = Q, which can be solved as:
u(x, t) = e −t/δ1 u(x, 0) + t 0 e [τ −t]/δ1 Q(x, τ )dτ. (3.15)
From Lemma 3.5, one can pass the decay in Theorems 3.3 and 3.4 (as well as Theorem 3.2) from the variables p to u (and u t ). We provide an example for the state space norm below.
Take F ≡ 0 and S = 0. We will obtain a decay result for the displacement u through the ODE. From (3.14) we have that
y(t) Y ≤ M 0 e −γt y 0 Y =⇒ ∇p(t) L 2 (Ω) ≤ M 0 e −γt y 0 Y =⇒ E −1 (∇p(t)) D(E) ≤ M 0 e −γt y 0 Y .
(3.16) Using (3.15) we obtain
\[
\|u(t)\|_{D(E)} \le e^{-t/\delta_1}\|u_0\|_{D(E)} + \frac{M_0}{1-\gamma\delta_1}\big(e^{-\gamma t}-e^{-t/\delta_1}\big)\|y_0\|_Y.
\]
Of course, the estimate above can be readily adjusted to accommodate the space in which u 0 is specified.
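As a sanity check on the computation behind the estimate above, the following scalar sketch (with hypothetical sample values for δ₁, γ, M₀, not taken from the text) compares the convolution integral in (3.15), for a forcing bounded by M₀e^{−γτ}, against its closed form; note the factor δ₁ produced by the integration, which is absorbed into the constant in the displayed estimate.

```python
import math

# Scalar analogue of (3.15): u' = -u/delta1 + q(t), |q(t)| <= M0*exp(-gamma*t).
# Closed form of the convolution:
#   int_0^t exp((tau-t)/delta1) * M0*exp(-gamma*tau) dtau
#     = M0*delta1/(1 - gamma*delta1) * (exp(-gamma*t) - exp(-t/delta1)),
# valid when gamma*delta1 < 1.
delta1, gamma, M0 = 0.5, 0.3, 2.0   # sample values with gamma*delta1 < 1

def closed_form(t):
    return M0 * delta1 / (1.0 - gamma * delta1) * (math.exp(-gamma * t) - math.exp(-t / delta1))

def quadrature(t, n=200000):
    # Trapezoidal approximation of the convolution integral.
    h = t / n
    s = 0.5 * (math.exp(-t / delta1) * M0 + M0 * math.exp(-gamma * t))
    for k in range(1, n):
        tau = k * h
        s += math.exp((tau - t) / delta1) * M0 * math.exp(-gamma * tau)
    return s * h

assert abs(closed_form(3.0) - quadrature(3.0)) < 1e-6
```

The same computation, applied componentwise after bounding ∇p by the semigroup decay, yields the exponential rate claimed for u.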
Incompressible Constituents
We now take c 0 = δ 2 = 0 in (2.22), which, following the calculations of Section 3.2, yields the abstract dynamics:
δ 1 A + α 2 B p t + Ap = S, with S = S − α∇ · E −1 (F t ) + δ 1 S t . (3.17)
Note that the reduced source \(\mathcal S\) in (3.17) is δ 1 times that of the previous section. In this case, we make the change of variable
\[
q = [\alpha^2 B + \delta_1 A]\,p, \tag{3.18}
\]
and proceed in the variable q. In this scenario, the operator α 2 B + δ 1 A ∈ L (V, V ′ ) is boundedly invertible in this sense, following the properties of A and B in Section 2.3. Indeed, this follows immediately from: (i) Lax-Milgram, on the strength of A, and (ii) the continuity and self-adjointness of B on L 2 0 (Ω). Under the change of variable, our abstract equation (3.17) can be written as an ODE in the variable q:
\[
q_t + A\big[(\alpha^2 B + \delta_1 A)^{-1}\big]\,q = \mathcal S \in L^2(0,T;V'); \qquad q(0) = q_0 \in V'. \tag{3.19}
\]
The operator R ≡ A[(α 2 B + δ 1 A) −1 ] belongs to L (V ′ ) ∩ L (L 2 0 (Ω)); it is zeroth-order, with L 2 (Ω)-adjoint R * = [(α 2 B + δ 1 A) −1 ]A.
The ODE in (3.19) can be interpreted either in the sense of C(0, T ; V ′ ) or C(0, T ; L 2 0 (Ω)), depending on the regularity of the data which compose \(\mathcal S\), namely, whether we require S, S t , and F t to take values in L 2 - or H 1 -type spaces. In either case, the ODE can be solved in the context of a uniformly continuous semigroup [41]. Here, the semigroup is e −Rt ∈ L (X), where X can be chosen to be V ′ or L 2 0 (Ω). For \(\mathcal S\) ∈ L 2 (0, T ; V ′ ) and q 0 ∈ V ′ , the classical variation of parameters formula [41] yields that q ∈ H 1 (0, T ; V ′ ). Then p ∈ H 1 (0, T ; V ) is immediately obtained through inversion of the change of variables (3.18) a.e. t, and the elasticity ODE can be explicitly solved in time as in Lemma 3.5, providing u ∈ H 1 (0, T ; V). From this, we observe that p(0) = p 0 ∈ V must be specified at the outset in order to possess an initial condition of the form q(0) = q 0 ∈ V ′ . This reflects the fact that, since this case reduces to an ODE, there is no spatial regularization provided by the pressure dynamics. One only needs to specify p(0) to obtain a solution to the abstract ODE in (3.19); to recover the displacement variable u from p, one must additionally specify u(0). This comes through the appearance of the combination δ 1 u t + u in the elasticity equation and the structure of the solution to the ODE in u. Lastly, after solving the ODE in q (and therefore for p), one can revert to the ODE for u to obtain additional temporal regularity (as a function of the regularity of F), since both sides of the equality below can be time-differentiated
δ 1 u t + u = E −1 [F − α∇p].
Through this discussion, we have arrived at the following theorems.
Theorem 3.6 (ODE Theorem). Let S ∈ H 1 (0, T ; V ′ ) and F ∈ H 1 (0, T ; V ′ ) (so that S = S −α∇·E −1 (F t )+ δ 1 S t ∈ L 2 (0, T ; V ′ )) and take p 0 ∈ V .
Then there exists a unique (ODE) solution p ∈ H 1 (0, T ; V ) to (3.19), given (upon inverting (3.18)) by
\[
p(t) = [\alpha^2 B + \delta_1 A]^{-1} e^{-Rt} [\alpha^2 B + \delta_1 A]\,p_0 + \int_0^t [\alpha^2 B + \delta_1 A]^{-1} e^{-R(t-\tau)}\,\mathcal S(\tau)\,d\tau.
\]
• If S ∈ L 2 (0, T ; L 2 0 (Ω)), then p ∈ L 2 (0, T ; D(A)).
• If u 0 ∈ D(E) and F ∈ L 2 (0, T ; L 2 (Ω)), then there exists a unique weak solution (p, u) to (2.22), with p as before and u ∈ H 1 (0, T ; D(E)). In this case, the formal energy equality in (3.7) holds for the weak solution (p, u) with c 0 = 0.
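The similarity structure behind the solution formula can be checked in finite dimensions. The sketch below (assuming NumPy and SciPy are available; the matrices are random stand-ins for the operators A and B, not the operators of the text) verifies that conjugating the semigroup e^{−Rt} by α²B + δ₁A reproduces direct integration of the ODE for p.

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional analogue of (3.18)-(3.19): with SPD "A" and PSD "B",
# q = (alpha^2 B + delta1 A) p and q' = -R q, R = A (alpha^2 B + delta1 A)^{-1},
# give p(t) = (alpha^2 B + delta1 A)^{-1} exp(-R t) (alpha^2 B + delta1 A) p0.
rng = np.random.default_rng(0)
alpha, delta1 = 1.0, 0.5
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)      # SPD stand-in for the Neumann Laplacian A
N = rng.standard_normal((3, 3))
B = N @ N.T                      # PSD stand-in for the zeroth-order operator B
C = alpha**2 * B + delta1 * A    # boundedly invertible combination
R = A @ np.linalg.inv(C)

p0 = rng.standard_normal(3)
t = 0.7
p_formula = np.linalg.solve(C, expm(-R * t) @ (C @ p0))

# Equivalent form of the ODE in p: p_t = -C^{-1} A p.
p_direct = expm(-np.linalg.inv(C) @ A * t) @ p0
assert np.allclose(p_formula, p_direct)
```

The agreement reflects the identity C⁻¹ e^{−AC⁻¹ t} C = e^{−C⁻¹A t}, which is exactly the inversion of the change of variable (3.18).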
Theorem 3.7 (Regularity for ODE ). Let m ∈ N, S ∈ H 1 (0, T ; H m (Ω)∩L 2 0 (Ω)) and F ∈ H 1 (0, T ; H m−1 (Ω)) (so that S = S − α∇ · E −1 (F t ) + δ 1 S t ∈ L 2 (0, T ; H m (Ω))), and take p(0) ∈ H m+2 (Ω) ∩ V .
Then the ODE solution p to (3.19) has p ∈ H 1 (0, T ; H m+2 (Ω)).
• If S ∈ L 2 (0, T ; H m+1 (Ω)), then p ∈ L 2 (0, T ; H m+3 (Ω)).
• If u 0 ∈ H m+1 (Ω) ∩ V, then the weak solution (p, u) to (2.22) satisfies the additional regularity property u ∈ H 1 (0, T ; H m+1 (Ω)).
• If u 0 ∈ H m+3 (Ω) ∩ V and F ∈ L 2 (0, T ; H m+1 (Ω)), then the weak solution (p, u) to (2.22) satisfies the additional regularity property u ∈ H 1 (0, T ; H m+3 (Ω)).
We note that these theorems are obtained through the reduced formulation in (3.17), which itself is obtained by time-differentiating the original equations (and the data). In some sense, then, we are lowering the regularity of the solution. However, some additional regularity is obtained a posteriori through elliptic regularity.
Finally, we discuss decay in the incompressible constituents case, which is not immediate. First, by Theorem 3.1, the energy estimate (3.2) holds for c 0 = 0 solutions:
||u|| 2 L ∞ (0,T ;V) + δ 1 ||u t || 2 L 2 (0,T ;V) + ||p|| 2 L 2 (0,T ;V ) ||u 0 || 2 V + ||S|| 2 L 2 (0,T ;V ′ ) + ||F|| 2 L 2 (0,T ;L 2 (Ω)) .
The above, of course, indicates dissipation in both variables p and u, yet the pointwise-in-time quantity, ||p(t)|| 2 L 2 0 (Ω) , has disappeared. As this case considers an ODE for q (on either V ′ or L 2 0 (Ω)), the spectral properties of the operator R-on the respective space-would dictate decay in q. From that point of view, we only remark that: (i) the operator R : V ′ → V ′ is non-negative, and (ii) 0 is not an eigenvalue of R.
However, we can directly observe exponential decay in this case through a multiplier method for the whole system. Indeed, with solutions in hand from Theorems 3.6 and 3.7, we can reconstruct the weak solution to (2.22) and then selectively test the equations using wave-type stabilization arguments. We present this argument here.
Theorem 3.8 (Exponential ODE Stability). Consider S ≡ 0 and F ≡ 0 in (2.22) (so S ≡ 0 in (3.17)). Suppose that u 0 , u t (0) ∈ V and p 0 ∈ V . Then, there exists C > 0 and γ > 0 so that we have the estimate for all t ≥ 0:
||u(t)|| 2 V + ||u t (t)|| 2 V + ||p(t)|| 2 V ≤ C[||u 0 || 2 V + ||u t (0)|| 2 V + ||p 0 || 2 V ]e −γt (3.20)
Proof of Theorem 3.8. We will consider smooth solutions for formal calculations; the final estimate can then be extended by density, in the standard way, to generalized (semigroup) solutions. The full system in strong form is:
Eu + δ 1 Eu t + α∇p = 0 α∇ · u t + Ap = 0. (3.21)
First, we recall the standard energy estimate (as quoted above) which is obtained by testing the elasticity equation with u t and the pressure equation with p:
1 2 d dt ||u(t)|| 2 V + δ 1 ||u t (t)|| 2 V + k||p|| 2 V = 0. (3.22)
Next, we differentiate the displacement equation in time, and test again with u t , while testing the pressure equation with p t and adding:
1 2 ||u t (t)|| 2 V + δ 1 2 d dt ||u t (t)|| 2 V + k 2 d dt ||p|| 2 V = 0. (3.23)
Finally, we use the so-called equipartition multiplier associated with the wave equation, namely, testing with u. (Note: this was in fact done in the energy estimates in [10], but stability was not pursued there.) This yields the identity:
||u(t)|| 2 V + δ 1 2 d dt ||u(t)|| 2 V = −α(∇p, u). (3.24)
From this we obtain the estimate by Young's inequality:
||u(t)|| 2 V + δ 1 2 d dt ||u(t)|| 2 V ≤ α 2 C P 2 ||∇p(t)|| 2 + 1 2 ||u(t)|| 2 V ,(3.25)
where C P is a constant such that ||u|| 2 L 2 (Ω) ≤ C P ||u|| 2 V , which exists by virtue of the Poincaré and Korn inequalities. Now, absorbing the last term and then multiplying the resulting inequality by k[α 2 C P ] −1 , we obtain:
1 2 k α 2 C P ||u(t)|| 2 V + δ 1 k 2α 2 C P d dt ||u(t)|| 2 V ≤ k 2 ||∇p(t)|| 2 ,(3.26)
Adding (3.22), (3.23), and (3.26), and absorbing the final RHS term, we obtain the estimate:
\[
\frac12\frac{d}{dt}\Big[\Big(1+\frac{\delta_1 k}{\alpha^2 C_P}\Big)\|u(t)\|_V^2 + \delta_1\|u_t(t)\|_V^2 + k\|p\|_V^2\Big] + \Big(\delta_1+\frac12\Big)\|u_t\|_V^2 + \frac{k}{2}\|p\|_V^2 + \frac{k}{2\alpha^2 C_P}\|u(t)\|_V^2 \;\le\; 0 \tag{3.27}
\]
Finally, if we define the quantity
\[
E(t) \equiv \frac12\Big[\Big(1+\frac{\delta_1 k}{\alpha^2 C_P}\Big)\|u(t)\|_V^2 + \delta_1\|u_t(t)\|_V^2 + k\|p\|_V^2\Big],
\]
then we observe that there exists γ = γ(k, δ 1 , α, C P ) with 0 < γ < min{1, k/(δ 1 k + α 2 C P )} so that we have the Grönwall-type estimate:
d dt E(t) + γE(t) ≤ 0.
This implies the exponential decay:
E(t) ≤ E(0)e −γt .
From this, the final result of the theorem follows.
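For transparency, the admissible range of γ follows by comparing, term by term, γ times the coefficients of E(t) against the dissipation coefficients in (3.27):

```latex
\gamma\cdot\frac{1}{2}\Big(1+\frac{\delta_1 k}{\alpha^2 C_P}\Big) \le \frac{k}{2\alpha^2 C_P}
\iff \gamma \le \frac{k}{\delta_1 k + \alpha^2 C_P},\qquad
\gamma\cdot\frac{\delta_1}{2} \le \delta_1 + \frac{1}{2},\qquad
\gamma\cdot\frac{k}{2} \le \frac{k}{2} \iff \gamma \le 1,
```

each of which holds for 0 < γ < min{1, k/(δ₁k + α²C_P)}.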
Remark 3.2. At the cost of scaling the RHS in the final estimate above, one can work with the more natural quantity
\[
E(t) \equiv \frac12\big[\|u(t)\|_V^2 + \delta_1\|u_t(t)\|_V^2 + k\|\nabla p\|^2\big],
\]
and obtain the analogous theorem.
3.3 Case 2: Adjusted Fluid Content, δ 2 > 0
We now consider δ 1 , δ 2 > 0 in (2.22), which is to say, we allow for visco-elastic effects in the structural equation and we modify the definition of the fluid content of (2.22). Thus, in this section, the fluid content will be given by
ζ = c 0 p + α∇ · u + δ 2 ∇ · u t ,(3.28)
where again we retain the coefficients δ 2 , δ 1 to observe their presence in the reduced dynamics. We note that, for the dynamics to admit energy estimates, we must observe the identity
αδ 1 = δ 2 . (3.29)
Alternatively, one obtains the above coefficient relation by formally mapping u → u + δ 1 u t in the derivation of the original Biot dynamics (2.14).
Note that we will not make a distinction between c 0 ≥ 0 in this section. Indeed, as we will see, upon making the abstract reduction, we will obtain precisely the same system (under different state variables) as the original Biot dynamics. The one distinction is that it will not be adequate to only specify d 0 or p 0 in the initial conditions. Indeed, we will require u 0 and one of p 0 , d 0 . Even considering the pressure equation alone, it will not be adequate to specify d 0 alone, as we will see. Now, invoking the elasticity equation as in (3.3), we can write (like before):
δ 1 u t + u = E −1 F − αE −1 (∇p).
Taking the divergence of this equation, enforcing the condition αδ 1 = δ 2 , and plugging into the fluid content expression in (3.28), we obtain a pressure equation from (2.22) 2 of the form:
[c 0 p + α 2 Bp + ∇ · E −1 F] t + Ap = S,
which is rewritten as
[(c 0 I + α 2 B)p] t + Ap = S,(3.30)
where S is as before in Section 2.4. We note that this pressure equation, with δ 1 , δ 2 > 0, has the exact same structure as the original implicit degenerate system without visco-elasticity, as presented in (2.15). Thus, the inclusion of viscous effects in both the fluid content and displacement equation recovers the same implicit, degenerate (parabolic) dynamics given by Biot's poro-elastic dynamics. Although we have the same system abstractly, it is worth it to describe the resulting estimates and relevant quantities in this case. We note explicitly that, for the above dynamics in p, it is sufficient to specify either p(0) = p 0 or Bp(0) = [(c 0 I + α 2 B)p](0) in L 2 0 (Ω)-indeed, these are equivalent by the invertibility of B in that context. However, unlike the case of pure Biot dynamics (δ 1 = δ 2 = 0), we cannot move from d 0 = ζ(0) = [c 0 p + α∇ · u + δ 2 ∇ · u t ](0) to p 0 directly, as we must pass through the ODE for u in (3.3). Said differently: by moving to the abstract framework for the δ 1 , δ 2 > 0 dynamics, we can solve for p in (3.30). Then, solving the ODE in (3.3)-with a given initial condition u 0 -we obtain the corresponding displacement solution u. As in previous cases, it is clear that given p 0 and u 0 the quantity u t (0) can be recovered through
δ 1 u t (0) = −u 0 − αE −1 ∇p 0 + E −1 F(0) ∈ V.
This produces associated a priori estimates, albeit in an indirect way. One can immediately obtain energy estimates through the multiplier method, as in previous sections. However, to obtain a priori estimates (e.g., on approximants), one will test the pressure equation with p in (2.22) 2 and the displacement equation (2.22) 1 with δ 1 u tt + u t . In this step, one again observes the necessary requirement that αδ 1 = δ 2 . The resulting identities are:
\[
\frac12\, e(\delta_1 u_t + u,\ \delta_1 u_t + u)\Big|_{\tau=0}^{\tau=t} - \alpha\int_0^t\big(p,\ \delta_1\nabla\cdot u_{tt} + \nabla\cdot u_t\big)\,d\tau = \big\langle F(\tau),\ \delta_1 u_t(\tau)+u(\tau)\big\rangle_{V'\times V}\Big|_{\tau=0}^{\tau=t} - \int_0^t\big\langle F_t,\ \delta_1 u_t+u\big\rangle_{V'\times V}\,d\tau,
\]
\[
\big(\nabla\cdot[\delta_2 u_{tt}+\alpha u_t],\ p\big) + \frac{c_0}{2}\frac{d}{dt}\|p\|^2 + a(p,p) = \langle S, p\rangle_{V'\times V}.
\]
These identities, along with the application of the abstract theorem in Section 2.4, yield the central theorem for this case. We refer to the Appendix for the definition of weak solutions in the case of the reduced, implicit formulation in (3.30).
Theorem 3.9. Let S ∈ L 2 (0, T ; V ′ ) and F ∈ H 1 (0, T ; V ′ ), and take u 0 ∈ V and c 0 p 0 ∈ L 2 0 (Ω). Then there exists a unique weak solution (u, p) to (2.22) with δ 1 , δ 2 > 0 and αδ 1 = δ 2 , and it obeys the a priori estimate
\[
\|\delta_1 u_t + u\|^2_{L^\infty(0,T;V)} + c_0\|p\|^2_{L^\infty(0,T;L^2_0(\Omega))} + \|p\|^2_{L^2(0,T;V)} \;\lesssim\; \|u_0\|^2_{V} + c_0\|p_0\|^2_{L^2_0(\Omega)} + \|S\|^2_{L^2(0,T;V')} + C(T)\|F\|^2_{H^1(0,T;V')}. \tag{3.31}
\]
In this above framework, as we have reduced to the same abstract theory as classical Biot with δ 1 = δ 2 = 0, we can accordingly discuss parabolic estimates and smooth solutions. We do not repeat the statements here, but refer back to Theorem 2.2 and Theorem 2.3, which can be analogously adapted here. Remark 3.3. If one allows for the possibility that the coefficient δ 2 is fully independent, renaming δ 2 as α̃, one obtains the system:
\[
Eu + \delta_1 Eu_t + \alpha\nabla p = F, \qquad [c_0 p + \alpha\nabla\cdot u + \tilde\alpha\nabla\cdot u_t]_t + Ap = S. \tag{3.32}
\]
Accordingly, we can obtain a reduced equation which is not closed in p:
\[
\Big[c_0 p + \frac{\alpha\tilde\alpha}{\delta_1}Bp\Big]_t + \Big(\alpha-\frac{\tilde\alpha}{\delta_1}\Big)\frac{\alpha}{\delta_1}\,Bp - \Big(\alpha-\frac{\tilde\alpha}{\delta_1}\Big)\delta_1^{-1}\,\nabla\cdot u + Ap = S - \frac{\tilde\alpha}{\delta_1}\nabla\cdot E^{-1}F_t - \delta_1^{-1}\Big(\alpha-\frac{\tilde\alpha}{\delta_1}\Big)\nabla\cdot E^{-1}F
\]
The ODE for u can be solved as before, which produces the equation:
\[
\Big[c_0 I + \frac{\alpha\tilde\alpha}{\delta_1}B\Big]p_t + \Big[A + \Big(\frac{\alpha^2}{\delta_1}-\frac{\alpha\tilde\alpha}{\delta_1^2}\Big)B\Big]p - \Big(\frac{\alpha^2}{\delta_1^2}-\frac{\alpha\tilde\alpha}{\delta_1^3}\Big)\int_0^t e^{-(t-\tau)/\delta_1}Bp(\tau)\,d\tau \tag{3.33}
\]
\[
= S - \frac{\tilde\alpha}{\delta_1}\nabla\cdot E^{-1}F_t - \Big(\frac{\alpha}{\delta_1}-\frac{\tilde\alpha}{\delta_1^2}\Big)\nabla\cdot E^{-1}F + \Big(\frac{\alpha}{\delta_1}-\frac{\tilde\alpha}{\delta_1^2}\Big)e^{-t/\delta_1}\,\nabla\cdot u_0 + \Big(\frac{\alpha}{\delta_1^2}-\frac{\tilde\alpha}{\delta_1^3}\Big)\int_0^t e^{-(t-\tau)/\delta_1}\nabla\cdot E^{-1}F(\tau)\,d\tau
\]
This is also a visco-elastic equation which can be solved by the methods in this paper, but we do not pursue this here. In this case we note the emergence of additional terms which vanish when we enforce the condition α̃ = δ 1 α.
Remarks on Secondary Consolidation
In some sense, the effects of secondary consolidation of soils (i.e., creep) can be thought of as partial viscoelasticity [38]. Therefore, for completeness, we add some remarks here on the nature of associated solutions and estimates. In the case of secondary consolidation, as it is described in [49], we "regularize" only the divergence term in the momentum equation (omitting ∆u t from the full visco-elastic terms). Thus, we consider the Biot system with secondary consolidation effects, and, as before, we allow both definitions of the fluid content based strictly on mathematical grounds. So we have:
−λ * ∇∇ · u t − µ∆u − (λ + µ)∇∇ · u + α∇p = F [c 0 p + α∇ · u + δ 2 ∇ · u t ] t + Ap = S (4.1)
In these short sections, we will track the impact of this "partial viscoelastic" λ * term. We note that there is no need to consider the case with full visco-elasticity δ 1 > 0 and secondary consolidation, as the latter would be redundant.
Traditional Fluid Content: δ 2 = 0
This is the secondary consolidation model as explicitly discussed in [38,49].
−λ * ∇∇ · u t − µ∆u − (λ + µ)∇∇ · u + α∇p = F [c 0 p + α∇ · u] t + Ap = S (4.2)
The well-posedness of weak solutions is given in [49]. We mention here that, using the standard multipliers for weak solutions (justified by the approach in [12]) one obtains the following estimate on solutions:
||u|| 2 L ∞ (0,T ;V) + c0||p|| 2 L ∞ (0,T ;L 2 (Ω)) + λ * ||∇ · ut|| 2 L 2 (0,T ;L 2 (Ω)) + ||p|| 2 L 2 (0,T ;V ) ||u0|| 2 V + ||S|| 2 L 2 (0,T ;V ′ ) + ||F|| 2 H 1 (0,T ;V ′ ) (4.3)
A partial "visco-elastic" effect of secondary consolidation is immediately apparent: the additional damping/dissipation term above for ∇ · u t . Upon temporal integration, we obtain the additional property of weak solutions that ∇ · u t ∈ L 2 (0, T ; L 2 (Ω)). This term represents a certain "smoothing" as well, as ∇ · u t has been boosted from L 2 (0, T ; V ′ ) to L 2 (0, T ; L 2 (Ω)) by the presence of λ * > 0. We note that since the fluid content c 0 p + α∇ · u lies in H 1 (0, T ; V ′ ) (via the pressure equation), we can now extract c 0 p t ∈ L 2 (0, T ; V ′ ), which is not obvious when λ * = 0, since in that case we cannot decouple the two terms in the sum for the fluid content.
Incompressible Constituents
In the case of c 0 = 0, we observe some partial regularization of the dynamics for λ * > 0; this is explicitly mentioned in [49], and we expand upon it here. Note that from the pressure equation, we can write:
Ap = S − α∇ · u t ,
from which elliptic regularity can be applied-for weak solutions-when Ω is sufficiently regular and S ∈ L 2 (0, T ; L 2 (Ω)). Then, with ∇ · u t ∈ L 2 (0, T ; L 2 (Ω)) as described above, we observe a "boost" p ∈ L 2 (0, T ; V ) → p ∈ L 2 (0, T ; D(A)) through elliptic regularity applied a.e. t. But this cannot be pushed on the momentum equation, owing to the addition of the secondary consolidation term:
E(u) = F − α∇p + λ * ∇∇ · u t .
This is to say that the regularity gain in p is not realized for the displacement u through the momentum equation.
One further observation, in this case, is a particular representation of the system which is not available in other cases. Noting that p = A −1 [S − α∇ · u t ], one can plug this into the elasticity equation to obtain:
− α 2 ∇A −1 div + λ * ∇div u t + E(u) = F − α∇A −1 S.
This is an implicit equation directly in u which can be analyzed in the framework of implicit, degenerate equations [1,47,49]; we do not pursue this line of investigation here.
Compressible Constitutents
In the case of c 0 > 0, [49] observes that the effect of secondary consolidation is de-regularizing. This is seen in how it hinders the discussion of the previous subsection; namely, the pressure equation now reads:
Ap = S − α∇ · u t − c 0 p t .
The regularizing effect of secondary consolidation through λ * (the boosting of ∇ · u t to L 2 (0, T ; L 2 (Ω))) is lost, since we can only conclude that c 0 p t ∈ L 2 (0, T ; V ′ ) rather than L 2 (0, T ; L 2 (Ω)). Thus there is neither smoothing in p nor in u in this case.
Adjusted Fluid Content
Finally, we observe that in the case of adjusted fluid content, we obtain the natural analog to our earlier discussions. Taking δ 2 > 0, we consider the system:
−λ * ∇∇ · u t + Eu + α∇p = F [c 0 p + α∇ · u + δ 2 ∇ · u t ] t + Ap = S (4.4)
As above, we invoke (as before) the test function δ 2 u tt + u t in the elasticity equation, and p in the pressure equation. This provides an identical estimate as that in Theorem 3.9 with the additional property that ∇ · u t ∈ L 2 (0, T ; L 2 (Ω)), and the associated term λ * ||∇ · u t || 2 L 2 (0,T ;L 2 (Ω)) appears on the LHS of (3.31) in Theorem 3.9. Again, then, we see the effect of secondary consolidation as that of partial damping.
Summary and Conclusions
In this note, we characterized linear poro-visco-elastic systems across several parameter regimes:
Eu + δ 1 Eu t + α∇p = F [c 0 p + α∇ · u + δ 2 ∇ · u t ] t + Ap = S. (5.1)
We began with the traditional Biot system (δ 1 = δ 2 = 0), i.e., no Kelvin-Voigt visco-elastic effects, and recapitulated existence results and estimates for weak solutions, as well as solutions with higher regularity, in Section 2.4. Using this as a jumping-off point, we considered the addition of linear Kelvin-Voigt type (strong) dissipation in the Lamé system (5.1). Our central focus was on the well-posedness and regularity of solutions across all parameter regimes, as well as on a clear determination of the abstract structure of the problem, including the discernment of the appropriate initial quantities. Our approach included providing clear a priori estimates on solutions, where they were illuminating. We employed an operator-theoretic framework inspired by [1,49] and developed in [12], which we introduced in Section 2.3. The central operators were A (a Neumann Laplacian) and B (a zeroth-order, nonlocal pressure-to-divergence operator).
We then considered the model with δ 1 > 0 and δ 2 = 0, which is to say we left the fluid-content unaltered in our addition of strong damping. We first gave a well-posedness result which was valid for both compressible constituents c 0 > 0 and incompressible constituents c 0 = 0 in Section 3.2. We then distinguished between these two cases. We determined that for c 0 > 0, the system constitutes a strongly damped hyperbolic-type system. It was important, in this case, to distinguish results based on which initial quantities were specified. Regardless, the regularity of solutions was made clear, and the parabolicity of the system was detailed in several ways. In the case when c 0 = 0, we observed that the abstract, reduced version of the dynamics constituted an ODE in a Hilbert space of our choosing (either L 2 0 (Ω) or an H −1 (Ω) type space, V ′ ). We exploited the ODE nature of the dynamics to produce a clear well-posedness and regularity result.
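The ODE-in-Hilbert-space observation for c_0 = 0 can be mimicked in finite dimensions (our own toy sketch, not the paper's operators: A and B below are generic symmetric positive definite stand-ins). When B is invertible, d/dt[Bw] + Aw = 0 reduces to w′ = −B^{-1}Aw, which is solved explicitly via a generalized eigendecomposition, and the natural energy (Bw, w) is non-increasing since d/dt(Bw, w) = −2(Aw, w) ≤ 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

def spd(k):
    # generic symmetric positive definite matrix
    X = rng.standard_normal((k, k))
    return X @ X.T + k * np.eye(k)

A, B = spd(n), spd(n)
w0 = rng.standard_normal(n)

# With B = L L^T and z = L^T w, the ODE w' = -B^{-1} A w becomes z' = -C z,
# where C = L^{-1} A L^{-T} is SPD, so z(t) = V exp(-Lam t) V^T z(0).
L = np.linalg.cholesky(B)
C = np.linalg.solve(L, np.linalg.solve(L, A).T)   # C = L^{-1} A L^{-T}
lam, V = np.linalg.eigh(C)
z0 = L.T @ w0

def energy(t):
    # (B w(t), w(t)) = |z(t)|^2, which decays because all lam > 0
    z = V @ (np.exp(-lam * t) * (V.T @ z0))
    return float(z @ z)

energies = [energy(t) for t in (0.0, 0.5, 1.0, 2.0)]
```

The same change of variables underlies the statement that the reduced dynamics is a well-posed ODE in a Hilbert space of one's choosing: the B-inner product is the natural norm in which the flow is a contraction.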
In the case when δ_1, δ_2 > 0 (i.e., the adjusted fluid content), we observed that the abstract reduction of the system brings the dynamics back to the traditional Biot structure. In other words, by adding visco-elasticity to the displacement equation (δ_1 > 0), as well as adjusting the fluid content (δ_2 > 0), we do not observe additional effects from the visco-elastic damping; rather, we obtain the same qualitative results for the solution as we had for Biot's original dynamics. Noting a small difference in which initial quantities must be prescribed, we presented a well-posedness theorem, with relevant a priori estimates.
Finally, in Section 4, we provided some brief remarks on partial visco-elasticity, known in soil mechanics as secondary consolidation. The main focus of this section was to provide clear a priori estimates on solutions, indicating precisely how dissipation is introduced into the system through secondary consolidation effects. Additionally, we corroborated remarks in [49] concerning the extent to which this sort of partial visco-elasticity can in fact be partially regularizing (when c_0 = 0) or de-regularizing (when c_0 > 0).
The Appendix serves to provide a small overview of the standard theory of weak solutions for implicit, degenerate evolution equations [1,12,47], and is taken from [48].
We believe that the work presented here, as it is in the spirit of [1,47,49], will be of interest to researchers working on applied problems in poro-elasticity. In particular, as the effects of visco-elasticity are prominent in biological sciences, those who work on biologically-motivated Biot models may find the results presented herein useful. Indeed, to the best of our knowledge, we have provided the first elucidation of the mathematical effects of linear visco-elasticity, when included in linear poro-elastic dynamics.
Appendix: Abstract Framework for Weak Solutions
Let V be a separable Hilbert space with dual V′ (not identified with V). Assume V embeds densely and continuously into another Hilbert space H, which is identified with its dual: V ↪ H ≡ H′ ↪ V′. We denote the inner product in H simply by (·, ·), with (h, h) = ‖h‖²_H for each h ∈ H. Similarly, we denote the V′ × V duality pairing by ⟨·, ·⟩. (For h ∈ H, we identify ⟨h, h⟩ = ‖h‖²_H as well.) Assume that A ∈ L(V, V′) and B ∈ L(H). Finally, suppose that d_0 ∈ V′ and S ∈ L²(0, T; V′) are the specified data.
In this setup, we can define the weak (implicit-degenerate) Cauchy problem to be solved as:

Find w ∈ L²(0, T; V) such that

d/dt [Bw] + Aw = S ∈ L²(0, T; V′),   lim_{t↘0} [Bw(t)] = d_0 ∈ V′.   (6.1)

The following generation theorem is adapted from [48, III.3] for weak solutions, and produces weak solvability of (6.1) in a straightforward way:

Theorem 6.1. Let A, B be as above, and assume additionally that they are self-adjoint and monotone (in the respective senses, A : V → V′ and B : H → H). Assume further that there exist λ, c > 0 so that

⟨Av, v⟩ + λ(Bv, v) ≥ c‖v‖²_V for all v ∈ V.

Then, given Bw(0) = d_0 ∈ H and S ∈ L²(0, T; V′), there exists a unique weak solution to (6.1) satisfying

‖w‖²_{L²(0,T;V)} ≤ C(λ, c) [ ‖S‖²_{L²(0,T;V′)} + (d_0, w(0))_H ].   (6.2)
The assumption in this theorem is that there exists a w(0) ∈ V so that Bw(0) = d 0 , the given initial data. In more recent work applying this theorem [12], we need not assume the existence of such w(0). See Theorem 2.2.
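The solvability mechanism in Theorem 6.1 can be illustrated in finite dimensions (our own sketch, not from [48]): if A and B are symmetric positive semi-definite matrices whose kernels intersect trivially (a discrete analogue of the coercivity hypothesis), then each implicit Euler step for d/dt[Bw] + Aw = S is uniquely solvable even though B is degenerate, and with S = 0 the seminorm (Bw, w) is non-increasing along the discrete flow.

```python
import numpy as np

n = 40
# A: 1D Neumann Laplacian stiffness matrix (PSD; kernel = constant vectors)
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0
# B: degenerate diagonal "mass" operator, positive only at the boundary nodes
B = np.diag(np.r_[1.0, np.zeros(n - 2), 1.0])

# Discrete coercivity: ker A (constants) and ker B intersect trivially,
# so A + lam*B is positive definite and B + dt*A is invertible.
lam = 1.0
min_eig = np.linalg.eigvalsh(A + lam * B).min()

rng = np.random.default_rng(1)
w = rng.standard_normal(n)
dt = 0.05
seminorms = [float(w @ B @ w)]
for _ in range(50):
    # implicit Euler for d/dt[Bw] + Aw = 0:  (B + dt*A) w_new = B w_old
    w = np.linalg.solve(B + dt * A, B @ w)
    seminorms.append(float(w @ B @ w))
```

Testing the step equation with w_new and using Cauchy-Schwarz for the PSD form (B·,·) gives (B w_new, w_new) ≤ (B w_old, w_old), the discrete counterpart of the a priori estimate (6.2).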
One may also consult the implicit semigroup theory presented in [49,Section 5] and [48,IV.6], in particular for a discussion of smoother solutions.
• The initial conditions ζ(0) = d_0 and δ_1 u(0) = u_0 are satisfied in the sense of C([0, T]; V′) and C([0, T]; V′), respectively.
[Part 1] Then there exists a unique weak solution, p ∈ L²(0, T; V), u ∈ H¹(0, T; V), and [c_0 p + α∇ · u]_t ∈ L²(0, T; V′), as in Definition 1. This solution is of finite energy, i.e., the identity (3.2) holds.
Theorem 3.3 (Damped Semigroup Theorem). In the framework of (3.12)-(3.13), with δ_1, c_0 > 0, the operator A generates a C_0-semigroup of contractions e^{At} ∈ L(Y), which is analytic (in the sense of [41, Chapter 2.5]) on Y ≡ V × L²_0(Ω). This is to say:
• For [p_0, p_1]^T ∈ Y and S ∈ L²(0, T; L²_0(Ω)), we obtain a unique (generalized) solution [p(·), p_t(·)]^T ∈ C([0, T]; Y) (as described above).
• For [p_0, p_1]^T ∈ D(A) × D(A) and S ∈ H¹(0, T; L²_0(Ω)), we obtain a unique solution [p(·), p_t(·)]^T ∈ C([0, T]; D(A) × D(A)) ∩ C¹((0, T]; Y) that satisfies the system in a point-wise sense.
Theorem 3.4 (Exponential Decay of Solutions). Consider the above framework in (3.12)-(3.13), and take F ≡ 0 and S ≡ 0 in (3.1) (so F ≡ [0, 0]^T). Consider c_0, δ_1 > 0. Then the analytic semigroup e^{At} generated by A : Y ⊃ D(A) → Y is uniformly exponentially stable. That is, there exist γ, M_0, M_k > 0 (each depending on c_0, δ_1 > 0) so that:
Theorem 3.9 (Visco-elastic Solutions with Modified Fluid Content). Suppose that S ∈ L²(0, T; V′) and F ∈ H¹(0, T; V′). Take p_0 ∈ L²_0(Ω). Suppose δ_2 > 0 with c_0 ≥ 0 and enforce the condition that δ_2 = αδ_1. Then there exists a unique weak solution p ∈ L²(0, T; V) satisfying the reduced, implicit formulation (3.30). If u_0 ∈ V, then (with p the same as above) there exists a unique weak solution (u, p) ∈ C([0, T]; V) × L²(0, T; V) to (2.22) satisfying the energy inequality ‖δ_1 u_t + u‖²_{L^∞(0,T;V)} …
The time derivative in (6.1) is taken in the sense of D′(0, T); since such a solution has Bw ∈ H¹(0, T; V′) (with the natural inclusion V ↪ V′ holding), Bw has point-wise (in time) values in V′, and the initial condition lim_{t↘0} [Bw(t)] = d_0 ∈ V′ makes sense through the boundedness of B together with H ↪ V′.
We recall here that incompressibility of each component means that the volumetric deformation of the solid constituent corresponds to the variation of fluid volume per unit volume of porous material, i.e., ζ = ∇ · u.
References

[1] Auriault, J.L., 1980. Dynamic behaviour of a porous medium saturated by a Newtonian fluid. International Journal of Engineering Science, 18(6), pp.775-785.
[2] Balakrishnan, A.V. and Triggiani, R., 1993. Lack of generation of strongly continuous semigroups by the damped wave operator on H × H (or: The little engine that couldn't). Applied Mathematics Letters, 6(6), pp.33-37.
[3] Banks, H.T., Bekele-Maxwell, K., Bociu, L., Noorman, M., and Guidoboni, G., 2017. Sensitivity analysis in poro-elastic and poro-visco-elastic models with respect to boundary data. Quarterly of Applied Mathematics, 75, pp.697-735.
[4] Biot, M.A., 1941. General theory of three-dimensional consolidation. J. Appl. Phys., 12(2), pp.155-164.
[5] Biot, M.A., 1956. Theory of deformation of a porous viscoelastic anisotropic solid. J. of Applied Physics, 27(5), pp.459-467.
[6] Both, J.W., Borregales, M., Nordbotten, J.M., Kumar, K. and Radu, F.A., 2017. Robust fixed stress splitting for Biot's equations in heterogeneous media. Applied Mathematics Letters, 68, pp.101-108.
[7] Both, J.W., Kumar, K., Nordbotten, J.M. and Radu, F.A., 2019. The gradient flow structures of thermo-poro-visco-elastic processes in porous media. arXiv preprint arXiv:1907.03134.
[8] Both, J.W., Pop, I.S. and Yotov, I., 2021. Global existence of weak solutions to unsaturated poroelasticity. ESAIM: Mathematical Modelling and Numerical Analysis, 55(6), pp.2849-2897.
[9] Bociu, L., Čanić, S., Muha, B. and Webster, J.T., 2021. Multilayered poroelasticity interacting with Stokes flow. SIAM Journal on Mathematical Analysis, 53(6), pp.6243-6279.
[10] Bociu, L., Guidoboni, G., Sacco, R., and Webster, J.T., 2016. Analysis of nonlinear poro-elastic and poro-visco-elastic models. ARMA, 222, pp.1445-1519.
[11] Bociu, L., Guidoboni, G., Sacco, R., and Verri, M., 2019. On the role of compressibility in poroviscoelastic models. Mathematical Biosciences and Engineering, 16(5), pp.6167-6208.
[12] Bociu, L., Muha, B. and Webster, J.T., 2022. Weak solutions in nonlinear poroelasticity with incompressible constituents. Nonlinear Analysis: Real World Applications, 67, p.103563.
[13] Bociu, L. and Noorman, M., 2019. Poro-visco-elastic models in biomechanics: sensitivity analysis. Communications in Applied Analysis, 23(1), pp.61-77.
[14] Bociu, L. and Strikwerda, S., 2022. Poro-visco-elasticity in biomechanics: optimal control. In: Research in Mathematics of Materials Science, pp.103-132. Cham: Springer International Publishing.
[15] Bociu, L. and Webster, J.T., 2021. Nonlinear quasi-static poroelasticity. J. of Differential Equations, 296, pp.242-278.
[16] Cao, Y., Chen, S., and Meir, A.J., 2014. Steady flow in a deformable porous medium. Math. Meth. Appl. Sci., 37, pp.1029-1041.
[17] Castelletto, N., Klevtsov, S., Hajibeygi, H. and Tchelepi, H.A., 2019. Multiscale two-stage solver for Biot's poroelasticity equations in subsurface media. Computational Geosciences, 23, pp.207-224.
[18] Chen, S.P. and Triggiani, R., 1989. Proof of extensions of two conjectures on structural damping for elastic systems. Pacific Journal of Mathematics, 136(1), pp.15-55.
[19] Chen, G. and Russell, D.L., 1982. A mathematical model for linear elastic systems with structural damping. Quarterly of Applied Mathematics, 39(4), pp.433-454.
[20] Coussy, O., 2004. Poromechanics. John Wiley & Sons.
[21] Detournay, E. and Cheng, A.H.-D., 1993. Fundamentals of poroelasticity. Chapter 5 in Comprehensive Rock Engineering: Principles, Practice and Projects, Vol. II, Analysis and Design Method, ed. C. Fairhurst, Pergamon Press, pp.113-171.
[22] Ene, H.I. and Vernescu, B., 1992. Viscosity dependent behaviour of viscoelastic porous media. Chapter 3 in Asymptotic Theories for Plates and Shells, Pitman Research Notes in Mathematics Series 319, Chapman and Hall/CRC, 1995.
[23] Fred, V., Rodrigo, C., Gaspar, F. and Kundan, K., 2021. Guest editorial to the special issue: computational mathematics aspects of flow and mechanics of porous media. Computational Geosciences, 25(2), pp.601-602.
[24] Gaspar, F.J., Gracia, J.L., Lisbona, F.J. and Vabishchevich, P.N., 2008. A stabilized method for a secondary consolidation Biot's model. Numerical Methods for Partial Differential Equations, 24(1), pp.60-78.
[25] Gilbert, R.P. and Panchenko, A., 2004. Effective acoustic equations for a two-phase medium with microstructure. Mathematical and Computer Modelling, 39(13), pp.1431-1448.
[26] Haghighat, E., Amini, D. and Juanes, R., 2022. Physics-informed neural network simulation of multiphase poroelasticity using stress-split sequential training. Computer Methods in Applied Mechanics and Engineering, 397, p.115141.
[27] Henry, D.B., Perissinitto, A. Jr. and Lopes, O., 1988. On the essential spectrum of a semigroup of thermoelasticity. Nonlinear Analysis: Theory, Methods & Applications, 21, pp.65-75.
[28] Hosseinkhan, A. and Showalter, R.E., 2021. Biot-pressure system with unilateral displacement constraints. Journal of Mathematical Analysis and Applications, 497(1), p.124882.
[29] Ignatova, M., Kukavica, I., Lasiecka, I. and Tuffaha, A., 2014. On well-posedness and small data global existence for an interface damped free boundary fluid-structure model. Nonlinearity, 27(3), p.467.
[30] Lasiecka, I. and Triggiani, R., 2000. Control Theory for Partial Differential Equations (Vol. 1). Cambridge: Cambridge University Press.
[31] Lee, J.J., Piersanti, E., Mardal, K.A. and Rognes, M.E., 2019. A mixed finite element method for nearly incompressible multiple-network poroelasticity. SIAM J. on Scientific Computing, 41(2), pp.A722-A747.
[32] Maity, D. and Takahashi, T., 2021. L^p theory for the interaction between the incompressible Navier-Stokes system and a damped plate. J. Math. Fluid Mech., 23(4), Paper No. 103.
[33] Mardal, K.A., Rognes, M.E. and Thompson, T.B., 2021. Accurate discretization of poroelasticity without Darcy stability: Stokes-Biot stability revisited. BIT Numerical Mathematics, 61, pp.941-976.
[34] Mei, C.C. and Vernescu, B., 2010. Homogenization Methods for Multiscale Mechanics. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ.
[35] Meirmanov, A.M. and Zimin, R., 2012. Mathematical models of a diffusion-convection in porous media. Electronic Journal of Differential Equations, 2012(105), pp.1-16.
[36] Mow, V.C., Kuei, S.C., Lai, W.M., and Armstrong, C.G., 1980. Biphasic creep and stress relaxation of articular cartilage in compression: theory and experiments. ASME J. Biomech. Eng., 102, pp.73-84.
[37] Muha, B. and Čanić, S., 2013. Existence of a weak solution to a nonlinear fluid-structure interaction problem modeling the flow of an incompressible, viscous fluid in a cylinder with deformable walls. Archive for Rational Mechanics and Analysis, 207(3), pp.919-968.
[38] Murad, M.A. and Cushman, J.H., 1996. Multiscale flow and deformation in hydrophilic swelling porous media. International Journal of Engineering Science, 34(3), pp.313-338.
[39] Nia, H.T., Han, L., Li, Y., Ortiz, C., and Grodzinsky, A., 2011. Poroelasticity of cartilage at the nanoscale. Biophys. J., 101, pp.2304-2313.
[40] Ozkaya, N., Nordin, M., Goldsheyder, D., and Leger, D., 1999. Fundamentals of Biomechanics: Equilibrium, Motion, and Deformation. Springer, New York.
[41] Pazy, A., 2012. Semigroups of Linear Operators and Applications to Partial Differential Equations (Vol. 44). Springer Science & Business Media.
[42] Rodrigo, C., Gaspar, F.J., Hu, X. and Zikatanov, L.T., 2016. Stability and monotonicity for some discretizations of the Biot's consolidation model. Computer Methods in Applied Mechanics and Engineering, 298, pp.183-204.
[43] Rohan, E., Shaw, S., Wheeler, M.F. and Whiteman, J.R., 2013. Mixed and Galerkin finite element approximation of flow in a linear viscoelastic porous medium. Computer Methods in Applied Mechanics and Engineering, 260, pp.78-91.
[44] Russell, T.F. and Wheeler, M.F., 1983. Finite element and finite difference methods for continuous flows in porous media. In: The Mathematics of Reservoir Simulation, pp.35-106. Society for Industrial and Applied Mathematics.
[45] Sacco, R., Guidoboni, G., and Mauri, A.G., 2019. A Comprehensive Physically Based Approach to Modeling in Bioengineering and Life Sciences. Elsevier Academic Press.
[46] Sanchez-Palencia, E., 1980. Non-Homogeneous Media and Vibration Theory. Lecture Notes in Physics 127, Springer-Verlag.
[47] Showalter, R.E., 1974. Degenerate evolution equations and applications. Indiana University Mathematics J., 23(8), pp.655-677.
[48] Showalter, R.E., 1996. Monotone Operators in Banach Space and Nonlinear Partial Differential Equations. AMS, Mathematical Surveys and Monographs, 49.
[49] Showalter, R.E., 2000. Diffusion in poro-elastic media. J. Mathematical Analysis and Application, 251, pp.310-340.
[50] Terzaghi, K., 1925. Principle of Soil Mechanics. Eng. News Record, A Series of Articles.
[51] van Duijn, C.J. and Mikelic, A., 2019. Mathematical Theory of Nonlinear Single-Phase Poroelasticity. Preprint hal-02144933, Lyon.
[52] Verri, M., Guidoboni, G., Bociu, L., and Sacco, R., 2018. The role of structural viscoelasticity in deformable porous media with incompressible constituents: applications in biomechanics. Mathematical Biosciences and Engineering, 15(4), pp.933-959.
[53] Visintin, A., 2012. On the homogenization of visco-elastic processes. The IMA Journal of Applied Mathematics, 77(6), pp.869-886.
[54] Zenisek, A., 1984. The existence and uniqueness theorem in Biot's consolidation theory. Appl. Math., 29, pp.194-211.
DROPLET MOTION WITH CONTACT-LINE FRICTION: LONG-TIME ASYMPTOTICS IN COMPLETE WETTING

LORENZO GIACOMELLI, MANUEL V. GNANN, AND DIRK PESCHKA

Abstract. We consider the thin-film equation for a class of free boundary conditions modelling friction at the contact line, as introduced by E and Ren. Our analysis focuses on formal long-time asymptotics of solutions in the perfect wetting regime. In particular, through the analysis of quasi-self-similar solutions, we characterize the profile and the spreading rate of solutions depending on the strength of friction at the contact line, as well as their (global or local) corrections, which are due to the dynamical nature of the free boundary conditions. These results are complemented with full transient numerical solutions of the free boundary problem.

Acknowledgements. LG acknowledges discussions with Maria Chiricotto. MVG appreciates discussions with Jochen Denzler, Robert McCann, and Christian Seis regarding self-similar asymptotics preceding the preparation of this work. MVG was partially supported by the Deutsche Forschungsgemeinschaft (DFG) under project # 334362478. DP thanks Luca Heltai and Marita Thomas for fruitful discussions and acknowledges the financial support within the DFG-Priority Programme 2171 by project # 422792530.

DOI: 10.1098/rspa.2023.0090. arXiv: 2302.03005 (https://export.arxiv.org/pdf/2302.03005v2.pdf).
Introduction
1.1. Thin-film equations. Thin-film equations are a class of fourth-order degenerate-parabolic equations whose prototype, in one space dimension, is
∂_t h + ∂_y (m(h) ∂³_y h) = 0 on {h > 0} := {(t, y) ∈ (0, ∞) × R : h(t, y) > 0},   (1.1)

where m is a mobility which degenerates at h = 0.
Thin-film equations are formally derived as leading-order approximations of the Navier-Stokes equations in a suitable regime, which is known as the lubrication approximation [1-3]. In this case, h represents the height of a thin layer or droplet of a Newtonian fluid on a flat solid substrate and the mobility m often has the form m(h) = h³ + b^{3−n} h^n, the parameters b and n being related to a slip condition imposed at the liquid-solid interface: in particular, n = 1 corresponds to Greenspan's slip condition [4], n = 2 corresponds to (linear) Navier slip and n = 3 (or b = 0) corresponds to no slip at the substrate. The case m(h) = h may also be seen as the lubrication approximation of the two-dimensional Hele-Shaw flow in the half-space [5], and in this case the lubrication approximation has been given rigorous justifications [5-9]. Nonlinear free boundary problems with m(h) = 1 are discussed in the context of surface diffusion [10]. General traveling wave solutions for m(h) = h^n with 0 ≤ n < 3 and beyond are discussed in [11-13].
We are interested in situations in which an interface exists which separates a dry region of the substrate from a wetted one:
{h(t, ·) > 0} = (s_−(t), s_+(t)).   (1.2)

In this case, (1.1) is complemented by two obvious boundary conditions at s_±(t), respectively: the defining condition

h(t, s_±(t)) = 0,   (1.3)

and the kinematic condition

h^{−1} m(h) ∂³_y h |_{y=s_±(t)} = ṡ_±(t).   (1.4)

We are interested in situations in which the support {h(t, ·) > 0} is allowed to evolve in time, leading to a genuine free boundary problem for which a third condition is required. Appropriate choices of such a third boundary condition have been debated for decades by now. Starting with the work of Bernis and Friedman [14], most of the analytical theory concentrated on the condition of constantly zero contact angle, focusing on existence of weak solutions [15-19], on their qualitative properties [20-32], and on well-posedness in weighted spaces [33-42]. For the constant, non-zero contact angle case we refer to [43-51]. For quasistatic models of droplet evolution we refer to [52] for the Stokes flow and to [4,53,54] for thin-film models.
We focus on a class of contact-line conditions, first considered in [4,55] in special cases, that relate the contact-line velocity, ṡ, to the dynamic contact angle. Analytical works for this class are limited to a few contributions [56-58]. The class has been motivated and generalized by E and Ren building on a simple, basic principle: consistency with the second law of thermodynamics in the isothermal case [59-61]. We now introduce this class directly at the level of lubrication theory.
1.2. Contact-line frictional laws in complete wetting. At leading order in lubrication approximation, and after a normalization, the surface energy reads

E[h(t)] = ∫_{s_−(t)}^{s_+(t)} [ ½ (∂_y h)² − S ] dy,   (1.5)

where S ∈ R is (proportional to) the so-called spreading coefficient [6]. We are interested in a regime where the spreading coefficient of the solid/liquid/vapor system vanishes, i.e., the complete wetting regime S = 0. This is a generic situation in the so-called "moist" case, which concerns for instance a surface which has been pre-exposed to vapor [62]: thus

E[h(t)] = ½ ∫_{s_−(t)}^{s_+(t)} (∂_y h)² dy   (1.6)

in what follows (for the case S ≠ 0, see e.g. the discussions in [45,63-65]). After integrations by parts (see [58, formula (1.13)]), (1.1)-(1.4) formally yield

d/dt E[h(t)] = ṡ_+(t) ½ (∂_y h)²|_{y=s_+} − ṡ_−(t) ½ (∂_y h)²|_{y=s_−} + ∫_{s_−(t)}^{s_+(t)} ∂_y h (∂_t ∂_y h) dy
            = −ṡ_+(t) [ ½ (∂_y h)² − h ∂²_y h ]|_{y=s_+} + ṡ_−(t) [ ½ (∂_y h)² − h ∂²_y h ]|_{y=s_−} − ∫_{s_−(t)}^{s_+(t)} m(h) (∂³_y h)² dy.   (1.7)
If by contradiction h ∂²_y h|_{y=s_±(t)} =: k_± ≠ 0, then one would have

½ ∂_y (∂_y h)² = (∂_y h)(∂²_y h) ∼ k_± h^{−1} ∂_y h = k_± ∂_y log h as y → s_±(t)^∓,

whence (∂_y h)² would become unbounded as y → s_±(t)^∓. Hence, h ∂²_y h|_{y=s_±(t)} = 0 for solutions with finite slope at the contact line, which is the class we are interested in. Therefore, (1.7) reads as

d/dt E[h(t)] = −½ ṡ_+(t) (∂_y h)²|_{y=s_+(t)} + ½ ṡ_−(t) (∂_y h)²|_{y=s_−(t)} − ∫_{s_−(t)}^{s_+(t)} m(h) (∂³_y h)² dy.   (1.8)
Consistency with the second law of thermodynamics implies that the first two terms on the right-hand side of (1.8) must be non-positive. A simple form of constitutive relations which enforces non-positivity is

(∂_y h)²|_{y=s_±(t)} = f_ℓ(±ṡ_±) with f_ℓ(σ)σ ≥ 0 for all σ ∈ R.   (1.9)
According to (1.9), receding fronts with speed σ < 0 only exist if f ℓ (σ) = 0. Hence, in complete wetting, receding fronts (if any) have zero contact angle. Furthermore, note that the more standard zero contact-angle condition (∂ y h)(t, s ± (t)) = 0 corresponds to the limit of vanishing contact-line dissipation, f ℓ ≡ 0. For the analogue of (1.9) in partial wetting (S < 0) we refer to [58,66].
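To make the sign structure of (1.9) concrete, here is a minimal sketch (our own, not from the paper) of two prototypical power-law friction laws: a one-sided law that vanishes for receding speeds σ < 0, consistent with the zero receding contact angle just noted, and a two-sided odd law; both satisfy g(σ)σ ≥ 0, as the dissipation argument requires.

```python
def g_one_sided(sigma, alpha):
    # max(0, |sigma|^(alpha-1) * sigma): zero friction on receding fronts
    return max(0.0, abs(sigma) ** (alpha - 1) * sigma)

def g_two_sided(sigma, alpha):
    # odd power law |sigma|^(alpha-1) * sigma: friction in both directions
    return abs(sigma) ** (alpha - 1) * sigma

alpha = 1.5
speeds = [-2.0, -0.5, 0.5, 2.0]   # sigma = 0 avoided so this also works for alpha < 1
checks_one = [g_one_sided(s, alpha) * s >= 0 for s in speeds]
checks_two = [g_two_sided(s, alpha) * s >= 0 for s in speeds]
receding = [g_one_sided(s, alpha) for s in speeds if s < 0]
```

With the one-sided law, every receding speed carries zero contact-line friction (and hence zero contact angle by (1.9)), while advancing speeds are penalized like |σ|^α.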
The simple argument above can in fact be embedded into a more general formal gradientflow structure of the system, based on a separation of dual forces into a bulk and a contactline dissipation. This structure, which we elaborate in Appendix A, is also at the basis of the discretization that we adopt in numerical simulations.
1.3. Goals. Intermediate asymptotics for (1.1) with the Ren-E boundary condition (1.9), such as the Voinov-Cox-Hocking law (see the discussions in [13,67-70]), has been formally worked out in [57]. Instead, here we will focus on the long-time dynamics. In complete wetting, it is expected that generic solutions spread indefinitely, covering the whole real line with a layer of zero thickness in the limit as $t \to +\infty$. Therefore, the long-time dynamics can be equivalently captured by considering a power-law form $m(h) = h^n$ of the mobility, instead of its full form $m(h) = h^3 + b^{3-n}h^n$. Then (1.1) reads
\[
\partial_t h + \partial_y\big(h^n \partial_y^3 h\big) = 0 \quad\text{on } \{h > 0\}, \qquad n \in [1,3), \tag{1.10}
\]
and we are led to consider the following free boundary problem:
\[
\begin{aligned}
\partial_t h + \partial_y\big(h^n \partial_y^3 h\big) &= 0 &&\text{for } y \in (s_-(t), s_+(t)), &&\text{(1.11a)}\\
h &= 0 &&\text{at } y = s_\pm(t), &&\text{(1.11b)}\\
(\partial_y h)^2 &= f_\ell(\pm\dot s_\pm(t)) &&\text{at } y = s_\pm(t), &&\text{(1.11c)}\\
h^{n-1}\partial_y^3 h &= \dot s_\pm(t) &&\text{at } y = s_\pm(t). &&\text{(1.11d)}
\end{aligned}
\]
In the absence of contact-line friction (f ℓ ≡ 0), (1.11) admits self-similar solutions [71], which are expected to describe the long-time dynamics of generic solutions (however, rigorous results are available for n = 1 only [36,39,[72][73][74][75]). If f ℓ ̸ ≡ 0, (1.11c) breaks the self-similar structure of (1.11a). Nevertheless, long-time dynamics may be inferred from the analysis of (quasi-)self-similar solutions, where the non-self-similar part of the operator is seen as a small modulation in time. This method has already been applied to thin-film equations for related asymptotic studies [67,69].
1.4. The model problem. We assume prototypical power-law forms of $f_\ell$, i.e.,
\[
f_\ell(\sigma) = d\,g_\alpha(\sigma), \quad\text{where } g_\alpha(\sigma) = \max\big\{0, |\sigma|^{\alpha-1}\sigma\big\} \ \text{ or } \ g_\alpha(\sigma) = |\sigma|^{\alpha-1}\sigma, \tag{1.12}
\]
and $\alpha > 0$ and $d > 0$ are constants encoding the strength of friction at the contact line. For $S = 0$, the former $g_\alpha$ allows for receding fronts, while the latter alternative does not. Equation (1.11a) and the kinematic condition (1.11d) have two scaling invariances,
\[
(t, y, h) \to (T_* \hat t, Y_* \hat y, H_* \hat h), \quad\text{where } T_* = Y_*^4 H_*^{-n}. \tag{1.13}
\]
We use one invariance to normalize the droplet's mass to be 2, i.e.
\[
M := \int_{s_-(t)}^{s_+(t)} h(t,y)\,dy = 2 H_* Y_*. \tag{1.14}
\]
With the choice in (1.12), (1.11c) reads as
\[
\Big(\partial_{\hat y}\hat h\big|_{\hat y = \hat s_\pm(\hat t)}\Big)^2 = D^2\, g_\alpha\Big(\pm\frac{d\hat s_\pm}{d\hat t}(\hat t)\Big), \quad\text{where } D^2 := d\,\frac{Y_*^2}{H_*^2}\,\frac{Y_*^\alpha}{T_*^\alpha}.
\]
If $\alpha \neq \frac{4}{n+3}$, we may use the second scaling invariance to fix the constant $D > 0$ to be 1:
\[
1 = D^2 = \frac{d\,Y_*^{2+\alpha}}{H_*^2\, T_*^\alpha} \overset{(1.13)}{=} d\,Y_*^{2-3\alpha} H_*^{n\alpha-2} \overset{(1.14)}{=} d\Big(\frac{M}{2}\Big)^{2-3\alpha} H_*^{\alpha(n+3)-4} \;\Longleftrightarrow\; H_*^{4-\alpha(n+3)} = d\Big(\frac{M}{2}\Big)^{2-3\alpha}. \tag{1.15}
\]
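The exponent bookkeeping behind (1.15) is easy to get wrong. As a sanity check (ours, not part of the original argument), the following Python sketch chooses $H_*$ by (1.15), derives $Y_*$ and $T_*$ from (1.14) and (1.13), and recomputes $D^2$, confirming $D = 1$ whenever $\alpha \neq 4/(n+3)$:

```python
def D_squared(d, n, alpha, M=2.0):
    """Pick H_* from (1.15), then Y_* from the mass normalization (1.14)
    and T_* from (1.13); recompute D^2 = d * Y_*^(2+alpha) * H_*^(-2) * T_*^(-alpha)."""
    H = (d * (M / 2.0) ** (2.0 - 3.0 * alpha)) ** (1.0 / (4.0 - alpha * (n + 3.0)))
    Y = M / (2.0 * H)            # (1.14): M = 2 H_* Y_*
    T = Y ** 4 * H ** (-n)       # (1.13): T_* = Y_*^4 H_*^(-n)
    return d * Y ** (2.0 + alpha) * H ** (-2.0) * T ** (-alpha)

# a few (d, n, alpha) with alpha != 4/(n+3)
for (d, n, alpha) in [(2.0, 1, 2.0), (0.3, 2, 0.5), (5.0, 1.5, 3.0)]:
    assert abs(D_squared(d, n, alpha) - 1.0) < 1e-9
```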
Therefore, removing hats, we will consider the following free boundary problem with initial mass $M = 2$ and with $D = 1$ if $\alpha \neq \frac{4}{n+3}$:
\[
\begin{aligned}
\partial_t h + \partial_y\big(h^n \partial_y^3 h\big) &= 0 &&\text{for } y \in (s_-(t), s_+(t)), &&\text{(1.16a)}\\
h &= 0 &&\text{at } y = s_\pm(t), &&\text{(1.16b)}\\
\big(\tfrac1D \partial_y h\big)^2 &= g_\alpha(\pm\dot s_\pm(t)) &&\text{at } y = s_\pm(t), &&\text{(1.16c)}\\
h^{n-1}\partial_y^3 h &= \dot s_\pm(t) &&\text{at } y = s_\pm(t), &&\text{(1.16d)}\\
\int_{s_-(t)}^{s_+(t)} h(t,y)\,dy &= 2. &&&&\text{(1.16e)}
\end{aligned}
\]
We seek even solutions, with s = s(t) = s + (t) = −s − (t) denoting the position of the right free boundary. Furthermore, since we are interested in the long-time dynamics, we seek solutions with advancing contact linesṡ > 0, whence (1.12) reduces to g α (ṡ) = (ṡ) α . It is convenient to pass to a fixed domain by the change of variables
\[
h(t,y) = s^{-1} H(t,x), \qquad x = s^{-1}y. \tag{1.17}
\]
Then, taking symmetry into account, (1.16) reads as
\[
\begin{aligned}
s^{n+4}\partial_t H - s^{n+3}\dot s\,\partial_x(xH) + \partial_x\big(H^n\partial_x^3 H\big) &= 0 &&\text{for } x \in (0,1), &&\text{(1.18a)}\\
H &= 0 &&\text{at } x = 1, &&\text{(1.18b)}\\
s^{-4}\big(D^{-1}\partial_x H\big)^2 &= (\dot s)^\alpha &&\text{at } x = 1, &&\text{(1.18c)}\\
s^{-(n+3)} H^{n-1}\partial_x^3 H &= \dot s &&\text{at } x = 1, &&\text{(1.18d)}\\
\partial_x H = \partial_x^3 H &= 0 &&\text{at } x = 0, &&\text{(1.18e)}\\
\int_0^1 H(t,x)\,dx &= 1, &&&&\text{(1.18f)}
\end{aligned}
\]
which we will consider in what follows. Note that the combination of (1.18c) and (1.18d) yields
\[
\big(D^{-1}\partial_x H\big)^2 = s^{4-\alpha(n+3)}\big(H^{n-1}\partial_x^3 H\big)^\alpha \quad\text{at } x = 1. \tag{1.19}
\]
A generic symmetric solution of problem (1.16) is shown in Fig. 1, where we choose initial conditions highlighting transient behavior and convergence to self-similar solutions for $h(t,y)$. A special role is played by the balance condition
\[
\alpha = \frac{4}{n+3}. \tag{1.20}
\]
In §2 we discuss the case (1.20), identifying for any D ≥ 0 a unique self-similar profile (cf. Theorem 2.2); we also discuss its behavior with respect to D and its stability. In §3 we investigate quasiself-similar solutions. Our analysis shows that the long-time dynamics is dominated by:
• contact-line dissipation if α < 4 n+3 (strong contact-line friction, §3.2): in particular, the long-time scaling law depends on α;
• bulk dissipation if α > 4 n+3 (weak contact-line friction, §3.3), leading to the same long-time scaling law as in the case of null contact-line dissipation.
For (1.20), both dissipations contribute equally and we speak of the balanced case. A summary of the results obtained is given in §4, which also contains their discussion and indicates further directions. Finally, in Appendix A we detail the gradient-flow formulation of the problem and we discuss how this formulation drives our numerical scheme.
2. Self-similar solutions
We seek symmetric and mass-preserving self-similar solutions of (1.16) in the case when the balance condition (1.20) holds. In terms of $H$, this translates into the ansatz
\[
H(t,x) = H(x) \quad\text{and}\quad s(t) = \big(\gamma^{-1}B^2 t\big)^\gamma, \qquad \gamma = \frac{1}{n+4}, \tag{2.1}
\]
where $B > 0$ is an unknown constant. Inserting (2.1) into (1.18), integrating once using (1.18e) and recalling (1.19), we find
\[
\begin{aligned}
H^{n-1}\frac{d^3H}{dx^3} &= B^2 x &&\text{in } (0,1), &&\text{(2.2a)}\\
\frac{dH}{dx} &= 0 &&\text{at } x = 0, &&\text{(2.2b)}\\
H &= 0 &&\text{at } x = 1, &&\text{(2.2c)}\\
\Big(\frac{dH}{dx}\Big)^2 &= D^2\Big(H^{n-1}\frac{d^3H}{dx^3}\Big)^\alpha \overset{\text{(2.2a)}}{=} D^2 B^{2\alpha} &&\text{at } x = 1, &&\text{(2.2d)}\\
\int_0^1 H(x)\,dx &= 1, &&&&\text{(2.2e)}
\end{aligned}
\]
where we recall that in this case $D$ cannot be set to 1 by scaling. For $n = 1$ we have $\alpha = 1$, and (2.2) can be integrated explicitly to the unique solution
\[
H_D(x) = \frac{B_D D}{2}(1-x^2) + \frac{B_D^2}{24}(1-x^2)^2, \qquad B = B_D := \frac{15}{2}\Big(-D + \sqrt{\tfrac45 + D^2}\Big). \tag{2.3}
\]
In particular,
\[
B_D^2 \begin{cases} = 45 & \text{for } D = 0,\\ \sim 9/D^2 & \text{as } D \to \infty, \end{cases} \qquad\text{hence}\qquad H_D(x) \begin{cases} = \frac{15}{8}(1-x^2)^2 & \text{for } D = 0,\\ \to \frac32(1-x^2) & \text{as } D \to \infty. \end{cases} \tag{2.4}
\]
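The explicit profile (2.3)-(2.4) can be verified independently. The snippet below (our addition) checks the quadratic equation satisfied by $B_D$, the mass constraint, the ODE (2.2a) by finite differences, and both limits in (2.4):

```python
import math

def B_D(D):
    # (2.3): positive root of B^2 + 15 D B - 45 = 0 (equivalent to the mass constraint)
    return 7.5 * (math.sqrt(D * D + 0.8) - D)

def H_D(x, D):
    B = B_D(D)
    return 0.5 * B * D * (1 - x * x) + (B * B / 24.0) * (1 - x * x) ** 2

for D in (0.0, 0.5, 1.0, 10.0):
    B = B_D(D)
    assert abs(B * B + 15 * D * B - 45) < 1e-9       # quadratic from the mass constraint
    assert abs(D * B / 3 + B * B / 45 - 1) < 1e-12   # exact mass: int_0^1 H_D dx = 1
    # d^3 H_D / dx^3 = B^2 x, checked at x = 0.3 by a central finite difference
    e = 1e-3
    d3 = (H_D(0.3 + 2 * e, D) - 2 * H_D(0.3 + e, D)
          + 2 * H_D(0.3 - e, D) - H_D(0.3 - 2 * e, D)) / (2 * e ** 3)
    assert abs(d3 - B * B * 0.3) < 1e-4

# limits (2.4)
assert abs(B_D(0.0) ** 2 - 45) < 1e-12
assert abs(1e4 * B_D(1e4) - 3) < 1e-6                # D * B_D -> 3 as D -> infinity
assert abs(H_D(0.5, 1e4) - 1.5 * (1 - 0.25)) < 1e-6  # H_D -> (3/2)(1 - x^2)
```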
Remark 2.1. We remark that $H_D$ for $D = 0$ coincides with the Smyth-Hill solution [76]. Further note that the profile (2.3) was formulated already in [72, Eq. (6.1)] and later on in [37], without a justification of the contact-angle dynamics; that is, instead of (2.2d) the condition $\frac{dH}{dx} = \text{const}$ at $x = 1$ was assumed (cf. [37, (2.3)]).
For $n > 1$ the solution of (2.2) is not explicit. However, the situation is analogous:

Theorem 2.2. Let $n \in [1,3)$. For any $D \ge 0$ there exists a unique solution $(B_D, H_D)$ to (2.2). In addition, $B_{D=0} > 0$, $(B_{D=0}, H_{D=0})$ is the unique solution to (2.2) with $D = 0$, and $D B_D^\alpha \to 3$ and $H_D(x) \to \frac32(1-x^2)$ in $C([0,1])$ as $D \to +\infty$.

We conclude the section with the proof of Theorem 2.2.
Proof of Theorem 2.2. We rescale (2.2) as follows:
\[
\hat x = \hat x_B\, x \quad\text{and}\quad \hat H = \hat x_B^{-1} H, \qquad \hat x_B := B^{\frac{2}{n+4}}, \tag{2.5}
\]
so that (2.2) turns into
\[
\begin{aligned}
\hat H^{n-1}\frac{d^3\hat H}{d\hat x^3} &= \hat x &&\text{for } \hat x \in (0, \hat x_B), &&\text{(2.6a)}\\
\frac{d\hat H}{d\hat x} &= 0 &&\text{at } \hat x = 0, &&\text{(2.6b)}\\
\Big(\hat H, \frac{d\hat H}{d\hat x}\Big) &= \Big(0, -D B^{\alpha-\frac{4}{n+4}}\Big) \overset{(1.20)}{=} \Big(0, -D B^{\frac{4}{(n+3)(n+4)}}\Big) &&\text{at } \hat x = \hat x_B, &&\text{(2.6c)}\\
\int_0^{\hat x_B} \hat H\,d\hat x &= 1. &&&&\text{(2.6d)}
\end{aligned}
\]
In [67, Theorem 2.1], and in [71, Theorem 1.2] for $D = 0$ (up to a rescaling), it is shown that the system
\[
\begin{aligned}
u^{n-1}\frac{d^3u}{dy^3} &= y &&\text{for } y \in (0, y_\theta), &&\text{(2.7a)}\\
\frac{du}{dy} &= 0 &&\text{at } y = 0, &&\text{(2.7b)}\\
\Big(u, \frac{du}{dy}\Big) &= (0, -\theta) &&\text{at } y = y_\theta, &&\text{(2.7c)}\\
\int_0^{y_\theta} u\,dy &= 1 &&&&\text{(2.7d)}
\end{aligned}
\]
has for any $\theta \ge 0$ a unique solution $(y_\theta, u_\theta)$ (see [37, Theorem 3.2 and Theorem 3.3] for an alternative proof of existence and uniqueness). Furthermore, $y_\theta$ is a decreasing function of $\theta$ with $0 < y_0 < \infty$ and $y_\infty = 0$. Note that the proof in [67] applies to inhomogeneous mobilities, but carries over to our situation without any change of the reasoning.
If $D = 0$, (2.6) coincides with (2.7) with $\theta = 0$, hence Theorem 2.2 holds with $(\hat x_B, \hat H) = (y_0, u_0)$. If $D > 0$, for any $B > 0$ let $\theta_B := D B^{\frac{4}{(n+3)(n+4)}}$ and let $(y_B, u_B)$ be the unique solution to (2.7) with $\theta = \theta_B$. Since $B \mapsto \theta_B$ is increasing, in view of the above, $B \mapsto y_B$ is decreasing from $y_0$ to $y_\infty = 0$: hence there exists a unique $B$ (whence a unique $\theta$) such that $y_B = \hat x_B = B^{\frac{2}{n+4}}$. This proves the well-posedness of (2.2). It remains to consider the limit as $D \to +\infty$. Let $\theta_D$, $B_D$, and $y_D = \hat x_D$ be the unique constants identified above. By construction, they are related by
\[
\theta_D = D B_D^{\frac{4}{(n+3)(n+4)}} \quad\text{and}\quad y_D = B_D^{\frac{2}{n+4}}.
\]
If by contradiction $\theta_D \to \theta \in [0, +\infty)$ for a subsequence $D \to +\infty$ (not relabeled), then on one hand $B_D \to 0$, hence $y_D \to 0$; on the other hand, (2.7) would imply that $y_D \to y_\theta \in (0, y_0]$, a contradiction. Therefore $\theta_D \to +\infty$, hence $y_D = B_D^{\frac{2}{n+4}} \to 0$ as $D \to +\infty$. Hence, in the limit $D \to +\infty$, it follows from (2.2), because of $B_D \to 0$, that $H_D(x)$ converges uniformly in $[0,1]$ to the unique solution of
\[
\frac{d^3H}{dx^3} = 0 \;\text{ in } (0,1), \qquad H(1) = \frac{dH}{dx}(0) = 0, \qquad \int_0^1 H(x)\,dx = 1,
\]
i.e. to $\frac32(1-x^2)$. Hence $D B_D^\alpha \to 3$ and $H_D(x) \to \frac32(1-x^2)$ as $D \to \infty$. $\square$
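For $n = 1$ the shooting procedure behind Theorem 2.2 degenerates to a one-parameter bisection, since (2.2a) integrates to an explicit quartic. The stdlib-only Python sketch below (our addition) recovers $B_D$ by bisection on the mass constraint and matches the closed form (2.3); for $n > 1$ one would integrate (2.2a) numerically inside the same loop instead:

```python
import math

# For n = 1, alpha = 1, (2.2a) integrates to the quartic
#   H(x) = a + c x^2/2 + B^2 x^4/24,
# with c = -D*B - B^2/6 (so that dH/dx(1) = -D*B, consistent with (2.2d))
# and a = -c/2 - B^2/24 (so that H(1) = 0). The remaining unknown B is fixed
# by bisection on the mass constraint (2.2e).
def mass(B, D):
    c = -D * B - B * B / 6.0
    a = -c / 2.0 - B * B / 24.0
    return a + c / 6.0 + B * B / 120.0   # int_0^1 H dx for the quartic above

def solve_B(D, lo=1e-6, hi=100.0):
    for _ in range(200):                 # mass(B, D) is increasing in B on (0, inf)
        mid = 0.5 * (lo + hi)
        if mass(mid, D) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for D in (0.0, 1.0, 3.0):
    B_closed = 7.5 * (math.sqrt(D * D + 0.8) - D)   # closed form (2.3)
    assert abs(solve_B(D) - B_closed) < 1e-8
assert abs(solve_B(0.0) ** 2 - 45.0) < 1e-6          # B^2 = 45 at D = 0, cf. (2.4)
```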
3. Quasi-self-similar solutions

3.1. Scaling and ansatz. As we have just seen, only the case $\alpha = \frac{4}{n+3}$ yields exact self-similar solutions to (1.18). For $\alpha \neq \frac{4}{n+3}$, we may nevertheless consider quasi-self-similar solutions, which are expected to describe the large-time asymptotics of generic ones. As mentioned in §1.3, this method has already been applied to thin-film equations for related asymptotic studies [67,69]. Quasi-self-similar solutions are characterized by ignoring the explicit dependence of $H$ on time in (1.18): the term $\partial_t H$ is dropped, while the dependence on time through $s$ is retained. We may then integrate (1.18a) using the boundary conditions (1.18b), (1.18d), and $|\dot s| < \infty$. Recalling also (1.19), this yields
\[
\begin{aligned}
H^{n-1}\partial_x^3 H &= s^{n+3}\dot s\, x &&\text{for } x \in (0,1), &&\text{(3.1a)}\\
H &= 0 &&\text{at } x = 1, &&\text{(3.1b)}\\
(\partial_x H)^2 &= s^{4-\alpha(n+3)}\big(H^{n-1}\partial_x^3 H\big)^\alpha &&\text{at } x = 1, &&\text{(3.1c)}\\
\partial_x H &= 0 &&\text{at } x = 0, &&\text{(3.1d)}\\
\int_0^1 H\,dx &= 1, &&&&\text{(3.1e)}
\end{aligned}
\]
where in view of (1.15) we have assumed without loss of generality D = 1. The free boundary condition (3.1c) suggests that for t ≫ 1, i.e. s ≫ 1, the solution is
• dominated by contact-line friction if α < 4 n+3 (strong contact-line friction); • a perturbation of the complete wetting solution if α > 4 n+3 (weak contact-line friction). In the next two sections we will show that this is indeed the case. To this aim, we will use the asymptotic expansion
\[
s^{n+4}(t) = s_0^{n+4} + s_1^{n+4}(1 + o(1)), \qquad H(t,x) = H_0(x) + \omega H_1(x) + O(\omega^2), \tag{3.2}
\]
with s 0 (t) ≫ 1 and s 1 (t) ≪ s 0 (t) for t ≫ 1, and ω = ω(s) ≪ 1 as s → ∞. Under the expansion (3.2), we obviously have from (3.1b) and (3.1d) that
\[
H_0(1) = 0, \quad \frac{dH_0}{dx}(0) = 0, \quad H_1(1) = 0, \quad \frac{dH_1}{dx}(0) = 0, \tag{3.3}
\]
and from (3.1e) that
\[
\int_0^1 H_0\,dx = 1, \qquad \int_0^1 H_1\,dx = 0. \tag{3.4}
\]
On the other hand, the leading order term in the contact-line condition (3.1c) depends on the sign of α − 4 n+3 . This motivates distinguishing the two cases.
3.2. The case $\alpha < \frac{4}{n+3}$: strong contact-line friction. Since $\alpha < \frac{4}{n+3}$ and $s_0 \gg 1$, using (3.2), at leading order the bulk equation (3.1a) and the contact-line condition (3.1c) are
\[
H_0^{n-1}\frac{d^3H_0}{dx^3} = s_0^{n+3}\dot s_0\, x \;\text{ for } x \in (0,1) \qquad\text{and}\qquad H_0^{n-1}\frac{d^3H_0}{dx^3} = 0 \;\text{ at } x = 1, \tag{3.5}
\]
respectively. The two equations are obviously incompatible, pointing to the necessity of considering the correction $\omega H_1$. Therefore,
\[
\frac{d^3H_0}{dx^3} = 0 \quad\text{in } (0,1) \tag{3.6}
\]
and the leading-order terms in (3.1a) and (3.1c) are, respectively,
\[
\omega\Big(H_0^{n-1}\frac{d^3H_1}{dx^3} + (n-1)H_0^{n-2}\frac{d^3H_0}{dx^3}H_1\Big) = s_0^{n+3}\dot s_0\, x \quad\text{for } x \in (0,1), \tag{3.7}
\]
\[
\Big(\frac{dH_0}{dx}\Big)^2 = s_0^{4-\alpha(n+3)}\,\omega^\alpha\Big(H_0^{n-1}\frac{d^3H_1}{dx^3} + (n-1)H_0^{n-2}\frac{d^3H_0}{dx^3}H_1\Big)^\alpha \quad\text{at } x = 1. \tag{3.8}
\]
Together with (3.3) and (3.4), (3.6) yields $n$-independently the unique solution
\[
H_0 = \tfrac32(1-x^2) \quad\text{for } x \in [-1,1]. \tag{3.9}
\]
On the other hand, separation of variables in (3.7) yields, with (3.6) and (3.9),
\[
\frac{d^3H_1}{dx^3} = x\,(1-x^2)^{1-n} \quad\text{in } (0,1), \tag{3.10}
\]
\[
\omega = \Big(\tfrac23\Big)^{n-1} s_0^{n+3}\dot s_0, \tag{3.11}
\]
where, since the expansion (3.2) only depends on the product $\omega H_1$, we were free to choose the normalization factor of $\omega$. Now, we use
\[
\begin{aligned}
0 &\overset{(3.4)}{=} \int_0^1 \frac{dx}{dx}\,H_1\,dx = 1\cdot H_1(1) - 0\cdot H_1(0) - \int_0^1 x\,\frac{dH_1}{dx}\,dx \\
&\overset{(3.3)}{=} -\frac12\int_0^1 \frac{d}{dx}(x^2-1)\,\frac{dH_1}{dx}\,dx = -0\cdot\frac{dH_1}{dx}(1) + \frac12\,\frac{dH_1}{dx}(0) + \frac12\int_0^1 (x^2-1)\,\frac{d^2H_1}{dx^2}\,dx \\
&\overset{(3.3)}{=} \frac16\int_0^1 \frac{d}{dx}(x^3-3x+2)\,\frac{d^2H_1}{dx^2}\,dx = 0\cdot\frac{d^2H_1}{dx^2}(1) - \frac13\,\frac{d^2H_1}{dx^2}(0) - \frac16\int_0^1 (x^3-3x+2)\,\frac{d^3H_1}{dx^3}\,dx,
\end{aligned}
\]
giving with (3.10),
\[
\frac{d^2H_1}{dx^2}(0) = -C_1, \qquad C_1 := \frac12\int_0^1 x\,(x^3-3x+2)(1-x^2)^{1-n}\,dx. \tag{3.12}
\]
Thus we can integrate (3.10) as follows:
\[
\frac{d^2H_1}{dx^2}(x) \overset{(3.10),(3.12)}{=} -C_1 + \int_0^x x_1(1-x_1^2)^{1-n}\,dx_1, \qquad \frac{dH_1}{dx}(x) \overset{(3.3)}{=} -C_1 x + \int_0^x x_1(x-x_1)(1-x_1^2)^{1-n}\,dx_1,
\]
so that with (3.3),
\[
H_1(x) = \frac{C_1}{2}(1-x^2) - \int_x^1\!\!\int_0^{x_1} x_2\,(x_1-x_2)\,(1-x_2^2)^{1-n}\,dx_2\,dx_1 \quad\text{for } x \in [-1,1]. \tag{3.13}
\]
For $n = 1$ we have $C_1 = \frac{1}{10}$ and this solution reads
\[
H_1(x) = \frac{1}{20}(1-x^2) - \frac{1}{24}(1-x^4) = \frac{1}{120}(1-x^2)(1-5x^2) \quad\text{for } x \in [-1,1]. \tag{3.14}
\]
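The $n = 1$ correction (3.14) can be certified with exact rational arithmetic. The snippet below (our addition) checks the factorization, the zero-mass constraint (3.4), and $\frac{d^3H_1}{dx^3} = x$ from (3.10):

```python
from fractions import Fraction as F

# H_1 of (3.14) as exact coefficients of 1, x^2, x^4: (1/120)(1 - 6x^2 + 5x^4)
H1 = [F(1, 120), F(-6, 120), F(5, 120)]

def eval_poly(c, x):                 # c[k] multiplies x^(2k)
    return sum(ck * x ** (2 * k) for k, ck in enumerate(c))

# equals (1/20)(1 - x^2) - (1/24)(1 - x^4) at rational sample points
for k in range(0, 11):
    x = F(k, 10)
    assert eval_poly(H1, x) == F(1, 20) * (1 - x ** 2) - F(1, 24) * (1 - x ** 4)

# zero mass (3.4): int_0^1 H_1 dx = sum_k c_k / (2k + 1) = 0, exactly
assert sum(ck / (2 * k + 1) for k, ck in enumerate(H1)) == 0

# third derivative: only the x^4 term contributes, (5/120) * 24 * x = x, cf. (3.10)
assert F(5, 120) * 24 == 1
```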
For $n = 2$ we get $C_1 = \frac16(5 - 6\log 2)$ and the correction
\[
H_1(x) = \frac{C_1}{2}(1-x^2) - \frac14\Big(3 - 3x^2 - 2\log 4 + (x-1)^2\log(1-x) + (x+1)^2\log(1+x)\Big). \tag{3.15}
\]
Next, we compute $s_0$ and $\omega$. With help of (3.9) and (3.10), the contact-line condition (3.8) reduces to
\[
s_0^4\,(\dot s_0)^\alpha = 9, \tag{3.16}
\]
so that, normalizing $s_0(0) = 0$ through a time shift,
\[
s_0(t) = 3^{\frac{1-\gamma}{2}}\,\gamma^{-\gamma}\,t^\gamma, \qquad \gamma = \frac{\alpha}{\alpha+4} < \frac{1}{n+4} \tag{3.17}
\]
because $\alpha < \frac{4}{n+3}$, and consequently
\[
\omega(t) \overset{(3.16)}{=} 3^{\frac{1-\gamma}{2\gamma}}\Big(\tfrac23\Big)^{n-1}\big(s_0(t)\big)^{-\frac{1-\gamma(n+4)}{\gamma}} = \gamma^{1-\gamma(n+4)}\,2^{n-1}\,3^{\frac{6-n-(n+4)\gamma}{2}}\,t^{-(1-\gamma(n+4))}. \tag{3.18}
\]
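The exponent relations around (3.17)-(3.18) can be tabulated exactly. The following fragment (our addition) encodes $\gamma = \alpha/(\alpha+4)$ and the decay exponent $1 - \gamma(n+4)$, and confirms that strong friction $\alpha < 4/(n+3)$ is equivalent to $\gamma < 1/(n+4)$:

```python
from fractions import Fraction as F

def gamma(alpha):
    return F(alpha) / (F(alpha) + 4)          # (3.17)

def decay(alpha, n):
    return 1 - gamma(alpha) * (n + 4)          # omega ~ t^(-decay), cf. (3.18)

for n in (1, 2):
    a_bal = F(4, n + 3)                        # balanced exponent (1.20)
    # strong friction: alpha < 4/(n+3)  <=>  gamma < 1/(n+4)  <=>  omega decays
    for alpha in (F(1, 2) * a_bal, F(9, 10) * a_bal):
        assert gamma(alpha) < F(1, n + 4) and decay(alpha, n) > 0
    # balanced case: gamma = 1/(n+4), no decay, exact self-similarity
    assert gamma(a_bal) == F(1, n + 4)
```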
Finally, the correction $s_1$ is obtained from the next-to-leading order terms in equation (3.1a):
\[
s_1^{n+3}\dot s_1 = O(\omega^2) \overset{(3.18)}{=} O\big(t^{-2(1-\gamma(n+4))}\big) \;\Longleftrightarrow\; s_1^{n+4} = \begin{cases} O\big(t^{2(n+4)\gamma-1}\big) & \text{if } \gamma \neq \frac{1}{2(n+4)},\\ O(\log t) & \text{if } \gamma = \frac{1}{2(n+4)}, \end{cases} \tag{3.19}
\]
which, recalling the splitting of $s^{n+4}$ in (3.2), yields
\[
s(t) \overset{(3.17),(3.19)}{=} 3^{\frac{1-\gamma}{2}}\,\gamma^{-\gamma}\,t^\gamma \begin{cases} 1 + O\big(t^{-(1-\gamma(n+4))}\big) & \text{if } \gamma \neq \frac{1}{2(n+4)},\\ 1 + O\big(t^{-\frac12}\log t\big) & \text{if } \gamma = \frac{1}{2(n+4)}. \end{cases} \tag{3.21}
\]
The combination of (3.9), (3.13), and (3.18) in (3.2) yields at leading order as $s \to \infty$ (or equivalently $t \to \infty$) the quasi-self-similar profile $H$ according to
\[
H(t,x) = \tfrac32(1-x^2) + C_2\,t^{-(1-\gamma(n+4))}H_1(x) + O\big(t^{-2(1-\gamma(n+4))}\big) \;\text{ for } x \in [-1,1], \quad C_2 = \gamma^{1-\gamma(n+4)}\,2^{n-1}\,3^{\frac{6-n-(n+4)\gamma}{2}}, \tag{3.22}
\]
where $\gamma = \frac{\alpha}{\alpha+4}$ and $H_1$ is defined in (3.12)-(3.13). For $n = 1$ this equation reduces to
\[
H(t,x) = \tfrac32(1-x^2) + \frac{3^{\frac{5(1-\gamma)}{2}}\,\gamma^{1-5\gamma}}{120}\,t^{-(1-5\gamma)}(1-x^2)(1-5x^2) + O\big(t^{-2(1-5\gamma)}\big) \quad\text{for } x \in [-1,1]. \tag{3.23}
\]
3.3. The case $\alpha > \frac{4}{n+3}$: weak contact-line friction. On assuming $\alpha > \frac{4}{n+3}$, separation of variables in (3.1a) yields, at leading order for $s_0 \gg 1$,
\[
\dot s_0 = B_0^2\, s_0^{-(n+3)} \tag{3.24}
\]
and
\[
H_0^{n-1}\frac{d^3H_0}{dx^3} = B_0^2\, x \quad\text{in } (0,1) \tag{3.25}
\]
for some unknown constant $B_0 > 0$, whilst the condition (3.1c) at the contact line yields, at next-to-leading order,
\[
\Big(\omega\,\frac{dH_1}{dx}\Big)^2 \overset{(3.25)}{=} s_0^{4-\alpha(n+3)}\,B_0^{2\alpha} \ \text{ at } x = 1, \qquad\text{whence}\quad \omega = s_0^{\frac{4-\alpha(n+3)}{2}} \ \text{ and }\ \frac{dH_1}{dx}(1) = -B_0^\alpha,
\]
where we have normalized $\omega$ conveniently, since only the product $\omega H_1$ enters the expansion in (3.2). Furthermore, we have used that $\omega\,\frac{dH_1}{dx}$ has to have (strictly) negative sign at $x = 1$ for the expansion to make sense around $x = 1$ and be nontrivial. In the bulk, the next-to-leading order terms in (3.1a),
\[
\omega\Big((n-1)H_0^{n-2}\frac{d^3H_0}{dx^3}H_1 + H_0^{n-1}\frac{d^3H_1}{dx^3}\Big) = \big(s^{n+3}\dot s - s_0^{n+3}\dot s_0\big)\,x,
\]
yield
\[
s_1^{n+3}\dot s_1 = O(\omega) = O\big(t^{\frac{4-\alpha(n+3)}{2(n+4)}}\big) \;\Rightarrow\; s_1^{n+4} = \begin{cases} O\big(t^{\frac{2(n+6)-\alpha(n+3)}{2(n+4)}}\big) & \text{if } \alpha \neq 2\,\frac{n+6}{n+3},\\ O(\log t) & \text{if } \alpha = 2\,\frac{n+6}{n+3}, \end{cases}
\]
and with (3.20) and (3.28) this produces
\[
s(t) = \big((n+4)B_0^2\,t\big)^{\frac{1}{n+4}} \begin{cases} 1 + O\big(t^{-\frac{\alpha(n+3)-4}{2(n+4)}}\big) & \text{if } \alpha \neq 2\,\frac{n+6}{n+3},\\ 1 + O\big(t^{-1}\log t\big) & \text{if } \alpha = 2\,\frac{n+6}{n+3}. \end{cases} \tag{3.34}
\]
Next, we turn our attention to determining H 1 and C 2 . We distinguish three cases.
n = 1. For $n = 1$, an explicit integration gives
\[
\omega = (225\,t)^{-\frac{2(\alpha-1)}{5}}, \qquad s(t) = (225\,t)^{\frac15}\begin{cases} 1 + O\big(t^{-\frac{2(\alpha-1)}{5}}\big) & \text{if } \alpha \neq \frac72,\\ 1 + O\big(t^{-1}\log t\big) & \text{if } \alpha = \frac72, \end{cases} \tag{3.36}
\]
\[
H_1 = \frac{(45)^{\alpha/2}}{6}\Big(\frac15(1-x^2) - \frac14(1-x^2)^2\Big) = -\frac{(45)^{\alpha/2}}{120}(1-x^2)(1-5x^2). \tag{3.37}
\]
Combining (3.36) and (3.37) in (3.2), at leading order as $s \to \infty$ (or equivalently $t \to \infty$) the approximate position of the free boundary is given by (3.36), and the quasi-self-similar profile $H$ is given by
\[
H(t,x) = \frac{15}{8}(1-x^2)^2 - \frac18\,3^{\frac{\alpha-1}{5}}\,5^{-\frac{3\alpha+2}{10}}\,t^{-\frac{2(\alpha-1)}{5}}(1-x^2)(1-5x^2) + O\big(t^{-\frac{4(\alpha-1)}{5}}\big).
\]
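Both algebraic identities entering the $n = 1$ formulas above, the factorization of $H_1$ and the prefactor simplification in the last display, can be confirmed numerically (our addition):

```python
import math

# H_1 simplification: (1/6)[(1/5)(1-x^2) - (1/4)(1-x^2)^2] = -(1/120)(1-x^2)(1-5x^2)
for x in (0.0, 0.3, 0.7, 0.95):
    u = 1 - x * x
    assert math.isclose((u / 5 - u * u / 4) / 6,
                        -(u * (1 - 5 * x * x)) / 120, abs_tol=1e-15)

# prefactor identity entering the final display:
#   225^(-2(a-1)/5) * 45^(a/2) / 120 = (1/8) * 3^((a-1)/5) * 5^(-(3a+2)/10)
for a in (1.5, 2.0, 3.0, 3.5):
    lhs = 225.0 ** (-2 * (a - 1) / 5) * 45.0 ** (a / 2) / 120.0
    rhs = 0.125 * 3.0 ** ((a - 1) / 5) * 5.0 ** (-(3 * a + 2) / 10)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```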
n ∈ (1, 3/2). We now consider $n > 1$. Defining $g$ via $H_1 = -\frac{C_2}{n}f + g$, the problem (3.32) reduces to
\[
\begin{aligned}
(n-1)\,x\,g + f^n\frac{d^3g}{dx^3} &= 0 &&\text{for } x \in (0,1), &&\text{(3.39a)}\\
g &= 0 &&\text{at } x = 1, &&\text{(3.39b)}\\
\frac{dg}{dx} &= -B_0^\alpha &&\text{at } x = 1, &&\text{(3.39c)}\\
\frac{dg}{dx} &= 0 &&\text{at } x = 0. &&\text{(3.39d)}
\end{aligned}
\]
Once $g$ is determined, the mass constraint (3.4) leads to
\[
C_2 = n\,\frac{\int_0^1 g(x)\,dx}{\int_0^1 f(x)\,dx}. \tag{3.40}
\]
In order to determine solvability of (3.39), we use [71, Theorem 1.3], that is,
\[
f = \begin{cases} C_3\,(1-x)^2\,(1+o(1)) & \text{for } 0 < n < \frac32,\\ C_4\,(1-x)^2\,(-\log(1-x))^{\frac23}\,(1+o(1)) & \text{for } n = \frac32,\\ C_5\,(1-x)^{\frac3n}\,(1+o(1)) & \text{for } \frac32 < n < 3, \end{cases} \quad\text{as } x \nearrow 1, \tag{3.41}
\]
where $C_3, C_4, C_5 > 0$ only depend on $n$. The asymptotics (3.41) imply that (3.39) has no solution for $n \in \big[\frac32, 3\big)$: indeed, from (3.39a)-(3.39c) and (3.41) we infer
\[
\frac{d^3g}{dx^3} = \begin{cases} -\frac12\,B_0^\alpha\,C_4^{-\frac32}\,(1-x)^{-2}\,(-\log(1-x))^{-1}\,(1+o(1)) & \text{for } n = \frac32,\\ -(n-1)\,B_0^\alpha\,C_5^{-n}\,(1-x)^{-2}\,(1+o(1)) & \text{for } n \in \big(\frac32, 3\big), \end{cases} \quad\text{as } x \nearrow 1,
\]
in contradiction with (3.39c).
For $n \in \big(1, \frac32\big)$, we apply a further splitting according to
\[
g = \tfrac12 B_0^\alpha (1-x^2) + v + w, \tag{3.42}
\]
where
\[
\begin{aligned}
f^n\frac{d^3v}{dx^3} &= \tfrac{1-n}{2}\,B_0^\alpha\, x\,(1-x^2) &&\text{for } x \in (0,1), &&\text{(3.43a)}\\
v = \frac{dv}{dx} &= 0 &&\text{at } x = 1, &&\text{(3.43b)}\\
\frac{dv}{dx} &= 0 &&\text{at } x = 0, &&\text{(3.43c)}
\end{aligned}
\]
and
\[
\begin{aligned}
Lw := (n-1)\,x\,w + f^n\frac{d^3w}{dx^3} &= (1-n)\,x\,v &&\text{for } x \in (0,1), &&\text{(3.44a)}\\
w = \frac{dw}{dx} &= 0 &&\text{at } x = 1, &&\text{(3.44b)}\\
\frac{dw}{dx} &= 0 &&\text{at } x = 0. &&\text{(3.44c)}
\end{aligned}
\]
We first construct a solution to (3.43). We have
\[
\begin{aligned}
\frac{d^2v}{dx^2} &\overset{\text{(3.43a)}}{=} -\tfrac{n-1}{2}B_0^\alpha \int_0^x \frac{x_1(1-x_1^2)}{(f(x_1))^n}\,dx_1 + C_6,\\
\frac{dv}{dx} &\overset{\text{(3.43c)}}{=} -\tfrac{n-1}{2}B_0^\alpha \int_0^x\!\!\int_0^{x_1} \frac{x_2(1-x_2^2)}{(f(x_2))^n}\,dx_2\,dx_1 + C_6 x = -\tfrac{n-1}{2}B_0^\alpha \int_0^x \frac{x_2(x-x_2)(1-x_2^2)}{(f(x_2))^n}\,dx_2 + C_6 x,\\
C_6 &\overset{\text{(3.43b)}}{=} \tfrac{n-1}{2}B_0^\alpha \int_0^1 \frac{x(1-x)^2(1+x)}{(f(x))^n}\,dx,
\end{aligned}
\]
so that
\[
v \overset{\text{(3.43b)}}{=} \tfrac{n-1}{2}B_0^\alpha \int_x^1\!\!\int_0^{x_1} \frac{x_2(x_1-x_2)(1-x_2^2)}{(f(x_2))^n}\,dx_2\,dx_1 - \tfrac{C_6}{2}(1-x^2). \tag{3.45}
\]
Notably, in view of (3.41), (3.45) only yields a well-defined solution for $n \in \big[1, \frac32\big)$. Lastly, in order to find the solution of (3.44), we use
\[
\begin{aligned}
\int_0^1 x f^{-n} w\,(Lw)\,dx &= (n-1)\int_0^1 x^2 f^{-n} w^2\,dx + \int_0^1 x\,w\,\frac{d^3w}{dx^3}\,dx \\
&\overset{\text{(3.44b)}}{=} (n-1)\int_0^1 x^2 f^{-n} w^2\,dx - \int_0^1 w\,\frac{d^2w}{dx^2}\,dx - \frac12\int_0^1 x\,\frac{d}{dx}\Big(\frac{dw}{dx}\Big)^2 dx \\
&\overset{\text{(3.44b),(3.44c)}}{=} (n-1)\int_0^1 x^2 f^{-n} w^2\,dx + \frac32\int_0^1 \Big(\frac{dw}{dx}\Big)^2 dx > 0 \quad\text{for all } w \not\equiv 0.
\end{aligned}
\]
This proves coercivity for $n \in \big(1, \frac32\big)$, so that the Lax-Milgram theorem yields existence of a unique solution to (3.44). Together with (3.45), this yields existence of a unique solution $g$ to (3.39), hence of a unique $H_1$.

n ∈ (3/2, 3). The afore-mentioned non-existence of solutions to (3.39) for mobility exponents $n \in \big[\frac32, 3\big)$ entails that the ansatz (3.2) breaks down for this range of mobilities in the regime of weak contact-line friction $\alpha > \frac{4}{n+3}$. This necessitates a matched-asymptotics approach in which we distinguish between outer and inner region. Since the case $n = \frac32$ contains a logarithmic resonance (see (3.41)), we only concentrate on the range $n \in \big(\frac32, 3\big)$ in what follows. With $H_0$ and $s_0$ given by (3.27) and (3.28), respectively, it follows from the previous analysis that $s = s_0(1+o(1))$ and $H(t,x) = H_0(x)$ in the outer region (whose extent is to be determined). Near $x = 1$, from (3.33) and (3.41), we infer for $n \in \big(\frac32, 3\big)$ that
\[
H_0 = C_5\, B_0^{\frac2n}\,(1-x)^{\frac3n}\,(1+o(1)) \quad\text{for } 0 < 1-x \ll 1, \tag{3.46}
\]
where more precisely
\[
C_5 = \Big(\tfrac3n\big(\tfrac3n - 1\big)\big(2 - \tfrac3n\big)\Big)^{-\frac1n}. \tag{3.47}
\]
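The constant (3.47) follows from matching the coefficient of $(1-x)^0$ in $f^{n-1}f''' = x$ near $x = 1$: with $f = C_5(1-x)^{3/n}$, one has $f''' = C_5\,\frac3n(\frac3n-1)(2-\frac3n)(1-x)^{\frac3n-3}$. The check below (our addition) confirms this for a few $n$, including $C_5 = \sqrt{8/3}$ at $n = 2$:

```python
def C5(n):
    # (3.47); all three factors are positive for n in (3/2, 3), i.e. 3/n in (1, 2)
    p = 3.0 / n
    return (p * (p - 1) * (2 - p)) ** (-1.0 / n)

for n in (1.6, 2.0, 2.5):
    p, c = 3.0 / n, C5(n)
    # coefficient of (1-x)^0 in f^(n-1) * f''' must equal 1
    assert abs(c ** n * p * (p - 1) * (2 - p) - 1.0) < 1e-12

assert abs(C5(2.0) - (8.0 / 3.0) ** 0.5) < 1e-12   # n = 2: C_5 = sqrt(8/3)
```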
In the inner region $x \nearrow 1$, we use the traveling-wave ansatz
\[
H(t,x) = s\,F_{\mathrm{in}}(\xi), \qquad \xi = s\,(1-x). \tag{3.48}
\]
Inserted into (1.18a) with $s = s_0(1+o(1))$, this gives, after an integration using (1.18b) and (1.18d),
\[
F_{\mathrm{in}}^{n-1}\frac{d^3F_{\mathrm{in}}}{d\xi^3} = -\dot s_0 \quad\text{for } \xi > 0. \tag{3.49a}
\]
The boundary condition (1.18d) is then automatically satisfied; the conditions (1.18b) and (1.18c) translate into
\[
F_{\mathrm{in}} = 0 \quad\text{and}\quad \Big(\frac{dF_{\mathrm{in}}}{d\xi}\Big)^2 = (\dot s_0)^\alpha \quad\text{at } \xi = 0, \tag{3.49b}
\]
and are complemented with the condition
\[
\frac{d^2F_{\mathrm{in}}}{d\xi^2} \to 0 \quad\text{as } \xi \to \infty, \quad\text{for } \tfrac32 < n < 3, \tag{3.49c}
\]
which in view of (3.46) is required for the matching to the outer solution $H_0$. Well-posedness of (3.49) may be obtained using the strategies in [56] or [13], where related problems were considered. Here we sketch the application of the argument in [13] to the present case. For $\xi \ll 1$, a one-parametric solution family to (3.49a) and (3.49b) is given by
\[
F_{\mathrm{in}}(\xi) = \begin{cases} \dot s_0^{\frac\alpha2}\,\xi - \dfrac{\dot s_0^{\,1+\frac\alpha2(1-n)}}{(4-n)(3-n)(2-n)}\,\xi^{4-n} + a_{\mathrm{in}}\,\xi^2 + \text{h.o.t.} & \text{for } n \neq 2,\\[1.5ex] \dot s_0^{\frac\alpha2}\,\xi - \dfrac{\dot s_0^{\,1-\frac\alpha2}}{2}\,\xi^2\log\xi + a_{\mathrm{in}}\,\xi^2 + \text{h.o.t.} & \text{for } n = 2, \end{cases} \quad\text{as } \xi \searrow 0, \quad a_{\mathrm{in}} \in \mathbb{R}, \tag{3.50}
\]
where h.o.t. denotes higher-order terms. For ξ ≫ 1, a one-parametric (plus translation) solution family to (3.49a) and (3.49c) is given by
\[
F_{\mathrm{in}} = \dot s_0^{\frac1n}\,C_5\,\xi^{\frac3n} + b_{\mathrm{in}}\,\xi + \text{h.o.t.} \quad\text{for } \tfrac32 < n < 3,\ \text{as } \xi \to \infty, \quad b_{\mathrm{in}} \in \mathbb{R}. \tag{3.51}
\]
Therefore, for 3 2 < n < 3, (3.50) and (3.51) define two two-dimensional solution manifolds (with parameters (ξ, a in ) and (ξ, b in ), respectively) of the three-dimensional dynamical system (F in , dFin dξ , d 2 Fin dξ 2 ): their intersection is the one-dimensional solution curve determining a in and b in and defining the solution to (3.49).
In terms of $x$, expansion (3.51) translates at leading order into $H(t,x) = s\,\dot s_0^{\frac1n}C_5\,\big(s(1-x)\big)^{\frac3n}(1+o(1))$; matching with the outer behavior (3.46) requires $s^{1+\frac3n}\,\dot s_0^{\frac1n} = B_0^{\frac2n}$, i.e. $s^{n+3}\dot s_0 = B_0^2$, which is fulfilled because of (3.24). Since
\[
s_0\,F_{\mathrm{in}} \overset{(3.50)}{=} s_0^2\,\dot s_0^{\frac\alpha2}\,(1-x)(1+o(1)) \overset{(3.24)}{=} B_0^\alpha\, s_0^{\frac{4-\alpha(n+3)}{2}}(1-x)(1+o(1)) \overset{(3.28)}{=} B_0^\alpha\big((n+4)B_0^2\,t\big)^{\frac{4-\alpha(n+3)}{2(n+4)}}(1-x)(1+o(1)) \tag{3.52}
\]
for $0 \le 1-x \ll s^{-1}$, we conclude that $H(t,x) = \tilde H_0(t,x)(1+o(1))$, where
\[
\tilde H_0(t,x) \sim \begin{cases} H_0(x) & \text{for } 1-x \gg s^{-1},\\ B_0^\alpha\big((n+4)B_0^2\,t\big)^{-\beta}(1-x) & \text{for } 1-x \ll s^{-1}, \end{cases} \qquad s \gg 1, \quad \beta = \frac{\alpha(n+3)-4}{2(n+4)} > 0. \tag{3.53}
\]
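The chain of identities in (3.52) rests on $s^2(\dot s_0)^{\alpha/2} = B_0^\alpha\,s^{(4-\alpha(n+3))/2}$, a direct consequence of (3.24). The fragment below (our addition) verifies this together with the positivity of $\beta$ in the weak-friction range:

```python
import math

# With s' = B^2 s^(-(n+3)), cf. (3.24), check s^2 (s')^(alpha/2) = B^alpha s^((4-alpha(n+3))/2)
for (n, alpha, B, s) in [(2.0, 2.0, 1.3, 50.0), (1.2, 3.0, 0.7, 10.0)]:
    sdot = B * B * s ** (-(n + 3))
    lhs = s * s * sdot ** (alpha / 2)
    rhs = B ** alpha * s ** ((4 - alpha * (n + 3)) / 2)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
    # weak friction alpha > 4/(n+3) makes the decay exponent beta positive
    assert alpha > 4 / (n + 3)
    beta = (alpha * (n + 3) - 4) / (2 * (n + 4))
    assert beta > 0
```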
4. Conclusions and outlook

We have obtained an asymptotic description of the long-time dynamics of solutions $(H,s)$ to (1.18). This translates back to the original height via $h(t,y) = s^{-1}H(t, s^{-1}y)$, with support $(-s, s)$.
In the balanced case, $\alpha = \frac{4}{n+3}$, we have shown that (1.18) admits for any $D \ge 0$ a unique self-similar profile, determined by (2.2), and $s$ scales like $t^{\frac{1}{n+4}}$. Both profile and speed depend on $D$, the profile ranging from the zero contact-angle one for $D = 0$ to a parabolic shape as $D \to +\infty$. Numerical solutions to (2.2) have been provided in Fig. 2.
A very interesting question concerns stability, i.e., the convergence of solutions H of (1.18) to the self-similar profile (2.3). This issue has already been raised in [72, §6] and [36, §8] in a similar context and could be faced, starting from n = 1, either by energy-entropy methods [72][73][74][75], by studying global-in-time classical solutions for perturbations of special solutions like a self-similar profile [35,36,39], a traveling wave [40,42], or an equilibrium-stationary solution [33,34,46,47,[49][50][51]. In the three latter cases, the difficulty lies in finding suitable estimates for the linearized evolution of perturbations. Unlike in the case of the Smyth-Hill profile [76], this linearization does not carry an apparent symmetric structure, which is why the linear analysis is presumably more involved. Fig. 3 provides numerical simulations supporting convergence in the balanced case for n = 1 and n = 2.
For strong contact-line friction, $\alpha < \frac{4}{n+3}$, $H$ and $s$ obey the asymptotics
\[
H(t,x) = \tfrac32(1-x^2) + C_2\,t^{-(1-\gamma(n+4))}H_1(x) + O\big(t^{-2(1-\gamma(n+4))}\big) \quad\text{for } x \in [-1,1], \tag{4.1a}
\]
\[
s(t) = 3^{\frac{1-\gamma}{2}}\,\gamma^{-\gamma}\,t^\gamma \begin{cases} 1 + O\big(t^{-(1-\gamma(n+4))}\big) & \text{if } \gamma \neq \frac{1}{2(n+4)},\\ 1 + O\big(t^{-\frac12}\log t\big) & \text{if } \gamma = \frac{1}{2(n+4)}, \end{cases} \tag{4.1b}
\]
where $H_1$ and $C_2$ are defined in §3.2 (see (3.12)-(3.13) and (3.22)), and $H_1(x) = \frac{1}{120}(1-x^2)(1-5x^2)$ if $n = 1$.
The leading-order profile is a parabolic one, with finite non-zero contact angle, and coincides with the exact self-similar profile in the limiting case $D = \infty$ (§2), in the sense that $H_D \to \frac32(1-x^2)$ as $D \to \infty$. In addition, the evolution of the contact line, as given by (4.1b), is slower than the standard one and is dominated by the contact-line frictional exponent $\alpha$. Therefore, for $\alpha < \frac{4}{n+3}$ contact-line friction dominates the long-time dynamics uniformly in space and time (Fig. 4, first column). Notably, the correction $H_1$, which is dictated by attaining the dynamical contact-line condition, is not localized near the contact line, but rather propagates throughout the solution's support (Fig. 5, middle and right). For weak contact-line friction, $\alpha > \frac{4}{n+3}$, the leading-order dynamics instead coincide with the zero-contact-angle ones in terms of both profile and speed, in the sense that
\[
H(t,x) = H_0(x) \quad\text{and}\quad s(t) = \big((n+4)B_0^2\,t\big)^{\frac{1}{n+4}}, \tag{4.2}
\]
where $(B_0, H_0)$ is the unique solution to (2.2) with $D = 0$. In other words, $H_0 = H_{D=0}$ is the unique self-similar profile of (1.10) with zero contact angle and normalized mass (Fig. 4, second column).
Our results on the corrections turn out to depend on the mobility exponent $n$: if $n \in [1, \frac32)$ we find a global estimate as above, in the sense that
\[
H(t,x) = H_0(x) + \big((n+4)B_0^2\,t\big)^{-\beta}H_1(x) + O\big(t^{-2\beta}\big), \qquad \beta = \frac{\alpha(n+3)-4}{2(n+4)}, \tag{4.3a}
\]
\[
s(t) = \big((n+4)B_0^2\,t\big)^{\frac{1}{n+4}} \begin{cases} 1 + O\big(t^{-\beta}\big) & \text{if } \alpha \neq 2\,\frac{n+6}{n+3},\\ 1 + O\big(t^{-1}\log t\big) & \text{if } \alpha = 2\,\frac{n+6}{n+3}, \end{cases} \tag{4.3b}
\]
where $H_1(x)$ is uniquely determined in §3.3. On the other hand, if $n \in \big(\frac32, 3\big)$ we are only able to qualify a local correction:
\[
H(t,x) \sim \begin{cases} H_0(x) & \text{for } 1-x \gg t^{-\frac{1}{n+4}},\\ B_0^\alpha\big((n+4)B_0^2\,t\big)^{-\beta}(1-x) & \text{for } 1-x \ll t^{-\frac{1}{n+4}}. \end{cases}
\]
It is apparent from our numerical simulations (Fig. 5, right) that the correction should consist, also in this case, of a globally defined function $H_1$ and of $s_1$ as in (4.3), with the same time exponent $\beta$; however, at the moment this is left as an open question.
A final remark concerns the dependence on the contact-line frictional coefficient $d$ (cf. (1.12)), which we could scale out. Let $s_*$ be the asymptotic position of the free boundary of the solution to (1.16) with $D = 1$ and $M = 2$, as given by (4.1b) and (4.2). Keeping the mass equal to two, but for a generic $d > 0$, the asymptotic position of the contact line in the original equation (1.11) is given by
\[
s(t) = Y_*\,s_*\big(T_*^{-1}t\big), \qquad\text{where } H_* \overset{(1.14)}{=} Y_*^{-1} \overset{(1.13)}{=} T_*^{-\frac{1}{n+4}} \overset{(1.15)}{=} d^{\frac{1}{4-\alpha(n+3)}}, \quad \alpha \neq \tfrac{4}{n+3},
\]
yielding
\[
s(t) \sim \begin{cases} 3^{\frac{1-\gamma}{2}}\,\gamma^{-\gamma}\,d^{-\frac{1}{\alpha+4}}\,t^\gamma & \text{if } \alpha < \frac{4}{n+3},\\ \big((n+4)B_0^2\,t\big)^{\frac{1}{n+4}} & \text{if } \alpha > \frac{4}{n+3}, \end{cases} \quad\text{as } t \to +\infty.
\]
This shows that in the strong case, as expected, larger frictional coefficients $d$ yield slower speeds; notably, however, in the weak case the leading-order asymptotic speed in (1.11) is instead oblivious of both $\alpha$ and $d$, hence universal.

Appendix A. The gradient-flow formulation and its discretization

A.1. Gradient-flow formulation. Problem (1.11) and its discretization are based on a gradient-flow formulation for the height $h$ and the wetted area $\{h>0\} = (s_-, s_+)$ as in (1.2). The gradient flow
\[
\dot h = -\partial_\eta \Psi^*\big(h, DE[h]\big) \tag{A.1}
\]
is formally defined in terms of the energy $E$ in (1.5) and the dual dissipation potential
\[
\Psi^*(h, \eta) = \frac12\int_{\{h>0\}} m(h)\,(\partial_y\pi)^2\,dy + \frac12\int_{\partial\{h>0\}} m_{\mathrm{cl}}(\partial_y h)\,\zeta^2\,ds, \qquad \eta = (\pi, \zeta), \tag{A.2}
\]
where $\Psi^*(h, 0) \equiv 0$ and $\Psi^*$ is convex in the second argument. The bulk mobility $m$ and the contact-line mobility $m_{\mathrm{cl}}$ are non-negative functions, which in the case of (1.11) are given by
\[
m(h) = h^n \quad\text{and}\quad m_{\mathrm{cl}}(z) = 2\,d^{-\frac1\alpha}\,|z|^{\frac2\alpha}. \tag{A.3}
\]
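The consistency between the contact-line mobility (A.3) and the frictional law (1.12)/(A.9), namely $\dot s_\pm = \frac12 m_{\mathrm{cl}}(\partial_y h)$ for advancing fronts, can be checked directly (our addition; function names are ours):

```python
import math

def m_cl(z, d, alpha):
    # (A.3): contact-line mobility evaluated at the contact slope z
    return 2.0 * d ** (-1.0 / alpha) * abs(z) ** (2.0 / alpha)

def speed_from_friction(z, d, alpha):
    # frictional law for an advancing front: d * (s')^alpha = z^2, cf. (1.12)/(A.9)
    return (z * z / d) ** (1.0 / alpha)

# the closure (A.8b) with zeta = -|z|/2 reproduces the frictional law: s' = m_cl(z)/2
for (z, d, alpha) in [(-0.7, 2.0, 1.0), (1.3, 0.5, 0.8), (-2.0, 3.0, 2.0)]:
    assert math.isclose(0.5 * m_cl(z, d, alpha),
                        speed_from_friction(z, d, alpha), rel_tol=1e-12)
```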
The first term in $\Psi^*$ encodes the standard dissipation of the viscous fluid with nontrivial slip boundary conditions, whereas the second term encodes the extra dissipation at the contact line $y = s_\pm$. This formulation relies on the formal assumption that for fixed time there exists a representation of the dual force $\eta = (\pi : \{h>0\} \to \mathbb{R},\ \zeta : \partial\{h>0\} \to \mathbb{R})$ such that
\[
\langle DE[h], v\rangle = \int_{\{h>0\}} \pi\,v\,dy + \int_{\partial\{h>0\}} \zeta\,v\,ds \quad\text{for all admissible } v.
\]
We seek $(\dot h, \pi, \zeta)$ that satisfy (A.5) for all test functions $(v, \bar\pi, \bar\zeta)$. By testing this weak formulation with $(v, \bar\pi, \bar\zeta) = (\dot h, \pi, \zeta)$, we can deduce the energy descent (A.6). Additionally, the solution $h(t,y)$ and the contact line $s_\pm(t)$ satisfy the kinematic condition
\[
\frac{d}{dt}h(t, s_\pm(t)) = \dot h(t, s_\pm(t)) + \dot s_\pm(t)\,\partial_y h(t, s_\pm(t)) = 0, \tag{A.7}
\]
which we can use to reconstruct the boundary velocity and evolve the domain $\{h>0\}$ and the solution $h$. Using integration by parts and assuming the solution is smooth enough, from (A.5a) we identify $\pi = -\partial_y^2 h$ and $\zeta = \frac{1}{2|\partial_y h|}\big((-2S) - (\partial_y h)^2\big)$. Using (A.5b) we can recover the general evolution of $h$ and $\{h>0\}$. We will now focus on the complete wetting case, $S = 0$. Then the thin-film dynamics are governed by (A.8)-(A.9) below, whose contact-line law coincides with (1.12). The Ren-E model with quadratic dissipation has $\alpha = 1$; for a derivation from the Stokes problem see [66].
A.2. Discretization. The weak formulation (A.5) is discretized using a standard finite element discretization in space. In the derivative of the energy, we replace $\partial_y h$ by $\partial_y h + \tau\,\partial_y\dot h$ in order to achieve a semi-implicit treatment of the highest-order derivative in the time discretization, a standard method for higher-order parabolic equations. For given $h$, we seek $(\dot h, \pi, \zeta)$ using $P_1$ finite elements defined in $\{h>0\}$ and on $\partial\{h>0\}$, respectively, and solve the resulting discrete counterpart of (A.5), referred to as (A.10). Moreover, we introduce a flow map $\xi(t, \cdot) : \{h>0\}(t_0) \to \{h>0\}(t)$ with $\xi(t_0, y) = y$, which allows us to define the function $H(t) : \{h>0\}(t_0) \to \mathbb{R}$ via $H(t,y) = h(t, \xi(t,y))$. By construction we have $H(t, s_\pm(t_0)) \equiv 0$ and correspondingly $\partial_t H = 0$ at the fixed contact line $y = s_\pm(t_0)$. For the time derivatives we have $\dot H(t,y) = \dot h(t,\xi) + \partial_y h(t,\xi)\,\dot\xi$. The knowledge of $\dot h$ from the solution of (A.10) and the boundary condition $\dot H = 0$ entirely determine the time derivative $\dot H$ and the mapping $\dot\xi$. Note that, for a moving front, $\partial_y h = 0$ at $\partial\{h>0\}$ is not an issue for the application of $P_1$ finite elements, since on the last element connected to $s_\pm$, (A.9) yields a possibly small but nonzero value of $|\partial_y h|$. In the reference domain we can update both the height function $H(t+\tau, y) = H(t,y) + \tau\dot H(t,y)$ and the map $\xi(t+\tau, y) = \xi(t,y) + \tau\dot\xi(t,y)$, which uniquely determines $h(t,y)$ on a moving domain. A similar approach for partial wetting is studied in [66] and higher-dimensional extensions are discussed in [77].
Figure 1. Solution $h(t,y)$ of the transient problem (1.16) with $n = \alpha = 1$ and $D = 1$, with (symmetric) initial datum $h(t=0,y) = \frac{15}{4}\big((1+y)(1-y) - \frac25(1+\cos(\pi y))\big)$ and $s_\pm(t=0) = \pm1$.
Remark 2.3. Theorem 2.2 shows that for large, respectively small, contact-line frictional coefficients the evolution is controlled by the contact-line frictional law, respectively by the complete wetting regime. In addition, since $s(t) \sim (\gamma^{-1}B_D^2 t)^\gamma$ and $B_D \to 0$ as $D \to +\infty$, as $D \to +\infty$ solutions approach a quasi-stationary interface shape with respect to the time-scale $t^\gamma$. The behavior of $H_D$ for varying $D$ is shown in Fig. 2.
Figure 2. Exact self-similar solutions $H_D(x) \equiv H(x)$ of the ODE system (2.2) and first derivative $\frac{dH_D}{dx}$ for $n = 1$ (first two panels) and $n = 2$ (last two panels) with $\alpha = 4/(n+3)$, shown for various friction coefficients $0 \le D \le \infty$ ($D = 0$ with $\frac{dH_D}{dx} = 0$ at $x = \pm1$; $D \to +\infty$ with linear $\frac{dH_D}{dx}$). Numerical solutions are obtained by a shooting method.
The leading-order problem (3.25) for $H_0$ is complemented with the boundary conditions (3.3) and the mass constraint (3.4); hence it coincides with (2.2) with $D = 0$. Therefore $(B_0, H_0)$ is the unique solution to (2.2) with $D = 0$, (3.27) (cf. Theorem 2.2 and [71, Theorem 1.2]), i.e. $H_0 = H_{D=0}$ is the unique exact self-similar solution of (1.10) with zero contact angle and unit mass. Integrating (3.24) with normalization $s_0(0) = 0$ gives
\[
s_0(t) = \big((n+4)B_0^2\,t\big)^{\frac{1}{n+4}}. \tag{3.28}
\]
It is to be noted that the leading-order expansion given by $s_0$ and $H_0$ is self-consistent. This is in contrast to the case of strong contact-line friction, where the leading-order expansions in the bulk and at the contact line were incompatible with each other (cf. (3.5)). However, since we are interested in quantifying the correction coming from the contact-line frictional law, we shall be looking at $\omega H_1$ and $s_1$ in this case, too. At next-to-leading order for $s_0 \gg 1$, the bulk equation reads
\[
(n-1)\,x\,H_1 + f^n\frac{d^3H_1}{dx^3} = -C_2\,x\,f \quad\text{in } (0,1), \tag{3.32}
\]
for some $C_2 > 0$ without loss of generality (since the expansion only depends on $\omega H_1$), where we have defined $f = B_0^{-\frac2n}H_0$.
Since $s \gg 1$, expansions (3.46) and (3.52) thus have an overlapping region, $s^{-1} \ll 1-x \ll 1$.
Figure 3. For $D = 1$ and $n = 1$ (left) or $n = 2$ (right), the solution $H(t,x)$ to (1.18) for increasing times $0 \le t < \infty$ (blue lines, time increases along arrows) with initial datum $H(t=0,x) = \frac{15}{4}\big((1+x)(1-x) - \frac25(1+\cos(\pi x))\big)$ (red dashed line), compared with the exact self-similar solution $H_D(x)$ to (2.2) approached as $t \to \infty$ (red dotted line).
Figure 4. For $D = 1$ and $n = 1$ (top) or $n = 2$ (bottom), the solution $H(t,x)$ to (1.18) for different $t$ (blue lines) with initial datum $H(t=0,x) = \frac{15}{4}\big((1+x)(1-x) - \frac25(1+\cos(\pi x))\big)$ (red dashed line), for strong contact-line friction $\alpha = 1/2$ (left) and weak contact-line friction $\alpha = 2$ (right), compared with the exact self-similar solutions $H_D(x)$ to (2.2) approached as $t \to \infty$ (red dotted lines, with $D \to \infty$ for $\alpha = 1/2$, cf. Theorem 2.2, and $D = 0$ for $\alpha = 2$). Note that for $\alpha = \frac12$ solutions go slightly above the graph of $H_D$ before relaxing (a manifestation of the lack of a comparison principle). The time evolution is indicated by dark gray arrows.
Figure 5. Convergence of the correction to the self-similar solution, where (left) we show the $L^1$ norm of the difference between $C(t,x) = \big(H(t,x) - H_0(x)\big)/|H(t,\cdot) - H_0(\cdot)|_\infty$ and the theoretically predicted, normalized correction $\pm H_1(x)/|H_1(\cdot)|_\infty$. For comparison, in (middle) and (right) we show $C(t,x)$ at $t = 200$ for $n = 1$, respectively $n = 2$. Corrections $C$ (colored) are compared with theoretically predicted ones (dashed/dotted), when the latter are available.
Figure 6. Convergence to self-similar solutions observed in the long-time behavior $s(t) \sim t^\gamma$ of numerical solutions to (1.18) for mobility exponents $n = 1, 2$, where (left) we show the logarithmic time-derivative $\frac{d\log s}{d\log t}$ approaching the predicted $\gamma$ (black dotted line) for $n = 1$, and (right) we show the evaluation of the logarithmic time-derivative at $t = 10^7$ for various $\alpha$ and for $n = 1, 2$, compared to the theoretical prediction.
Figure 7. Time correction, in the form d/dt (t^{−γ} s(t)) versus t, for numerical solutions to (1.18), shown using full lines for strong contact-line friction n = 1, α < 1 (left), weak contact-line friction n = 1, α > 1 (center), and strong contact-line friction n = 2, α < 4/5 (right). Dashed lines are the corresponding theoretical predictions.

As in the balanced case, a rigorous stability result is expected to hold for α ≠ 4/(n+3) and would be interesting to pursue, showing convergence of solutions H of (1.18) to the corresponding H_0. In this respect, Fig. 4 and 6 provide numerical simulations supporting convergence, and Fig. 5 and 7 numerically validate the next-to-leading-order corrections to the macroscopic profile H_0. A final remark concerns dependence on the contact-line frictional coefficient d (cf. (1.12)), which we could scale out. Let s* be the asymptotic position of the free boundary of the solution to (1.16) with D = 1 and M = 2, as given by (4.1b) and (4.2). Keeping mass equal to two, but for a generic d > 0, the asymptotic position of the contact line in the original equation (1.11) is given by …
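The diagnostic used in Fig. 6, the logarithmic time-derivative d log s / d log t, is easy to approximate from discrete samples of the free-boundary position. Below is a minimal sketch (not from the paper; the power law s(t) = 2 t^0.3 is purely illustrative) that recovers the spreading exponent γ from sampled data:

```python
import numpy as np

def log_derivative(t, s):
    """Approximate d log s / d log t by finite differences in log-log variables."""
    return np.gradient(np.log(s), np.log(t))

# Illustrative check on an exact power law s(t) = 2 t^0.3:
gamma = 0.3
t = np.logspace(0, 7, 200)
s = 2.0 * t**gamma
d = log_derivative(t, s)
# on exact power-law data the logarithmic derivative equals gamma
assert np.allclose(d, gamma, atol=1e-6)
```

For numerical solutions, plotting `log_derivative(t, s)` against t and reading off its long-time plateau gives the estimate of γ shown in the left panel of Fig. 6.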
Appendix A. The gradient-flow formulation and its discretization

A.1. Gradient-flow formulation. Problem (1.11) and its discretization are based on a gradient-flow formulation for the height h and the wetted area {h > 0} = (s_−, s_+) as in (1.2). The gradient flow is

ḣ = −∂_η Ψ*(h, DE[h]).   (A.1)
… the gradient flow (A.1) with η̃ = (π̃, ζ̃) and using (A.2) gives

⟨η̃, ḣ⟩ = ⟨η̃, −∂_η Ψ*(h, DE[h])⟩ = −∫_{h>0} m(h) ∂_y π ∂_y π̃ dy − ∫_{∂{h>0}} m_cl(∂_y h) ζ ζ̃ ds.   (A.5b)
d/dt E[h(t)] = ⟨η, ḣ⟩ = −( ∫_{h>0} m(h) (∂_y π)² dy + ∫_{∂{h>0}} m_cl(∂_y h) ζ² ds ) ≤ 0.   (A.6)
ḣ = ∂_y(m(h) ∂_y π), π = −∂²_y h in {h > 0},   (A.8a)
ḣ = −ṡ_± ∂_y h = −m_cl(∂_y h) ζ, ζ = −(1/2)|∂_y h| on ∂{h > 0},   (A.8b)

with natural boundary conditions for π in (A.8a). Assuming (A.3), at the contact line ∂{h > 0} we get for positive speeds (which are of interest to us) …, or equivalently

ṡ_± = ± d^{−1/α} |∂_y h|^{2/α} at ∂{h > 0},   (A.9)
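Relation (A.9) ties the contact-line speed to the film slope at the contact line through the frictional coefficient d of (1.12) and the exponent α. A direct transcription (the function name is ours, for illustration only):

```python
def contact_line_speed(slope, d, alpha, sign=+1):
    """Contact-line speed law (A.9): s'_± = ± d**(-1/alpha) * |slope|**(2/alpha)."""
    return sign * d ** (-1.0 / alpha) * abs(slope) ** (2.0 / alpha)

# With d = 1 the law reduces to s'_+ = |∂_y h|^(2/alpha):
assert contact_line_speed(2.0, d=1.0, alpha=1.0) == 4.0
assert contact_line_speed(0.5, d=1.0, alpha=2.0) == 0.5
```

Note that larger friction d uniformly slows the contact line, consistent with the scaling-out remark on d in Section 4.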
… = −∫_{h>0} m(h) ∂_y π ∂_y π̃ dy − ∫_{∂{h>0}} m_cl(∂_y h) ζ ζ̃ ds,   (A.10b)

where in the numerical scheme m = m(h) and m_cl = m_cl(∂_y h) are evaluated explicitly from the previous time step. Now, we introduce an arbitrary Lagrangian-Eulerian method by constructing a mapping ξ : {h > 0}(t_0) → {h > 0}(t) using the linear construction

ξ(t, y) = ((s_+(t) − s_−(t)) / (s_+(t_0) − s_−(t_0))) (y − s_−(t_0)) + s_−(t),   (A.11)
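The linear construction (A.11) simply stretches the reference wetted interval onto the current one, mapping endpoints to endpoints. A minimal sketch (function name ours):

```python
def ale_map(y, s_ref, s_cur):
    """Linear arbitrary Lagrangian-Eulerian map xi(t, y) of (A.11):
    sends the reference interval (s_ref[0], s_ref[1]) = (s_-(t0), s_+(t0))
    onto the current interval (s_cur[0], s_cur[1]) = (s_-(t), s_+(t))."""
    sm0, sp0 = s_ref
    sm, sp = s_cur
    return (sp - sm) / (sp0 - sm0) * (y - sm0) + sm

# Endpoints of the reference interval are mapped onto the current ones:
assert ale_map(-1.0, (-1.0, 1.0), (-2.0, 3.0)) == -2.0
assert ale_map(1.0, (-1.0, 1.0), (-2.0, 3.0)) == 3.0
assert ale_map(0.0, (-1.0, 1.0), (-2.0, 3.0)) == 0.5
```

Because the map is affine in y, grid points keep their relative position inside the wetted region, which is what makes the free boundary easy to track in the discretization.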
⟨η, ḣ⟩ = ∫_{h>0} π ḣ dy + ∫_{∂{h>0}} ζ ḣ ds for any rate ḣ(t) : {h > 0} → ℝ.   (A.4)

Using this representation¹, one identifies η with ⟨η, v⟩ = ⟨DE[h], v⟩ in a weak formulation, i.e.,

∫_{h>0} π v dy + ∫_{∂{h>0}} ζ v ds = ∫_{h>0} ∂_y h ∂_y v dy + ∫_{∂{h>0}} …
¹ For dual forces η that do not admit such a representation, we formally set Ψ*(h, η) = ∞.
Alexander Oron, Stephen H. Davis, and S. George Bankoff. Long-scale evolution of thin liquid films. Rev. Mod. Phys., 69:931-980, Jul 1997.
Daniel Bonn, Jens Eggers, Joseph Indekeu, Jacques Meunier, and Etienne Rolley. Wetting and spreading. Rev. Mod. Phys., 81:739-805, May 2009.
M. Günther and G. Prokert. A justification for the thin film approximation of Stokes flow with surface tension. J. Differential Equations, 245(10):2802-2845, 2008.
H. P. Greenspan. On the motion of a small viscous droplet that wets a surface. J. Fluid Mech., 84(1):125-143, 1978.
L. Giacomelli and F. Otto. Rigorous lubrication approximation. Interfaces Free Bound., 5(4):483-529, 2003.
L. Giacomelli and F. Otto. Variational formulation for the lubrication approximation of the Hele-Shaw flow. Calc. Var. Partial Differential Equations, 13(3):377-403, 2001.
Hans Knüpfer and Nader Masmoudi. Well-posedness and uniform bounds for a nonlocal third order evolution operator on an infinite wedge. Comm. Math. Phys., 320(2):395-424, 2013.
H. Knüpfer and N. Masmoudi. Darcy's flow with prescribed contact angle: well-posedness and lubrication approximation. Archive for Rational Mechanics and Analysis, 218(2):589-646, 2015.
Bogdan-Vasile Matioc and Georg Prokert. Hele-Shaw flow in thin threads: a rigorous limit result. Interfaces Free Bound., 14(2):205-230, 2012.
Marion Dziwnik, Maciek Korzec, Andreas Münch, and Barbara Wagner. Stability analysis of unsteady, nonuniform base states in thin film equations. Multiscale Modeling & Simulation, 12(2):755-780, 2014.
S. Boatto, L. P. Kadanoff, and P. Olla. Traveling-wave solutions to thin-film equations. Phys. Rev. E (3), 48(6):4423-4431, 1993.
J. R. King and M. Bowen. Moving boundary problems and non-uniqueness for the thin film equation. European Journal of Applied Mathematics, 12(3):321-356, 2001.
Lorenzo Giacomelli, Manuel V. Gnann, and Felix Otto. Rigorous asymptotics of traveling-wave solutions to the thin-film equation and Tanner's law. Nonlinearity, 29(9):2497-2536, 2016.
Francisco Bernis and Avner Friedman. Higher order nonlinear degenerate parabolic equations. J. Differential Equations, 83(1):179-206, 1990.
E. Beretta, M. Bertsch, and R. Dal Passo. Nonnegative solutions of a fourth-order nonlinear degenerate parabolic equation. Arch. Rational Mech. Anal., 129(2):175-200, 1995.
A. L. Bertozzi and M. Pugh. The lubrication approximation for thin viscous films: regularity and long-time behavior of weak solutions. Comm. Pure Appl. Math., 49(2):85-123, 1996.
M. Bertsch, R. Dal Passo, H. Garcke, and G. Grün. The thin viscous flow equation in higher space dimensions. Adv. Differential Equations, 3(3):417-440, 1998.
Günther Grün. Droplet spreading under weak slippage: existence for the Cauchy problem. Comm. Partial Differential Equations, 29(11-12):1697-1744, 2004.
Lidia Ansini and Lorenzo Giacomelli. Doubly nonlinear thin-film equations in one space dimension. Arch. Ration. Mech. Anal., 173(1):89-131, 2004.
Francisco Bernis. Finite speed of propagation and continuity of the interface for thin viscous flows. Adv. Differential Equations, 1(3):337-368, 1996.
Francisco Bernis. Finite speed of propagation for thin viscous flows when 2 ≤ n < 3. C. R. Acad. Sci. Paris Sér. I Math., 322(12):1169-1174, 1996.
J. Hulshof and A. E. Shishkov. The thin film equation with 2 ≤ n < 3: finite speed of propagation in terms of the L¹-norm. Adv. Differential Equations, 3(5):625-642, 1998.
R. Dal Passo, L. Giacomelli, and G. Grün. A waiting time phenomenon for thin film equations. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 30(2):437-463, 2001.
R. Dal Passo, L. Giacomelli, and A. E. Shishkov. The thin film equation with nonlinear diffusion. Comm. Partial Differential Equations, 26(9-10):1509-1557, 2001.
Günther Grün. Droplet spreading under weak slippage: a basic result on finite speed of propagation. SIAM J. Math. Anal., 34(4):992-1006, 2003.
G. Grün. Droplet spreading under weak slippage: the waiting time phenomenon. Ann. Inst. H. Poincaré Anal. Non Linéaire, 21(2):255-269, 2004.
L. Giacomelli and A. E. Shishkov. Propagation of support in one-dimensional convected thin-film flow. Indiana Univ. Math. J., 54(4):1181-1215, 2005.
Lorenzo Giacomelli and Günther Grün. Lower bounds on waiting times for degenerate parabolic equations and systems. Interfaces Free Bound., 8(1):111-129, 2006.
J. Fischer. Optimal lower bounds on asymptotic support propagation rates for the thin-film equation. J. Differential Equations, 255(10):3127-3149, 2013.
J. Fischer. Upper bounds on waiting times for the thin-film equation: the case of weak slippage. Arch. Rational Mech. Anal., 211(3):771-818, 2014.
Julian Fischer. Behaviour of free boundaries in thin-film flow: the regime of strong slippage and the regime of very weak slippage. Ann. Inst. H. Poincaré C Anal. Non Linéaire, 33(5):1301-1327, 2016.
Nicola De Nitti and Julian Fischer. Sharp criteria for the waiting time phenomenon in solutions to the thin-film equation. Comm. Partial Differential Equations, 47(7):1394-1434, 2022.
L. Giacomelli, H. Knüpfer, and F. Otto. Smooth zero-contact-angle solutions to a thin-film equation around the steady state. J. Differential Equations, 245(6):1454-1506, 2008.
Björn Bringmann, Lorenzo Giacomelli, Hans Knüpfer, and Felix Otto. Corrigendum to "Smooth zero-contact-angle solutions to a thin-film equation around the steady state" [J. Differential Equations 245 (2008) 1454-1506]. J. Differential Equations, 261(2):1622-1635, 2016.
L. Giacomelli, M. Gnann, and F. Otto. Regularity of source-type solutions to the thin-film equation with zero contact angle and mobility exponent between 3/2 and 3. European Journal of Applied Mathematics, 24:735-760, 2013.
Manuel V. Gnann. Well-posedness and self-similar asymptotics for a thin-film equation. SIAM J. Math. Anal., 47(4):2868-2902, 2015.
Fethi Ben Belgacem, Manuel V. Gnann, and Christian Kuehn. A dynamical systems approach for the contact-line singularity in thin-film flows. Nonlinear Anal., 144:204-235, 2016.
Manuel V. Gnann. On the regularity for the Navier-slip thin-film equation in the perfect wetting regime. Arch. Ration. Mech. Anal., 222(3):1285-1337, 2016.
Christian Seis. The thin-film equation close to self-similarity. Anal. PDE, 11(5):1303-1342, 2018.
L. Giacomelli, M. V. Gnann, H. Knüpfer, and F. Otto. Well-posedness for the Navier-slip thin-film equation in the case of complete wetting. J. Differential Equations, 257(1):15-81, 2014.
Manuel V. Gnann and Mircea Petrache. The Navier-slip thin-film equation for 3D fluid films: existence and uniqueness. J. Differential Equations, 265(11):5832-5958, 2018.
Manuel V. Gnann, Slim Ibrahim, and Nader Masmoudi. Stability of receding traveling waves for a fourth order degenerate parabolic free boundary problem. Adv. Math., 347:1173-1243, 2019.
Felix Otto. Lubrication approximation with prescribed nonzero contact angle. Comm. Partial Differential Equations, 23(11-12):2077-2164, 1998.
M. Bertsch, L. Giacomelli, and G. Karali. Thin-film equations with "partial wetting" energy: existence of weak solutions. Phys. D, 209(1-4):17-27, 2005.
A. Mellet. The thin film equation with non-zero contact angle: a singular perturbation approach. Comm. Partial Differential Equations, 40(1):1-39, 2015.
Hans Knüpfer. Well-posedness for the Navier slip thin-film equation in the case of partial wetting. Comm. Pure Appl. Math., 64(9):1263-1296, 2011.
Hans Knüpfer. Well-posedness for a class of thin-film equations with general mobility in the regime of partial wetting. Arch. Ration. Mech. Anal., 218(2):1083-1130, 2015.
Sergey Degtyarev. Classical solvability of the multidimensional free boundary problem for the thin film equation with quadratic mobility in the case of partial wetting. Discrete Contin. Dyn. Syst., 37(7):3625-3699, 2017.
Hans Knüpfer. Erratum to: Well-posedness for a class of thin-film equations with general mobility in the regime of partial wetting. Arch. Ration. Mech. Anal., 2023. In preparation.
Elias Esselborn. Relaxation rates for a perturbation of a stationary solution to the thin-film equation. SIAM J. Math. Anal., 48(1):349-396, 2016.
Mohamed Majdoub, Nader Masmoudi, and Slim Tayachi. Relaxation to equilibrium in the one-dimensional thin-film equation with partial wetting and linear mobility. Comm. Math. Phys., 385(2):837-857, 2021.
Ciro Semprebon and Martin Brinkmann. On the onset of motion of sliding drops. Soft Matter, 10(18):3325-3334, 2014.
Natalie Grunewald and Inwon Kim. A variational approach to a quasi-static droplet model. Calc. Var. Partial Differential Equations, 41(1-2):1-19, 2011.
M. G. Delgadino and A. Mellet. On the relationship between the thin film equation and Tanner's law. Communications on Pure and Applied Mathematics, 74(3):507-543, 2021.
P. Ehrhard and S. H. Davis. Nonisothermal spreading of liquid drops on horizontal plates. J. Fluid Mech., 229:365-388, 1991.
M. Chiricotto and L. Giacomelli. Droplets spreading with contact-line friction: lubrication approximation and traveling wave solutions. Communications in Applied and Industrial Mathematics, 2(2), 2011.
M. Chiricotto and L. Giacomelli. Scaling laws for droplets spreading under contact-line friction. Commun. Math. Sci., 11(2):361-383, 2013.
Maria Chiricotto and Lorenzo Giacomelli. Weak solutions to thin-film equations with contact-line friction. Interfaces Free Bound., 19(2):243-271, 2017.
Weiqing Ren and Weinan E. Boundary conditions for the moving contact line problem. Physics of Fluids, 19(2):022101, 2007.
Weiqing Ren, Dan Hu, and Weinan E. Continuum models for the contact line problem. Physics of Fluids, 22(10):102103, 2010.
W. Ren and W. E. Derivation of continuum models for the moving contact line problem based on thermodynamic principles. Comm. Math. Sci., 9(2):597-606, 2011.
P. G. de Gennes. Wetting: statics and dynamics. Rev. Mod. Phys., 57(3):827-863, 1985.
F. Otto. Lubrication approximation with prescribed nonzero contact angle. Comm. Partial Differential Equations, 23(11-12):2077-2164, 1998.
M. Bertsch, L. Giacomelli, and G. Karali. Thin-film equations with "partial wetting" energy: existence of weak solutions. Phys. D, 209(1-4):17-27, 2005.
R. Durastanti and L. Giacomelli. Spreading equilibria under mildly singular potentials: pancakes versus droplets. Journal of Nonlinear Science, 32(5), 2022.
Dirk Peschka. Variational approach to dynamic contact angles for thin films. Physics of Fluids, 30(8):082115, 2018.
M. Bertsch, R. Dal Passo, S. H. Davis, and L. Giacomelli. Effective and microscopic contact angles in thin film dynamics. European J. Appl. Math., 11(2):181-201, 2000.
L. Giacomelli and F. Otto. Droplet spreading: intermediate scaling law by PDE methods. Comm. Pure Appl. Math., 55(2):217-254, 2002.
L. Ansini and L. Giacomelli. Shear-thinning liquid films: macroscopic and asymptotic behaviour by quasi-self-similar solutions. Nonlinearity, 15(6):2147-2164, 2002.
Manuel V. Gnann and Anouk C. Wisse. The Cox-Voinov law for traveling waves in the partial wetting regime. Nonlinearity, 35(7):3560-3592, 2022.
F. Bernis, L. A. Peletier, and S. M. Williams. Source type solutions of a fourth order nonlinear degenerate parabolic equation. Nonlinear Anal., 18(3):217-234, 1992.
J. A. Carrillo and G. Toscani. Long-time asymptotics for strong solutions of the thin film equation. Comm. Math. Phys., 225(3):551-571, 2002.
Eric A. Carlen and Süleyman Ulusoy. Asymptotic equipartition and long time behavior of solutions of a thin-film equation. J. Differential Equations, 241(2):279-292, 2007.
Daniel Matthes, Robert J. McCann, and Giuseppe Savaré. A family of nonlinear fourth order equations of gradient flow type. Comm. Partial Differential Equations, 34(10-12):1352-1397, 2009.
Eric A. Carlen and Süleyman Ulusoy. Localization, smoothness, and convergence to equilibrium for a thin film equation. Discrete Contin. Dyn. Syst., 34(11):4537-4553, 2014.
N. F. Smyth and J. M. Hill. High-order nonlinear diffusion. IMA J. Appl. Math., 40(2):73-86, 1988.
Dirk Peschka and Luca Heltai. Model hierarchies and higher-order discretisation of time-dependent thin-film free boundary problems with dynamic contact angle. Journal of Computational Physics, 464:111325, 2022.

(Lorenzo Giacomelli) SBAI Department, Sapienza University of Rome, Via A. Scarpa 16, 00161 Roma, Italy. [email protected]
(Manuel V. Gnann) Delft Institute of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands.
(Dirk Peschka) Weierstrass Institute, Mohrenstrasse 39, 10117 Berlin, Germany. [email protected]
Are there phase transitions in information space?

Jonathan Oppenheim (Racah Institute of Theoretical Physics, Hebrew University of Jerusalem, Givat Ram, 91904 Jerusalem, Israel, and Institute of Theoretical Physics and Astrophysics, University of Gdańsk, Poland), Michał Horodecki and Ryszard Horodecki (Institute of Theoretical Physics and Astrophysics, University of Gdańsk, Poland)

21 Jan 2003. arXiv: quant-ph/0207169. doi: 10.1103/physrevlett.90.010404.

Abstract: The interplay between two basic quantities - quantum communication and information - is investigated. Quantum communication is an important resource for quantum states shared by two parties and is directly related to entanglement. Recently, the amount of local information that can be drawn from a state has been shown to be closely related to the non-local properties of the state. Here we consider both formation and extraction processes, and analyze informational resources as a function of quantum communication. The resulting diagrams in information space allow us to observe phase-like transitions when correlations become classical.
Quantum communication (QC) - the sending of qubits between two parties - is a primitive concept in quantum information theory. Entanglement cannot be created without it, and conversely, entanglement between two parties can be used to communicate quantum information through teleportation [1]. The amount of quantum communication needed to prepare a state, and the amount of quantum communication that a state enables one to perform, are fundamental properties of states shared by two parties. These amounts are identical to the entanglement cost E_c [2] and the entanglement of distillation E_D [3], respectively. Perhaps surprisingly, the two quantities are different [4]. There are even states which are "bound" in the sense that quantum communication is needed to create them, yet nothing can be obtained from them [5]. Still, QC is a notion distinct from entanglement. For example, one may need to use a large amount of QC while creating some state of low E_c, in order to save some other resource. In the present paper we will consider such a situation. The second primitive resource of interest will be information, which quantifies how pure a state is. The motivation comes from (both classical and quantum) thermodynamics: it is known that bits in a pure state can be used to extract physical work from a single heat bath [6], and conversely, work is required to reset mixed states to a pure form [7,8].
For distant parties, in order to use information to perform such tasks, it must first be localized. In [9] we considered how much information (i.e. pure states) can be localized (or drawn) from a state shared between two parties. Thus far, the amount of information needed to prepare a state has not been considered, a possible exception being [10], where it was noted that there is a thermodynamical irreversibility between preparation and measurement for ensembles of certain pure product states. However, given the central role of quantum communication and information, it would be of considerable importance to understand the interplay between these two primitive resources. In this Letter, we attempt to lay the foundation for this study by examining how much information is needed to prepare a state and how much can be extracted from it as a function of quantum communication. For a given state, this produces a curve in information space. The shapes of the curves fall into a number of distinctive categories which classify the state, and only a small number of parameters are needed to characterize them. The curves for pure states can be calculated exactly, and they are represented by a one-parameter family of lines of constant slope. The diagrams exhibit features reminiscent of thermodynamics, and phase-like transitions (cf. [11]) are observed.
An important quantity that emerges in this study is the information surplus ∆_f. It quantifies the additional information that is needed to prepare a state when quantum communication resources are minimized. ∆_f tells us how much information is dissipated during the formation of a state; it is therefore closely related to the irreversibility of state preparation, and thus to the difference between the entanglement of distillation and the entanglement cost. When it is zero, there is no irreversibility in entanglement manipulations. Examples of states with ∆_f = 0 include pure states, and states with an optimal decomposition [3] which is locally orthogonal.
Consider two parties in distant labs, who can perform local unitary operations, dephasing [12], and classical communication. It turns out to be simpler to substitute measurements with dephasing operations, since we no longer need to keep track of the informational cost of resetting the measuring device. (This cost was noted by Landauer [7] and used by Bennett [8] to exorcize Maxwell's demon.) The classical communication channel can also be thought of as a dephasing channel. Finally, we allow Alice and Bob to add noise (states which are proportional to the identity matrix), since pure noise contains no information. Note that we are only interested in accounting for resources that are "used up" during the preparation procedure. For example, a pure state which is used and then reset to its original state at the end of the procedure does not cost anything.
Consider now the information extraction process of [9]. If the two parties have access to a quantum channel, and share a state ̺ AB , they can extract all the information from the state
I = n − S(̺ AB )    (1)
where n is the number of qubits of the state, and S(̺ AB ) is its Von Neumann entropy. Put another way, the state can be compressed, leaving I pure qubits. However, if two parties do not have access to a quantum channel, and can only perform local operations and communicate classically (LOCC), then in general, they will be able to draw less local information from the state. In [9] we defined the notion of the deficit ∆ to quantify the information that can no longer be drawn under LOCC. For pure states, it was proven that ∆ was equal to the amount of distillable entanglement in the state.
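As a quick numerical check of Eq. (1) (my own illustration, not part of the original text), the sketch below evaluates I = n − S(̺) in NumPy for a pure Bell state (S = 0, so I = n = 2) and for the maximally mixed two-qubit state (S = 2, so I = 0).

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) in bits (base-2 logarithm)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def information_content(rho):
    """I = n - S(rho), Eq. (1), with n the number of qubits."""
    n = int(np.log2(rho.shape[0]))
    return n - von_neumann_entropy(rho)

# Pure Bell state (|00> + |11>)/sqrt(2): S = 0, so I = n = 2
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
bell = np.outer(psi, psi)

# Maximally mixed two-qubit state: S = 2, so I = 0
mixed = np.eye(4) / 4
```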
Let us now turn to formation processes and define ∆ f (Q) as follows. Given an amount of quantum communication Q, the amount of information (pure states) needed to prepare the state ̺ AB under LOCC is given by I f (Q). Clearly at least E c qubits of quantum communication are necessary. In general, I f (Q) will be greater than the information content I. The surplus is then
∆ f (Q) = I f (Q) − I    (2)
The two end points are of particular interest, i.e. ∆ f ≡ ∆ f (E c ), where quantum communication is minimized, and ∆ f (E r ) = 0, where we use the quantum channel enough times that I f (E r ) = I. Clearly

E c ≤ E r ≤ min{S(ρ A ), S(ρ B )}

where ρ A is obtained by tracing out Bob's system. This rough bound is obtained by noting that, at a minimum, Alice or Bob can prepare the entire system locally and then send it to the other party through the quantum channel (after compressing it). We will obtain a tight bound later in this paper.
The general procedure for state preparation is that Alice uses a classical instruction set (an ancilla in a classical state) with probability distribution matching that of the decomposition which is optimal for a given Q. Since the instruction set contains classical information, it can be passed back and forth between Alice and Bob. Additionally they need n pure standard states. The pure states are then correlated with the ancilla, and then sent. The ancilla need not be altered by this procedure, and can be reset and then reused, so at worst we have

I f ≡ I f (E c ) ≤ n and ∆ f ≤ S(̺ AB ).

We will shortly describe how one can do better by extracting information from correlations between the ancilla and the state.
The pairs (Q, I f (Q)) form curves in information space. In Figure 1 we show a typical curve which we now explain. Since we will be comparing the formation curves to extraction curves, we will adopt the convention that I f (Q) and Q will be plotted on the negative axis since we are using up resources. It can be shown that I f (Q) is concave, monotonic and continuous. To prove concavity, we take the limit of many copies of the state ̺ AB . Then given any two protocols, we can always toss a coin weighted with probabilities p and 1 − p and perform one of the protocols with this probability. There will always be a protocol which is at least as good as this. Monotonicity is obvious (additional quantum communication can only help), and continuity follows from monotonicity, and the existence of the probabilistic protocol. The probabilistic protocol can be drawn as a straight line between the points (E r , I f (E r )) and (E c , I f (E c )). There may however exist a protocol which has a lower I f (Q) than this straight line, i.e. the curve I f (Q) satisfies
I f (Q) ≤ I + [I f (E c ) − I] (Q − E r )/(E c − E r )    (3)
Let us now look at extraction processes. The idea is that we draw both local information (pure separable states), and distill singlets. The singlets allow one to perform teleportations, so that we are in fact, extracting the potential to use a quantum channel. We can also consider the case where we use the quantum channel to assist in the information extraction process. We can therefore write the extractable information I l (Q) as a function of Q. When Q is positive, we distill singlets at the same time as drawing information, and when Q is negative, we are using the quantum channel Q times to assist in the extraction (see also Figure 1).
There appear to be at least three special points on the curve. The first is the point I l ≡ I l (0). This was considered in [9], where we draw maximal local information without the aid of a quantum channel. Another special point is the usual entanglement distillation procedure, I g = I l (E D ). The quantity I g is the amount of local information extractable from the "garbage" left over from distillation. I g can be negative, as information may need to be added to the system in order to distill all the available entanglement. Finally, I = I l (E r ) is the point where we use the quantum channel enough times that we can extract all the available information. This is the same number of times that the quantum channel is needed to prepare the state without any information surplus, since both procedures are now reversible. Just as with the formation curve, I l (Q) is convex, continuous and monotonic. For Q ≥ 0 there is an upper bound on the extraction curve due to the classical/quantum complementarity of [13]:
I l (Q) + Q ≤ I l    (4)
It arises because one bit of local information can be drawn from each distilled singlet, destroying the entanglement. One might suppose that the complementarity relation (4) can be extended into the region Q < 0. Perhaps surprisingly, this is not the case, and we have found that a small amount of quantum communication can free up a large amount of information. In Figure 2a we plot the region occupied by pure states. For extraction processes, pure states saturate the bound of Eq. (4) [13]. For formation processes they are represented as points.
In general, if ∆ f = 0 then E c = E D . Examples include mixtures of locally orthogonal pure states [14]. The converse is not true, at least for single copies, as there are separable states such as those of [10] which have ∆ f ≠ 0 and ∆ ≠ 0.
It therefore appears that ∆ f is not a function of the entropy of the state, or of the entanglement, but rather, shows how chaotic the quantum correlations are. It can also be thought of as the information that is dissipated during a process, while ∆ can be thought of as the bound information which cannot be extracted under LOCC. The quantities we are discussing have (direct or metaphoric) connections with thermodynamics. Local information can be used to draw physical work, and quantum communication has been likened to quantum logical work [14]. One is therefore tempted to investigate whether there can be some effects similar to phase transitions. Indeed, we will demonstrate such an effect for a family of mixed states where the transition is of second order, in that the derivative of our curves will behave in a discontinuous way.
To this end we need to know more about E r and I f . Consider the notion of LOCC-orthogonality (cf. [14]). One says that the ̺ i are LOCC-orthogonal if by LOCC Alice and Bob can transform Σ i p i |i⟩⟨i| A′ ⊗ ̺ i AB into |0⟩⟨0| A′ ⊗ Σ i p i ̺ i AB and vice versa; here |i⟩ A′ is a basis of Alice's ancilla. In other words, Alice and Bob are able to correlate the states ̺ i to orthogonal states of a local ancilla as well as reset the correlations. Consider a state ̺ AB that can be LOCC-decomposed, i.e. it is a mixture of LOCC-orthogonal states, ̺ = Σ i p i ̺ i . The decomposition suggests a scheme for reversible formation of ̺. Alice prepares locally the state ̺ A′AB = Σ i p i |i⟩⟨i| A′ ⊗ ̺ i AB . This costs n A′AB − S(̺ A′AB ) bits of information. Conditioned on i, Alice compresses the halves of ̺ i and sends them to Bob via a quantum channel. This costs Σ i p i S(̺ i B ) qubits of quantum communication. Then, since the ̺ i are LOCC-orthogonal, Alice and Bob can reset the ancilla and return n A′ bits. One then finds that, in this protocol, formation costs n AB − S(̺ AB ) bits; hence it is reversible. Consequently E r ≤ Σ i p i S(̺ i B ), hence
E r (̺ AB ) ≤ inf min X Σ i p i S(̺ i X ), X = A, B    (5)
where the infimum runs over all LOCC-orthogonal decompositions of ̺ AB . We can also estimate I f by observing that the optimal decomposition for entanglement cost is compatible with LOCC-orthogonal decompositions, i.e. it is of the form {p i q ij , ψ ij } where Σ j q ij |ψ ij ⟩⟨ψ ij | = ̺ i . Now, Alice prepares locally the state ̺ A′A″AB = Σ ij p i q ij |i⟩⟨i| A′ ⊗ |j⟩⟨j| A″ ⊗ |ψ ij ⟩⟨ψ ij | AB . Conditional on ij, Alice compresses the halves of the ψ ij and sends them to Bob. This costs on average E c qubits of communication. So far Alice has borrowed n A′A″AB − S(̺ A′A″AB ) bits. Alice and Bob then reset and return ancilla A′ (this is possible due to the LOCC-orthogonality of the ̺ i ) and also return ancilla A″ without resetting. The amount of bits used is n AB − (S(̺ AB ) − Σ i p i S(̺ i )), giving
∆ f ≤ inf Σ i p i S(̺ i ) ≤ S(̺)    (6)
where, again, the infimum runs over the same set of decompositions as in Eq. (5), providing a connection between ∆ f and E r . In the procedure above, collective operations were used only in the compression stage. In such a regime the above bounds are tight. There is a question whether, by some sophisticated collective scheme, one can do better. We conjecture that this is not the case, supported by the remarkable result of [15]. The authors show that an ensemble of nonorthogonal states cannot be compressed to less than S(̺) qubits, even at the expense of classical communication. In our case orthogonality is replaced by LOCC-orthogonality, and classical communication by resetting. We thus assume equality in Eqs. (5) and (6). Thus for a state that is not LOCC-decomposable (this holds for all two-qubit states that do not have a product eigenbasis) we have

∆ f = S(̺ AB ),  E r = min{S(̺ A ), S(̺ B )}.
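Assuming equality in Eqs. (5) and (6) as above, both quantities are easy to evaluate numerically for a concrete state. The sketch below (plain NumPy; an illustration of mine, not the paper's) computes ∆ f = S(̺ AB ) and E r = min{S(̺ A ), S(̺ B )} for a two-qubit mixture of the Bell states ψ ± , whose eigenbasis is not a product basis.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace(rho, keep):
    """Reduced state of a two-qubit density matrix; keep = 0 (Alice) or 1 (Bob)."""
    r = rho.reshape(2, 2, 2, 2)  # indices (a, b, a', b')
    if keep == 0:
        return np.trace(r, axis1=1, axis2=3)
    return np.trace(r, axis1=0, axis2=2)

def bell(sign):
    """Projector onto (|00> + sign |11>)/sqrt(2)."""
    v = np.zeros(4)
    v[0], v[3] = 1 / np.sqrt(2), sign / np.sqrt(2)
    return np.outer(v, v)

p = 0.25
rho = p * bell(+1) + (1 - p) * bell(-1)   # mixture of psi+ and psi-

delta_f = entropy(rho)                                    # = H(p)
E_r = min(entropy(partial_trace(rho, 0)),
          entropy(partial_trace(rho, 1)))                 # = 1: both marginals are I/2
```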
Having fixed two extremal points of our curves, let us see if there is a protocol which is better than the probabilistic one (a straight line on the diagram). We need to find some intermediate protocol which is cheap in both resources. The protocol is suggested by the decomposition ̺ = Σ i p i ̺ i where the ̺ i are themselves LOCC-orthogonal mixtures of pure states. Thus Alice can share with Bob each ̺ i at a communication cost of Q = Σ i p i E c (̺ i ). If the states ̺ i are not LOCC-orthogonal, Alice cannot reset the instruction set, so that the information cost is I = n − Σ i p i S(̺ i ). We will now show by example that this may be a very cheap scenario. Consider
̺ = p |ψ + ⟩⟨ψ + | + (1 − p) |ψ − ⟩⟨ψ − |, p ∈ [0, 1/2]    (7)
with ψ ± = (|00⟩ ± |11⟩)/√2. When p ≠ 0 we have E r = 1, I f = 2, E c = H(1/2 + √(p(1 − p))) [4], where H(x) = −x log x − (1 − x) log(1 − x) is the binary entropy; thus our extreme points are (1, 2 − H(p)) and (E c , 2). For p = 0 the state has ∆ f = 0, hence the formation curve is just a point. We can however plot it as a line I = 1 (increasing Q will not change I). Now, we decompose the state as ̺ = 2p ̺ s + (1 − 2p)|ψ − ⟩⟨ψ − |, where ̺ s is an equal mixture of the LOCC-orthogonal states |00⟩ and |11⟩. The intermediate point is then (1 − 2p, 2 − 2p). A family of diagrams with changing parameter p is plotted in Fig. 3. The derivative χ(Q) = ∂Q/∂I has a singularity at p = 1/2. Thus we have something analogous to a second-order phase transition. The quantity χ(Q) may be analogous to a quantity such as the magnetic susceptibility. The transition is between states having ∆ = 0 (classically correlated) [9] and states with ∆ ≠ 0 which contain quantum correlations. It would be interesting to explore these transitions and diagrams further, and also the trade-off between information and quantum communication. To this end, the quantity ∆ f (Q) + Q − E c appears to express this trade-off. Finally, we hope that the presented approach may clarify an intriguing notion in quantum information theory, known as the thermodynamics of entanglement [14,16,17].
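The special points of the diagram for this family can be tabulated directly from the formulas above. The following sketch (mine; it simply evaluates the quoted expressions) returns the reversible point (E r , I) = (1, 2 − H(p)), the minimal-communication point (E c , 2), and the intermediate point (1 − 2p, 2 − 2p).

```python
import numpy as np

def H(x):
    """Binary entropy in bits, with H(0) = H(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

def curve_points(p):
    """Special points (Q, I_f) of the formation diagram for the state of Eq. (7)."""
    E_c = H(0.5 + np.sqrt(p * (1 - p)))    # entanglement cost, formula of [4]
    reversible = (1.0, 2.0 - H(p))         # (E_r, I): no information surplus
    minimal_Q = (E_c, 2.0)                 # (E_c, I_f): minimal quantum communication
    intermediate = (1 - 2 * p, 2 - 2 * p)  # from rho = 2p rho_s + (1-2p)|psi-><psi-|
    return reversible, minimal_Q, intermediate
```

At p = 1/2 the state is classically correlated, and all three points collapse onto the Q = 0 axis, consistent with the singular behaviour discussed in the text.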
FIG. 1: Formation and extraction curves for a generic mixed state. The short-dashed line represents the variant where information can be extracted from the "garbage" left after entanglement distillation (I g > 0). In general, the curves need not be smooth. The formation curve is in the lower left quadrant.
FIG. 2: a) pure states, b) states with ∆ = 0, c) separable states with ∆ > 0, d) bound entangled states with ∆ > 0.
Figure 2b-d shows the curves for some different types of states. It is interesting to what extent one can classify the different states just by examining the diagrams in information space.
FIG. 3: "Phase transition" in the family of states of Eq. (7).
[1] C. H. Bennett et al., Phys. Rev. Lett. 70, 1895 (1993).
[2] P. Hayden, M. Horodecki, and B. Terhal, J. Phys. A 34, 6891 (2001), quant-ph/0008134.
[3] C. H. Bennett et al., Phys. Rev. A 53, 2046 (1996).
[4] G. Vidal, W. Dür, and J. I. Cirac, Phys. Rev. Lett. 89, 027901 (2002), quant-ph/0112131.
[5] M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. 80, 5239 (1998), quant-ph/9801069.
[6] L. Szilard, Z. Phys. 53, 840 (1929).
[7] R. Landauer, IBM J. Res. Develop. 5, 183 (1961).
[8] C. H. Bennett, Int. J. Theor. Phys. 21, 905 (1982).
[9] J. Oppenheim, M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. 89, 180402 (2002), quant-ph/0112074.
[10] C. H. Bennett et al., Phys. Rev. A 59, 1070 (1999).
[11] D. Aharonov, quant-ph/9910081.
[12] Dephasing can be written as Σ i P i ρ AB P i , where the P i are orthogonal local projection operators.
[13] J. Oppenheim, K. Horodecki, M. Horodecki, P. Horodecki, and R. Horodecki, quant-ph/0207025.
[14] P. Horodecki, M. Horodecki, and R. Horodecki, Acta Phys. Slovaca 48, 144 (1998), quant-ph/9805072.
[15] P. Hayden, R. Jozsa, and A. Winter, quant-ph/0204038.
[16] S. Popescu and D. Rohrlich, Phys. Rev. A 56, R3319 (1997), quant-ph/9610044.
[17] M. Horodecki, J. Oppenheim, and R. Horodecki, to appear in Phys. Rev. Lett., quant-ph/0207177.
Dynamical large deviations of linear diffusions

Johan Du Buisson (Department of Physics, Stellenbosch University, Stellenbosch 7600, South Africa) and Hugo Touchette (Department of Mathematical Sciences, Stellenbosch University, Stellenbosch 7600, South Africa)

(Dated: May 10, 2023)

arXiv:2212.12022, doi:10.1103/physreve.107.054111

Abstract: Linear diffusions are used to model a large number of stochastic processes in physics, including small mechanical and electrical systems perturbed by thermal noise, as well as Brownian particles controlled by electrical and optical forces. Here, we use techniques from large deviation theory to study the statistics of time-integrated functionals of linear diffusions, considering three classes of functionals or observables relevant for nonequilibrium systems which involve linear or quadratic integrals of the state in time. For these, we derive exact results for the scaled cumulant generating function and the rate function, characterizing the fluctuations of observables in the long-time limit, and study in an exact way the set of paths or effective process that underlies these fluctuations. The results give a complete description of how fluctuations arise in linear diffusions in terms of effective forces that remain linear in the state or, alternatively, in terms of fluctuating densities and currents that solve Riccati-type equations. We illustrate these results using two common nonequilibrium models, namely, transverse diffusions in two dimensions involving a non-conservative rotating force, and two interacting particles in contact with heat baths at different temperatures.
I. INTRODUCTION
Stochastic differential equations (SDEs) are widely used in science and engineering to model the dynamics of systems driven by both deterministic forces and external noise sources [1][2][3][4]. In many cases, the force acting on a system can be taken or approximated to be linear in the state, giving rise to linear SDEs, which are also referred to as linear diffusions, linear Langevin equations or Ornstein-Uhlenbeck processes. These are used in physics to model many different systems evolving close to fixed points or in weak force regimes, including small mechanical systems perturbed by thermal noise [5][6][7][8], and Brownian particles manipulated by electrical or laser fields [8][9][10]. Because they are exactly solvable, linear SDEs are also used as basic models of nonequilibrium systems to study the effect of non-conservative forces, temperature gradients, and the breaking of time-reversal symmetry, in general, on the steady state of these systems, determined by their stationary density and current [11][12][13].
In this work, we study the statistics of dynamical observables of linear SDEs, defined as timeintegrated functionals of the paths or trajectories of an SDE. These quantities are related in physics to thermodynamic quantities, such as the work done on a system over time or the heat exchanged with a bath, and thus play a prominent role when investigating the efficiency of control and biological processes that fluctuate at the micro and meso scales [13][14][15]. The study of these fluctuations using techniques from large deviation theory [16][17][18][19] has led in recent years to many general results and insights about the physics of nonequilibrium systems, related to fluctuation symmetries [20][21][22][23], fluctuation phase transitions [24][25][26][27][28][29][30], and thermodynamic uncertainty relations or bounds connecting the variance of current fluctuations to dissipation [31][32][33][34][35].
Another important insight coming from large deviation theory is that fluctuations of observables in nonequilibrium processes arise from density and current fluctuations that organise themselves in an "optimal" way so as to minimize a certain cost or loss [36][37][38][39][40][41][42], similarly to noise-driven transitions in chemical systems which are known to follow optimal "pathways" [43][44][45]. This observation has proven useful for understanding the transport properties of many nonequilibrium systems, including interacting particle systems driven in nonequilibrium states by boundary reservoirs [36][37][38], and generalizes at the level of fluctuations the idea that the knowledge of the stationary density and the current of a Markov process is sufficient to completely characterise its stationary state [11].
This description of nonequilibrium systems in terms of density and current fluctuations is appealing physically, but cannot be developed easily in practice because determining the large deviation functions or potentials that describe these fluctuations requires that we solve non-trivial spectral or optimization problems [18], whose dimension increases with the size of the system or model considered. For this reason, there are only a few models for which these functions can be calculated exactly, including lattice jump processes [24][25][26][46][47][48], low-dimensional SDEs [49][50][51][52], as well as simple interacting particle systems, such as the one-dimensional exclusion process [53][54][55][56][57] or the zero-range process [58][59][60][61], which can also be solved analytically in the macroscopic limit using field-theory techniques from the macroscopic fluctuation theory [38].
Here, we derive analytical results for the dynamical large deviations of linear SDEs, showing that this type of process admits exact solutions for three classes of observables, defined, respectively, as linear integrals of the state in time, quadratic integrals of the state, and stochastic integrals involving a linear function of the state multiplied by the increment of the process in time. Each of these arises naturally when dealing with linear SDEs, as shown in the next sections. The third class of observables, in particular, arises when defining thermodynamic quantities, such as the nonequilibrium work or the entropy production, related to the current.
Large deviations of linear SDEs have been studied for specific examples of linear SDEs and observables, notably, quadratic observables [62][63][64][65][66][67][68][69], the entropy production [69][70][71][72][73][74], the nonequilibrium work [75][76][77][78], energy currents in spin dynamics [79], and the heat exchanged by harmonic oscillators with a heat bath [80][81][82][83][84][85]. Our results unify and generalize these studies by considering linear SDEs in any dimension and by extending the class of observables considered to the three classes mentioned above. For these, we give explicit expressions, involving Riccati-type equations, for the scaled cumulant generating function and the rate function, which characterize the probability distribution of dynamical observables in the long-time limit. These two functions are important in physics, since they also determine symmetries in the distribution of observables, referred to as fluctuation relations [20][21][22][23], and sharp transitions between different fluctuation regimes [24][25][26][27][28][29][30].
Compared to previous studies, we also provide a complete description of the way fluctuations arise in terms of optimal density and current fluctuations, which modify the stationary density and current of the diffusion considered or, equivalently, in terms of an effective diffusion process that modifies the force or drift of the original diffusion [86][87][88][89]. This effective process has been studied extensively in recent years for many examples of jump processes [90][91][92][93][94] and diffusions [50][51][52][95][96][97] used as models of nonequilibrium systems, and has been shown in this context to be useful for understanding transitions between different fluctuation regimes, among other phenomena. One important property of this process that we uncover is that, for the three types of observables considered, the effective process is also described by a linear SDE, which means that it is characterized by a modified Gaussian stationary density and current corresponding, in the original SDE, to the optimal density and current fluctuations that give rise to an observable fluctuation. In this sense, these observables can be considered as "closed" or "sufficient" for the effective process to remain in the same SDE class as the original model.
The results that we obtain provide in the end a complete and exactly solvable framework for studying the large deviations of linear diffusions, which can be used to predict their steady-state and fluctuation properties, to determine from observed trajectories whether a system is reversible or irreversible, and, from a more applied perspective, to understand the convergence of Monte Carlo simulations [98]. We believe that they can also be applied, beyond linear systems, to approximate the large deviation functions of nonlinear SDEs or nonlinear observables near fixed points of the corresponding noiseless dynamics, at least in the Gaussian regime of fluctuations, characterized by a mean and variance, which also follow from our results. This can be applied potentially to study the large deviations of more complex systems and to develop new numerical algorithms for computing large deviation functions. We comment on these issues in the concluding section.
To illustrate our results, we consider two linear models in two dimensions, commonly used in physics to describe nonequilibrium processes, namely, transverse diffusions driven by a nonconservative and thus nonequilibrium force that generates a rotating drift in the plane, and the so-called Brownian gyrator, which consists of two particles interacting via a linear (spring) force, put in contact with heat baths at different temperatures. For each of these models, we show for specific observables how the stationary density and current are changed when fluctuations are observed, and how these changes depend on non-conservative forces being applied or on external reservoirs. In some cases, the current can vanish at the fluctuation level, implying that an irreversible system can "behave" in a reversible way when it is observed to fluctuate in a specific way or direction.
II. MODEL AND LARGE DEVIATIONS
A. Linear SDEs
The systems that we consider are underdamped diffusions or Langevin-type systems, described by the linear SDE
dX(t) = −M X(t) dt + σ dW(t)    (1)
where X(t) ∈ R n is the system's state at time t, M is the drift matrix defining the linear force or drift acting on X(t), and W (t) ∈ R m is a vector of independent Brownian or Wiener motions acting as the noise source, which is multiplied by the noise matrix σ of size n × m. For the remaining, we assume that the diffusion matrix D = σσ T , where T stands for the transpose, is invertible. Linear SDEs are used in physics to model many different systems, including nonequilibrium processes driven by temperature or chemical gradients [1], small cantilever and torsion systems perturbed by thermal noise [5][6][7][8], Brownian particles manipulated by laser tweezers [8][9][10], and electric circuits perturbed by Nyquist or artificial noise [99][100][101]. In control theory, they are also widely used as exact or approximate models of feedback-controlled systems, forming the basis of the classical linear-quadratic-Gaussian control problem [102][103][104]. Naturally, part of the interest for linear SDEs comes from the fact that they can be solved exactly to express X(t) as an integral of the noise. Moreover, the probability density p(x, t) can be found exactly by solving the Fokker-Planck equation, yielding a Gaussian distribution at all times t > 0 when the system is initialized at X(0) = x 0 , characterized by a time-dependent mean and covariance matrix (see [4,Sec. 3.7]).
Here, we focus on the long-time behaviour of the SDE (1) by further assuming that the matrix M , which is not necessarily symmetric, is positive definite, meaning that its eigenvalues have positive real parts. In this case, it is known that the SDE is ergodic and, thus, has a unique stationary density, given explicitly [4] by
p * (x) = 1/√((2π) n det C) exp(−(1/2) ⟨x, C −1 x⟩)    (2)
where C, the covariance matrix, satisfies the Lyapunov equation
D = M C + C M T .    (3)
Here we use ⟨a, b⟩ to represent the standard vector inner product in R n .
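Numerically, C follows from Eq. (3) with a standard Lyapunov solver. The sketch below (SciPy; the drift and diffusion matrices are illustrative choices of mine, not from the text) solves M C + C M T = D; for this particular M, whose symmetric part is the identity, the solution is C = I/2.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative 2D linear SDE: drift with a rotational part, isotropic noise
M = np.array([[1.0, 1.0],
              [-1.0, 1.0]])   # eigenvalues 1 +/- i: positive real parts, ergodic
D = np.eye(2)                 # D = sigma sigma^T

# Stationary covariance from the Lyapunov equation M C + C M^T = D, Eq. (3)
C = solve_continuous_lyapunov(M, D)
```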
J * (x) = F(x) p * (x) − (1/2) D ∇p * (x)    (4)
where F (x) is the general force or drift entering in an SDE, which for F (x) = −M x and p * (x) as given in (2) yields the current
J * (x) = H x p * (x)    (5)
where
H = (D/2) C −1 − M.    (6)
For the remaining, it is important to note that p * and J * determine F uniquely for a fixed D via (4), which means that their knowledge can be used to identify an SDE, whether linear or not. Moreover, the stationary current determines the reversibility of an SDE, that is, whether the probability of any given path over the time is the same as the probability of that path reversed in time [11]. If J * (x) = 0 for all x, then the SDE is reversible, describing in the long-time limit an equilibrium steady state, whereas if J * (x) = 0, then the SDE is irreversible and describes a nonequilibrium steady state violating the condition of detailed balance.
For linear SDEs, we can distinguish two sources of nonequilibrium behavior: a non-symmetric drift matrix M , leading to a non-conservative F that does not follow from the gradient of a potential, and a diffusion matrix D not proportional to the identity matrix I, related to heat baths with different temperatures or correlated noise sources. We study examples of these cases in Sec. V. Of course, a non-symmetric M and D ∝ I can still lead to an equilibrium state if they are such that H = 0.
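The equilibrium/nonequilibrium distinction can be tested directly from M and D. The sketch below (NumPy/SciPy; the example matrices are my own) computes H = (D/2)C −1 − M for a drift with a rotational (non-symmetric) part, which gives H ≠ 0, and for a symmetric gradient drift with D ∝ I, which gives H = 0.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def current_matrix(M, D):
    """H = (D/2) C^{-1} - M of Eq. (6); H = 0 iff the stationary current vanishes."""
    C = solve_continuous_lyapunov(M, D)     # Lyapunov equation (3)
    return (D / 2) @ np.linalg.inv(C) - M

D = np.eye(2)
M_rot = np.array([[1.0, 1.0], [-1.0, 1.0]])    # non-symmetric: rotating force
M_grad = np.array([[2.0, 0.5], [0.5, 1.0]])    # symmetric positive definite: gradient force

H_rot = current_matrix(M_rot, D)    # nonzero -> irreversible, nonequilibrium steady state
H_grad = current_matrix(M_grad, D)  # zero    -> reversible, detailed balance holds
```

For the symmetric case one finds C = M −1 /2, so H cancels exactly, while the rotational case leaves a purely antisymmetric H generating a circulating current.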
B. Observables
For a given linear SDE, we are interested in obtaining the long-time form of the probability density p(A T = a) of a dynamical observable A T , which is a time-averaged function of the state X(t). We consider, specifically, three classes of observables:
• Linear additive observables of the form
A T = (1/T) ∫ 0 T ⟨η, X(t)⟩ dt    (7)
where η is an arbitrary vector in R n ;
• Quadratic observables, defined as
A T = (1/T) ∫ 0 T ⟨X(t), Q X(t)⟩ dt    (8)
where Q is assumed, without loss of generality, to be a symmetric n × n matrix;
• Linear current-type observables, defined in terms of the increments of the SDE as
A T = 1 T T 0 ΓX(t) • dX(t),(9)
where Γ is an arbitrary n × n matrix and • denotes the scalar product taken according to the Stratonovich convention or calculus.
These observables are important in physics, as they include many quantities that can be measured in practice, such as the mechanical work done on a nonequilibrium process, the heat transferred in time between a system and its environment, and the entropy production, which is a measure of the irreversibility of stochastic processes [13][14][15]. In control theory, the quadratic observable is also related to quadratic cost functions or Lagrangians that are minimized to determine the optimal control inputs in steady-state control systems [102][103][104]. We study specific examples in Sec. V, showing how the vector η and matrices Q and Γ are to be chosen depending on the observable considered.
C. Large deviation principle
Finding the probability density of A T is difficult in general, even for linear SDEs. However, it is known from large deviation theory [16][17][18][105][106][107] that this density often scales in the limit of large integration times T according to
p(A T = a) ≈ e −T I(a) ,(10)
so the problem of finding p(A T = a) is simplified to the problem of finding the exponent I(a), called the rate function. The meaning of this approximation is that the dominant part of p(A T = a) as T becomes large is a decaying exponential controlled by I(a), so that corrections are sub-exponential in T . When this holds, A T is said to satisfy the large deviation principle (LDP) with rate function I(a) [106]. Equivalently, A T is said to satisfy the LDP if the limit
lim T →∞ − 1 T ln p(A T = a) = I(a)(11)
exists and yields a non-trivial rate function. The rate function is generally convex for ergodic Markov processes, and has a unique minimum and zero, denoted here by a * , which corresponds to the typical value of A T where p(A T = a) concentrates as T → ∞ [106]. The LDP shows that this concentration is exponential with T , so that fluctuations of A T away from a * are exponentially unlikely with the integration time.
The typical value a * also corresponds to the stationary expectation or mean of A T , and so can be calculated from p * or J * , depending on the observable considered. For linear additive observables, we trivially have
a * = R n η, x p * (x)dx = 0,(12)
since p * has zero mean, whereas for quadratic observables, a * is a modified second moment of p * :
a * = R n x, Qx p * (x)dx = Tr(QC).(13)
The result in both cases involves only p * , and so A T is said to be a density-type observable. For the third class of observable considered, we have instead
a * = R n Γx, J * (x) dx = Tr(Γ T HC),(14)
which explains why we refer to it as a current-type observable. In particular, a * = 0 for this observable if J * = 0.
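These stationary means follow directly from C and H; a minimal Python sketch with assumed example matrices (not from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed example system (values are illustrative)
M = np.array([[2.0, 1.0], [-1.0, 1.0]])
D = np.diag([1.0, 0.5])

C = solve_continuous_lyapunov(M, D)          # M C + C M^T = D
H = 0.5 * D @ np.linalg.inv(C) - M           # Eq. (6)

Q = np.eye(2)                                # quadratic observable <X, QX> = |X|^2
a_quad = np.trace(Q @ C)                     # Eq. (13): a* = Tr(QC)

Gamma = np.array([[0.0, 1.0], [-1.0, 0.0]])  # antisymmetric current observable
a_curr = np.trace(Gamma.T @ H @ C)           # Eq. (14): a* = Tr(Gamma^T H C)
```

Here a_curr is non-zero because H ≠ 0 for this system, consistent with the remark that a* = 0 for current-type observables whenever J* = 0.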
D. Large deviation functions
To find the rate function of the three classes of observables defined above, we use the Gärtner-Ellis theorem [106], which expresses I(a) in terms of another function λ(k), known as the scaled cumulant generating function (SCGF) and defined by
λ(k) = lim T →∞ 1 T ln E[e kT A T ],(15)
where E[·] denotes the expectation. Provided that λ(k) exists and is differentiable, then A T satisfies the LDP and its rate function is given by the Legendre transform of the SCGF [106]:
I(a) = k a a − λ(k a ),(16)
with k a the unique root of
λ ′ (k) = a.(17)
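As a simple illustration (the quadratic SCGF and the variance v below are assumptions, not derived here), the Legendre transform (16)-(17) reduces numerically to a one-dimensional root find for k a :

```python
import numpy as np
from scipy.optimize import brentq

def legendre(scgf, dscgf, a, k_lo=-50.0, k_hi=50.0):
    """I(a) = k_a*a - scgf(k_a), with k_a the root of scgf'(k) = a (Eqs. 16-17)."""
    k_a = brentq(lambda k: dscgf(k) - a, k_lo, k_hi)
    return k_a * a - scgf(k_a)

v = 2.0                                  # assumed asymptotic variance
scgf = lambda k: 0.5 * v * k**2          # differentiable quadratic SCGF (Gaussian case)
I = legendre(scgf, lambda k: v * k, a=3.0)
# a quadratic SCGF yields the Gaussian rate function I(a) = a^2/(2v)
```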
The advantage of using the SCGF for obtaining the rate function is that the generating function of A T conditioned on X(0) = x, defined by
G k (x, t) = E[e ktAt |X(0) = x],(18)
satisfies the linear equation
∂ t G k (x, t) = L k G k (x, t),(19)
where L k is a modification of the generator of the SDE, known as the tilted generator, which depends in our case on M , D and the observable considered [18]. We give in the next section the explicit expression of this operator as we consider the three observables individually. The linear equation defined by this operator is the well-known Feynman-Kac (FK) equation, which can be solved from the initial condition G k (x, 0) = 1 to obtain G k (x, t) and, in turn, λ(k) by taking the long-time limit of this solution, which does not depend in general on the initial condition because of the ergodicity of the process. Alternatively, we can use the fact that the FK equation is linear to expand G k (x, t) in a complete basis of bi-orthogonal eigenfunctions to obtain the SCGF, under mild conditions, from the dominant eigenvalue of L k . The SCGF can then be found by solving the following spectral problem for the dominant eigenvalue [18]:
L k r k (x) = λ(k)r k (x).(20)
Since the tilted generator L k is not generally Hermitian, this spectral equation has to be considered in conjunction with the adjoint equation
L † k l k (x) = λ(k)l k (x),(21)
where L † k is the adjoint of L k and l k is the eigenfunction of L † k associated with its dominant eigenvalue [18]. For convenience, we take these eigenfunctions to satisfy the normalization conditions
R n r k (x)l k (x)dx = 1(22)
and
R n l k (x)dx = 1.(23)
The problem of obtaining the rate function is therefore reduced to the problem of solving the FK equation or solving a particular spectral problem for the process and observable considered.
E. Effective process
The spectral problem (20) determines not only the SCGF and, in turn, the rate function characterising the likelihood of the fluctuations of A T , but also provides a way to understand how these fluctuations arise in the long-time limit in terms of an effective process that describes the subset of trajectories leading to a given fluctuation A T = a [86][87][88][89]. This effective process, which is also called the auxiliary, driven or fluctuation process, was studied extensively for jump processes [90][91][92][93][94] and SDEs [50][51][52][95][96][97]. In the latter case, it takes the form of a modified diffusion X k (t) satisfying the SDE
dX k (t) = F k (X k (t))dt + σdW (t),(24)
which has the same noise matrix σ as that of the original process, but with the effective drift F k given by
F k (x) = F (x) + D∇ ln r k (x)(25)
for the additive observables (here, linear and quadratic) and
F k (x) = F (x) + D[kΓx + ∇ ln r k (x)](26)
in the case of linear-current observables [87]. Here, F (x) = −M x is again the original drift of the linear SDE, while r k (x) is the eigenfunction related to the dominant eigenvalue and SCGF λ(k). Moreover, for a given fluctuation A T = a, the value of k is set to k a via the duality relation (17), which plays a role analogous to the temperature-energy relation in equilibrium statistical mechanics [87]. The effective process or effective SDE is also ergodic [87] and, therefore, has a unique stationary density, known to be given by
p * k (x) = r k (x)l k (x),(27)
and a stationary current, given in general by
J * k (x) = F k (x)p * k (x) − 1 2 D∇p * k (x).(28)
We study these modifications of the density and current, as well as the effective SDE supporting them, in the next sections for the three observables of interest. Since the effective process is ergodic, it also has a stationary value of A T , denoted in what follows by a * k , and given as in Eqs. (12)-(14) by replacing p * and J * with p * k and J * k , respectively. Mathematically, a * k is also the inverse function of k a , following the duality relation (17), so that a * k = λ ′ (k). The effective SDE can be interpreted, as mentioned, as the SDE describing the subset of trajectories giving rise in the long-time limit to a fluctuation A T = a, which has p * ka as its stationary density, J * ka as its stationary current, and a * ka = a as its stationary and typical value of A T [87]. In particular, p * 0 = p * and J * 0 = J * for k = 0, since the original process is not modified when observing its typical value A T = a * . Alternatively, it is known that the effective process can be interpreted as an optimal control process, whose drift minimizes a certain cost function in the long-time limit, related to the relative entropy [88]. From this point of view, the modified density and current can be seen as optimal density and current fluctuations leading to or creating a given fluctuation A T = a. We further discuss these interpretations in Sec. IV and refer to the original works [86][87][88][89][92] on the effective process for more details.
III. MAIN RESULTS
We derive in this section the exact generating function of A T for the three classes of observables defined before by solving the FK equation, and obtain from the result their SCGF and rate function by investigating the long-time limit of the generating function. We also obtain explicit expressions for the dominant eigenfunction r k , which allows us to study the effective SDE, providing us with a clear understanding of how fluctuations of these observables arise from modified forces, densities, and currents in linear diffusions. To be concise, we provide only the final results for the various functions considered, which can be checked by direct substitution into the FK equation or the spectral equations. For more details about the derivation of these solutions, which follow by discretizing and iteratively solving the FK equation in time, we refer to [108].
A. Linear additive observables
We begin our analysis with the linear additive observable A T , defined in (7), which involves the vector η in the linear contraction with the state X(t) of the linear SDE (1). For this observable, the generating function G k (x, t) satisfies the FK equation (19) with the tilted generator
L k = − M x, ∇ + 1 2 ∇, D∇ + k η, x ,(29)
which is solved, for the initial condition G k (x, 0) = 1, by
G k (x, t) = e v k (t),x e 1 2 t 0 v k (s),Dv k (s) ds ,(30)
where v k (t) is a vector in R n satisfying the differential equation
dv k (t) dt = kη − M T v k (t)(31)
with initial condition v k (0) = 0. This gives the exact generating function of A T at all times t ≥ 0. To extract the SCGF from this result, we note that (31) has a stationary solution v * k given explicitly by
v * k = k (M T ) −1 η,(32)
which is an attractive fixed point for all k ∈ R, since M is assumed to be positive definite. As a result, we obtain from the definition (15) of the SCGF,
λ(k) = 1 2 v * k , Dv * k(33)
or, more explicitly,
λ(k) = (k 2 /2) ⟨(M T ) −1 η, D (M T ) −1 η⟩.(34)
The fact that the result is quadratic in k means that the fluctuations of A T are Gaussian, as expected for linear integrals of Gaussian processes, with zero asymptotic mean and asymptotic variance
λ ′′ (0) = ⟨(M T ) −1 η, D (M T ) −1 η⟩.(35)
This can be seen more explicitly by taking the Legendre transform of λ(k), which yields the quadratic rate function
I(a) = a 2 / (2 ⟨(M T ) −1 η, D (M T ) −1 η⟩).(36)
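The closed-form results (34)-(36) are straightforward to implement; a small Python sketch (the matrices and the 1D check values are assumptions for illustration):

```python
import numpy as np

def linear_additive_ldp(M, D, eta):
    """Return the SCGF (34), rate function (36) and asymptotic variance (35)
    for the linear additive observable A_T = (1/T) * integral of <eta, X(t)> dt."""
    w = np.linalg.solve(M.T, eta)        # (M^T)^{-1} eta
    var = w @ D @ w                      # asymptotic variance, Eq. (35)
    scgf = lambda k: 0.5 * var * k**2    # Eq. (34)
    rate = lambda a: a**2 / (2.0 * var)  # Eq. (36)
    return scgf, rate, var

# 1D Ornstein-Uhlenbeck check: M = [[gamma]], D = [[eps]] gives var = eps/gamma^2
scgf, rate, var = linear_additive_ldp(np.array([[2.0]]),
                                      np.array([[0.5]]),
                                      np.array([1.0]))
```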
To understand physically how these Gaussian fluctuations arise, we note that G k (x, t) is known [87] to scale in the long-time limit according to
G k (x, t) ∼ r k (x)e tλ(k) ,(37)
so we can write directly
r k (x) = e v * k ,x .(38)
It can be verified that this function satisfies the spectral equation (20) for L k as given in (29) and λ(k) as given in (33), so it is indeed the dominant eigenfunction of L k . Consequently, we find from (25) that the modified drift of the effective process is
F k (x) = −M (x − x * k ),(39)
where x * k = M −1 Dv * k .
Thus, the effective process is also a linear process with the same drift matrix M as the original process, but with a fixed point in the drift pushed from the origin x = 0 to x * k to create the fluctuation A T = a * k . Its stationary density is therefore simply a translation of the stationary density of the original process,
p * k (x) = p * (x − x * k ), which is consistent with
a * k = R n ⟨η, x⟩ p * k (x)dx = ⟨η, x * k ⟩.(40)
Similarly, for the current we find
J * k (x) = H(x − x * k )p * k (x) = J * (x − x * k ).(41)
These results confirm previous studies considering specific linear processes and linear observables [98], including the one-dimensional Ornstein-Uhlenbeck process [87], and also confirm that the reversibility of the original SDE is not modified at the level of fluctuations [87], since they only translate the current in space.
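The duality a * k = λ ′ (k) can be verified directly from these expressions; a minimal Python sketch, with assumed example values for M, D, η and k:

```python
import numpy as np

# Assumed example system and observable (illustration only)
M = np.array([[2.0, 1.0], [0.0, 1.0]])     # positive stable (eigenvalues 2, 1)
D = np.diag([0.5, 1.0])
eta = np.array([1.0, -1.0])
k = 0.7

v_star = k * np.linalg.solve(M.T, eta)     # Eq. (32)
x_star = np.linalg.solve(M, D @ v_star)    # shifted fixed point, Eq. (39)
a_star_k = eta @ x_star                    # typical value of A_T, Eq. (40)

w = np.linalg.solve(M.T, eta)
lam_prime = k * (w @ D @ w)                # lambda'(k) from Eq. (34)
# duality relation (17): a*_k equals lambda'(k)
```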
B. Quadratic observables
For the quadratic observable defined in (8), the tilted generator governing the evolution of the generating function has the form
L k = − M x, ∇ + 1 2 ∇, D∇ + k x, Qx .(42)
and admits the following solution:
G k (x, t) = e x,B k (t)x e t 0 Tr(DB k (s))ds ,(43)
where B k (t) is now a symmetric n × n matrix satisfying the differential Riccati equation
dB k (t) dt = 2B k (t)DB k (t) − M T B k (t) − B k (t)M + kQ(44)
with initial condition B k (0) = 0. This can be obtained, as mentioned, by discretizing and iteratively solving the FK equation in time [108]. As before, the SCGF is determined by the stationary solution B * k of this equation satisfying the algebraic Riccati equation
2B * k DB * k − M T B * k − B * k M + kQ = 0.(45)
In general this equation has multiple possible solutions; the correct one is found by requiring B * 0 = 0, since G 0 (x, t) = 1 for all x and t. Provided that this solution is a stationary solution of (44), then the generating function scales in the long-time limit according to
G k (x, t) ∼ e x,B * k x e t Tr(DB * k ) ,(46)
so that the SCGF is found to be
λ(k) = Tr(DB * k ).(47)
A similar result was found independently by Monthus and Mazzolo [69] using a more complicated path integral approach. There are also many results in mathematics on the SCGF of quadratic observables of Gaussian processes [62][63][64][65][66][67][68], but most are expressed in terms of the spectral density of these processes. It is an open problem to establish an equivalence between these results and the trace result above involving the Riccati matrix.
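Since the stationary point B * k with B * 0 = 0 is attracting when the SCGF exists, one simple way to compute it is to integrate the differential Riccati equation (44) forward from B k (0) = 0. A Python sketch (the 1D Ornstein-Uhlenbeck check and its parameter values are assumptions; the closed form λ(k) = (γ − sqrt(γ² − 2kε))/2 follows from the scalar version of (45) and (47)):

```python
import numpy as np
from scipy.integrate import solve_ivp

def scgf_quadratic(M, D, Q, k, t_end=60.0):
    """lambda(k) = Tr(D B*_k), Eq. (47), with B*_k obtained by integrating the
    differential Riccati equation (44) to stationarity from B_k(0) = 0."""
    n = len(M)
    def riccati(t, b):
        B = b.reshape(n, n)
        return (2*B @ D @ B - M.T @ B - B @ M + k*Q).ravel()
    sol = solve_ivp(riccati, (0.0, t_end), np.zeros(n*n),
                    rtol=1e-10, atol=1e-12)
    return np.trace(D @ sol.y[:, -1].reshape(n, n))

# 1D Ornstein-Uhlenbeck check: lambda(k) = (gamma - sqrt(gamma^2 - 2*k*eps))/2
gamma, eps, k = 1.0, 0.5, 0.3
lam = scgf_quadratic(np.array([[gamma]]), np.array([[eps]]),
                     np.array([[1.0]]), k)
```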
From the expression of the SCGF, we obtain the rate function I(a) by Legendre transform. The result is not explicit, since B * k must now be found by solving (45). However, it can be checked from this equation that the asymptotic mean of A T , which corresponds to the zero of I(a), is
a * = λ ′ (0) = Tr(QC),(48)
consistent with (13). Moreover, the asymptotic variance is
λ ′′ (0) = 4 Tr(CQCB ′ 0 ),(49)
where B ′ 0 is the derivative of B * k with respect to k evaluated at k = 0, which satisfies yet another Lyapunov equation
M T B ′ 0 + B ′ 0 M = Q.(50)
The full derivation of these results can be found in [108]. The variance result is important, as it gives the variance of the small Gaussian fluctuations of A T around a * , determined by expanding I(a) to second order around a * . Large fluctuations of A T away from this value are generally not Gaussian, since I(a) is generally not quadratic for quadratic observables, as shown in Sec. V. To understand how these small and large fluctuations are created, we note again the scaling in (37) to infer from (46):
r k (x) = e x,B * k x .(51)
It can be checked again that this solves the spectral equation (20) with the eigenvalue given in (47). From (25), we then find
F k (x) = −M k x,(52)
where
M k = M − 2DB * k .(53)
Hence we see that the effective process associated with quadratic observables is still a linear diffusion, but now involves a modified drift matrix, leading to the following stationary density:
p * k (x) = 1 (2π) n det C k exp − 1 2 x, C −1 k x ,(54)
where C k is the modified covariance matrix satisfying the Lyapunov equation
D = M k C k + C k M T k .(55)
Moreover, the associated current is
J * k (x) = H k xp * k (x),(56)
where the matrix H k is obtained from (6) by replacing C and M by C k and M k , respectively. It is interesting to note from these results that, for a reversible SDE with M symmetric and D ∝ I, the effective process remains reversible, so that J * k = 0 if J * = 0. In this case, only the density p * is modified to p * k to create fluctuations of A T . This can be checked from (56), but follows more easily by noting from (25) that F k is gradient if F itself is gradient when D ∝ I. On the other hand, for an irreversible SDE, the density and the current are generally modified to accommodate fluctuations, as predicted by (54) and (56), so the irreversible properties of the effective SDE can differ in this case from those of the original SDE, as illustrated in Sec. V.
To close, we should note that the results for p * k and J * k above hold if the effective process is ergodic, that is, if M k is positive definite. Although obvious, this is an important remark because it provides us with a criterion for determining whether λ(k) exists for a given k, which is easier to check than the criterion mentioned earlier about the existence of stationary solutions of the time-dependent Riccati equation (44). If M k is not positive definite for a given k, then the Lyapunov equation (55) does not have a positive definite solution C k and, as such, the eigenfunction r k , formally expressed in (51), does not constitute a valid eigenfunction of the spectral problem associated with the SCGF, which implies that the SCGF itself does not exist.
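In one dimension the criterion is fully explicit. A small sketch for the Ornstein-Uhlenbeck case (parameter values assumed), where the scalar versions of (45) and (53) give M k = sqrt(γ² − 2kε), so the SCGF exists only for k < k max = γ²/(2ε):

```python
import numpy as np

# 1D Ornstein-Uhlenbeck example: M = gamma, D = eps, Q = 1 (assumed values)
gamma, eps = 1.0, 0.5
k_max = gamma**2 / (2.0 * eps)   # beyond this, M_k loses positive definiteness

def M_k(k):
    """Effective drift matrix (53) in 1D, using the Riccati root with B*_0 = 0:
    B*_k = (gamma - sqrt(gamma^2 - 2*k*eps)) / (2*eps)."""
    B = (gamma - np.sqrt(gamma**2 - 2.0*k*eps)) / (2.0*eps)
    return gamma - 2.0*eps*B     # equals sqrt(gamma^2 - 2*k*eps)

# M_k > 0 (ergodic effective process, SCGF exists) only for k < k_max
```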
C. Current-type observables
We conclude by considering linear current-type observables, as defined in (9), which involve an n × n matrix Γ. We first address the case where Γ is purely antisymmetric so that Γ = −Γ T . The case where Γ also has a non-zero symmetric part is more involved and is therefore treated separately after.
Antisymmetric Γ
For the observable (9), with Γ assumed to be purely antisymmetric, the associated tilted generator L k is given by
L k = −k M x, Γx + (−M + kDΓ)x, ∇ + 1 2 ∇, D∇ + k 2 2 Γx, DΓx .(57)
This can be written in a slightly more convenient form as
L k = − k 2 x, (M T Γ − ΓM )x + (−M + kDΓ)x, ∇ + 1 2 ∇, D∇ + k 2 2 x, Γ T DΓx ,(58)
given that
⟨M x, Γx⟩ = ⟨x, M T Γx⟩ = 1 2 ⟨x, (M T Γ + Γ T M )x⟩ = 1 2 ⟨x, (M T Γ − ΓM )x⟩,(59)
where we have used the antisymmetry of Γ in the last equality. The solution of the FK equation with the tilted generator (58) is the same as that found in (43) for quadratic additive observables, except that the differential Riccati equation satisfied by B k (t) is now
dB k (t) dt = k 2 2 Γ T DΓ − k 2 (M T Γ − ΓM ) + (−M + kDΓ) T B k (t) + B k (t)(−M + kDΓ) + 2B k (t)DB k (t),(60)
with initial condition B k (0) = 0. A similar equation was obtained using path-integral methods for a particular type of linear current-type observable, namely, the nonequilibrium work, by Kwon, Noh and Park [76], who then obtained large deviation results for this observable via numerical integration. Here, we obtain the SCGF and rate function directly by considering the stationary solution B * k of the Riccati equation, which now satisfies the algebraic Riccati equation
k 2 2 Γ T DΓ − k 2 (M T Γ − ΓM ) + (−M + kDΓ) T B * k + B * k (−M + kDΓ) + 2B * k DB * k = 0(61)
with B * 0 = 0. Assuming, as before, that the correct solution of this equation is a stationary solution of (60), we then recover the same expression of the SCGF as for quadratic observables, namely,
λ(k) = Tr(DB * k ),(62)
from which we obtain the rate function I(a) by Legendre transform. The results again are not explicit, but rely nevertheless on the solution of (61). From this equation, it can also be checked as before that the asymptotic mean of A T is the one found in (14), while the asymptotic variance, characterizing the Gaussian regime of fluctuations near a * , is
λ ′′ (0) = Tr[ CΓ T DΓ + 2C(ΓM − M T Γ)CB ′ 0 + 2C(Γ T DB ′ 0 + B ′ 0 DΓ) ],(63)
where B ′ 0 , the derivative of B * k with respect to k at k = 0, now satisfies the Lyapunov equation
B ′ 0 M + M T B ′ 0 = 1 2 (ΓM − M T Γ).(64)
We show in the application section that these equations can be solved exactly in non-trivial cases.
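As a numerical consistency check (using the transverse system of Sec. V together with an assumed antisymmetric Γ), one can integrate (60) to stationarity and compare the slope λ ′ (0) with the stationary mean (14); a Python sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_lyapunov

# Transverse diffusion of Sec. V with an assumed antisymmetric Gamma
gamma, xi, eps = 1.0, 0.8, 0.5
M = np.array([[gamma, xi], [-xi, gamma]])
D = eps * np.eye(2)
Gam = np.array([[0.0, 1.0], [-1.0, 0.0]])

def scgf(k, t_end=60.0):
    """Integrate the differential Riccati equation (60) to stationarity and
    return lambda(k) = Tr(D B*_k), Eq. (62)."""
    A = -M + k * D @ Gam
    S = 0.5*k**2 * Gam.T @ D @ Gam - 0.5*k * (M.T @ Gam - Gam @ M)
    def riccati(t, b):
        B = b.reshape(2, 2)
        return (S + A.T @ B + B @ A + 2*B @ D @ B).ravel()
    B = solve_ivp(riccati, (0.0, t_end), np.zeros(4),
                  rtol=1e-10, atol=1e-13).y[:, -1].reshape(2, 2)
    return np.trace(D @ B)

C = solve_continuous_lyapunov(M, D)        # here C = eps/(2*gamma) * I
H = 0.5 * D @ np.linalg.inv(C) - M         # here H = [[0, -xi], [xi, 0]]
a_star = np.trace(Gam.T @ H @ C)           # Eq. (14), equals -xi*eps/gamma

h = 1e-4
slope = (scgf(h) - scgf(-h)) / (2.0 * h)   # numerical lambda'(0)
```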
Since the generating function has the same form as that obtained for quadratic observables, the eigenvector r k has also the same form as that shown in (51), which means that the effective process is again a linear SDE with a drift matrix entering in (52) now given by
M k = M − 2DB * k − kDΓ.(65)
As before, for those k for which M k is positive definite, the effective process is ergodic and large deviations exist. In this case, the stationary density p * k has the same form as (54), using M k as above in the Lyapunov equation for the covariance matrix C k . Similarly, the modified current J * k is given as in (56), using the appropriate C k and M k for the current observable.
Despite the fact that the effective SDEs associated with the quadratic and current-type observables have the same linear form, they have different reversibility properties coming from their different M k . In particular, for current-type observables, the effective process is in general irreversible even if the original process is reversible, since a current has to be produced to sustain a non-zero fluctuation of A T . This follows by noting that the effective drift for this observable, shown in (26), has an added part involving Γ, which is non-gradient when DΓ is not symmetric. In a more obvious way, we also know that a fluctuation of A T in the original process is realized as the typical value
a * k = R n ⟨Γx, J * k (x)⟩ dx(66)
in the effective process, so that J * k ≠ 0 if a * k ≠ 0. The same relation applies for irreversible SDEs and implies for those that the current is modified by fluctuations. In particular, for a * k = 0, we can have J * k = 0, so an irreversible process can behave as a reversible process when conditioned on observing the fluctuation A T = 0. An example of this unusual fluctuation behavior is discussed in Sec. V.
General Γ
We now address the case where the matrix Γ has a non-zero symmetric component. To this end, we decompose this matrix as Γ = Γ + + Γ − in terms of its symmetric and antisymmetric parts
Γ ± = Γ ± Γ T 2 ,(67)
so as to express the observable similarly as
A T = A + T + A − T , where A ± T = 1 T T 0 Γ ± X(t) • dX(t).(68)
We have already discussed the antisymmetric part A − T before. As for the symmetric part, we can integrate it directly to obtain
A + T = 1 2T X(T ), Γ + X(T ) − 1 2T X(0), Γ + X(0) ,(69)
since the Stratonovich convention used in the definition of the observable preserves the standard rules of calculus. Thus, this part only adds boundary terms to A − T , which can contribute to the large deviations of A T , surprisingly, even though they are not extensive in time, because they can limit the range of values of k for which λ(k) exists.
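This reduction is exact even at the discrete level: with the midpoint (Stratonovich) rule, the symmetric cross terms cancel and the discrete sum telescopes for any path. A small Python sketch (the path and Γ + are assumed example values; the 1/T prefactor is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
Gp = np.array([[1.0, 0.5], [0.5, 2.0]])      # symmetric Gamma_+
X = rng.standard_normal((1001, 2))           # an arbitrary discrete path

# midpoint (Stratonovich) discretization of the integral of <Gamma_+ X, dX>
mid = 0.5 * (X[1:] + X[:-1])
dX = X[1:] - X[:-1]
integral = np.einsum('ij,ij->', mid @ Gp, dX)  # sum_i <Gp x_mid, dx>

# Eq. (69): the integral reduces exactly to boundary terms
boundary = 0.5 * (X[-1] @ Gp @ X[-1] - X[0] @ Gp @ X[0])
```

Each midpoint term equals (1/2)(x_{i+1}^T Γ + x_{i+1} − x_i^T Γ + x_i) exactly when Γ + is symmetric, which is why the sum telescopes regardless of the step size or the path.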
This effect was described for particular observables in recent studies [80][81][82], and can be understood by expressing the generating function as
G k (x, t) = R n dy G k (x, y, t),(70)
where
G k (x, y, t) = E[δ(X(t) − y)e tkAt |X(0) = x](71)
is the generating function of A T in which both X(0) and X(t) are fixed. Considering the decomposition of A T above, we then have
G k (x, t) = e − k 2 x,Γ + x R n dy e k 2 y,Γ + y G − k (x, y, t),(72)
G − k (x, y, t) being the generating function of A − T with fixed initial and terminal states.
In the long-time limit, it is known [87] that this generating function scales, similarly to (37), according to
G − k (x, y, t) ∼ e tλ − (k) r − k (x)l − k (y),(73)
where λ − (k) is the dominant eigenvalue and SCGF of A − T with eigenfunctions r − k and l − k . This eigenvalue was already obtained in (62), while r − k was found in (51) with B * k satisfying the algebraic Riccati equation (61) for Γ − . As for l − k , we can find it using (54) with M k given as in (65), leading to
G − k (x, y, t) ∼ e tλ − (k) e − y,B * k y − 1 2 y,C −1 k y + x,B * k x ,(74)
up to a multiplicative constant, and thus to
G k (x, t) ∼ e tλ − (k) e x,(B * k − k 2 Γ + )x R n dy e − 1 2 y,B k y ,(75)
where
B k = C −1 k + 2B * k − kΓ + .(76)
In this last expression, both C k and B * k are associated with Γ − , and are thus obtained by following our previous results for antisymmetric current observables.
The difference now for general current observables is that, for G k (x, t) to exist, the integral over y above needs to be convergent, which holds when B k is positive definite. In this case, we obtain λ(k) = λ − (k), assuming that λ − (k) itself exists. If B k is not positive definite, then λ(k) = ∞, so the domain where the SCGF exists is effectively cut or limited by Γ + [80][81][82].
To express this result more precisely, let us denote by K − the interval of k values for which the SCGF λ − (k) associated with Γ − exists, which, we recall, is determined by requiring that C k is positive definite. Moreover, let K + denote the interval of values for which B k is positive definite. Then
λ(k) = λ − (k) k ∈ K − ∩ K + ∞ otherwise.(77)
In general, the intersection of K − and K + defines a specific value of k beyond which the SCGF ceases to exist. For concreteness, we can take this cut-off value to be positive, denoting it by k max , to rewrite the SCGF as
λ(k) = λ − (k) k < k max ∞ k ≥ k max .(78)
From the properties of Legendre transforms, it is known that the existence of the cut-off k max has the effect of creating a linear branch in the rate function I(a) beyond a point ā given by the left derivative of λ − (k) at k max (see [16, Ex. 3.3]). As a result, the rate function of A T can be written as
I(a) = I − (a) a < ā k max a − λ − (k max ) a ≥ ā,(79)
where I − (a) is the rate function associated with A − T . Therefore, we see that the fluctuations of A T below ā are determined by the fluctuations of the antisymmetric (time-extensive) part A − T , with the boundary term A + T playing no role, whereas the fluctuations of A T above ā are determined by A + T and, more specifically, by the term in (69) involving X(T ), since X(0) is fixed here to x. If the initial condition is chosen instead according to a probability density p(x, 0), then there is usually another cut-off, k min < 0, coming from the integration of X(0) over that density [109]. In this case, I(a) generally has two linear branches, instead of one, related to the fluctuations of A + T coming from the initial and terminal boundary terms. This type of rate function has been studied before, in particular, in the context of the so-called extended fluctuation relation [110].
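As a purely hypothetical illustration of (78)-(79), take λ − (k) = k²/2 with a cut-off k max = 1, so that ā = 1 and the rate function acquires a linear branch of slope k max :

```python
# Hypothetical SCGF lambda_-(k) = k^2/2 with cut-off k_max (illustration only)
k_max = 1.0
lam_minus = lambda k: 0.5 * k**2
a_bar = k_max                            # left derivative of lambda_- at k_max

def rate(a):
    """Rate function of Eq. (79) for this toy case."""
    if a < a_bar:
        return 0.5 * a**2                # I_-(a), Legendre transform of lambda_-
    return k_max * a - lam_minus(k_max)  # linear branch, slope k_max
```

The two branches match continuously at ā, as required by the Legendre structure of (79).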
To close this section, we note that because the effective process is based on r k , it is defined only in the domain of the SCGF, here k < k max , describing the fluctuations of A T dominated by those of A − T . In that region, the result in (75) implies
r k (x) = e x,(B * k − k 2 Γ + )x ,(80)
so that
F k (x) = −M x + kDΓx + D(2B * k − kΓ + )x.(81)
However, since Γ = Γ + + Γ − , this becomes
F k (x) = −(M − 2DB * k − kDΓ − )x,(82)
which is exactly the effective drift associated with Γ − , as given by (65), confirming that the boundary terms in the observable play no role. For the regime of fluctuations of A T dominated by these terms, it is not known what the effective process is or even if such a process exists [52]. At the very least, it cannot be defined from spectral elements, since those elements do not exist outside the domain of the SCGF.
IV. OTHER APPROACHES
It is known in large deviation theory that the SCGF can be obtained from two other approaches related to control theory and optimization [88]. We briefly discuss them here to complete our results and to establish a link with the classical Gaussian control problem. For simplicity, we consider only the case of additive linear observables. Similar results apply for the two other observables.
The first approach is based on the idea of modifying the drift of the original SDE to obtain a new SDE with drift F̄, assumed also to be ergodic. By considering all such modifications, it is known [88] that the SCGF of A T with respect to the original SDE can be expressed in a variational way as
λ(k) = lim T →∞ max F̄ Ē[kA T − R T ],(83)
where Ē[·] now denotes the expectation with respect to the modified SDE and
R T = 1 2T T 0 ⟨F̄(X(t)) − F (X(t)), D −1 [F̄(X(t)) − F (X(t))]⟩ dt(84)
is a time-averaged "distance" between the modified and original SDEs. Equivalently, since the modified SDE is assumed to be ergodic, we can replace the long-time expectation by an expectation involving the stationary density of this process, denoted by p̄, so as to write
λ(k) = max F̄ {kA[p̄] − R[p̄, F̄]},(85)
where
A[p̄] = R n ⟨η, x⟩ p̄(x) dx(86)
is, similarly to (12), the typical value of A T in the modified SDE and
R[p̄, F̄] = R n ⟨F̄(x) − F (x), D −1 (F̄(x) − F (x))⟩ p̄(x) dx(87)
is the typical distance.
The two maximization problems in (83) and (85) have a natural interpretation in terms of a controlled SDE whose drift is modified so as to maximize the cost or loss function [102][103][104]. The control is applied over an infinite time horizon, leading to an ergodic process that realizes the SCGF as the maximal cost. From recent works [87][88][89], it is known that this optimal control process is the effective process described earlier with drift F k and stationary density p * k , so we can write in fact
λ(k) = kA[p * k ] − R[p * k , F k ],(88)
where K T = Ē[kA T − R T ] denotes the cost functional being maximized in (83).
The results of the previous section therefore predict that the optimal SDE that maximizes the cost K T in the long-time limit is a linear SDE characterized by a modified fixed-point or a modified drift matrix M k satisfying an algebraic Riccati equation. From a control perspective, these results can be derived by assuming that the control drift F is linear in the state. In this case, the cost K T has a linear part and a quadratic part in x, which has been studied extensively as the linear-quadratic-Gaussian (LQG) control problem [102][103][104]. It can be checked that the well-known Riccati equation associated with this problem recovers the results found here for the three observables considered, with the following differences:
• The LQG problem is formulated by minimizing linear-quadratic cost functions over the class of ergodic controls that are linear in the control inputs, leading to an optimal controller that is linear in x. Here, we make no such linearity assumption; the linearity of the optimal controller follows from the spectral solution giving λ(k) and r k (x).
• The quadratic part of the cost function in LQG control is assumed to be positive definite to guarantee that the minimization problem has a solution. In our case, that part is not necessarily positive definite, depending on the observable and k value considered, because the SCGF is not necessarily positive. However, the optimization has a solution if the SCGF exists.
• For current-type observables, the functional A involves the stationary current J̄ of the controlled SDE, similarly to (14), instead of its stationary density p̄, giving rise to a control cost involving the density and current of the control process or, equivalently, its state and increments [111][112][113], which generalizes the classical LQG control problem.
The SCGF can be obtained in a slightly different way by noting, as done earlier, that the drift of an ergodic SDE is uniquely determined by its stationary density and current, so the maximization in (85) over F̄ can be re-expressed as a maximization over densities p̄ that are normalized in R n and currents J̄ that satisfy the stationary condition (4). This change of variables has the effect of transforming the distance R[p̄, F̄] to
I[p̄, J̄] = 1 2 R n ⟨ J̄(x) − J * p̄ (x), (p̄(x)D) −1 ( J̄(x) − J * p̄ (x))⟩ dx,(89)
where J * p̄ is an "instantaneous" current obtained from (4) by replacing p * with p̄ [88]. As a result, the maximization in (85) giving the SCGF becomes
λ(k) = max p̄, J̄ {kA[p̄] − I[p̄, J̄]}.(90)
This result plays a special role in large deviation theory, as the functional I[p̄, J̄] has the interpretation of a rate function, characterizing the probability that the original SDE with drift F gives rise to a density fluctuation p̄ away from its stationary density p * concurrently with a current fluctuation J̄ away from its stationary current J * [39][40][41][42]. From this point of view, the maximization in (90) can be seen as a Lagrange version of the problem of finding the most likely density and current fluctuations that give rise to a fluctuation A T = a of the observable, with k playing the role of the Lagrange parameter [88]. Many works have appeared recently on this level of fluctuations [39][40][41][42], known technically as the level 2.5 of large deviations, so we refer to them for more details.
To be consistent with the solution (88), the optimal density and current fluctuations that are most likely to appear must correspond to the stationary density and current of the effective process, so we also have
λ(k) = k A[p*_k] − I[p*_k, J*_k].   (91)
It can be checked that this result, as well as the one shown in (88), agree with the explicit expressions that we have found in the previous section for λ(k), F_k, p*_k and J*_k, which means that these expressions can be derived, in principle, by solving the ergodic control problem in (85) or the optimization problem in (90). This also applies for quadratic and current-type observables. The only difference for the latter is that A[p] is not a function of the density but of the current J, similarly to (14).
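The variational formula (90) can be illustrated on the simplest case mentioned later in the paper, the one-dimensional Ornstein-Uhlenbeck process dX = −γX dt + ε dW with the quadratic observable A_T = (1/T)∫₀ᵀ X(t)² dt. Since this process is reversible, the optimal current fluctuation vanishes, and it suffices to maximize over centered Gaussian trial densities of variance v, for which kA[p̂] − I[p̂, 0] = kv − v(ε²/(2v) − γ)²/(2ε²). The sketch below (our illustration, not taken from the paper; parameter values are arbitrary) performs this maximization numerically and recovers the known SCGF of this observable [64]:

```python
import math

gamma, eps = 1.0, 1.0

def objective(k, v):
    # k*A[p] - I[p, 0] for a centered Gaussian trial density of variance v,
    # with zero trial current (the 1D OU process is reversible)
    return k * v - v * (eps**2 / (2 * v) - gamma)**2 / (2 * eps**2)

def scgf_level25(k, vmax=50.0, n=100000):
    # crude grid maximization of (90) over the variance v > 0
    return max(objective(k, 1e-4 + vmax * i / n) for i in range(n + 1))

def scgf_exact(k):
    # known SCGF of the quadratic observable of the 1D OU process [64]
    return (gamma - math.sqrt(gamma**2 - 2 * k * eps**2)) / 2

for k in [-1.0, 0.0, 0.4]:
    assert abs(scgf_level25(k) - scgf_exact(k)) < 1e-3
```

The agreement confirms, for this simple case, that the level 2.5 maximization reproduces the SCGF obtained from the spectral or Riccati approach.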
V. APPLICATIONS
We illustrate our results in this section with two examples of SDEs in ℝ², used in statistical physics as minimal models of nonequilibrium systems, focusing on quadratic observables and linear current-type observables, as the case of linear additive observables is trivial. Some of the SDEs and observables that we consider have been studied before [69][70][71][76][77][78] using different methods based on path integrals. We revisit them here to show how the SCGF and rate function can be obtained in a more direct way using our approach based on Riccati equations, and extend these results by describing how different fluctuation regimes arise physically via density and current fluctuations related to the effective process.
A. Quadratic observable for transverse diffusions
The first system that we consider is the normal or transverse diffusion in ℝ², defined by the general linear SDE (1) with
M = \begin{pmatrix} γ & ξ \\ −ξ & γ \end{pmatrix}   (92)
and σ = εI with γ > 0, ξ ∈ ℝ, and ε > 0. This process serves as a minimal model of nonequilibrium steady-state systems [70,71,77,78], since the antisymmetric part of the drift involving the parameter ξ creates a stationary current given by
J*(x) = ξ (−x₂, x₁)ᵀ p*(x),   (93)
which involves the Gaussian stationary density
p*(x) = (γ/(πε²)) e^{−γ‖x‖²/ε²},   (94)
so that C = ε²/(2γ) I. For ξ < 0, the current circulates around the origin in a clockwise direction, whereas, for ξ > 0, it circulates in an anticlockwise direction. When ξ = 0, the current vanishes, giving rise to an equilibrium system with a gradient drift, which has the same stationary density as the nonequilibrium system, interestingly, since p* does not depend on ξ.
The first observable that we study for this system is the time-averaged squared distance from the origin:
A_T = (1/T) ∫₀ᵀ ‖X(t)‖² dt,   (95)
which corresponds to the choice Q = I in the general quadratic observable (8). For this observable, the differential Riccati equation (44) can be solved exactly to obtain B_k(t) and, in turn, G_k(x, t), which have well-defined limits, giving the SCGF λ(k) [108]. Alternatively, we can solve the algebraic Riccati equation (45) to obtain the steady-state solution B*_k directly, yielding in both cases the diagonal matrix
B*_k = b*_k I with b*_k = (γ − √(γ² − 2kε²)) / (2ε²)   (96)
for k ∈ (−∞, γ²/(2ε²)). Consequently, we find from (47),
λ(k) = 2ε² b*_k = γ − √(γ² − 2kε²)   (97)
for the same range of k values. Taking the Legendre transform, we then obtain the following rate function:
I(a) = γ²a/(2ε²) + ε²/(2a) − γ,  a > 0.   (98)
These two functions, plotted in Fig. 1, are similar to those found for the one-dimensional Ornstein-Uhlenbeck process [64]. The minimum of I(a), giving the typical value of A_T, is a* = ε²/γ, while the asymptotic variance is
λ''(0) = I''(a*)⁻¹ = ε⁴/γ³.   (99)
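The closed forms above can be cross-checked numerically: the rate function (98) should be the Legendre transform of the SCGF (97), with minimum at a* = ε²/γ and curvature I''(a*) = γ³/ε⁴, the inverse of (99). A minimal sketch (our illustration; parameter values are arbitrary):

```python
import math

gamma, eps = 1.0, 1.0

def scgf(k):
    # Eq. (97), defined for k < gamma^2 / (2 eps^2)
    return gamma - math.sqrt(gamma**2 - 2 * k * eps**2)

def rate(a):
    # Eq. (98)
    return gamma**2 * a / (2 * eps**2) + eps**2 / (2 * a) - gamma

def legendre(a, n=100000):
    # numerical Legendre transform sup_k { k a - scgf(k) } on a k grid
    kmin, kmax = -50.0, gamma**2 / (2 * eps**2) - 1e-9
    best = -float("inf")
    for i in range(n + 1):
        k = kmin + (kmax - kmin) * i / n
        best = max(best, k * a - scgf(k))
    return best

for a in [0.5, 1.0, 2.0, 3.0]:
    assert abs(legendre(a) - rate(a)) < 1e-3

# minimum of I at a* = eps^2 / gamma, curvature I''(a*) = gamma^3 / eps^4
assert abs(rate(eps**2 / gamma)) < 1e-12
h = 1e-4
curv = (rate(1 + h) - 2 * rate(1.0) + rate(1 - h)) / h**2
assert abs(curv - gamma**3 / eps**4) < 1e-4
```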
This variance describes again the Gaussian fluctuations of A_T in the vicinity of a*. Away from this value, the fluctuations are non-Gaussian, as is clear from the form of I(a). In fact, as a → 0, the term ε²/(2a) dominates, so the left tail of p(A_T = a) follows an inverse exponential distribution, while, for a → ∞, the term γ²a/(2ε²) takes over, predicting an exponential distribution for the large values of A_T generated by trajectories of the SDE venturing far away from the origin. It is remarkable that both the SCGF and the rate function are independent of the nonequilibrium parameter ξ. Intuitively, this can be understood by noting that A_T is radially symmetric and, therefore, is not affected by the rotation of X(t) around the origin. What matters is the distance of the trajectories of X(t) from the origin, which is controlled by the diagonal (symmetric) part of the drift matrix M. Thus, small values of A_T below a* must arise from trajectories that remain close to the origin, irrespective of the manner in which they rotate around this point, and should therefore be described by an effective drift that confines the process around the origin. Similarly, large fluctuations of A_T above a* should arise from rare trajectories that are less confined around the origin but rotate freely around it as in the original process. This is confirmed by calculating the effective drift matrix from (53) to obtain
M_k = \begin{pmatrix} √(γ² − 2kε²) & ξ \\ −ξ & √(γ² − 2kε²) \end{pmatrix}.   (100)
The diagonal part of this matrix is modified by k, resulting in an effective density with covariance
C_k = ε²/(2√(γ² − 2kε²)) I,   (101)
which is more or less confined around the origin, depending on the fluctuations considered. The antisymmetric part of the drift, on the other hand, remains the same, implying that the current is not modified in form. In fact, from (56) we find
J*_k(x) = ξ (−x₂, x₁)ᵀ p*_k(x),   (102)
so the effective current differs from J* only to the extent that p*_k differs from p*. The fluctuations of this observable are thus realized optimally by altering the density, with the only changes to the current resulting from those density modifications. In particular, if J* = 0, then J*_k = 0, so the reversibility of the original process is not changed for ξ = 0 when looking at fluctuations.
Of crucial importance for this to hold is the fact that the diffusion matrix D and the diagonal part of M are both proportional to the identity. If either or both of these properties are not satisfied, then B*_k can be non-diagonal, implying a non-trivial coupling of the density and current with ξ, even though A_T is a density-type observable.
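The expressions of this subsection can be checked directly: the effective covariance (101) solves the stationary Lyapunov equation M_k C_k + C_k M_kᵀ = ε²I of the effective process (100), and the mean of A_T under the effective process, A[p*_k] = tr C_k, equals λ'(k), as expected from the general theory of effective processes. A minimal sketch (our illustration; parameter values are arbitrary):

```python
import math

gamma, xi, eps, k = 1.0, 1.0, 1.0, 0.3

s = math.sqrt(gamma**2 - 2 * k * eps**2)  # modified confinement rate in (100)
Mk = [[s, xi], [-xi, s]]                  # effective drift matrix, Eq. (100)
ck = eps**2 / (2 * s)                     # C_k = ck * I, Eq. (101)
Ck = [[ck, 0.0], [0.0, ck]]

# C_k solves the stationary Lyapunov equation M_k C_k + C_k M_k^T = eps^2 I
for i in range(2):
    for j in range(2):
        lhs = sum(Mk[i][l] * Ck[l][j] + Ck[i][l] * Mk[j][l] for l in range(2))
        rhs = eps**2 if i == j else 0.0
        assert abs(lhs - rhs) < 1e-12

def scgf(q):
    # Eq. (97)
    return gamma - math.sqrt(gamma**2 - 2 * q * eps**2)

# mean of A_T under the effective process, tr C_k, equals lambda'(k)
h = 1e-6
assert abs(2 * ck - (scgf(k + h) - scgf(k - h)) / (2 * h)) < 1e-6
```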
B. Nonequilibrium work and entropy production for transverse diffusions
The drift acting on an SDE can be seen as a force that performs work, which can be transformed in time into internal energy or dissipated as heat into the environment, depending on the physical system considered. Recently, these quantities have come to be studied as part of the stochastic thermodynamics (or stochastic energetics) formalism, which is concerned with extending the notions and laws of thermodynamics to stochastic processes [13][14][15]. In this context, quantities such as work, heat and entropy take the form of time-integrated functionals of the system's state, which means that they are dynamical observables, and can be shown to satisfy conservation laws that generalize the first and second laws of thermodynamics.
Two of the most important quantities in stochastic thermodynamics are the entropy production, defined for the linear SDE (1) as
E_T = −(1/T) ∫₀ᵀ 2D⁻¹M X(t) ∘ dX(t)   (103)
and the nonequilibrium work
W_T = −(1/T) ∫₀ᵀ 2(D⁻¹M)₋ X(t) ∘ dX(t),   (104)
which is the antisymmetric part of E_T not related to a change of potential energy. The large deviations of these observables were studied by Noh [78] for transverse diffusions using path integral methods. We revisit them here to illustrate our simpler approach, and extend these results by discussing the properties of the effective process, which provides a physical way of understanding how large deviations arise in terms of modified drifts, densities and currents. We begin by considering the nonequilibrium work, which is an antisymmetric current-type observable described for this SDE by the matrix
Γ = \begin{pmatrix} 0 & −2ξ/ε² \\ 2ξ/ε² & 0 \end{pmatrix}.   (105)
Similarly to the quadratic observable, the SCGF of W_T can be obtained either by solving the time-dependent Riccati equation (60) exactly and taking the long-time limit of the solution, or by solving the time-independent Riccati equation (61). The result in both cases is
λ(k) = γ − √(γ² − 4k(1 + k)ξ²)   (106)
for k in the range
K₋ = ( (−ξ² − √(γ²ξ² + ξ⁴))/(2ξ²), (−ξ² + √(γ²ξ² + ξ⁴))/(2ξ²) ).   (107)
We plot this function in Fig. 2, together with the corresponding rate function I(w) obtained by Legendre transform. The latter function has a minimum located at w* = λ'(0) = 2ξ²/γ and has two branches on either side that become asymptotically linear in w, because the SCGF is defined on a bounded interval, which implies that the probability density p(W_T = w) has exponential tails for large work values, positive or negative. From the expression of the SCGF, we also note that
λ(k) = λ(−k − 1),   (108)
which is an important symmetry of the SCGF, referred to as the Gallavotti-Cohen fluctuation relation [20][21][22][23], which translates at the level of the rate function to

I(w) = I(−w) − w.   (109)

Therefore, we have

p(W_T = w)/p(W_T = −w) ≈ e^{Tw}   (110)
for large T, which is the more standard expression of the Gallavotti-Cohen fluctuation relation, showing that positive work values are exponentially more likely than negative work values. This reflects the fact that the average work w* is always positive, since trajectories of the transverse diffusion travel on average in the direction of the rotating drift; negative work fluctuations, resulting from trajectories that go against the drift, are therefore exponentially unlikely.
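The symmetry (108), the value of the typical work w* = 2ξ²/γ, and the structure of the domain (107) can all be verified directly on the explicit SCGF (106). A minimal sketch (our illustration; parameter values are arbitrary):

```python
import math

gamma, xi = 1.0, 1.0

def scgf_work(k):
    # Eq. (106), defined for k in the interval K_- of Eq. (107)
    return gamma - math.sqrt(gamma**2 - 4 * k * (1 + k) * xi**2)

# edges of K_-, Eq. (107)
kp = (-xi**2 + math.sqrt(gamma**2 * xi**2 + xi**4)) / (2 * xi**2)
km = (-xi**2 - math.sqrt(gamma**2 * xi**2 + xi**4)) / (2 * xi**2)

# Gallavotti-Cohen symmetry lambda(k) = lambda(-k-1), Eq. (108)
for k in [-1.1, -0.75, -0.5, -0.2, 0.1]:
    assert abs(scgf_work(k) - scgf_work(-k - 1)) < 1e-12

# SCGF vanishes at k = 0 and k = -1, and K_- is symmetric about k = -1/2
assert abs(scgf_work(0.0)) < 1e-12 and abs(scgf_work(-1.0)) < 1e-12
assert abs((kp + km) - (-1.0)) < 1e-12

# typical work w* = lambda'(0) = 2 xi^2 / gamma (central difference)
h = 1e-6
assert abs((scgf_work(h) - scgf_work(-h)) / (2 * h) - 2 * xi**2 / gamma) < 1e-6
```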
To understand how these fluctuations are created, we calculate the solution (65) for the modified drift matrix using the solution of the Riccati equation leading to the SCGF, obtaining
M_k = \begin{pmatrix} γ − λ(k) & ξ(1 + 2k) \\ −ξ(1 + 2k) & γ − λ(k) \end{pmatrix}.   (111)
For the range K₋ above, this matrix is positive definite and so describes an ergodic effective process whose stationary density is
p*_k(x) = ((γ − λ(k))/(πε²)) e^{−[γ−λ(k)]‖x‖²/ε²},   (112)
while the stationary current is
J*_k(x) = ξ(1 + 2k) (−x₂, x₁)ᵀ p*_k(x).   (113)
These are similar to the stationary density p* and current J* found before in (94) and (93), respectively, and are plotted in Fig. 3 for various values of k and parameter values γ = ε = ξ = 1, giving rise to an anticlockwise J*. From the plots, we can see that positive work fluctuations are created by trajectories that have an anticlockwise current J*_k, as expected, which is greater in magnitude than J* when w > w*, corresponding to k > 0 (see Fig. 3a), and smaller in magnitude when 0 < w < w*, corresponding to −1/2 < k < 0 (Fig. 3b). On the other hand, for negative work fluctuations, associated with k < −1/2, the trajectories reverse direction (Fig. 3c), thereby creating a clockwise current J*_k, which increases in magnitude as k decreases. Between these two regimes, when w = 0 (corresponding to k = −1/2), the current J*_k vanishes, as the trajectories responsible for this work fluctuation do not rotate on average and behave, therefore, in a reversible way. These changes in the current are also accompanied by changes in the density, as seen from (112), which have the effect of confining the state either closer to the origin for k ∈ (−1, 0) (see Fig. 3b) or further from it otherwise. Moreover, we can see that, as k approaches the boundaries of K₋, the confinement, determined by the diagonal part of M_k, vanishes, showing that the extremely large work fluctuations, either positive or negative, are effectively created by a weakly confined, rotating Brownian motion in the plane.
From these results, we can understand directly the large deviations of the entropy production by noting again that W T is the antisymmetric part of E T , so E T and W T differ only by a boundary term, as discussed in the previous section. The boundary term, coming from the symmetric part of E T , is described by the matrix
Γ₊ = −(2γ/ε²) I,   (114)
which we use to determine the cut-off value beyond which the matrix B_k, defined in (76), ceases to be positive definite. In our case, the cut-off is negative because Γ₊ is negative definite and is equal to k_min = −1, which means that the SCGF of E_T matches that obtained for W_T but only for k in the range
( −1, (−ξ² + √(γ²ξ² + ξ⁴))/(2ξ²) ).   (115)
The effect of k_min on the rate function is similar to what we discussed in the previous section for k_max and leads here to the following rate function for E_T:
I(e) = \begin{cases} −e & e < ē \\ I₋(e) & e ≥ ē, \end{cases}   (116)
I₋(e) being the rate function of the nonequilibrium work evaluated at arguments E_T = e of the entropy production. The crossover value ē is determined from ē = λ'(−1) and is given explicitly by ē = −2ξ²/γ. This rate function is compared with the rate function of the nonequilibrium work in Fig. 4. The difference coming from the linear branch of I(e) below ē is clearly seen and implies the existence of a dynamical phase transition that separates two large deviations regimes: one on the right of ē, where the large deviations of E_T are determined by the large deviations of W_T, with the boundary term playing no role, and the other, left of ē, where the large deviations of E_T are only determined by those of the boundary term. The latter regime or region cannot be described in terms of an effective process, since it is related to the cut-off value k_min. For e > ē, however, there exists an effective process, which is the same for E_T as for W_T.
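The two regimes of the rate function (116) can be made concrete numerically, by computing I₋(e) as the Legendre transform of the work SCGF (106) over its domain K₋ and checking that the linear branch −e joins it continuously at ē = −2ξ²/γ. A minimal sketch (our illustration; parameter values are arbitrary):

```python
import math

gamma, xi = 1.0, 1.0

def scgf_work(k):
    # Eq. (106)
    return gamma - math.sqrt(gamma**2 - 4 * k * (1 + k) * xi**2)

# edges of the domain K_-, Eq. (107)
km = (-xi**2 - math.sqrt(gamma**2 * xi**2 + xi**4)) / (2 * xi**2)
kp = (-xi**2 + math.sqrt(gamma**2 * xi**2 + xi**4)) / (2 * xi**2)

def rate_work(e, n=200000):
    # Legendre transform of (106) restricted to the interior of K_-
    best = -float("inf")
    for i in range(1, n):
        k = km + (kp - km) * i / n
        best = max(best, k * e - scgf_work(k))
    return best

def rate_entropy(e):
    # Eq. (116): linear branch below ebar, work rate function above
    ebar = -2 * xi**2 / gamma
    return -e if e < ebar else rate_work(e)

ebar = -2 * xi**2 / gamma
# the two branches join continuously at the crossover: I_-(ebar) = -ebar
assert abs(rate_work(ebar) - (-ebar)) < 1e-3
# below ebar the rate function is exactly linear with slope -1
assert abs(rate_entropy(ebar - 1.0) - (-ebar + 1.0)) < 1e-12
```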
C. Brownian gyrator
We consider as our second application two Brownian particles with positions X₁(t) and X₂(t) evolving according to the overdamped SDE
dX(t) = −\begin{pmatrix} γ + κ & −κ \\ −κ & γ + κ \end{pmatrix} X(t) dt + \begin{pmatrix} ε₁ & 0 \\ 0 & ε₂ \end{pmatrix} dW(t),   (117)
where X(t) = (X₁(t), X₂(t)) and W(t) = (W₁(t), W₂(t)). The drift in this system includes a friction force with friction parameter γ > 0 and a linear (spring) force between the two particles with spring constant κ ≥ 0. The presence of two separate noise strengths ε₁ and ε₂ indicates that the two particles interact with two different heat baths having, in general, non-identical temperatures T₁,₂ = ε₁,₂²/2. The same SDE is also used to describe the charge dynamics of two resistors kept at different temperatures and coupled by a capacitance.
This system has been studied extensively in physics as the Brownian gyrator [114][115][116], and has a nonequilibrium steady state when κ > 0 and ε₁ ≠ ε₂, related to the energy exchanged between the two thermal baths via the linear coupling. The stationary density and current characterizing this state can be calculated exactly, but their expressions are however too long to display here. For our purpose, we only note the stationary covariance matrix obtained from (3):
C = \frac{1}{4γ(γ + 2κ)} \begin{pmatrix} \frac{2γε₁²(γ+2κ) + (ε₁² + ε₂²)κ²}{γ+κ} & (ε₁² + ε₂²)κ \\ (ε₁² + ε₂²)κ & \frac{2γε₂²(γ+2κ) + (ε₁² + ε₂²)κ²}{γ+κ} \end{pmatrix},   (118)
from which the stationary density and current can easily be found via (2) and (5), respectively. It can be checked from this result that, if κ > 0 and the noise strengths are different, then a non-zero stationary probability current exists, which rotates clockwise in the plane when ε₁ < ε₂ and anticlockwise when ε₁ > ε₂. On the other hand, if ε₁ = ε₂, then the system has an equilibrium steady state for arbitrary κ. Likewise, for κ = 0 the system is in equilibrium even when the noise strengths are different, because the two particles are then decoupled, representing two isolated systems in contact with separate heat baths.
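The covariance (118) can be verified against the stationary Lyapunov equation M C + C Mᵀ = σσᵀ satisfied by the covariance of the linear SDE (117). A minimal sketch (our illustration; parameter values are arbitrary):

```python
gamma, kappa, e1, e2 = 1.0, 1.0, 2.0, 1.0
u, v = e1**2, e2**2

M = [[gamma + kappa, -kappa], [-kappa, gamma + kappa]]
Sigma = [[u, 0.0], [0.0, v]]  # sigma sigma^T for the noise matrix in (117)

# closed-form covariance, Eq. (118)
pre = 1.0 / (4 * gamma * (gamma + 2 * kappa))
C = [[pre * (2 * gamma * u * (gamma + 2 * kappa) + (u + v) * kappa**2) / (gamma + kappa),
      pre * (u + v) * kappa],
     [pre * (u + v) * kappa,
      pre * (2 * gamma * v * (gamma + 2 * kappa) + (u + v) * kappa**2) / (gamma + kappa)]]

# check the stationary Lyapunov equation M C + C M^T = sigma sigma^T
for i in range(2):
    for j in range(2):
        lhs = sum(M[i][l] * C[l][j] + C[i][l] * M[j][l] for l in range(2))
        assert abs(lhs - Sigma[i][j]) < 1e-12
```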
For this system, we consider as before the nonequilibrium work W_T, defined in (104), which was studied implicitly by Kwon et al. [76] and more recently by Monthus and Mazzolo [69] using path integrals. This observable is characterized by the antisymmetric matrix
Γ₋ = \begin{pmatrix} 0 & −κ(ε₁² − ε₂²)/(ε₁²ε₂²) \\ κ(ε₁² − ε₂²)/(ε₁²ε₂²) & 0 \end{pmatrix}.   (119)
The SCGF cannot be found now by obtaining the generating function exactly, since B_k(t) in the Riccati equation (60) does not have a diagonal form here, due to the off-diagonal symmetric part of the drift matrix M. However, we can solve the algebraic Riccati equation (61) so as to find the appropriate stationary solution B*_k, leading to
λ(k) = γ + κ − √( γ² + 2γκ − κ² [(1+k)ε₁² − kε₂²][kε₁² − (1+k)ε₂²] / (ε₁²ε₂²) )   (120)
for k in the range K₋ = (k₋, k₊), where

k± = [ −κ(ε₁² − ε₂²) ± √( 4γ²ε₁²ε₂² + 8γκε₁²ε₂² + κ²(ε₁² + ε₂²)² ) ] / ( 2κ(ε₁² − ε₂²) ).   (121)
This result is shown in Fig. 5 with the associated rate function, obtained by computing the Legendre transform numerically. The SCGF is symmetric around k = −1/2 and satisfies again the fluctuation symmetry noted before in (108), which means that I(w) satisfies the symmetry in (109). The minimum of I(w) is now located at
w* = κ²(ε₁² − ε₂²)² / (2(γ + κ)ε₁²ε₂²),   (122)
predicting overall that positive work fluctuations are more likely than negative fluctuations of the same magnitude, in agreement with (110). Further, it can be checked that λ(k) and I(w) remain invariant under the exchange ε₁ ↔ ε₂, indicating that only the magnitude |ε₁ − ε₂| of the difference in noise strengths, and not its sign, determines the large deviations. This is explained by noting that positive and negative values of W_T are determined by the direction or chirality of J*, as for the transverse diffusion, which depends here on the sign of ε₁ − ε₂. The effective process underlying the fluctuations of W_T is similar to the one found for transverse diffusions, with ε₁ − ε₂ playing the role of the nonequilibrium parameter ξ, and so we do not discuss it in detail here. The main difference to note is that the stationary density p* of the Brownian gyrator has a tilt and eccentricity in the plane, related to the coupling κ, which are also seen in the stationary current. This property persists at the level of p*_k and J*_k [117], as shown in Fig. 6, but does not change otherwise the basic observation that positive work fluctuations follow the flow of the stationary current and affect only its magnitude (see Fig. 6a, b), while negative work fluctuations reverse the direction of the stationary current and also change its magnitude (Fig. 6c). For W_T = 0, which corresponds to k = −1/2, we also find J*_k = 0. In this case, the Brownian particles effectively cease to interact as they realize this work fluctuation, and thus behave in a reversible way.
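The properties of the SCGF (120) noted in this subsection — λ(0) = 0, the fluctuation symmetry (108), invariance under ε₁ ↔ ε₂, and the typical value (122) — can all be checked numerically. A minimal sketch (our illustration; parameter values are arbitrary):

```python
import math

gamma, kappa = 1.0, 1.0

def scgf_gyrator(k, e1, e2):
    # Eq. (120)
    u, v = e1**2, e2**2
    inner = (gamma**2 + 2 * gamma * kappa
             - kappa**2 * ((1 + k) * u - k * v) * (k * u - (1 + k) * v) / (u * v))
    return gamma + kappa - math.sqrt(inner)

e1, e2 = 2.0, 1.0

assert abs(scgf_gyrator(0.0, e1, e2)) < 1e-12  # lambda(0) = 0
for k in [-0.6, -0.3, 0.1]:
    # Gallavotti-Cohen symmetry (108) and invariance under e1 <-> e2
    assert abs(scgf_gyrator(k, e1, e2) - scgf_gyrator(-k - 1, e1, e2)) < 1e-12
    assert abs(scgf_gyrator(k, e1, e2) - scgf_gyrator(k, e2, e1)) < 1e-12

# typical work value w* = lambda'(0), Eq. (122), by central difference
h = 1e-6
wstar = kappa**2 * (e1**2 - e2**2)**2 / (2 * (gamma + kappa) * e1**2 * e2**2)
fd = (scgf_gyrator(h, e1, e2) - scgf_gyrator(-h, e1, e2)) / (2 * h)
assert abs(fd - wstar) < 1e-6
```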
VI. CONCLUSIONS
We have studied in this paper the large deviations of linear SDEs, considering three types of dynamical observables, defined in terms of linear or quadratic integrals in time of the state. For these, we have obtained explicit formulas for the SCGF and rate function characterizing their probability distribution in the long-time limit. These formulas involve Riccati equations, which can be solved exactly in some cases, as illustrated here with two physically-motivated models, or numerically using methods developed in control theory [118]. In addition, we have studied how the fluctuations of these observables arise via rare trajectories that can be described in terms of an effective SDE, which includes extra terms in the drift driving the process in the fluctuation region of interest, or, equivalently, in terms of density and current fluctuations that differ from the stationary density and current of the SDE considered. These two complementary levels of fluctuations give valuable insights into how large deviations are created physically and show, for the three types of observables considered, that those large deviations originate from an effective SDE that is also linear. Consequently, they can be seen as arising from Gaussian density fluctuations coupled to current fluctuations that are both driven by linear non-conservative forces.
In future studies, it would be interesting to study nonlinear SDEs and possibly nonlinear observables of these processes to see if useful information, exact or approximate, about their large deviations can be obtained by linearizing them in some way. For this problem, we see three applications of potential interest:
• Linearize the SDE and, if applicable, the observable near the fixed point of the noiseless dynamics, if there is one. Applying our results to the resulting linear model should describe the small Gaussian fluctuations of the actual nonlinear system and observable, meaning that the asymptotic mean and variance should be given by the linear model.
• The effective SDE associated with a nonlinear SDE and observable is, in general, another nonlinear SDE. In the case of quadratic observables and linear current-type observables, we expect both SDEs to have the same noiseless fixed point, if the original SDE has one, following what we have found for linear SDEs. Consequently, for these observables, we expect the linearized model to provide approximate information about the full range of large deviations.
• Many numerical and simulation methods rely on the knowledge of the effective process or attempt to construct that process in an iterative way in order to compute the SCGF or the rate function [88,97,119,120]. A linear ansatz could be included in these methods, either as an approximation of the effective process or as a seed for an iteration scheme that gradually constructs the correct nonlinear effective process. Both approaches could lead potentially to improved algorithms, since the spectral problem underlying the effective process would be replaced, effectively, by the problem of solving a Riccati equation.
Other directions of interest include the generalization of our results to time-dependent linear diffusions, in particular, periodic linear diffusions, and to linear diffusions evolving in bounded domains with reflections at the boundaries. A framework for the large deviations of time-periodic systems has been developed [121] and application of this framework following the exact results obtained here for the generating function could prove fruitful. As for reflected diffusions, we have shown recently [95] that imposing reflecting boundaries to the simple one-dimensional Ornstein-Uhlenbeck process leads in general to a nonlinear effective process, because of additional boundary conditions imposed on the spectral problem [122]. It is therefore natural to ask how our results for unbounded linear diffusions, based on Riccati equations, are modified by these boundary conditions.
FIG. 1. (a) SCGF and (b) rate function of the squared norm of the transverse diffusion with γ = 1, ξ = 1, and ε = 1.

FIG. 2. (a) SCGF and (b) rate function of the nonequilibrium work done by the transverse diffusion for the parameters γ = 1, ξ = 1, and ε = 1.

FIG. 3. Vector plot of the stationary current J*_k of the effective process associated with the nonequilibrium work done by the transverse system for different values of k. The density plots underneath show the modified stationary density p*_k. Parameters: ξ = 1, γ = 1, and ε = 1.

FIG. 4. Comparison of the rate functions of the nonequilibrium work (NEQ) and the entropy production (EP) for the transverse diffusion. Parameters: γ = 1, ξ = 1, and ε = 1.

FIG. 5. (a) SCGF and (b) rate function of the nonequilibrium work done by the Brownian gyrator for γ = 1, κ = 1, and noise strengths ε₁ = 2 and ε₂ = 1.
FIG. 6. Vector plot of the stationary current J*_k of the effective process associated with the nonequilibrium work done by the Brownian gyrator for various values of k. The density plots underneath show the modified stationary density p*_k. Parameters: γ = 1, κ = 1, ε₁ = 2, and ε₂ = 1.
ACKNOWLEDGMENTS

We thank Francesco Coghi and Raphael Chetrite for useful discussions. JdB is funded by the National Research Foundation, South Africa (PhD Scholarship).
[1] C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, 2nd ed., Springer Series in Synergetics, Vol. 13 (Springer, New York, 1985).
[2] H. Risken, The Fokker-Planck Equation: Methods of Solution and Applications, 3rd ed. (Springer, Berlin, 1996).
[3] K. Jacobs, Stochastic Processes for Physicists: Understanding Noisy Systems (Cambridge University Press, Cambridge, 2010).
[4] G. A. Pavliotis, Stochastic Processes and Applications (Springer, New York, 2014).
[5] G. Volpe and D. Petrov, "Torque detection using Brownian fluctuations," Phys. Rev. Lett. 97, 210603 (2006).
[6] M. Geitner, F. Aguilar Sandoval, E. Bertin, and L. Bellon, "Low thermal fluctuations in a system heated out of equilibrium," Phys. Rev. E 95, 032138 (2017).
[7] J. R. Gomez-Solano, L. Bellon, A. Petrosyan, and S. Ciliberto, "Steady-state fluctuation relations for systems driven by an external random force," Europhys. Lett. 89, 60003 (2010).
[8] S. Ciliberto, "Experiments in stochastic thermodynamics: Short history and perspectives," Phys. Rev. X 7, 021051 (2017).
[9] A. Ashkin, "Optical trapping and manipulation of neutral particles using lasers," Proc. Nat. Acad. Sci. (USA) 94, 4853-4860 (1997).
[10] F. Ritort, "Nonequilibrium fluctuations in small systems: From physics to biology," in Advances in Chemical Physics, Vol. 137, edited by S. A. Rice (John Wiley, New York, 2008) pp. 31-123.
[11] R. K. P. Zia and B. Schmittmann, "Probability currents as principal characteristics in the statistical mechanics of non-equilibrium steady states," J. Stat. Mech. 2007, P07012 (2007).
[12] J. B. Weiss, "Fluctuation properties of steady-state Langevin systems," Phys. Rev. E 76, 061128 (2007).
[13] U. Seifert, "Stochastic thermodynamics, fluctuation theorems and molecular machines," Rep. Prog. Phys. 75, 126001 (2012).
[14] K. Sekimoto, Stochastic Energetics, Lect. Notes Phys., Vol. 799 (Springer, New York, 2010).
[15] L. Peliti and S. Pigolotti, Stochastic Thermodynamics: An Introduction (Princeton University Press, Princeton, 2021).
[16] H. Touchette, "The large deviation approach to statistical mechanics," Phys. Rep. 478, 1-69 (2009).
[17] R. J. Harris and H. Touchette, "Large deviation approach to nonequilibrium systems," in Nonequilibrium Statistical Physics of Small Systems: Fluctuation Relations and Beyond, Reviews of Nonlinear Dynamics and Complexity, Vol. 6, edited by R. Klages, W. Just, and C. Jarzynski (Wiley-VCH, Weinheim, 2013) pp. 335-360.
[18] H. Touchette, "Introduction to dynamical large deviations of Markov processes," Physica A 504, 5-19 (2018).
[19] R. L. Jack, "Ergodicity and large deviations in physical systems with stochastic dynamics," Eur. J. Phys. B 93, 74 (2020).
[20] G. Gallavotti and E. G. D. Cohen, "Dynamical ensembles in nonequilibrium statistical mechanics," Phys. Rev. Lett. 74, 2694-2697 (1995).
[21] J. Kurchan, "Fluctuation theorem for stochastic dynamics," J. Phys. A: Math. Gen. 31, 3719-3729 (1998).
[22] J. L. Lebowitz and H. Spohn, "A Gallavotti-Cohen-type symmetry in the large deviation functional for stochastic dynamics," J. Stat. Phys. 95, 333-365 (1999).
[23] R. J. Harris and G. M. Schütz, "Fluctuation theorems for stochastic dynamics," J. Stat. Mech. 2007, P07020 (2007).
[24] J. P. Garrahan, R. L. Jack, V. Lecomte, E. Pitard, K. van Duijvendijk, and F. van Wijland, "Dynamical first-order phase transition in kinetically constrained models of glasses," Phys. Rev. Lett. 98, 195702 (2007).
[25] J. P. Garrahan, R. L. Jack, V. Lecomte, E. Pitard, K. van Duijvendijk, and F. van Wijland, "First-order dynamical phase transition in models of glasses: An approach based on ensembles of histories," J. Phys. A: Math. Theor. 42, 075007 (2009).
[26] T. Speck and J. P. Garrahan, "Space-time phase transitions in driven kinetically constrained lattice models," Eur. Phys. J. B 79, 1-6 (2011).
[27] C. P. Espigares, P. L. Garrido, and P. I. Hurtado, "Dynamical phase transition for current statistics in a simple driven diffusive system," Phys. Rev. E 87, 032115 (2013).
[28] G. Bunin, Y. Kafri, and D. Podolsky, "Cusp singularities in boundary-driven diffusive systems," J. Stat. Phys. 152, 112-135 (2013).
[29] P. Tsobgni Nyawo and H. Touchette, "A minimal model of dynamical phase transition," Europhys. Lett. 116, 50009 (2016).
[30] A. Lazarescu, "Generic dynamical phase transition in one-dimensional bulk-driven lattice gases with exclusion," J. Phys. A: Math. Theor. 50, 254004 (2017).
[31] A. C. Barato and U. Seifert, "Thermodynamic uncertainty relation for biomolecular processes," Phys. Rev. Lett. 114, 158101 (2015).
[32] P. Pietzonka, A. C. Barato, and U. Seifert, "Universal bounds on current fluctuations," Phys. Rev. E 93, 052145 (2016).
[33] T. R. Gingrich, J. M. Horowitz, N. Perunov, and J. L. England, "Dissipation bounds all steady-state current fluctuations," Phys. Rev. Lett. 116, 120601 (2016).
[34] T. R. Gingrich, G. M. Rotskoff, and J. M. Horowitz, "Inferring dissipation from current fluctuations," J. Phys. A: Math. Theor. 50, 184004 (2017).
[35] J. Li, J. M. Horowitz, T. R. Gingrich, and N. Fakhri, "Quantifying dissipation using fluctuating currents," Nature Comm. 10, 1666 (2019).
[36] B. Derrida, "Non-equilibrium steady states: Fluctuations and large deviations of the density and of the current," J. Stat. Mech. 2007, P07023 (2007).
[37] L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio, and C. Landim, "Macroscopic fluctuation theory for stationary non-equilibrium states," J. Stat. Phys. 107, 635-675 (2002).
Macroscopic fluctuation theory. L Bertini, A De Sole, D Gabrielli, G Jona-Lasinio, C Landim, 10.1103/RevModPhys.87.593Rev. Mod. Phys. 87L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio, and C. Landim, "Macroscopic fluctuation theory," Rev. Mod. Phys. 87, 593-636 (2015).
Canonical structure of dynamical fluctuations in mesoscopic nonequilibrium steady states. C Maes, K Netočný, 10.1209/0295-5075/82/30003Europhys. Lett. 8230003C. Maes and K. Netočný, "Canonical structure of dynamical fluctuations in mesoscopic nonequilibrium steady states," Europhys. Lett. 82, 30003 (2008).
On and beyond entropy production: The case of Markov jump processes. C Maes, K Netočný, B Wynants, Markov Proc. Relat. Fields. 14C. Maes, K. Netočný, and B. Wynants, "On and beyond entropy production: The case of Markov jump processes," Markov Proc. Relat. Fields 14, 445-464 (2008).
A formal view on 2.5 large deviations and fluctuation relations. A Barato, R Chetrite, 10.1007/s10955-015-1283-0J. Stat. Phys. 160A. Barato and R. Chetrite, "A formal view on 2.5 large deviations and fluctuation relations," J. Stat. Phys. 160, 1154-1172 (2015).
Level 2 and level 2.5 large deviation functionals for systems with and without detailed balance. J Hoppenau, D Nickelsen, A Engel, New J. Phys. 1883010J. Hoppenau, D. Nickelsen, and A. Engel, "Level 2 and level 2.5 large deviation functionals for systems with and without detailed balance," New J. Phys. 18, 083010 (2016).
Random Perturbations of Dynamical Systems. M I Freidlin, A D Wentzell, Grundlehren der Mathematischen Wissenschaften. New YorkSpringer260M. I. Freidlin and A. D. Wentzell, Random Perturbations of Dynamical Systems, Grundlehren der Mathematischen Wissenschaften, Vol. 260 (Springer, New York, 1984).
D Wales, Energy Landscapes: Applications to Clusters, Biomolecules and Glasses. CambridgeCambridge University PressD. Wales, Energy Landscapes: Applications to Clusters, Biomolecules and Glasses (Cambridge Univer- sity Press, Cambridge, 2004).
Towards a theory of transition paths. W E , E Vanden Eijnden, J. Stat. Phys. 123W. E. and E. Vanden Eijnden, "Towards a theory of transition paths," J. Stat. Phys. 123, 503-523 (2006).
Chaotic properties of systems with Markov dynamics. V Lecomte, C Appert-Rolland, F Van Wijland, Phys. Rev. Lett. 9510601V. Lecomte, C. Appert-Rolland, and F. van Wijland, "Chaotic properties of systems with Markov dynamics," Phys. Rev. Lett. 95, 010601 (2005).
Thermodynamic formalism for systems with Markov dynamics. V Lecomte, C Appert-Rolland, F Van Wijland, 10.1007/s10955-006-9254-0J. Stat. Phys. 127V. Lecomte, C. Appert-Rolland, and F. van Wijland, "Thermodynamic formalism for systems with Markov dynamics," J. Stat. Phys. 127, 51-106 (2007).
Large deviations of the dynamical activity in the East model: Analysing structure in biased trajectories. R L Jack, P Sollich, J. Phys. A: Math. Theor. 4715003R. L. Jack and P. Sollich, "Large deviations of the dynamical activity in the East model: Analysing structure in biased trajectories," J. Phys. A: Math. Theor. 47, 015003 (2014).
Large deviation function for entropy production in driven one-dimensional systems. J Mehl, T Speck, U Seifert, 10.1103/PhysRevE.78.011123Phys. Rev. E. 7811123J. Mehl, T. Speck, and U. Seifert, "Large deviation function for entropy production in driven one-dimensional systems," Phys. Rev. E 78, 011123 (2008).
Diffusions conditioned on occupation measures. F Angeletti, H Touchette, 10.1063/1.4941384J. Math. Phys. 5723303F. Angeletti and H. Touchette, "Diffusions conditioned on occupation measures," J. Math. Phys. 57, 023303 (2016).
Large deviations of the current for driven periodic diffusions. P , Tsobgni Nyawo, H Touchette, 10.1103/PhysRevE.94.032101Phys. Rev. E. 9432101P. Tsobgni Nyawo and H. Touchette, "Large deviations of the current for driven periodic diffusions," Phys. Rev. E 94, 032101 (2016).
Dynamical phase transition in drifted Brownian motion. P , Tsobgni Nyawo, H Touchette, 10.1103/PhysRevE.98.052103Phys. Rev. E. 9852103P. Tsobgni Nyawo and H. Touchette, "Dynamical phase transition in drifted Brownian motion," Phys. Rev. E 98, 052103 (2018).
Exact large deviation function in the asymmetric exclusion process. B Derrida, J L Lebowitz, 10.1103/PhysRevLett.80.209Phys. Rev. Lett. 80B. Derrida and J. L. Lebowitz, "Exact large deviation function in the asymmetric exclusion process," Phys. Rev. Lett. 80, 209-213 (1998).
Large deviation of the density profile in the steady state of the open symmetric simple exclusion process. B Derrida, J L Lebowitz, E R Speer, J. Stat. Phys. 107B. Derrida, J. L. Lebowitz, and E. R. Speer, "Large deviation of the density profile in the steady state of the open symmetric simple exclusion process," J. Stat. Phys. 107, 599-634 (2002).
Exact large deviation functional of a stationary open driven diffusive system: The asymmetric exclusion process. B Derrida, J L Lebowitz, E R Speer, J. Stat. Phys. 110B. Derrida, J. L. Lebowitz, and E. R. Speer, "Exact large deviation functional of a stationary open driven diffusive system: The asymmetric exclusion process," J. Stat. Phys. 110, 775-810 (2003).
An exact formula for the statistics of the current in the TASEP with open boundaries. A Lazarescu, K Mallick, J. Phys. A: Math. Theor. 44315001A. Lazarescu and K. Mallick, "An exact formula for the statistics of the current in the TASEP with open boundaries," J. Phys. A: Math. Theor. 44, 315001 (2011).
The exclusion process: A paradigm for non-equilibrium behaviour. K Mallick, 10.1016/j.physa.2014.07.046Physica A. 418K. Mallick, "The exclusion process: A paradigm for non-equilibrium behaviour," Physica A 418, 17-48 (2015).
Current fluctuations in the zero-range process with open boundaries. R J Harris, A Rákos, G M Schütz, J. Stat. Mech. 8003R. J. Harris, A. Rákos, and G. M. Schütz, "Current fluctuations in the zero-range process with open boundaries," J. Stat. Mech. 2005, P08003 (2005).
Discontinuous condensation transition and nonequivalence of ensembles in a zero-range process. S Grosskinsky, G Schütz, 10.1007/s10955-008-9541-zJ. Stat. Phys. 132S. Grosskinsky and G. Schütz, "Discontinuous condensation transition and nonequivalence of ensembles in a zero-range process," J. Stat. Phys. 132, 77-108 (2008).
Current loops and fluctuations in the zero-range process on a diamond lattice. R Villavicencio-Sanchez, R J Harris, H Touchette, 10.1088/1742-5468/2012/07/P07007J. Stat. Mech. 20127007R. Villavicencio-Sanchez, R. J. Harris, and H. Touchette, "Current loops and fluctuations in the zero-range process on a diamond lattice," J. Stat. Mech. 2012, P07007 (2012).
Density profiles, dynamics, and condensation in the ZRP conditioned on an atypical current. O Hirschberg, D Mukamel, G M Schütz, 10.1088/1742-5468/2015/11/P11023J. Stat. Mech. 11023O. Hirschberg, D. Mukamel, and G. M. Schütz, "Density profiles, dynamics, and condensation in the ZRP conditioned on an atypical current," J. Stat. Mech. 2015, P11023 (2015).
Large deviation rate calculations for nonlinear detectors in Gaussian noise. G R Benitz, J A Bucklew, IEEE Trans. Info. Th. 36G. R. Benitz and J. A. Bucklew, "Large deviation rate calculations for nonlinear detectors in Gaussian noise," IEEE Trans. Info. Th. 36, 358-371 (1990).
Large deviations for quadratic forms of stationary gaussian processes. B Bercu, F Gamboa, A Rouault, 10.1016/S0304-4149(97)00071-9Stoch. Proc. Appl. 71B. Bercu, F. Gamboa, and A. Rouault, "Large deviations for quadratic forms of stationary gaussian processes;," Stoch. Proc. Appl. 71, 75-90 (1997).
Large deviations for quadratic functionals of Gaussian processes. W Bryc, A Dembo, 10.1023/A:1022656331883J. Theoret. Prob. 10W. Bryc and A. Dembo, "Large deviations for quadratic functionals of Gaussian processes," J. Theoret. Prob. 10, 307-332 (1997).
A functional large deviations principle for quadratic forms of Gaussian stationary processes. F Gamboa, A Rouault, M Zani, 10.1016/S0167-7152(98)00270-3Stat. Prob. Lett. 43F. Gamboa, A. Rouault, and M. Zani, "A functional large deviations principle for quadratic forms of Gaussian stationary processes," Stat. Prob. Lett. 43, 299-308 (1999).
Large deviations in estimation of an Ornstein-Uhlenbeck model. D Florens-Landais, H Pham, 10.1239/jap/1029349453J. Appl. Prob. 36D. Florens-Landais and H. Pham, "Large deviations in estimation of an Ornstein-Uhlenbeck model," J. Appl. Prob. 36, 60-77 (1999).
Sharp large deviations for Gaussian quadratic forms with applications. B Bercu, F Gamboa, M Lavielle, 10.1051/ps:2000101ESAIM: Prob. Stats. 4B. Bercu, F. Gamboa, and M. Lavielle, "Sharp large deviations for Gaussian quadratic forms with applications," ESAIM: Prob. Stats 4, 1-24 (2000).
Sharp large deviations for the Ornstein-Uhlenbeck process. B Bercu, A Rouault, 10.1137/S0040585X97978737Th. Prob. Appl. 46B. Bercu and A. Rouault, "Sharp large deviations for the Ornstein-Uhlenbeck process," Th. Prob. Appl. 46, 1-19 (2002).
Non-equilibrium diffusions via non-Hermitian electromagnetic quantum mechanics with application to the statistics of entropy production in the Brownian gyrator. C Monthus, A Mazzolo, 10.1103/PhysRevE.107.014101Phys. Rev. E. 10714101C. Monthus and A. Mazzolo, "Non-equilibrium diffusions via non-Hermitian electromagnetic quantum mechanics with application to the statistics of entropy production in the Brownian gyrator," Phys. Rev. E 107, 014101 (2022).
Path-integral analysis of fluctuation theorems for general Langevin processes. V Y Chernyak, M Chertkov, C Jarzynski, J. Stat. Mech. 8001V. Y. Chernyak, M. Chertkov, and C. Jarzynski, "Path-integral analysis of fluctuation theorems for general Langevin processes," J. Stat. Mech. 2006, P08001 (2006).
Statistics of entropy production in linearized stochastic systems. K Turitsyn, M Chertkov, V Y Chernyak, A Puliafito, 10.1103/PhysRevLett.98.180603Phys. Rev. Lett. 98180603K. Turitsyn, M. Chertkov, V. Y. Chernyak, and A. Puliafito, "Statistics of entropy production in linearized stochastic systems," Phys. Rev. Lett. 98, 180603 (2007).
Entropic fluctuations in Gaussian dynamical systems. V Jaksic, C.-A Pillet, A Shirikyan, 10.1016/S0034-4877(16)30034-9Rep. Math. Phys. 77V. Jaksic, C.-A. Pillet, and A. Shirikyan, "Entropic fluctuations in Gaussian dynamical systems," Rep. Math. Phys. 77, 335-376 (2016).
Entropic fluctuations in thermally driven harmonic networks. V Jaksic, C.-A Pillet, A Shirikyan, 10.1007/s10955-016-1625-6J. Stat. Phys. 166V. Jaksic, C.-A. Pillet, and A. Shirikyan, "Entropic fluctuations in thermally driven harmonic networks," J. Stat. Phys. 166, 926-1015 (2017).
The entropy production of stationary diffusions. L , Da Costa, G A Pavliotis, arXiv:2212.05125L. Da Costa and G. A. Pavliotis, "The entropy production of stationary diffusions," (2022), arXiv:2212.05125.
Work fluctuations for a Brownian particle between two thermostats. P Visco, J. Stat. Mech. 6006P. Visco, "Work fluctuations for a Brownian particle between two thermostats," J. Stat. Mech. 2006, P06006 (2006).
Nonequilibrium fluctuations for linear diffusion dynamics. C Kwon, J D Noh, H Park, 10.1103/PhysRevE.83.061145Phys. Rev. E. 8361145C. Kwon, J. D. Noh, and H. Park, "Nonequilibrium fluctuations for linear diffusion dynamics," Phys. Rev. E 83, 061145 (2011).
Multiple dynamic transitions in nonequilibrium work fluctuations. J D Noh, C Kwon, H Park, 10.1103/PhysRevLett.111.130601Phys. Rev. Lett. 111130601J. D. Noh, C. Kwon, and H. Park, "Multiple dynamic transitions in nonequilibrium work fluctuations," Phys. Rev. Lett. 111, 130601 (2013).
Fluctuations and correlations in nonequilibrium systems. J D Noh, J. Stat. Mech. 1013J. D. Noh, "Fluctuations and correlations in nonequilibrium systems," J. Stat. Mech. 2014, P01013 (2014).
Energy flux distribution in a two-temperature Ising model. V Lecomte, Z Rácz, F Van Wijland, 10.1088/1742-5468/2005/02/P02008J. Stat. Mech. 2008V. Lecomte, Z. Rácz, and F. van Wijland, "Energy flux distribution in a two-temperature Ising model," J. Stat. Mech. 2005, P02008 (2005).
Large deviations of heat flow in harmonic chains. A Kundu, S Sabhapandit, A Dhar, J. Stat. Mech. 3007A. Kundu, S. Sabhapandit, and A. Dhar, "Large deviations of heat flow in harmonic chains," J. Stat. Mech. 2011, P03007 (2011).
Work fluctuations for a harmonic oscillator driven by an external random force. S Sabhapandit, Europhys. Lett. 9620005S. Sabhapandit, "Work fluctuations for a harmonic oscillator driven by an external random force," Europhys. Lett. 96, 20005 (2011).
Heat and work fluctuations for a harmonic oscillator. S Sabhapandit, 10.1103/PhysRevE.85.021108Phys. Rev. E. 8521108S. Sabhapandit, "Heat and work fluctuations for a harmonic oscillator," Phys. Rev. E 85, 021108 (2012).
Work fluctuations for a brownian particle in a harmonic trap with fluctuating locations. A Pal, S Sabhapandit, 10.1103/PhysRevE.87.022138Phys. Rev. E. 8722138A. Pal and S. Sabhapandit, "Work fluctuations for a brownian particle in a harmonic trap with fluctuating locations," Phys. Rev. E 87, 022138 (2013).
Heat flux and entropy produced by thermal fluctuations. S Ciliberto, A Imparato, A Naert, M Tanase, 10.1103/PhysRevLett.110.180601Phys. Rev. Lett. 110180601S. Ciliberto, A. Imparato, A. Naert, and M. Tanase, "Heat flux and entropy produced by thermal fluctuations," Phys. Rev. Lett. 110, 180601 (2013).
Statistical properties of the energy exchanged between two heat baths coupled by thermal fluctuations. S Ciliberto, Imparato, M Naert, Tanase, 10.1088/1742-5468/2013/12/P12014J. Stat. Mech. 12014S Ciliberto, A Imparato, A Naert, and M Tanase, "Statistical properties of the energy exchanged between two heat baths coupled by thermal fluctuations," J. Stat. Mech. 2013, P12014 (2013).
Nonequilibrium microcanonical and canonical ensembles and their equivalence. R Chetrite, H Touchette, 10.1103/PhysRevLett.111.120601Phys. Rev. Lett. 111120601R. Chetrite and H. Touchette, "Nonequilibrium microcanonical and canonical ensembles and their equivalence," Phys. Rev. Lett. 111, 120601 (2013).
Nonequilibrium Markov processes conditioned on large deviations. R Chetrite, H Touchette, 10.1007/s00023-014-0375-8Ann. Henri Poincaré. 16R. Chetrite and H. Touchette, "Nonequilibrium Markov processes conditioned on large deviations," Ann. Henri Poincaré 16, 2005-2057 (2015).
Variational and optimal control representations of conditioned and driven processes. R Chetrite, H Touchette, 10.1088/1742-5468/2015/12/P12001J. Stat. Mech. 12001R. Chetrite and H. Touchette, "Variational and optimal control representations of conditioned and driven processes," J. Stat. Mech. 2015, P12001 (2015).
Effective interactions and large deviations in stochastic processes. R L Jack, P Sollich, 10.1140/epjst/e2015-02416-9Eur. Phys. J. Spec. Top. 224R. L. Jack and P. Sollich, "Effective interactions and large deviations in stochastic processes," Eur. Phys. J. Spec. Top. 224, 2351-2367 (2015).
Construction of a coordinate Bethe ansatz for the asymmetric simple exclusion process with open boundaries. D Simon, 10.1088/1742-5468/2009/07/P07017J. Stat. Mech. 7017D. Simon, "Construction of a coordinate Bethe ansatz for the asymmetric simple exclusion process with open boundaries," J. Stat. Mech. 2009, P07017 (2009).
ASEP on a ring conditioned on enhanced flux. V Popkov, G M Schütz, D Simon, J. Stat. Mech. 10007V. Popkov, G. M. Schütz, and D. Simon, "ASEP on a ring conditioned on enhanced flux," J. Stat. Mech. 2010, P10007 (2010).
Large deviations and ensembles of trajectories in stochastic models. R L Jack, P Sollich, 10.1143/PTPS.184.304Prog. Theoret. Phys. Suppl. 184R. L. Jack and P. Sollich, "Large deviations and ensembles of trajectories in stochastic models," Prog. Theoret. Phys. Suppl. 184, 304-317 (2010).
Current fluctuations of interacting active Brownian particles. T Grandpre, D T Limmer, 10.1103/PhysRevE.98.060601Phys. Rev. E. 9860601T. GrandPre and D. T. Limmer, "Current fluctuations of interacting active Brownian particles," Phys. Rev. E 98, 060601 (2018).
Variational control forces for enhanced sampling of nonequilibrium molecular dynamics simulations. A Das, D T Limmer, 10.1063/1.5128956J. Chem. Phys. 151244123A. Das and D. T. Limmer, "Variational control forces for enhanced sampling of nonequilibrium molecular dynamics simulations," J. Chem. Phys. 151, 244123 (2019).
Dynamical large deviations of reflected diffusions. J Du Buisson, H Touchette, 10.1103/PhysRevE.102.012148Phys. Rev. E. 10212148J. du Buisson and H. Touchette, "Dynamical large deviations of reflected diffusions," Phys. Rev. E 102, 012148 (2020).
Effective driven dynamics for one-dimensional conditioned Langevin processes in the weak-noise limit. N Tizón-Escamilla, V Lecomte, E Bertin, 10.1088/1742-5468/aaeda3J. Stat. Mech. 13201N. Tizón-Escamilla, V. Lecomte, and E. Bertin, "Effective driven dynamics for one-dimensional conditioned Langevin processes in the weak-noise limit," J. Stat. Mech. 2019, 013201 (2019).
Learning nonequilibrium control forces to characterize dynamical phase transitions. J Yan, H Touchette, G M Rotskoff, 10.1103/PhysRevE.105.024115Phys. Rev. E. 10524115J. Yan, H. Touchette, and G. M. Rotskoff, "Learning nonequilibrium control forces to characterize dynamical phase transitions," Phys. Rev. E 105, 024115 (2022).
Role of current fluctuations in nonreversible samplers. F Coghi, R Chetrite, H Touchette, 10.1103/PhysRevE.103.062142Phys. Rev. E. 10362142F. Coghi, R. Chetrite, and H. Touchette, "Role of current fluctuations in nonreversible samplers," Phys. Rev. E 103, 062142 (2021).
Analogue studies of nonlinear systems. D G Luchinsky, P V E Mcclintock, M I Dykman, Rep. Prog. Phys. 61D. G. Luchinsky, P. V. E. McClintock, and M. I. Dykman, "Analogue studies of nonlinear systems," Rep. Prog. Phys. 61, 889-997 (1998).
Power and heat fluctuation theorems for electric circuits. R Van Zon, S Ciliberto, E G D Cohen, 10.1103/PhysRevLett.92.130601Phys. Rev. Lett. 92130601R. van Zon, S. Ciliberto, and E. G. D. Cohen, "Power and heat fluctuation theorems for electric circuits," Phys. Rev. Lett. 92, 130601 (2004).
Nonequilibrium fluctuations in a resistor. N Garnier, S Ciliberto, 10.1103/PhysRevE.71.060101Phys. Rev. E. 7160101N. Garnier and S. Ciliberto, "Nonequilibrium fluctuations in a resistor," Phys. Rev. E 71, 060101 (2005).
. R F Stengel, Optimal Control and Estimation. DoverR. F. Stengel, Optimal Control and Estimation (Dover, New York, 1994).
M Schulz, Control Theory in Physics and other Fields of Science. New YorkSpringerM. Schulz, Control Theory in Physics and other Fields of Science (Springer, New York, 2006).
J Bechhoefer, Control Theory for Physicists. CambridgeCambridge University PressJ. Bechhoefer, Control Theory for Physicists (Cambridge University Press, Cambridge, 2021).
Large deviation and statistical physics. Y Oono, 10.1143/PTPS.99.165Prog. Theoret. Phys. Suppl. 99Y. Oono, "Large deviation and statistical physics," Prog. Theoret. Phys. Suppl. 99, 165-205 (1989).
A Dembo, O Zeitouni, Large Deviations Techniques and Applications. New YorkSpringer2nd ed.A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, 2nd ed. (Springer, New York, 1998).
F Hollander, Large Deviations, Fields Institute Monograph. Providence, RIAMSF. den Hollander, Large Deviations, Fields Institute Monograph (AMS, Providence, RI, 2000).
Dynamical Large Deviations of Diffusions. J Du Buisson, Stellenbosch, South AfricaDepartment of Physics, Stellenbosch UniversityPh.D. thesisJ. du Buisson, Dynamical Large Deviations of Diffusions, Ph.D. thesis, Department of Physics, Stellenbosch University, Stellenbosch, South Africa (2022).
Heat fluctuations and initial ensembles. K Kim, C Kwon, H Park, 10.1103/PhysRevE.90.032117Phys. Rev. E. 9032117K. Kim, C. Kwon, and H. Park, "Heat fluctuations and initial ensembles," Phys. Rev. E 90, 032117 (2014).
Extension of the fluctuation theorem. R Van Zon, E G D Cohen, 10.1103/PhysRevLett.91.110601Phys. Rev. Lett. 91110601R. van Zon and E. G. D. Cohen, "Extension of the fluctuation theorem," Phys. Rev. Lett. 91, 110601 (2003).
Explicit solution of relative entropy weighted control. J Bierkens, H J Kappen, 10.1016/j.sysconle.2014.08.001Syst. & Cont. Lett. 72J. Bierkens and H. J. Kappen, "Explicit solution of relative entropy weighted control," Syst. & Cont. Lett. 72, 36-43 (2014).
Stochastic optimal control as nonequilibrium statistical mechanics: Calculus of variations over density and current. V Y Chernyak, M Chertkov, J Bierkens, H J Kappen, 10.1088/1751-8113/47/2/022001J. Phys. A: Math. Theor. 4722001V. Y. Chernyak, M. Chertkov, J. Bierkens, and H. J. Kappen, "Stochastic optimal control as non- equilibrium statistical mechanics: Calculus of variations over density and current," J. Phys. A: Math. Theor. 47, 022001 (2014).
Linear PDEs and eigenvalue problems corresponding to ergodic stochastic optimization problems on compact manifolds. J Bierkens, V Y Chernyak, M Chertkov, H J Kappen, 10.1088/1742-5468/2016/01/013206J. Stat. Mech. 13206J. Bierkens, V. Y. Chernyak, M. Chertkov, and H. J. Kappen, "Linear PDEs and eigenvalue problems corresponding to ergodic stochastic optimization problems on compact manifolds," J. Stat. Mech. 2016, 013206 (2016).
Brownian gyrator: A minimal heat engine on the nanoscale. R Filliger, P Reimann, 10.1103/PhysRevLett.99.230602Phys. Rev. Lett. 99230602R. Filliger and P. Reimann, "Brownian gyrator: A minimal heat engine on the nanoscale," Phys. Rev. Lett. 99, 230602 (2007).
Two-temperature Langevin dynamics in a parabolic potential. V Dotsenko, A Macio Lek, O Vasilyev, G Oshanin, 10.1103/PhysRevE.87.062130Phys. Rev. E. 8762130V. Dotsenko, A. Macio lek, O. Vasilyev, and G. Oshanin, "Two-temperature Langevin dynamics in a parabolic potential," Phys. Rev. E 87, 062130 (2013).
Electrical autonomous Brownian gyrator. K.-H Chiang, C.-L Lee, P.-Y. Lai, Y.-F Chen, 10.1103/PhysRevE.96.032123Phys. Rev. E. 9632123K.-H. Chiang, C.-L. Lee, P.-Y. Lai, and Y.-F. Chen, "Electrical autonomous Brownian gyrator," Phys. Rev. E 96, 032123 (2017).
that the matrix H k entering in the expression (56) of J * k is proportional to H for the Brownian gyrator. It can be checked, in particular. as is also the case for the transverse system [108]. This seems to be a general property of linear currentsIt can be checked, in particular, that the matrix H k entering in the expression (56) of J * k is proportional to H for the Brownian gyrator, as is also the case for the transverse system [108]. This seems to be a general property of linear currents.
. D A Bini, B Iannazzo, B Meini, Numerical Solution of Algebraic Riccati Equations. D. A. Bini, B. Iannazzo, and B. Meini, Numerical Solution of Algebraic Riccati Equations (SIAM, Philadelphia, 2011).
Adaptive sampling of large deviations. G Ferré, H Touchette, 10.1007/s10955-018-2108-8J. Stat. Phys. 172G. Ferré and H. Touchette, "Adaptive sampling of large deviations," J. Stat. Phys. 172, 1525-1544 (2018).
Adaptive power method for estimating large deviations in Markov chains. F Coghi, H Touchette, 10.1103/PhysRevE.107.034137Phys. Rev. E. 10734137F. Coghi and H. Touchette, "Adaptive power method for estimating large deviations in Markov chains," Phys. Rev. E 107, 034137 (2023).
Current fluctuations in periodically driven systems. A C Barato, R Chetrite, 10.1088/1742-5468/aabfc5J. Stat. Mech. 53207A. C. Barato and R. Chetrite, "Current fluctuations in periodically driven systems," J. Stat. Mech. 2018, 053207 (2018).
Large deviations of currents in diffusions with reflective boundaries. E Mallmin, J Du Buisson, H Touchette, 10.1088/1751-8121/ac039aJ. Phys. A: Math. Theor. 54295001E. Mallmin, J. du Buisson, and H. Touchette, "Large deviations of currents in diffusions with reflective boundaries," J. Phys. A: Math. Theor. 54, 295001 (2021).
| [] |
Estimation of compositeness with correction terms

Tomona Kinugawa and Tetsuo Hyodo
Department of Physics, Tokyo Metropolitan University, Hachioji 192-0397, Japan

arXiv:2208.14000, DOI: 10.1051/epjconf/202227110003

Abstract: The compositeness X is defined as the probability to observe the composite structure, such as the hadronic molecule component, in a bound state. One of the model-independent approaches to calculate X is the weak-binding relation. However, when the scattering length a_0 is larger than the radius R of the bound state, the central value of the compositeness X becomes larger than unity, which cannot be interpreted as a probability. For systems with a_0 > R, we need to estimate the compositeness with the correction terms. For a reasonable determination of the compositeness, we first present a quantitative estimation of the correction terms. Because the exact value of the compositeness should be contained in its definition domain 0 ≤ X ≤ 1, we propose a reasonable estimation method with an uncertainty band, excluding the region outside the definition domain of the compositeness. We finally estimate the compositeness of physical systems and obtain results which can be interpreted as the fraction of the composite component.
1 Introduction
Almost all hadrons are considered to be qqq or qq̄ states in the constituent quark models. However, some hadrons are expected to have an extraordinary structure and are called exotic hadrons. In recent experiments in the heavy quark sector, candidates for exotic hadrons have been observed, as represented by the X(3872) [1]. One possible component of these candidates is the hadronic molecule, which is a weakly bound state of hadrons.
We can quantitatively characterize the internal structure of a state by the compositeness, i.e., whether it is a hadronic-molecule dominant (composite dominant) state or not [2]. The compositeness X is defined as the probability to find the hadronic molecule component in the normalized wavefunction of the bound state |Ψ⟩, X = |⟨molecule|Ψ⟩|². Here |molecule⟩ is the schematic notation of the hadronic molecule component. We can determine X by using the weak-binding relation [3,4]:
a_0 = R [2X / (1 + X)] + O(R_typ / R),    (1)
where a_0 is the scattering length and R ≡ 1/√(2µB) is the radius of the bound state, determined by the binding energy B and the reduced mass µ. Taking into account the range correction to the weak-binding relation [5], we define R_typ as the largest length scale among that of the interaction, R_int, and those in the effective range expansion except for a_0:
R_typ = max{R_int, |r_e|, |P_s/R²|, · · · },    (2)
where r_e is the effective range and P_s is the shape parameter (for more details, see Sec. III in Ref. [5]). When we consider sufficiently shallow bound states with R ≫ R_typ, the correction terms O(R_typ/R) of the weak-binding relation are negligible, and the compositeness X is determined only from the observables a_0 and R. Thanks to this universal feature, the weak-binding relation has been utilized as a model-independent approach to calculate X. However, naive application of Eq. (1) without the correction terms sometimes contradicts the definition domain of the compositeness, 0 ≤ X ≤ 1. For example, the compositeness of the deuteron d is given as X = 1.68 with a_0 = 5.42 fm (taken from the CD-Bonn potential [6]) and B = 2.22 MeV (taken from the PDG [7]). This problem is discussed in Refs. [8-10] in connection with the effective range. To avoid this contradiction, here we propose a reasonable estimation method of the compositeness with the uncertainty which arises from the correction terms O(R_typ/R) in Eq. (1).
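The deuteron number quoted above can be checked directly from Eq. (1) with the correction terms neglected. The sketch below is a non-authoritative illustration; the nucleon masses and ħc are standard values inserted here for the estimate, not inputs taken from this text:

```python
import math

HBARC = 197.327  # MeV fm, conversion constant

def weak_binding_X(a0_fm, mu_mev, b_mev):
    """Compositeness from Eq. (1) with the O(R_typ/R) terms dropped:
    a0 = R * 2X/(1+X)  =>  X = (a0/R) / (2 - a0/R), with R = 1/sqrt(2*mu*B)."""
    R = HBARC / math.sqrt(2.0 * mu_mev * b_mev)  # bound-state radius in fm
    ratio = a0_fm / R
    return ratio / (2.0 - ratio), R

# Deuteron: a0 = 5.42 fm (CD-Bonn), B = 2.22 MeV, mu = m_p m_n / (m_p + m_n)
mu = 938.272 * 939.565 / (938.272 + 939.565)
X, R = weak_binding_X(5.42, mu, 2.22)
print(round(X, 2), round(R, 2))  # X ≈ 1.68 with R ≈ 4.32 fm: a0 > R, so X > 1
```

Because a_0 > R here, the naive central value exceeds unity, which is exactly the situation addressed in the next section.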
2 Estimation of compositeness with correction terms

2.1 Importance of correction terms
Let us consider the relation between the scattering length a_0 and the radius R, neglecting the correction terms O(R_typ/R). Because X is the probability to find the composite component in a bound state, it is defined within 0 ≤ X ≤ 1. It follows from this that 2X/(1 + X) ≤ 1. Therefore, to satisfy the weak-binding relation (1) without the correction terms, a_0 = R[2X/(1 + X)], R should be larger than a_0. However, there are some systems with a_0 > R which give X > 1, as mentioned above. In such cases, we cannot interpret X as a probability. This problem originates in the assumption of neglecting the correction terms. For systems with a_0 > R, it is expected that the weak-binding relation holds once the correction terms O(R_typ/R) are taken into account, because 2X/(1 + X) + O(R_typ/R) > 1 can be realized for 0 ≤ X ≤ 1. Therefore, it is necessary to develop a quantitative estimation method of the correction terms to obtain X ≤ 1 for systems with a_0 > R.
Estimation of uncertainty band
From the discussion in Sec. 2.1, we propose an estimation method of the compositeness X that introduces the contribution from the correction terms O(R_typ/R). As discussed in Ref. [4], the correction terms O(R_typ/R) can be estimated quantitatively by the dimensionless quantity ξ:
ξ = R_typ / R .    (3)
We then determine the upper and lower boundaries of the estimated compositeness, X_u (X_l), as
X_u(ξ) = (a_0/R + ξ) / (2 − a_0/R − ξ) ,    (4)
X_l(ξ) = (a_0/R − ξ) / (2 − a_0/R + ξ) ,    (5)
for 0 ≤ ξ ≤ 1. It is expected that the exact value of X is contained within X_l ≤ X ≤ X_u. Numerically, X_u and X_l can go beyond the definition domain of the compositeness 0 ≤ X ≤ 1, depending on the values of a_0, R and ξ. However, the results X ≥ 1 and X ≤ 0 do not make sense, because the exact value of X is not contained there. Therefore, we define

X̄_u = min{X_u, 1} ,   X̄_l = max{X_l, 0} ,    (6)
to restrict the uncertainty band of the compositeness within the definition domain of X:
X̄_l ≤ X ≤ X̄_u ,    (7)
as illustrated in Fig. 1. We regard this uncertainty band (7) as the estimated compositeness and discuss the internal structure of the bound state with it. It is clear that the estimated compositeness with the uncertainty band (7) is restricted within 0 ≤ X ≤ 1, and we can interpret X as the probability. More details about the estimation of X are discussed in Sec. III and IV in Ref. [5].
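The clipping of Eqs. (4)-(6) can be sketched as a small helper. The inputs below are deuteron-like values (a_0/R ≈ 1.25, ξ ≈ 0.33) chosen purely for illustration; the paper's actual determination of ξ is more involved, so these numbers are not meant to reproduce Table 1.

```python
def uncertainty_band(a0_over_R, xi):
    # Eqs. (4)-(5): raw boundaries of the compositeness from the
    # weak-binding relation with correction terms of size xi.
    Xu = (a0_over_R + xi) / (2 - a0_over_R - xi)
    Xl = (a0_over_R - xi) / (2 - a0_over_R + xi)
    # Eq. (6): restrict the band to the definition domain 0 <= X <= 1.
    return max(Xl, 0.0), min(Xu, 1.0)

Xl, Xu = uncertainty_band(1.25, 0.33)   # illustrative deuteron-like inputs
# The raw upper boundary exceeds 1 and is clipped to X̄_u = 1.
```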
Figure 1. Schematic illustration of the definition of the uncertainty band (7). The left panel shows the case for X_u > 1 (X̄_u = 1), and the right shows that for X_l < 0 (X̄_l = 0).
Application to physical systems
Now we estimate the compositeness X of actual physical systems with the uncertainty estimation discussed in Sec. 2.2. We consider the deuteron, X(3872), D*_s0(2317), D_s1(2460), the NΩ dibaryon, the ΩΩ dibaryon, ³ΛH, and the ⁴He dimer. The deuteron d in the p-n scattering is chosen as the typical observed hadron. X(3872) in the D0-D*0 scattering, D*_s0(2317) in the D-K scattering, and D_s1(2460) in the D*-K scattering are candidates for exotic hadrons which are experimentally observed [7]. The NΩ and ΩΩ dibaryons are states obtained by lattice QCD calculations [11,12]. We can apply the weak-binding relation not only to hadron systems but also to nuclear and atomic systems: ³ΛH in the Λ-d scattering is an example from nuclei, and the ⁴He dimer, which is the weakly bound state of ⁴He atoms, is an example from atomic systems.
For the estimation of X from the weak-binding relation, we need the scattering length a_0, the reduced mass µ, the binding energy B, the effective range r_e, and the interaction range R_int. The radius of the bound state is calculated by R = 1/√(2µB). We tabulate the relevant quantities in Tab. 1. We note that R_int is not an observable, and therefore it is determined from theoretical considerations. The procedure to determine these physical quantities is explained in Ref. [5]. The results of the estimated compositeness with the uncertainty band (7) are shown in the right column of Tab. 1. It is found that the range correction is important for the application to X(3872) and the NΩ dibaryon [5]. We find that those bound states are dominated by the composite component because the lower boundaries X̄_l are larger than 0.5.
Table 1. The physical quantities and the compositeness X with the uncertainty band (7). u, mK and B.R. stand for the atomic mass unit, millikelvin and the Bohr radius.

bound state   | B         | a_0       | r_e        | R_int     | Compositeness X
d             | 2.22 MeV  | 5.42 fm   | 1.75 fm    | 1.43 fm   | 0.74 ≤ X ≤ 1
X(3872)       | 0.018 MeV | 28.5 fm   | −5.34 fm   | 1.43 fm   | 0.53 ≤ X ≤ 1
D*_s0(2317)   | 44.8 MeV  | 1.3 fm    | −0.1 fm    | 0.359 fm  | 0.81 ≤ X ≤ 1
D_s1(2460)    | 45.1 MeV  | 1.1 fm    | −0.2 fm    | 0.359 fm  | 0.55 ≤ X ≤ 1
NΩ dibaryon   | 1.54 MeV  | 5.30 fm   | 1.26 fm    | 0.676 fm  | 0.80 ≤ X ≤ 1
ΩΩ dibaryon   | 1.6 MeV   | 4.6 fm    | 1.27 fm    | 0.949 fm  | 0.79 ≤ X ≤ 1
³ΛH           | 0.13 MeV  | 16.8 fm   | 2.3 fm     | 4.32 fm   | 0.74 ≤ X ≤ 1
⁴He dimer     | 1.30 mK   | 189 B.R.  | 13.8 B.R.  | 10.2 B.R. | 0.93 ≤ X ≤ 1
Summary

The compositeness X characterizes the internal structure of shallow bound states, especially for the candidates for exotic hadrons. The weak-binding relation is one of the approaches to estimate X. When we neglect the correction terms O(R_typ/R), the weak-binding relation becomes completely model-independent. However, if the scattering length a_0 is larger than the radius of the bound state R, the compositeness is overestimated as X ≥ 1 without the correction terms. To avoid this problem, we discuss a method to evaluate the correction terms O(R_typ/R). We propose an estimation method of X with an uncertainty band, which includes the contribution of the correction terms O(R_typ/R). Our uncertainty estimation provides the compositeness in 0 ≤ X ≤ 1, which can be interpreted as a probability. We finally perform reasonable estimations of X as shown in Tab. 1, and find that all states considered are composite dominant (X̄_l ≥ 0.5).
[1] S.K. Choi et al. (Belle), Phys. Rev. Lett. 91, 262001 (2003)
[2] T. Hyodo, Int. J. Mod. Phys. A 28, 1330045 (2013)
[3] S. Weinberg, Phys. Rev. 137, B672 (1965)
[4] Y. Kamiya, T. Hyodo, PTEP 2017, 023 (2017)
[5] T. Kinugawa, T. Hyodo, Phys. Rev. C 106, 015205 (2022)
[6] R. Machleidt, Phys. Rev. C 63, 024001 (2001)
[7] P.A. Zyla et al. (Particle Data Group), PTEP 2020, 083C01 (2020)
[8] Y. Li, F.K. Guo, J.Y. Pang, J.J. Wu, Phys. Rev. D 105, L071502 (2022)
[9] J. Song, L.R. Dai, E. Oset, Eur. Phys. J. A 58, 133 (2022)
[10] M. Albaladejo, J. Nieves, Eur. Phys. J. C 82, 724 (2022)
[11] T. Iritani et al. (HAL QCD), Phys. Lett. B 792, 284 (2019)
[12] S. Gongyo et al., Phys. Rev. Lett. 120, 212001 (2018)
| [] |
[
"Application of Adversarial Examples to Physical ECG Signals",
"Application of Adversarial Examples to Physical ECG Signals"
] | [
"Taiga Ono \nWaseda University\n\n",
"Takeshi Sugawara \nThe University of Electro-Communications\n\n",
"Jun Sakuma \nUniversity of Tsukuba\n\n",
"Tatsuya Mori \nWaseda University\n\n\nRIKEN AIP\n\n"
] | [
"Waseda University\n",
"The University of Electro-Communications\n",
"University of Tsukuba\n",
"Waseda University\n",
"RIKEN AIP\n"
] | [] | This work aims to assess the reality and feasibility of the adversarial attack against cardiac diagnosis system powered by machine learning algorithms. To this end, we introduce "adversarial beats", which are adversarial perturbations tailored specifically against electrocardiograms (ECGs) beat-by-beat classification system. We first formulate an algorithm to generate adversarial examples for the ECG classification neural network model, and study its attack success rate. Next, to evaluate its feasibility in a physical environment, we mount a hardware attack by designing a malicious signal generator which injects adversarial beats into ECG sensor readings. To the best of our knowledge, our work is the first in evaluating the proficiency of adversarial examples for ECGs in a physical setup. Our real-world experiments demonstrate that adversarial beats successfully manipulated the diagnosis results 3-5 times out of 40 attempts throughout the course of 2 minutes. Finally, we discuss the overall feasibility and impact of the attack, by clearly defining motives and constraints of expected attackers along with our experimental results. | null | [
"https://arxiv.org/pdf/2108.08972v1.pdf"
] | 237,260,138 | 2108.08972 | 2f8bffcba5fd779ddc1b2e7df375884c7065f0ad |
Application of Adversarial Examples to Physical ECG Signals
Taiga Ono
Waseda University
Takeshi Sugawara
The University of Electro-Communications
Jun Sakuma
University of Tsukuba
Tatsuya Mori
Waseda University
RIKEN AIP
Application of Adversarial Examples to Physical ECG Signals
Deep Learning · Adversarial Examples · Hardware Security
This work aims to assess the reality and feasibility of the adversarial attack against cardiac diagnosis system powered by machine learning algorithms. To this end, we introduce "adversarial beats", which are adversarial perturbations tailored specifically against electrocardiograms (ECGs) beat-by-beat classification system. We first formulate an algorithm to generate adversarial examples for the ECG classification neural network model, and study its attack success rate. Next, to evaluate its feasibility in a physical environment, we mount a hardware attack by designing a malicious signal generator which injects adversarial beats into ECG sensor readings. To the best of our knowledge, our work is the first in evaluating the proficiency of adversarial examples for ECGs in a physical setup. Our real-world experiments demonstrate that adversarial beats successfully manipulated the diagnosis results 3-5 times out of 40 attempts throughout the course of 2 minutes. Finally, we discuss the overall feasibility and impact of the attack, by clearly defining motives and constraints of expected attackers along with our experimental results.
Introduction
The application of neural networks in clinical diagnosis processes has gained popularity in recent years. For instance, there have been numerous studies applying pattern recognition on digital medical images such as X-rays and CT scans of patients to spot irregularities such as indicators of strokes or tumors [21]. Conventionally, such a diagnosis requires trained physicians to spend an extended period of time analyzing measurements, where as neural networks are able to quickly and accurately spot suspicious patterns. Machine-learning-powered services specializing in clinical applications [18,19,28] have been on the rise lately, and the United States Food & Drug Administration has given neural networks clearance for use in medical services [1]. Neural networks are not limited to image classification tasks as Pranav et al. [27] have demonstrated that neural networks can be leveraged to discover signs of arrhythmia in electrocardiograms (hereinafter referred to as ECGs).
While neural networks show great promise in automated clinical diagnosis, they could be exposed to the threat of adversarial examples [29]. Past work shows that a classifier trained to diagnose medical images can be fooled by medical images perturbed by adversarial noise, causing classifiers to make mistakes in diagnosing illnesses [9]. Recent studies [2,5,12,16,23,30] have also explored the robustness of adversarial examples in the physical world. There are a few existing studies on generating adversarial perturbations applied to ECGs [5,14], where generated perturbations are injected into ECG measurements to cause misclassifications in segment-based discovery of atrial fibrillation. We note, however, that these previous studies have been limited to software simulations, with limited discussion of the feasibility of adversarial examples on ECGs in the real world. In addition, difficulty in realizing adversarial examples via noise injection into sensors has been reported as well, due to noise effects [4,16].
Given this background, this work aims to bridge the gap between the threats assessed by software simulation and those that may arise in the real world. We introduce "adversarial beats", which are adversarial perturbations tailored specifically against ECG, taking into account the physical constraints in implementing the attack. Our work explores the following research questions:

RQ1: In the real world, what types of attackers would be motivated to leverage adversarial beats against an ECG diagnosis system, under what constraints?

RQ2: Can we generate adversarial beats against machine-learning-powered ECG diagnosis systems to alter classification results, leading to a meaningful manipulation of ECG diagnosis results?

RQ3: Can we apply the generated adversarial beats for ECGs in a hardware attack, taking into account physical constraints?
Key contributions of our work are summarized as follows: We first provide a plausible implementation scheme of neural networks in automated clinical diagnostics of medical data acquired from patients via monitoring, which without proper precautions could be vulnerable to manipulation. We also clarify the types of attackers that may be motivated to attack such implementations, along with their methods and constraints. We then introduce adversarial beats, which aims to spoof classification results of ECG diagnosis. A success rate of up to 65.4% is achieved during training. Finally, we perform real-world evaluation of the adversarial beats through a PoC hardware attack. 3-5 successful attack cases are achieved throughout 40 attempts. Our analysis gives in-depth insight into the feasibility of the attack as well as the realistic scenarios in which adversaries find clear incentives to perform the attack with adversarial beats. We hope future system designers refer to our work to review the threat of real-world attacks leveraging adversarial examples on ECG diagnosis systems.
Background: ECG Monitoring and Classification
In this section, we present the basics of ECG monitoring and describe the ECG heartbeat classification model we adopt in this work.
ECG monitoring
ECGs are waveforms commonly used to visualize and monitor the electrical activity of a patient's heart over time, allowing physicians to spot certain patterns entailing potential health risks. One of such patterns is arrhythmia, a broad class of irregular heartbeats [17]. While most arrhythmias are considered to be harmless, some indicate signs of dangerous heart activities which could lead to fatal conditions [17]. While Holter monitors [10] have been a conventional means to monitor ECGs over an extended period of time, recent development of wearable medical devices [31] offers convenient, non-intrusive methods of monitoring patient ECGs, allowing physicians to analyze heart activities of patients throughout their daily lives. As more convenient means of ECG monitoring develop, however, measured patient data will increase, along with the demand for an efficient and accurate analysis of measured ECGs.

ECG Classification task

ECG classification tasks include classifying individual heartbeats [20], or detecting signs of atrial fibrillation from an arbitrary segment of an ECG [27]. To evaluate the performance of arrhythmia classification algorithms, it is essential to have a standard protocol/dataset for researchers/engineers to compare results. The Association for Advancement of Medical Instrumentation (AAMI) recommends specific protocols for evaluating the performance of automated ECG classifiers, and ANSI/AAMI EC57 [24] specifies the five types of heartbeats in Table 1: non-ectopic beats (N), supra-ventricular ectopic beats (S), ventricular ectopic beats (V), fusion beats (F) and unclassifiable beats (Q), where N is considered normal heartbeats, and S, V, and F are considered arrhythmic beats.
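The five AAMI EC57 classes can be represented as a mapping from raw annotation symbols. The MIT-BIH symbol grouping below is the one commonly used in the literature, shown here as an illustrative sketch rather than a definition taken from this paper:

```python
# AAMI EC57 heartbeat classes, keyed by the MIT-BIH annotation symbols
# conventionally grouped under each class (illustrative assumption).
AAMI_CLASSES = {
    "N": {"N", "L", "R", "e", "j"},   # non-ectopic beats
    "S": {"A", "a", "J", "S"},        # supra-ventricular ectopic beats
    "V": {"V", "E"},                  # ventricular ectopic beats
    "F": {"F"},                       # fusion beats
    "Q": {"/", "f", "Q"},             # unclassifiable/paced beats
}

def aami_class(symbol):
    # Map a raw annotation symbol to its AAMI class; unknown symbols
    # fall back to the unclassifiable class Q.
    for cls, symbols in AAMI_CLASSES.items():
        if symbol in symbols:
            return cls
    return "Q"
```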
Threat Model
In this section, we lay out the target system of our proposed attack. To this end, we consider how machine learning-based ECG classifiers would be generally implemented in clinical settings in the future. Furthermore, we identify potential adversaries in such clinical settings.
A Model of Target Clinical Diagnosis System
There is a significant incentive in building fully-automated clinical diagnosis systems, as they offer a potential means to cut down healthcare expenses [9]. Analyzing how exactly a potential attacker can manipulate a neural network-based diagnosis system is not a straightforward process, because there is no single diagnosis system infrastructure to consider. We thus propose a reference model, considering what is to be expected from future clinical diagnosis systems. Our reference model, shown in Figure 1, revolves around daily monitoring of patient ECGs, aimed at catching signs of suspicious heart activity entailing potential health risks. These tasks are expected to become increasingly convenient if classifiers are implemented in conjunction with newer IoT medical wearables, allowing physicians to analyze large amounts of monitored data and spot any irregularities quickly. ECGs measured from IoT wearables on patients are sent to a central classifier trained to diagnose the collected data, and the results are sent to healthcare providers to base their decisions on.
Potential Adversaries
We now identify potential adversaries that could take advantage of the proposed system with malicious intent. We highlight their motives, as well as their abilities to go about an attack, allowing us to identify potential constraints for each adversary. All but one of the suggested attackers have the potential of conducting our proposed attack. We note, however, that multiple types of attackers could conspire together and share profits earned, potentially lifting constraints that would otherwise be imposed if attacking on their own.

Patient: Malicious patients could be incentivised to avoid expenses from procedures or medication by faking diagnosis results in their favor, such as by feigning normal heartbeats to mask arrhythmias (false-negative). Although in the long run it is the patient's health at risk, it is possible that those with monetary incentives attempt such an attack, leading to negative impacts on the patients themselves and other actors involved in the process. We note that, while we conduct experiments to launch a false-positive attack of arrhythmia discovery, false-negative attacks would be possible by setting different target classes. Patients will spend some arbitrary time with the target IoT device, if not a prolonged period depending on its application. Though limited in the extent to which they can tamper with the device, they do have physical access to the target device itself.
Hospital Personnel: Hospital personnel could be physicians, nurses, or any other personnel in charge of patient care. Malicious personnel could be incentivised to conspire against a patient, manipulating diagnosis results to incur additional fees for unnecessary procedures and checkups. Hospital personnel would have arbitrary access to maintain devices, including IoT devices, giving them just as much access to target devices as patients, if not more.

Medical IoT Manufacturers: IoT manufacturers would be considered a part of the supply chain, in which devices that they manufacture are utilized by hospitals and other client facilities. Although indirectly, such manufacturers may be motivated to conspire with hospitals for monetary incentives. Manufacturers would have the most extensive access to the device itself, enabling advanced tampering of the device. Attacks via hardware trojans installed within the supply chain are a known attack vector explored in previous work [32].

Third-Party Healthcare Organizations: Third-party healthcare organizations refer to organizations such as pharmacies and insurance companies. Their profits and services depend heavily on diagnosis results made by hospitals, and malicious actors within such organizations could be incentivised to conspire with other service providers for their own gain. It should be noted, however, that these organizations would have very limited physical access to the target device itself, making it very difficult for them to act on their own. This makes such malicious parties co-conspirators rather than the actual attackers, but with potentially the most monetary incentive out of all the attacker profiles.
The proposed types of potential attackers will be discussed in section 7, where we consider whether it is feasible for them to carry out the proposed real-world attack method. In addition to the abilities of the attackers, we consider what they know about the target system. While it is unclear how neural network parameters would be disclosed to the public in a future clinical diagnosis system, we assume that critical parameters are undisclosed, given recent advancements in machine-learning security [26]. Studies regarding ECG classification tasks, however, often utilize datasets available online, which adversaries could also obtain easily. This leads us to believe that all types of adversaries, including patients, are capable of conducting a black-box attack on the classifier. While it is unclear how clinical institutions will regulate how ECGs are diagnosed by neural networks, we assume that healthcare providers are capable of white-box attacks on such a system, provided that they have insider access to information such as the exact means by which the ECG is recorded, preprocessed, and fed into classifiers.
Adversarial Beats
This section covers the principles and methods we implement to generate adversarial beats against an ECG diagnosis system. We present the physical constraints for achieving the real-world attack, and describe the adversarial beats generation algorithm against the underlying heartbeat classifier.

Fig. 2. Inserting Adversarial beats in Target ECG.
Overcoming Physical Constraints
Adversarial beats must overcome physical constraints posed by the ECG preprocessing pipeline. To ensure that adversarial beats function in the physical realm, the following challenges are to be considered:

Real-Time Attack: Existing studies [5,14] attempted to generate specific adversarial perturbations for each sample, similar to the studies of conventional adversarial examples. From a physical standpoint, however, such an approach is not feasible because the signals must be generated in real time, as ECGs are measured from a patient.

Generating Short Noises: Adversarial perturbations generated in previous work [5] range from roughly 30 seconds down to approximately 5 seconds in length, with diminishing effectiveness the shorter they get for certain classes at the point of direct digital injection. Longer noise makes managing where to inject it in the ECG difficult, and may suffer from arbitrary changes in the target ECG (such as heart rate or patient movement) during the injection, which is assumed to influence its effectiveness. This leads us to believe that shorter perturbations increase the chances of a successful attack via physical injection. Thus, adversarial beats are optimized to be short and robust to shifting within the range of a single heartbeat.

Existence of Physical Band-pass Filters: To make the adversarial perturbations work in the real world, we need to consider the band-pass filters implemented in the preprocessing stage before classification, which filter out any frequency components in the measured signal outside of a certain range. Conventionally, raw ECG segments are processed through certain band-pass filters to remove unwanted noise artifacts after measurement, commonly caused by improper electrode placements, external devices, or patient movement [22]. The specifications of noise artifacts considered throughout this work are summarized in Table 2. Although Chen et al.
[5] addressed the issue by applying a frequency limit on the generated perturbations, they did not evaluate its effectiveness in a physical environment. Conventional ECGs are sampled at around 300-360 Hz to ensure no information is lost [22]. For adversarial beats to maintain their effectiveness after filtering, they must be constrained to the range of frequencies that the filter allows. As done in the work of Chen et al. [5], we limit the frequency components of the adversarial beats within the range target ECGs are filtered with. Adversarial beats are recorded to the extent of its sampling rate, and thus should have granularity no greater than 300-360 Hz.
Existence of Heartbeat Segmentation: Another component in the ECG pre-processing pipeline is the heartbeat segmentation operation. While there are various methods to diagnose an ECG, we implement beat-by-beat classification: a classification scheme to diagnose individual heartbeats in segments of ECGs. We believe it best represents diagnosis of ECGs that are monitored throughout daily life, as signs of arrhythmia throughout daily activities are indicators of health risks. For this reason, we consider beat-by-beat classification a distinction from previous work, as it introduces a different impact compared to segmentation-based classification, which classifies entire segments of ECGs into a limited range of classification types. Without any constraint on its waveform, an adversarial beat can substantially alter the general waveform of the ECG regardless of the implemented filter. Injected intrusive adversarial beats drastically alter the waveform of the ECG from its original state, causing the beat segmentation algorithm to detect additional nonexistent heartbeats. Discrepancies in the numbers of detected heartbeats can be the source of arbitrary errors outputted by the classifier. Thus, in addition to evading human perception, adversarial beats must avoid detection by such heartbeat segmentation algorithms. Preliminary experiments lead to the conclusion that limiting the amplitude of the adversarial beats is the simplest method of preventing interference with segmentation.

Universal Perturbations: Physical adversarial perturbations need to be universal in the following two aspects. (1) Beat Invariance: They must be valid for any heartbeat, since there is uncertainty in what class the original measured heartbeat pertains to.
To maximize the success of spoofing certain classes of heartbeat under any occasion, adversarial beats are trained to be effective on any type of heartbeat, and not just normal ones. (2) Positional Invariance: They need to be valid anywhere in the ECG signal. Similar to past work, the concept of Expectation Over Transformation [2] is applied so that adversarial beats are effective regardless of where in the target ECG they are applied, as aiming the injection at an exact relative location in an ECG is difficult for an attacker. Specifically, this technique involves applying random degrees of horizontal shift when inserting adversarial beats into an ECG signal [5], because it is infeasible for hardware to inject adversarial beats at an exact position on the ECG signal in a physical implementation. This contributes to the positional invariance, i.e., the universal characteristic regarding injection position.
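The random horizontal-shift insertion used during training can be sketched as follows (function and variable names are ours, not taken from the paper; the perturbation is simply added onto the heartbeat samples at a random offset):

```python
import random

def insert_with_shift(beat, adv, rng=random):
    # T(b_adv; x): add a short adversarial beat at a random horizontal
    # offset within the heartbeat segment (Expectation Over Transformation).
    assert len(adv) <= len(beat)
    off = rng.randrange(len(beat) - len(adv) + 1)
    out = list(beat)
    for i, v in enumerate(adv):
        out[off + i] += v
    return out, off
```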
Adversarial Beats Generation Algorithm
With the physical constraints shown above, we generate adversarial beats by training them on the preprocessed dataset until the misclassification proficiency ceases to increase. They are optimized solely in a digital environment, utilizing datasets available to the public, with digital transformations to simulate physical constraints. To optimize the amplitude of the adversarial beats without compromising their effectiveness, we implement the iterative algorithm shown in Algorithm 1, located in the appendix. For adversarial beats, we adopt "Expectation Over Transformation," proposed by Athalye et al. [2]. The algorithm aims to generate adversarial examples that remain adversarial over a chosen transformation in the physical world. It has been adopted in several studies, such as Chen et al. [5] and Brown et al. [3], for generating robust adversarial examples. The optimization problem for generating adversarial beats is formulated in Equation 1, which aims to minimize the categorical cross-entropy loss for the targeted class while minimizing the frequency components of the adversarial beats that would be filtered out during the physical ECG signal processing stage. In summary, this optimization equation optimizes adversarial beats to adapt to all target heartbeats and horizontal translations, while minimizing the amplitude of frequency components outside of the designated frequency range.
b*_adv = arg max_{b_adv} { E_{b∼B, x∼X} [ log P(y_t | b + T(b_adv; x)) ] − λ Σ_f |M(f) · F(f; b_adv)| }    (1)

B is the set of all heartbeat ECG segments in the training set, X is the set of all possible horizontal shift transformations applicable to the adversarial beat b_adv within the heartbeat, and y_t is a targeted heartbeat class. Note that x ∼ X refers to a random variable following a uniform distribution on X. T(·) is an operator that represents the horizontal shift transformation, i.e., placing the adversarial beat b_adv within the target heartbeat ECG segment. A similar approach is taken in previous work [5]. F(f; x(t)) represents a frequency component of a given signal x(t); it is computed by applying the Fourier transform to x(t). M(·) is a function representing the effect of the band-pass filters; i.e., it applies a mask such that only frequency components that are to be filtered out during the ECG processing pipeline remain: M(f) = 1 if the frequency f is to be filtered out, otherwise M(f) = 0. During optimization, a cap on the amplitude of the adversarial beat waveform is set so that it is contained within 0-A, where A is specified by Algorithm 1 in the appendix.
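The frequency-mask term of the regularizer can be illustrated with a plain DFT. This is a sketch with our own function names; the 360 Hz sampling rate and the 0.5-50 Hz pass band match the preprocessing values stated elsewhere in this paper.

```python
import cmath, math

def dft_mag(x):
    # Magnitudes of the discrete Fourier transform of x (O(N^2), fine for a demo).
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

def filtered_band_penalty(x, fs=360.0, lo=0.5, hi=50.0):
    # Sum_f |M(f) * F(f; x)|, with M(f) = 1 for frequencies the band-pass
    # filter removes (|f| outside [lo, hi] Hz), mirroring the regularizer.
    N = len(x)
    mags = dft_mag(x)
    penalty = 0.0
    for k in range(N):
        f = k * fs / N if k <= N // 2 else (k - N) * fs / N  # signed frequency
        if not (lo <= abs(f) <= hi):
            penalty += mags[k]
    return penalty

# A 10 Hz tone survives the 0.5-50 Hz band; a 100 Hz tone is penalized.
p_in = filtered_band_penalty([math.sin(2 * math.pi * 10 * n / 360) for n in range(360)])
p_out = filtered_band_penalty([math.sin(2 * math.pi * 100 * n / 360) for n in range(360)])
```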
The resulting adversarial beat b*_adv causes an ECG segment to be misclassified as a target class when inserted. Figure 2 illustrates an example case of an adversarial beat being inserted into a target heartbeat ECG segment. Since the set of target heartbeat ECG segments used to optimize the adversarial beats contains heartbeats of any class, the resulting beats are applicable to heartbeats of any class. This contributes to the universal characteristic of adversarial beats regarding heartbeat classes. We note that the adversarial beat is shorter in length than the target ECG heartbeat segment. The main difference from the optimization model used by Chen et al. is that the regularization by the L2-norm is disregarded, as we are focused on avoiding interference with the segmentation algorithm. Furthermore, the adversarial beats are intended to be inserted on every heartbeat for which the adversary intends to spoof classification results. Adversarial beats tend to have greater amplitude and less smoothness than those reported in previous work [5,14], which have been demonstrated only in the digital environment. A similar tendency can be observed in physical adversarial perturbations in other domains, such as image [3,8] and audio [16]. The resulting adversarial beats from this optimization algorithm have optimized amplitudes to minimize heartbeat segmentation interference, and are robust to ECG frequency filtering.
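Algorithm 1 itself is deferred to the appendix; as a rough, hypothetical sketch, the amplitude optimization can be pictured as iteratively tightening the amplitude cap A while the targeted-misclassification rate stays at or above the threshold ρ (here `attack_rate` is a stand-in for evaluating the clamped perturbation against the classifier):

```python
def clamp(adv, A):
    # Project each perturbation sample into the allowed amplitude range [0, A].
    return [min(max(v, 0.0), A) for v in adv]

def shrink_amplitude(adv, attack_rate, A=1.0, rho=0.65, step=0.9):
    # Hypothetical sketch: keep tightening the cap A while the targeted
    # misclassification rate of the clamped perturbation stays >= rho.
    while attack_rate(clamp(adv, A * step)) >= rho:
        A *= step
    return A, clamp(adv, A)

# Toy stand-in classifier: the "attack" succeeds while the clamped peak >= 0.2.
toy_rate = lambda a: 1.0 if max(a) >= 0.2 else 0.0
A_final, adv_final = shrink_amplitude([0.5] * 5, toy_rate)
```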
Simulation-based Evaluation
In this section, we begin by evaluating the proficiency of the trained target heartbeat classifier. We then generate adversarial beats using our proposed algorithm, and evaluate their effectiveness in spoofing certain classes of heartbeats by digitally inserting them into ECG segments.
Target Heartbeat Classifier
At its core, the ECG diagnosis system is expected to take in a segment of a patient's ECG and automatically return diagnosis results. Several algorithms are applied to the ECG sequentially, as discussed in section 4.1, outputting a series of data that the target neural network classifier can properly analyze. In our implementation, we filter out the relevant frequencies for baseline wandering and powerline interference using forward-backward filtering with cutoffs passing 0.5-50 Hz. Frequencies to attenuate artifacts caused by patient movement are considered and handled during heartbeat segmentation. Because the target neural network analyzes the waveforms of individual heartbeats, the measured ECG data must be segmented into individual heartbeats. Several heartbeat segmentation algorithms have been proposed [6,7,13]. The detected heartbeats are sliced from the original ECG segment and adjusted to fit the specified input of the classifier, with their values normalized to 0-1. We implement heartbeat segmentation by adopting the algorithm proposed by Hamilton [13]. As the baseline neural network model, we adopt the ECG heartbeat classification model developed by Kachuee et al. [20]. Details of the architecture and training datasets are specified in the appendix.
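The final normalization step might look like the following min-max rescaling; the paper only states that values are mapped to 0-1, so this particular form is an assumption for illustration:

```python
def normalize_beat(seg):
    # Min-max rescaling of a sliced heartbeat segment into [0, 1]
    # (assumed form; a flat segment maps to all zeros to avoid division by zero).
    lo, hi = min(seg), max(seg)
    if hi == lo:
        return [0.0] * len(seg)
    return [(v - lo) / (hi - lo) for v in seg]
```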
The confusion matrix for the target classifier, evaluated on a test set consisting of 100 heartbeats per class sampled from the test dataset, is shown in Figure 3. Each axis represents heartbeat classes: the X-axis represents the classification result, while the Y-axis represents the ground truth. Each tile gives the number of resulting classifications. The overall accuracy of the trained classifier was 93.4%, and the classifier is mostly able to consistently distinguish between different classes of heartbeats. Overall, the classifier made consistent classifications on ECGs measured from a test subject, suggesting that its classification proficiency translates to data measured during the experiments.
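With rows as ground truth and columns as predictions (as in Figure 3), overall accuracy is simply the trace of the confusion matrix over its total count. A toy sketch (the numbers below are illustrative, not Figure 3's):

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy from a confusion matrix whose rows are ground
    truth and columns are predictions: correct counts lie on the diagonal."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()
```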
Generating Adversarial Beats
Two variations of adversarial beats, each serving a unique purpose, are trained. The training and evaluation data prepared in the previous section are used to generate the adversarial beats, as for the target classifier. The first adversarial beat is trained to cause the target classifier to misclassify any heartbeat injected with it as an S class heartbeat, regardless of its original class, spoofing an S class heartbeat (hereinafter referred to as AB-S). The second adversarial beat is trained similarly so that any injected heartbeat is misclassified by the target classifier as a V class heartbeat, regardless of its original class, spoofing a V class heartbeat (hereinafter referred to as AB-V). Specifications of the two adversarial beats are shown in Table 3. AB-S achieved an acceptable success rate of up to 65.4% for causing a targeted misclassification, with a relatively low amplitude of 0.1875. In contrast, AB-V required a higher amplitude of 0.4 and adjustments to its length to achieve an acceptable success rate of 56.2%. These results were achieved through heuristic adjustments of the ρ parameter from Algorithm 1. While a relatively acceptable trade-off between effectiveness and amplitude was achieved with the threshold value ρ = 0.65 for AB-S, it was difficult to generate an AB-V of similar effectiveness without maintaining a large amplitude; ρ for AB-V was therefore decreased, which resulted in an acceptable amplitude. This suggests that S class heartbeats are easier to spoof than V class heartbeats, requiring a smaller perturbation to cause a misclassification.
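The role of ρ can be seen in a control-flow sketch of the amplitude-tightening loop: keep optimizing under an L∞ bound, tighten the bound whenever the success rate reaches ρ, and abort after several consecutive failures. All names and constants (the initial bound, shrink factor, iteration counts) are illustrative, not the paper's Algorithm 1 verbatim; the gradient update and evaluation are supplied by the caller here:

```python
import numpy as np

def generate_adversarial_beat(optimize_step, success_rate, beat_len,
                              rho=0.65, eps0=0.5, shrink=0.75,
                              max_fail=5, iters=50):
    """Sketch of the rho-threshold training loop.

    `optimize_step(beat, eps)` stands in for one gradient update against
    the target classifier; `success_rate(beat)` stands in for evaluation
    on the target heartbeat segments. `eps` is the L-inf amplitude bound.
    """
    beat = np.zeros(beat_len)
    eps, fails, best = eps0, 0, None
    for _ in range(iters):
        beat = np.clip(optimize_step(beat, eps), -eps, eps)
        if success_rate(beat) >= rho:
            # threshold reached: keep this beat and tighten the bound
            best, eps, fails = beat.copy(), eps * shrink, 0
        else:
            fails += 1
            if fails >= max_fail:
                break  # further amplitude constriction deemed too hard
    return best
```

Lowering ρ (as done for AB-V) makes the loop accept beats with a lower success rate, allowing the amplitude bound to keep shrinking.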
Testing Adversarial Beats
The trained adversarial beats are tested by introducing them into the test set and counting the cases of misclassification by the target classifier. This is done in a digital environment, isolated from the hardware experimentation to follow; our intent is to first ensure that the generated adversarial beats possess a certain degree of effectiveness. To simulate the uncertainty in where the adversarial beat may be injected during hardware injection, and to test whether the beats generalize to various translations, adversarial beats are digitally injected with random shifts along the sampling axis of the original heartbeat ECG. Figure 4(a) and Figure 4(b) present the cases of misclassification. Each axis represents heartbeat classes: the X-axis represents the ground truth, while the Y-axis represents the classification result. Unlike Figure 3, the tiles in Figure 4(a) and Figure 4(b) represent cases of misclassification. AB-S and AB-V both show acceptable performance, as they cause the expected instances of misclassification according to the success rates recorded during training.
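The random-shift evaluation can be sketched as follows; `classify` stands in for the target classifier and all names are ours:

```python
import numpy as np

def targeted_success_rate(classify, segments, beat, target, rng):
    """Inject `beat` at a uniformly random offset into each test segment
    and measure how often `classify` returns the target class."""
    hits = 0
    for seg in segments:
        offset = rng.integers(0, len(seg) - len(beat) + 1)
        perturbed = seg.copy()
        perturbed[offset:offset + len(beat)] += beat
        hits += classify(perturbed) == target
    return hits / len(segments)
```

Randomizing the offset per segment approximates the translation uncertainty of a physical injection.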
Real-World Experiments
In this section, we apply the generated adversarial beats in a physical, hardware-based attack against our representation of an ECG diagnosis system. We perform a wired-signal injection into the device used for ECG measurement. The hardware setup is explained, and the proficiency of the attack is reported with the introduced metrics.
Experimental Setup and Procedure
The diagram of our hardware setup is shown in Figure 5(a). In our setup, a conventional surface electrode is attached to the chest of a research participant to emulate an arbitrary ECG measurement device. An ECG controller module reads the measured signals and performs amplification and filtering on the raw signal, outputting an unperturbed ECG as an analog signal. This raw ECG signal is then fed into a signal processing device consisting of an analog-to-digital converter (ADC), a digital-to-analog converter (DAC), and a PC⁵. The signal processing device reads the unperturbed ECG signal for a given period to monitor the timing at which to inject an adversarial beat. The signal processing device holds an adversarial beat pre-computed with the methods covered in sections 4 and 5, and transmits a segment of given length containing an arbitrary number of the prepared adversarial beats separated by the computed interval. The injection waveform generated by the signal processing device (the malicious device in our attack) and the unperturbed ECG signal are then combined via signal addition. Finally, the resulting combined signal is digitized at the target device's ADC and sent to our ECG classifier. The resulting PoC hardware setup is shown in Figure 5(b); its specifics are explained in the appendix. We also note that the hardware used for the setup is easily obtainable in terms of accessibility and expense.
Signal Processing
The signal processing device executes an algorithm to output adversarial beats in pulses that synchronize with the patient's heartbeat, ensuring accurate injection. The adversarial beat is also scaled so that it is injected with the intended amplitude relative to the ECG it is injected into. This alignment ensures that adversarial beats retain their attack success rate regardless of which heartbeat they are injected into. The algorithm executed is as follows:
1) Read the ECG from the target patient for 5 seconds.
2) Filter the ECG and perform heartbeat detection.
3) Compute the distance between the individual heartbeats that are detected.
4) Compute the amplitude of the measured ECG.
5) Scale the amplitude/length of the pre-computed adversarial beat to match the amplitude of the measured ECG/the length of the measured heartbeat distance.
6) Construct a 5-second signal comprised of duplicates of the scaled adversarial beats, padded by the calculated heartbeat distances so that they occur in the same rhythm as the ECG.
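The steps above might look like the following sketch. Beat detection (step 2) is assumed to be done by an external detector, length scaling in step 5 is omitted for brevity, and all names are ours:

```python
import numpy as np

def build_injection_signal(ecg, beat_positions, adv_beat, fs=360):
    """Steps 3-6 of the injection algorithm as a sketch.

    `ecg` is a monitored window, `beat_positions` are detected R-peak
    sample indices, and `adv_beat` is the pre-computed adversarial beat.
    Returns a 5-second signal with scaled copies of the beat repeated
    at the patient's beat-to-beat interval."""
    interval = int(np.mean(np.diff(beat_positions)))   # step 3: beat distance
    amplitude = ecg.max() - ecg.min()                  # step 4: ECG amplitude
    scaled = adv_beat * amplitude                      # step 5: amplitude scaling only
    out = np.zeros(5 * fs)                             # step 6: 5-second signal
    for start in range(0, len(out) - len(scaled) + 1, interval):
        out[start:start + len(scaled)] = scaled
    return out
```

The DAC then plays this signal out while the patient's ECG is being measured, so the pulses land on successive heartbeats.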
Procedure
The proposed hardware-injection attack is executed while measuring the ECG of a human participant, whose heart activity is measured for 120 seconds per trial. Before the experiment, we obtained informed consent and confirmed that the participant had no heart defects or disorders, i.e., all the measured ECG segments should be classified as N. Adversarial beats are injected a total of 40 times per trial, and the injected ECG is filtered before undergoing beat segmentation. Here, we use the trained classifier from section 5.1. Finally, the segmented ECGs with individual heartbeats are fed into the classifier by batch, outputting the predicted class of every heartbeat in the measured ECG. The instances of each detected beat class are recorded to show how often the proposed hardware attack was able to spoof the targeted heartbeat class. In our experiments, the following ECGs were measured, with a total of 10 trials performed for each: ECGs without any injection as control, ECGs injected with AB-S, and ECGs injected with AB-V. Table 4 summarizes the measurement results averaged over the 10 separate trials. A normal ECG waveform and an ECG waveform measured with AB-S injected are displayed in Figure 6. Counting the additional arrhythmic beats detected in injected ECGs compared to ECGs with no injection, we were able to spoof 3-5 heartbeats (approximately 5 additional S class arrhythmic beats for ECGs injected with AB-S, and 3 additional V class arrhythmic beats for ECGs injected with AB-V). This suggests that the adversarial beats are capable of spoofing certain heartbeats, generalize to unseen samples, and retain their effectiveness in a physical environment. We found that in the control sample, several heartbeats were misclassified as S class, resulting in a classification error rate of roughly 7.8%.
While the baseline performance of the target classifier was suspected to be the prime cause of the additional S class heartbeat detections, S class heartbeats are considered to possess a unique attribute: the classifier also seldom misclassified average N class heartbeats as F class beats, yet this did not occur during the hardware-based attack. Additional observations are included in the appendix due to the page limit.
Injection Attack Result
Discussion
Feasibility of the Attack
We discuss the feasibility of the attack, considering the cost, integration, and installation of the hardware required to perform the attack with adversarial beats. We also revisit our suggested attacker models and assess the feasibility of our proposed real-world attack from their perspective.
Attacker's Knowledge

As mentioned in section 3, attackers are expected to be capable of conducting a black-box attack on the target ECG system. Our experimental results, based on a white-box attack, show that the success rate is not necessarily very high, as we are only capable of spoofing 3-5 additional arrhythmic heartbeats in a physical setting. As white-box attacks generally have a higher success rate than black-box attacks, we conclude that a successful black-box attack in a physical setting represents a limited threat, leading us to believe that malicious hospital personnel are best positioned to utilize adversarial beats, as they are the only actors presumed to have any access to confidential information about the target ECG classification model.
Cost
Through our experiments, we see that creating an adversarial beat is cheap in terms of resources, requiring little commitment from the adversary. As for the hardware used in the injection attacks, the adversary only needs to prepare the injection components of the hardware used in this experiment, i.e., a computer, an ADC/DAC, and an audio mixer, all of which are available at reasonable expense. This is significant from the perspective of malicious patients or malicious hospital personnel, for whom advanced hardware tampering involving expensive hardware is infeasible. Our PoC hardware setup is within reach of any adversary with reasonable knowledge and resourcefulness. This is, however, not to say that an advanced adaptation of our PoC is pointless.
Integration
With the growing trend in medical IoT and its expected convenience, we expect daily biomedical monitoring devices to be small in size. We believe that, given the simplicity of our injection algorithm, the PoC hardware setup we propose can be embedded into a much smaller circuit component by a resourceful adversary, making potential tampering with the monitoring hardware unnoticeable. We note that the components used in our experimental setup are prototypes of the proposed attack, and integrating all of these components into a small board, using a microcontroller with an ADC/DAC and op-amps, is possible for an adversary with moderate expertise in embedded systems. Malicious medical IoT manufacturers would be capable of such tampering. With additional functionality, such as generating adversarial beats when prompted via a network signal, adversarial beats can be leveraged to poison a supply chain, enabling a type of backdoor that malicious manufacturers and their co-conspirators can utilize to manipulate diagnosis results.

Installation
The installation of malicious hardware could occur in different phases depending on the attacker. As malicious hospital personnel are expected to maintain measurement devices before providing them to the patient, they could install the adversarial contraption any time before provision. In contrast, patients have arbitrary access to the measurement device for the duration of the measurement. Malicious manufacturers may install the adversarial hardware contraption in the manufacturing phase. For malicious patients and hospital personnel, it is straightforward to attach an adversarial hardware circuit to an ECG monitoring device, as we demonstrate in this work. If such tampering goes undetected by other stakeholders, the device can be used as an adversarial ECG measuring device, which outputs perturbed heartbeats on command but otherwise returns the original, unperturbed ECG of the patient.
Overall Feasibility
Taking the aforementioned details into consideration, we conclude that our proposed attack requires hospital personnel with access to the physical target device and insider knowledge of the target ECG neural network classifier in order to succeed. While the exact degree of knowledge hospital personnel would have of the classification model is unclear, our proposed attack could succeed in cases where access to model parameters is not restricted. We hope that future system designers can refer to our research as an example of a possible threat to look out for.
Limitations
We discuss further points of optimization and context which could improve our proposed hardware attack.
Adversarial Beats
There is much room for optimization of the adversarial beats. Their lengths are manually adjusted as a static value at the beginning of training, resulting in a heuristically optimized parameter at best. Improving the adversarial beat generation algorithm to find the optimal length of the adversarial beat is a priority for future work. We note, also, that the waveforms of adversarial beats are optimized to minimize their disruptiveness toward beat segmentation algorithms. While experimental results have shown heuristically that minimizing the prominence of the peaks in adversarial beats sufficiently prevents beat segmentation disruption, the specific conditions under which beat segmentation algorithms mistake adversarial beats for QRS complexes are unclear. Thus, specific beat segmentation algorithms should be analyzed in detail in future work to determine the specific constraints they put on adversarial beats. An increase in the overall effectiveness of adversarial beats would also lead to a larger variety of potential attackers to consider. Improving the generation algorithm of adversarial beats to enable a black-box attack will assist in expanding the discussion to a more diverse threat model.
Hardware Implementation
While the hardware setup showcased in this work demonstrates the feasibility of physical adversarial examples for ECGs, implementation could be improved.
Besides hardware noise, the injection algorithm implemented on the myDAQ device may also be responsible for inaccuracy in recreating adversarial beats as the injection signal. Since the algorithm only accommodates differences in the lengths of particular heartbeats by scaling the length of adversarial beats by the average beat-to-beat distance within a 5-second window, the resulting length adjustment may be inaccurate, which can degrade their effectiveness. Furthermore, while adversarial beats are trained to be effective at any horizontal position on a target heartbeat, a beat that lands on a prominent peak in the waveform significantly alters the maximum amplitude of the resulting injected ECG, introducing a degree of uncertainty that could be detrimental to the success of the attack. Improvements in generating injection signals may allow adversaries more consistent and versatile manipulation of the diagnosis results.
Target System
Our proposed attack is naive in the sense that it disregards the presence of human operators in the automated diagnosis system. Adversarial beats introduce prominent noise into the measured ECG signals. While we assume an automated diagnosis, an ECG specialist simply observing the measured ECG is enough to raise suspicion and potentially compromise the hardware attack. In this case, the human observer would be the hospital personnel, making them an even more likely candidate to carry out the proposed attack. Future work will focus on minimizing the amplitude of adversarial beats, or otherwise overcoming the risk of the attack being compromised by a human observer.
Ethical Consideration
The hardware implementation used in this work was not used for any purposes other than for experimentation. We obtained informed consent from the participant who had their ECGs measured; i.e., before the experiments, we explained the objectives of the experiments and that we use the measured ECGs purely for the experiments, and not for other purposes such as medical diagnosis.
Conclusion
Automated clinical diagnosis systems powered by neural networks are exposed to the threat of adversarial examples. To analyze the feasibility of adversarial examples for ECGs in the real world, we attempt a PoC attack against a naively implemented neural network-based automatic ECG diagnosis system. Using off-the-shelf hardware to implement our attack, we demonstrate that our proposed hardware attack is capable of causing a classifier trained to detect arrhythmia to make misclassifications. We also identify attackers and scenarios in which the proposed attack could be executed to commit healthcare fraud. As this work showed, specific domain knowledge and the specific schemes in which measurement data is collected play a substantial part in how an adversary may leverage adversarial examples in the real world. We hope our work can be referenced by system designers to implement preventive measures against potential adversarial attacks in future clinical diagnostic systems.
S-Target Spoofing Attack. One of the specific conditions that S class heartbeats entail is atrial premature heartbeats [24]. While individual atrial premature heartbeats are considered harmless, 3-4 consecutive occurrences are potential signs of severe stress, side effects of medication, excessive substance consumption, and potential heart failure [17]. Patients with frequent occurrences of this arrhythmia are at times prescribed anti-anxiety drugs or heart medication to prevent further development of serious arrhythmia. The exact hardware attack executed in our experiments can be utilized by malicious hospital personnel: a physician may embed an adversarial hardware contraption between the electrode readings and the input to feign consecutive S class heartbeats in patient ECGs. This would allow the physician to prescribe excessive medication, leading to higher billed costs. Perfect performance of the hardware attack is not required, as only a few successful attempts are needed to raise suspicion of dangerous heart activity through occasional consecutive S class heartbeat occurrences.
V-Target Spoofing Attack. V class heartbeats point to premature ventricular contraction [24]. Premature ventricular contraction heartbeats indicate heart instability when occurring in succession or in a pattern, and are watched for during periods following heart surgery [17]. Patients seen with frequent occurrences of this arrhythmia are at times prescribed antiarrhythmic medication. Malicious hospital personnel may conduct an attack similar to the S-target spoofing attack to display such arrhythmia occurring in patterns, warranting unnecessary application of antiarrhythmic medication during post-surgery care. Constant monitoring is required after heart surgeries, and devices may be assigned to patients after heart surgery to ensure stability in heart activity, making this scenario especially vulnerable to our suggested attack method.
Fig. 3. Confusion Matrix of Target Classifier.

Fig. 4. Confusion Matrix of Target Classifier Under Different Injections.

Fig. 5. PoC Hardware Setup.
Table 1. Five Heartbeat Classes Specified in ANSI/AAMI EC57 [24].

Heartbeat Class | Descriptions
N               | Average/Normal
S               | Atrial/Nodal premature
V               | Ventricular premature
F               | Fusion of V and N
Q               | Paced/Unclassifiable

Heartbeat Classification Task. Convolutional neural networks are capable of categorizing individual heartbeats extracted from an ECG
Fig. 1. A model of the Target Clinical Diagnosis System. (Diagram of the target model, showing the Patient, Physician, Insurer, Pharmacist, Service Providers, and the Neural Network.)
Table 2. Effects and Frequencies of Common Noise Artifacts in ECGs [22].

Noise Type             | Frequency [Hz] | Causes              | Effects
Baseline Wandering     | 0.5            | Electrode Placement | Vertical Displacement
Powerline Interference | 50-60          | External Devices    | Sinusoidal interference
Motion Noise           | 1-10           | Patient movement    | Artifacts mistaken as QRS complex
Table 3. Specifications of Generated Adversarial Beats.

Target Class | L∞     | L2   | Success Rate [%]
S (AB-S)     | 0.1875 | 1.44 | 65.4
V (AB-V)     | 0.4000 | 5.51 | 56.2
Table 4. Diagnosis Results for ECGs.

Target Class   | F   | N     | Q   | S    | V
None (Control) | 0   | 138.6 | 0   | 11.6 | 0
S              | 0   | 124.2 | 0   | 16.4 | 0
V              | 0.1 | 112.6 | 0.4 | 44.9 | 3.1
Fig. 6. Sample Segments of Experimental ECGs. (a) No Injection. (b) Injected with AB-S.
We discuss the feasibility of hardware implementation in Section 7.
Appendix A: Concrete Attack Scenarios

With the threat model laid out in section 3, we give some concrete examples of how certain adversaries may go about an attack. Following the heartbeat types shown in Table 1, we present two heartbeat spoofing attack models and explain ways in which the resulting ECG readings can be used for medical fraud to benefit the adversary.

Appendix B: Adversarial Beat Generation Algorithm

Algorithm 1 describes the algorithm used to generate adversarial beats. In each iteration, adversarial beats are optimized to maximize the attack success rate, and a stricter amplitude bound is assigned once the perturbation reaches the threshold attack success rate. After N unsuccessful iterations, the algorithm deems further constriction of the amplitude difficult and aborts the training. For this experiment, we empirically set the threshold as N = 5.

Appendix C: Target Model Details

Architecture. The model complies with the five-heartbeat classification in Table 1. The architecture is a series of consecutive residual blocks consisting of convolutional layers and max-pooling layers, followed by two fully-connected layers with 32 neurons each. Convolutional layers perform 1-D convolution, each with 32 kernels of size 5, as specified by Kachuee et al. [20]. For the max-pooling layers, the size is 5 with a stride of 2. A classification accuracy of up to 93.4% was achieved in Kachuee's original work, successfully determining heartbeat patterns.

Dataset. To train the classifier, we adopt the PhysioNet MIT-BIH Arrhythmia Database [11,25]. All the heartbeats are labeled with their respective heartbeat annotations, which are adapted to the beat annotation standards specified in AAMI EC57, as shown in Table 1. We first segment the ECGs in the dataset into individual heartbeats. The resulting dataset contains segments of ECGs of individual heartbeats, each labeled with a class and resized to a universal length, so that they can be inserted into the target classifier.
We normalize waveform amplitudes to the range 0-1.0. We subsampled all but one class to counteract the disproportionate class distribution. Finally, the resulting dataset is split, with 80% used for training the classifier and the remaining 20% used for testing the classifier and the effectiveness of our created adversarial beats.

Appendix D: Hardware Setup

The ECG controller is realized with the Analog Devices AD8232 module, which amplifies and filters the raw electrical signal. We use an off-the-shelf data acquisition device, a National Instruments myDAQ, capable of acquiring/generating electrical signals, to realize the ADC/DAC in the signal processing device shown in Figure 5(a). A laptop PC is attached to the NI myDAQ device to inject pre-computed adversarial beats with proper timing. Signal addition was executed using a commercial audio mixer [15]. The final ADC component was implemented with an Arduino via its analog input port.
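The per-beat preparation and split described in Appendix C can be sketched as follows. The universal length of 187 samples is our assumption (the input length of the Kachuee et al. model), not stated in this paper, and all names are ours:

```python
import numpy as np

def prepare_beat(samples, length=187):
    """Resize one segmented heartbeat to a universal input length via
    linear interpolation, then normalize its amplitude to [0, 1]."""
    x = np.asarray(samples, dtype=float)
    x = np.interp(np.linspace(0, len(x) - 1, length), np.arange(len(x)), x)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-12)

def split_dataset(n_beats, train_frac=0.8, seed=0):
    """Shuffle indices and split them 80% train / 20% test."""
    idx = np.random.default_rng(seed).permutation(n_beats)
    cut = int(train_frac * n_beats)
    return idx[:cut], idx[cut:]
```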
1. U.S. Food and Drug Administration: Artificial intelligence and machine learning in software as a medical device. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
2. Athalye, A., Engstrom, L., Ilyas, A., Kwok, K.: Synthesizing robust adversarial examples. In: Proceedings of the 35th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 80, pp. 284-293 (July 2018)
3. Brown, T.B., Mané, D., Roy, A., Abadi, M., Gilmer, J.: Adversarial patch. CoRR abs/1712.09665 (2017)
4. Carlini, N., Wagner, D.: Audio adversarial examples: Targeted attacks on speech-to-text. In: Proceedings of the IEEE Symposium on Security and Privacy (2018)
5. Chen, H., Huang, C., Huang, Q., Zhang, Q.: ECGadv: Generating adversarial electrocardiogram to misguide arrhythmia classification system. CoRR abs/1901.03808 (2019), http://arxiv.org/abs/1901.03808
6. Christov, I.: Real time electrocardiogram QRS detection using combined adaptive threshold. BioMedical Engineering OnLine 3, 1-9 (September 2004). https://doi.org/10.1186/1475-925X-3-28
7. Engelse, W.A.H., Zeelenberg, C.: A single scan algorithm for QRS-detection and feature extraction. Computers in Cardiology 6, 37-42 (1979)
8. Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Tramèr, F., Prakash, A., Kohno, T., Song, D.: Physical adversarial examples for object detectors. In: 12th USENIX Workshop on Offensive Technologies, WOOT 2018. pp. 1-10 (2018)
9. Finlayson, S.G., Kohane, I.S., Beam, A.L.: Adversarial attacks against medical deep learning systems. CoRR abs/1804.05296 (2018), http://arxiv.org/abs/1804.05296
10. Galli, A., Ambrosini, F., Lombardi, F.: Holter monitoring and loop recorders: From research to clinical practice. Arrhythmia & Electrophysiology Review 5(2), 136-143 (August 2016). https://doi.org/10.15420/AER.2016.17.2
11. Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P., Mark, R., Mietus, J., Moody, G., Peng, C.K., Stanley, H.: PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 101(23), 215-220 (July 2000). https://doi.org/10.1161/01.CIR.101.23.e215
12. Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015), http://arxiv.org/abs/1412.6572
13. Hamilton, P.: Open source ECG analysis. Computers in Cardiology 29, 101-104 (October 2002). https://doi.org/10.1109/CIC.2002.1166717
14. Han, X., Hu, Y., Foschini, L., Chinitz, L., Jankelson, L., Ranganath, R.: Adversarial examples for electrocardiograms. CoRR abs/1905.05163 (2019), http://arxiv.org/abs/1905.05163
15. Maker hart: Just Mixer. https://makerhart.com
16. Yakura, H., Sakuma, J.: Robust audio adversarial example for a physical attack. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). pp. 5334-5341 (August 2019). https://doi.org/10.24963/ijcai.2019/741
17. Huff, J.: ECG Workout: Exercises in Arrhythmia Interpretation. Wolters Kluwer (2017)
18. Arterys Inc.: Arterys. https://www.arterys.com/ (2019)
19. Enlitic Inc.: Enlitic. https://www.enlitic.com/ (2019)
20. Kachuee, M., Fazeli, S., Sarrafzadeh, M.: ECG heartbeat classification: A deep transferable representation. In: 2018 IEEE International Conference on Healthcare Informatics (ICHI). pp. 443-444 (June 2018). https://doi.org/10.1109/ICHI.2018.00092
21. Ker, J., Wang, L., Rao, J., Lim, T.: Deep learning applications in medical image analysis. IEEE Access 6, 9375-9389 (2018). https://doi.org/10.1109/ACCESS.2017.2788044
22. Kher, R.: Signal processing techniques for removing noise from ECG signals. J Biomed Eng pp. 1-9 (2019)
23. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. ICLR Workshop (2017)
24. Association for the Advancement of Medical Instrumentation: Testing and reporting performance results of cardiac rhythm and ST-segment measurement algorithms. ANSI/AAMI EC57 (1998)
25. Moody, G., Mark, R.: The impact of the MIT-BIH arrhythmia database. IEEE Engineering in Medicine and Biology Magazine 20, 45-50 (June 2001). https://doi.org/10.1109/51.932724
26. Papernot, N., McDaniel, P., Sinha, A., Wellman, M.P.: SoK: Security and privacy in machine learning. In: 2018 IEEE European Symposium on Security and Privacy (EuroS&P). pp. 399-414 (2018). https://doi.org/10.1109/EuroSP.2018.00035
27. Rajpurkar, P., Hannun, A.Y., Haghpanahi, M., Bourn, C., Ng, A.Y.: Cardiologist-level arrhythmia detection with convolutional neural networks. CoRR abs/1707.01836 (2017), http://arxiv.org/abs/1707.01836
28. Shah, P., Kendall, F., Khozin, S., Goosen, R., Hu, J., Laramie, J., Ringel, M., Schork, N.: Artificial intelligence and machine learning in clinical development: a translational perspective. npj Digital Medicine 2(69), 1-5 (2019). https://doi.org/10.1038/s41746-019-0148-3
29. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. In: International Conference on Learning Representations (2014), http://arxiv.org/abs/1312.6199
30. Tu, J., Ren, M., Manivasagam, S., Liang, M., Yang, B., Du, R., Cheng, F., Urtasun, R.: Physically realizable adversarial examples for LiDAR object detection. IEEE (2020), https://arxiv.org/pdf/2004.00543.pdf
31. Turakhia, M., Hoang, D., Zimetbaum, P., Miller, J., Froelicher, V., Kumar, U., Xu, X., Yang, F., Heidenreich, P.: Diagnostic utility of a novel leadless arrhythmia monitoring device. The American Journal of Cardiology 112 (May 2013). https://doi.org/10.1016/j.amjcard.2013.04.017
Hardware trojans: Lessons learned after one decade of research. K Xiao, D Forte, Y Jin, R Karri, S K Bhunia, M M Tehranipoor, 10.1145/2906147ACM Transactions on Design Automation of Electronic Systems. 221Xiao, K., Forte, D., Jin, Y., Karri, R., Bhunia, S.K., Tehranipoor, M.M.: Hardware trojans: Lessons learned after one decade of research. ACM Transactions on Design Automation of Electronic Systems 22(1) (2016). https://doi.org/10.1145/2906147
| [] |
ON THE LARGEST AND THE SMALLEST SINGULAR VALUE OF SPARSE RECTANGULAR RANDOM MATRICES

F. Götze and A. Tikhomirov

28 Nov 2022 · arXiv:2207.03155 · DOI: 10.1214/23-ejp919

Abstract. We derive estimates for the largest and the smallest singular values of sparse rectangular N × n random matrices, assuming lim_{N,n→∞} n/N = y ∈ (0, 1). We consider a model with sparsity parameter p_N such that Np_N ∼ log^α N for some α > 1, and assume that the moments of the matrix elements satisfy the condition E|X_{jk}|^{4+δ} ≤ C < ∞. We assume also that the entries of the matrices we consider are truncated at the level (Np_N)^{1/2−κ} with κ := δ/(2(4+δ)).
1. Introduction
In the last five to ten years, significant progress has been made in studying the asymptotic behavior of the spectrum of sparse random matrices. A typical example of such matrices is the incidence matrix of a random graph. For Bernoulli matrices, Konstantin Tikhomirov obtained exact asymptotics for the probability of singularity, see [14]; see also [9]. For the adjacency matrix of Erdős–Rényi random graphs, H.-T. Yau, L. Erdős and co-authors proved a local semicircle law and investigated the behavior of the largest and the smallest singular values as well as eigenvector statistics; see [2, 4] and the literature therein. In particular, for adjacency matrices of regular graphs, local limit theorems and the behavior of extremal eigenvalues were investigated by H.-T. Yau and co-authors [1]. For non-Hermitian sparse random matrices, M. Rudelson and K. Tikhomirov proved the circular law under unimprovable conditions on the sparsity probability and on the moments of the distributions of the matrix elements (see [12]). J. O. Lee and J. Y. Hwang studied the spectral properties of sparse sample covariance matrices (which include adjacency matrices of the bipartite Erdős–Rényi graph model). In [7] the authors prove a local law for the eigenvalue density up to the upper spectral edge, assuming that the sparsity probability p has order N^{−1+ε} for some ε > 0 (here N denotes the growing order of the matrix) and that the entries X_{ij} are i.i.d. random variables such that (in our notation) E|X_{11}|² = 1 and E|X_{11}|^q ≤ (Cq)^{cq} for every q ≥ 1.
(1.1)
They also prove the Tracy–Widom limit law for the largest eigenvalues of sparse sample covariance matrices. However, in the proofs of the local Marchenko–Pastur law and of the Tracy–Widom limit, they assume a priori that the result of [3, Lemma 3.11] holds for sparse matrices (see [7, Proposition 2.13]); this includes, in particular, the boundedness of the largest singular value (that is, of the operator norm) of a sparse matrix. They do not investigate the smallest singular value of sparse rectangular matrices, though.
We derive bounds for the smallest and the largest singular values of sparse rectangular random matrices assuming that the probability p_N decreases in such a way that Np_N ≥ log^{2/κ} N for some κ > 0, and that the moment conditions are weaker than those in (1.1) (see condition (1.6)). Our main result concerns the smallest singular value of a sparse rectangular random matrix from an ensemble of dilute Wigner type matrices.
Suppose n ≥ 1 and N > n. Consider independent identically distributed zero mean random variables X_{jk}, 1 ≤ j ≤ N, 1 ≤ k ≤ n, with E X_{jk}² = 1 (where the distribution of X_{jk} may depend on N), which are independent of a set of independent Bernoulli random variables ξ_{jk}, 1 ≤ j ≤ N, 1 ≤ k ≤ n, with E ξ_{jk} = p_N.
In what follows we shall simplify notation by denoting p = p N . We now introduce the following model of dilute sparse matrices as a sequence of random matrices of the following type
X = (ξ jk X jk ) 1≤j≤N,1≤k≤n . (1.2)
Denote by s 1 ≥ · · · ≥ s n the singular values of X, and let Y = X * X denote the sample covariance matrix. Put y = y(N, n) = n N . We shall assume that y(N, n) → y 0 < 1 as N, n → ∞. In what follows we shall vary the parameter N only.
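As a quick numerical illustration of the model (1.2) and of the √(Np) scaling of the singular values (this sketch is not part of the paper; Gaussian entries and the concrete sizes are chosen only for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_rect(N, n, p, rng):
    """Dilute model (1.2): entrywise product of i.i.d. standardized
    entries X_jk (E X = 0, E X^2 = 1) and Bernoulli(p) selectors xi_jk."""
    X = rng.standard_normal((N, n))
    xi = rng.random((N, n)) < p
    return xi * X

N, n = 4000, 1000                       # y = n/N = 0.25
p = np.log(N) ** 2 / N                  # Np = log^2 N, cf. condition (1.5)
Xs = sparse_rect(N, n, p, rng)
s = np.linalg.svd(Xs, compute_uv=False)
scale = np.sqrt(N * p)
print(s[0] / scale, s[-1] / scale)      # s_1 and s_n, both of order sqrt(Np)
```

For this aspect ratio the classical Marchenko–Pastur heuristics suggest s₁/√(Np) ≈ 1 + √y and s_n/√(Np) ≈ 1 − √y, which one sample reproduces up to sparsity fluctuations.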
Theorem 1.1. Let E X_{jk} = 0 and E|X_{jk}|² = 1. Suppose that there exists a positive constant C > 0 such that

E|X_{jk}|^{4+δ} ≤ C < ∞ (1.3)

for any j, k ≥ 1 and for some δ > 0. Suppose also that there exists a positive constant B such that

Np ≥ B log^{3/(2κ)} N, (1.4)

where κ = δ/(2(4+δ)). Then for every Q ≥ 1 and A > 0 there exists a constant K = C(Q, δ, µ_{4+δ}, A, B) such that

Pr{ s₁ ≥ K√(Np) } ≤ C N^{−Q} + N²p Pr{ |X₁₁| > A(Np)^{1/2−κ} ln N }.

Theorem 1.2. Let E X_{jk} = 0 and E|X_{jk}|² = 1. Suppose that E|X₁₁|⁴ = µ₄ < ∞, and that there exists a positive constant B such that

Np ≥ B log² N. (1.5)

Then there exists a constant τ₀ > 0 such that for every τ ≤ τ₀, Q ≥ 1 and K > 0 there exists a constant C = C(Q, µ₄, K, B) with

Pr{ s_n ≤ τ√(Np) } ≤ C N^{−Q} + Pr{ s₁ > K√(Np) }.
These results immediately imply the following corollaries.

Corollary 1.3. Under the conditions of Theorem 1.1 there exists a constant τ₀ > 0 such that for any τ ≤ τ₀ and any A > 0 there exists a constant C = C(A, δ), depending on A and δ, such that the following inequality holds:

Pr{ s_n ≤ τ√(Np) } ≤ C N^{−Q} + N²p Pr{ |X₁₁| > A(Np)^{1/2−κ} ln N }.

Corollary 1.4. Assume the conditions of Theorem 1.1. In addition assume that there exists a constant B such that for every N ≥ 1,

p = p_N ≥ B/ln⁴ N.

Then

Pr{ s₁ ≥ K√(Np) } ≤ C N^{−Q} + C/ln^δ N.
Proof. Applying Markov's inequality, we obtain

Pr{ |X₁₁| > A(Np)^{1/2−κ} ln N } ≤ µ_{4+δ} / ( (Np)² ln^{4+δ} N ).

By the conditions of Corollary 1.4, we get

Pr{ |X₁₁| > A(Np)^{1/2−κ} ln N } ≤ µ_{4+δ} / ( N² B^{4+δ} ln^δ N ).

The result now follows immediately from Theorem 1.1. Thus, Corollary 1.4 is proved.
We may also consider random variables X_{ij}, i = 1,...,N, j = 1,...,n, with identical distributions depending on N. In this case we have the following result.

Corollary 1.5. In addition to the conditions of Theorem 1.1, assume that for any q with 4+δ ≤ q ≤ C log n,

E|X₁₁|^q ≤ C₀^q q^q (Np)^{q(1/2−κ)−2}. (1.6)

Then for every Q ≥ 1 and A > 0 there exist constants K = K(Q, δ, µ_{4+δ}, A) and C = C(Q, δ, µ_{4+δ}, A) such that

Pr{ s₁ ≥ K√(Np) } ≤ C N^{−Q},

and there exists a constant τ₀ > 0 such that for every τ ≤ τ₀ and Q ≥ 1 there exists a constant C = C(Q, δ, µ_{4+δ}) with

Pr{ s_n ≤ τ√(Np) } ≤ C N^{−Q}. (1.7)
2. Proof of Theorem 1.1
Let X̄_{ij} denote the truncated random variables, i.e.

X̄_{ij} = X_{ij} I{|X_{ij}| ≤ A(Np)^{1/2−κ} ln N},

where I{B} denotes the indicator of an event B. Let X̄ denote the matrix with entries ξ_{ij}X̄_{ij}. By ‖A‖ we denote the operator norm of a matrix A. First we estimate the spectral norm of the matrix E X̄. Since the random variables ξ_{ij}X̄_{ij} are identically distributed, all entries of E X̄ are equal to p E X̄₁₁, and hence ‖E X̄‖ = √(Nn) p |E X̄₁₁|. By condition (1.3), we have

|E X̄₁₁| = |E X₁₁ I{|X₁₁| > A(Np)^{1/2−κ} ln N}| ≤ C / ( A³ (Np)^{3/2+κ} ).

From here we get the bound

‖E X̄‖ ≤ C A^{−3} (Np)^{−1/2−κ}. (2.1)
We consider now the centered truncated random variables X̃_{ij} = X̄_{ij} − E X̄_{ij}, i = 1,...,N, j = 1,...,n, and the matrix X̃ = (ξ_{ij}X̃_{ij}). Let s̄₁ ≥ ··· ≥ s̄_n denote the singular values of the matrix X̄ and, respectively, let s̃₁ ≥ ··· ≥ s̃_n denote the singular values of the matrix X̃. Note that

Pr{ s₁ ≠ s̄₁ } ≤ Pr{ X ≠ X̄ } ≤ Σ_{i=1}^N Σ_{j=1}^n p Pr{ X_{ij} ≠ X̄_{ij} } = nNp Pr{ |X₁₁| > A(Np)^{1/2−κ} ln N }. (2.2)

Furthermore, we have

s̄₁ ≤ s̃₁ + ‖E X̄‖. (2.3)

According to (2.1) we may assume that

‖E X̄‖ ≤ γ√(Np) (2.4)

for sufficiently small γ > 0. We may now write

Pr{ s₁ > K√(Np) } ≤ Pr{ s̃₁ > (1/2)K√(Np) } + N²p Pr{ |X₁₁| > A(Np)^{1/2−κ} ln N }. (2.5)

Note that

σ_n² := E X̃₁₁² = E X̄₁₁² − (E X̄₁₁)² = 1 − E X₁₁² I{|X₁₁| > A(Np)^{1/2−κ} ln N} − ( E X₁₁ I{|X₁₁| > A(Np)^{1/2−κ} ln N} )². (2.6)

It is easy to see that

|1 − σ_n| ≤ |1 − σ_n²| ≤ 2µ_{4+δ} / ( A^{2+δ} (Np)^{(2+δ)(1/2−κ)} ). (2.7)

Without loss of generality we may assume that σ_n ≥ 1/2. Consider now the matrix X̂ = (1/σ_n)X̃, and let ŝ₁ denote the largest singular value of the matrix X̂. Since σ_n ≤ 1, we have s̃₁ ≤ ŝ₁, and therefore

Pr{ s̃₁ > K√(Np) } ≤ Pr{ ŝ₁ > K√(Np) }. (2.8)
During the rest of the proof of Theorem 1.1 we shall consider a matrix X with entries ξ_{ij}X_{ij}, i = 1,...,N, j = 1,...,n, satisfying the following conditions (CI):

• ξ_{ij} are independent Bernoulli r.v.'s with E ξ_{ij} = p (= p_N);
• X_{ij} are i.i.d. r.v.'s for 1 ≤ i ≤ N, 1 ≤ j ≤ n, such that E X₁₁ = 0, E|X₁₁|^{4+δ} ≤ µ_{4+δ} and |X₁₁| ≤ A(Np)^{1/2−κ} ln N a.s.

We use the following result of Seginer (see [13, Corollary 2.2]).

Proposition 2.1. There exists a constant A such that for any N, n ≥ 1, any q ≤ 2 log max{n, N}, and any N × n random matrix X = (X_{ij}), where the X_{ij} are i.i.d. zero mean random variables, the following inequality holds:

max{ E max_{1≤i≤N} ‖X_{i·}‖₂^q , E max_{1≤j≤n} ‖X_{·j}‖₂^q } ≤ E‖X‖^q ≤ (2A)^q E( max_{1≤i≤N} ‖X_{i·}‖₂^q + max_{1≤j≤n} ‖X_{·j}‖₂^q ). (2.9)
Here X i· , resp. X ·j , denote the i-th row, resp. the j-th column of X.
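Seginer's bound can be probed numerically: deterministically ‖X‖ dominates every row and column norm, and (2.9) says that in expectation it is also controlled from above by their maxima. A small sketch (illustrative only; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, p = 600, 300, 0.05
X = (rng.random((N, n)) < p) * rng.standard_normal((N, n))

op_norm = np.linalg.norm(X, 2)              # largest singular value s_1
max_row = np.linalg.norm(X, axis=1).max()   # max_i ||X_{i.}||_2
max_col = np.linalg.norm(X, axis=0).max()   # max_j ||X_{.j}||_2

# Deterministic half of the comparison: s_1 >= each row/column norm.
assert op_norm >= max(max_row, max_col) - 1e-8
print(op_norm, max_row, max_col)
```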
Proof of Theorem 1.1. Note that s₁ = ‖X‖. Using the notations introduced above, we now estimate E‖X_{i·}‖₂^q. By the definition of X we have

E‖X_{i·}‖₂^q = E( Σ_{k=1}^n X_{ik}² ξ_{ik} )^{q/2} ≤ 2^{q−1} ( Σ_{k=1}^n E X_{ik}² ξ_{ik} )^{q/2} + 2^{q−1} E| Σ_{k=1}^n (X_{ik}² − 1) ξ_{ik} |^{q/2}. (2.10)

Note that

E X_{ik}² ξ_{ik} = p. (2.11)

Now, applying Rosenthal's inequality, we get

E| Σ_{k=1}^n (X_{ik}² − 1) ξ_{ik} |^{q/2} ≤ C^q ( q^{q/4} ( Σ_{k=1}^n E(X_{ik}² − 1)² ξ_{ik} )^{q/4} + q^{q/2} p Σ_{k=1}^n E|X_{ik}² − 1|^{q/2} ), (2.12)

which implies

E| Σ_{k=1}^n (X_{ik}² − 1) ξ_{ik} |^{q/2} ≤ C^q ( q^{q/4} (Np)^{q/4} + q^{q/2} Np E|X₁₁|^q ). (2.13)

By assumptions (CI), we have

E|X₁₁|^q ≤ C^q (Np)^{q/2−qκ−2} ln^{q−4−δ} N. (2.14)

Note that for q ∼ ln N inequality (2.14) coincides with condition (1.6). Combining inequalities (2.10)–(2.14), we now get

E‖X_{i·}‖₂^q ≤ C^q (Np)^{q/2} ( 1 + ( q/(Np) )^{q/4} + N^{−1}p^{−1} ln^{−(4+δ)} N · ( q ln² N/(Np)^{2κ} )^{q/2} ).

Taking into account (1.5), as well as q ≤ C log n, we obtain, for q ≤ 2 log max{n, N},

E‖X_{i·}‖₂^q ≤ C^q (Np)^{q/2}.

A similar bound holds for E‖X_{·j}‖₂^q. We may now write

E‖X‖^q ≤ C^q N (Np)^{q/2}.
Taking K ≫ C and applying Markov's inequality, the claim follows. Thus Theorem 1.1 is proved.
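For completeness, the final Markov step can be spelled out as follows (a routine calculation; the particular choices of q and K below are one admissible option, not the one fixed in the paper):

```latex
\Pr\{s_1 \ge K\sqrt{Np}\}
  \le \frac{\mathbf{E}\,\|X\|^{q}}{(K\sqrt{Np})^{q}}
  \le N\Big(\frac{C}{K}\Big)^{q},
\qquad q = 2\log N,\quad K = C\,e^{(Q+1)/2},
```

so that N(C/K)^q = N e^{−(Q+1) log N} = N^{−Q}, while q = 2 log N ≤ 2 log max{n, N} as required in Proposition 2.1.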
3. Smallest singular values

We shall now prove Theorem 1.2, using an approach developed for rectangular matrices in the case p = 1 by Litvak, Pajor, Rudelson and Tomczak-Jaegermann [8] and by Rudelson and Vershynin [10], and for sparse dilute Wigner matrices by Götze and Tikhomirov [5]. Denote by S^{(n−1)} the unit sphere in R^n. Let x = (x₁,...,x_n) ∈ S^{(n−1)} be a fixed unit vector and let X be a matrix defined in (1.2).

We divide the vectors on the sphere into two classes, compressible and incompressible, recalling the following definition.
Definition 3.1. Let δ, ρ ∈ (0, 1). A vector x ∈ R n is called sparse if |supp(x)| ≤ δn. A vector x ∈ S (n−1) is called compressible if x is within Euclidean distance ρ from the set of all sparse vectors. A vector x ∈ S (n−1) is called incompressible if it is not compressible.
The sets of compressible and incompressible vectors will be denoted by Comp(δ, ρ) and Incomp(δ, ρ).
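Membership in Comp(δ, ρ) is easy to test in practice: the distance from x to the set of ⌊δn⌋-sparse vectors equals the Euclidean norm of x with its ⌊δn⌋ largest-magnitude coordinates removed. A minimal sketch (illustrative, not from the paper):

```python
import numpy as np

def is_compressible(x, delta, rho):
    """x in S^{n-1} is compressible iff the norm of x with its
    floor(delta*n) largest-magnitude coordinates removed is <= rho."""
    n = x.size
    k = int(np.floor(delta * n))
    tail = np.sort(np.abs(x))[: n - k]    # the n-k smallest coordinates
    return bool(np.linalg.norm(tail) <= rho)

n, delta, rho = 1000, 0.1, 0.3
e = np.zeros(n); e[0] = 1.0               # 1-sparse vector: compressible
u = np.ones(n) / np.sqrt(n)               # flat vector: incompressible
print(is_compressible(e, delta, rho), is_compressible(u, delta, rho))
```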
Note that

s_n = inf_{x∈S^{(n−1)}} ‖Xx‖₂,

and hence

Pr{ s_n ≤ τ√(Np) } ≤ Pr{ inf_{x∈Comp(δ,ρ)} ‖Xx‖₂ ≤ τ√(Np) } + Pr{ inf_{x∈Incomp(δ,ρ)} ‖Xx‖₂ ≤ τ√(Np) }, (3.1)

for some δ, ρ ∈ (0,1) and τ > 0 not depending on n.

For sparse matrices with p = p_N → 0 as N → ∞ we cannot directly estimate the first term on the right-hand side of (3.1) by the well-known two-step approach — estimating Pr{ ‖Xx‖₂ ≤ τ√(Np) } for a fixed vector x ∈ S^{(n−1)} and then taking a union bound over an ε-net of Comp(δ, ρ) — since this only yields a bound on the infimum over x ∈ Comp(δ_n, ρ) with δ_n ∼ p going to zero, and the Rudelson–Vershynin methods for incompressible vectors then do not work. In order to estimate Pr{ inf_{x∈Comp(δ,ρ)} ‖Xx‖₂ ≤ τ√(Np) } with some δ > 0 which does not depend on n, we shall use a method developed in Götze–Tikhomirov [5]. It is based on a recurrence which allows us to increase δ_N step by step, by a factor of order Np at each step, arriving after of order log N steps at an estimate with δ > δ₀ not depending on N. The details of this approach are described in Section 3.1.
In Section 3.3 we shall derive bounds for Pr{inf x∈Incomp(δ,ρ) Xx 2 ≤ τ √ Np}.
3.1. Compressible vectors. Let L be an integer such that

( δ₀Np/(|log p| + 1) )^{L−1} ≤ p^{−1} ≤ ( δ₀Np/(|log p| + 1) )^L, (3.2)

where δ₀ ∈ (0,1) denotes some constant independent of N. Note that under the conditions of Theorem 1.2,

L ≤ c log N / log log N (3.3)

with a constant c = c(δ₀). We introduce numbers p_{νN} and δ_{νN}, for ν = 1,...,L, as follows:

p_{νN} = (Np) δ_{ν−1,N} and δ_{νN} = δ₀ p_{νN}/(1 + |log p_{νN}|).

Here p_{0N} = p and δ_{0N} = δ₀ p/(1 + |log p|). Furthermore, introduce as well

p̄_{νN} = ( Npδ₀/(|log p| + 1) )^ν p and δ̄_{νN} := ( Npδ₀/(|log p| + 1) )^{ν−1} · δ₀p/(|log p| + 1).

Lemma 3.2. The following inequalities hold for ν = 1,...,L:

p_{ν,N} ≥ p̄_{ν,N} (3.4) and δ_{ν,N} ≥ δ̄_{ν,N}. (3.5)
Proof. By the conditions of Theorem 1.2,

Np/(1 + |ln p|) ≥ B ln N. (3.6)

Without loss of generality we may assume that

Npδ₀/(1 + |ln p|) > 1. (3.7)

It is straightforward to check now that p_{ν,N} ≥ p for ν = 1,...,L. Indeed, for ν = 1 this is easy. Assume that for some ν the inequality p_{ν−1,N} ≥ p holds. Then

p_{ν,N} = Npδ₀ p_{ν−1,N}/(1 + |ln p_{ν−1,N}|) ≥ ( Npδ₀/(1 + |ln p|) ) p_{ν−1,N} ≥ ( Npδ₀/(1 + |ln p|) ) p ≥ p. (3.8)

We may now write the inequalities

δ_{ν,N} ≥ ( δ₀/(1 + |ln p|) ) p_{ν,N} (3.9)

and

p_{ν,N} ≥ ( Npδ₀/(1 + |ln p|) ) p_{ν−1,N}, (3.10)

for ν = 1,...,L. Applying induction to the last inequality, we get, for ν = 1,...,L,

p_{ν,N} ≥ p̄_{ν,N}. (3.11)

The last inequality implies that, for ν = 1,...,L,

δ_{ν,N} ≥ ( δ₀/(1 + |ln p|) ) p̄_{ν,N} ≥ ( Npδ₀/(1 + |ln p|) )^{ν−1} · δ₀p/(1 + |ln p|) = δ̄_{ν,N}. (3.12)

Thus, the lemma is proved.

Note that L ≥ 1 provided Np²/(|log p| + 1) ≤ D for some constant D. The case Np²/(|log p| + 1) ≥ D will be treated separately. In what follows we shall assume that L ≥ 1.

Corollary 3.3. There exist constants γ₀ > 0 and γ₁ > 0 such that

δ_{L,N} ≥ γ₀ and p_{L,N} ≥ γ₁. (3.13)

Introduce the sets C_ν := Comp(δ_{ν,N}, ρ) and IC_ν := Incomp(δ_{ν,N}, ρ), ν = 0,...,L.
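The bootstrap behind these definitions is easy to trace numerically: starting from δ_{0N} ∼ p/|log p|, each step multiplies the sparsity level by a factor of order Npδ₀/(1 + |log p|) > 1, so δ_{νN} reaches a constant after O(log N / log log N) steps. A sketch (illustrative only; δ₀ is taken larger than the "sufficiently small" constant of the proof just to keep the demo short):

```python
import math

def bootstrap(N, p, delta0=0.5, gamma0=0.01, max_steps=200):
    """Iterate p_nu = (N p) * delta_{nu-1},
    delta_nu = delta0 * p_nu / (1 + |log p_nu|),
    starting from delta_0N = delta0 * p / (1 + |log p|); return the number
    of steps until delta_nu >= gamma0 (cf. Corollary 3.3)."""
    delta = delta0 * p / (1 + abs(math.log(p)))
    for nu in range(1, max_steps + 1):
        p_nu = N * p * delta
        delta = delta0 * p_nu / (1 + abs(math.log(p_nu)))
        if delta >= gamma0:
            return nu
    return max_steps

N = 10**6
p = math.log(N) ** 2 / N                          # Np = log^2 N, cf. (1.5)
L = bootstrap(N, p)
print(L, math.log(N) / math.log(math.log(N)))     # L is O(log N / log log N)
```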
Definition 3.4. The Lévy concentration function of a random variable ξ is defined for ε > 0 as

L(ξ, ε) = sup_{v∈R} Pr{ |ξ − v| ≤ ε }. (3.14)
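The Lévy concentration function can be estimated from a sample by sliding a window of width 2ε and taking the best center; a Monte Carlo sketch (illustrative; the supremum over v is approximated by centering at the sample points):

```python
import numpy as np

def levy_concentration(sample, eps):
    """Estimate L(xi, eps) = sup_v P(|xi - v| <= eps) from an i.i.d. sample,
    taking the candidate centers v at the sample points themselves."""
    s = np.sort(sample)
    counts = (np.searchsorted(s, s + eps, side="right")
              - np.searchsorted(s, s - eps, side="left"))
    return counts.max() / s.size

rng = np.random.default_rng(2)
xi = rng.standard_normal(200_000)
print(levy_concentration(xi, 0.5))   # close to P(|Z| <= 0.5) ~ 0.383
```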
By P E we denote the orthogonal projection in R n onto a subspace E. Similarly, by P J we denote the orthogonal projection onto R J , where J ⊂ {1, 2, . . . , n}.
We reformulate and prove some auxiliary results from [10] below for our sparsity model.
First we prove an analog of [10, Lemma 3.2].
Lemma 3.5. Let x ∈ IC_ν, ν = 1,...,L, and let

ζ_j = Σ_{k=1}^n x_k ξ_{jk} X_{jk}, j = 1,...,N.

Then there exists some absolute constant A such that

L( ζ_j/√p, ρ/2 ) ≤ 1 − Aρ⁴ p_{νN}. (3.15)

Remark 3.6. For ν = L there exists some constant 0 < b < 1 such that

L( ζ_j/√p, ρ/2 ) ≤ 1 − b < 1.
Proof. By Lemma 3.11 there exists a set σ(x) such that for k ∈ σ(x),

1/(2√n) ≤ |x_k| ≤ √2/√(nδ_{ν−1,N}),

and ‖P_{σ(x)} x‖₂² ≥ ρ². Let η = Σ_{k∈σ(x)} x_k ξ_{jk} X_{jk} / √p. Note that

E η² ≥ ρ², E|η|⁴ ≤ A₀ ( 1 + 1/(Nδ_{ν−1,N} p) ).

Without loss of generality we may assume that Nδ_{ν−1,N} p ≤ 1. This implies that

E|η|⁴ ≤ 2A₀/(Nδ_{ν−1,N} p). (3.16)

Let Z = η − v. Note that E Z² = E η² + v² ≥ v² + ρ², and E η⁴ ≥ (E η²)² ≥ ρ⁴. Using Minkowski's inequality, we get

E^{1/4}|Z|⁴ ≤ E^{1/4}|η|⁴ + |v| ≤ E^{1/4}|η|⁴ (1 + |v|/ρ) ≤ (√2/ρ) E^{1/4}|η|⁴ (ρ² + v²)^{1/2}.

Using the Paley–Zygmund inequality, we get

Pr{ |η − v| > ε } ≥ (E|Z|² − ε²)² / E|Z|⁴ ≥ ρ⁴ (ρ² + v² − ε²)² / ( 4 E|η|⁴ (ρ² + v²)² ).

The last inequality and inequality (3.16) together imply

Pr{ |η − v| ≥ ε } ≥ A₁ ρ⁴ Nδ_{ν−1,N} p ( 1 − 2ε²/(ρ² + v²) ).

Finally, recalling that Npδ_{ν−1,N} = p_{ν,N}, we may write

Pr{ |η − v| ≥ ρ/2 } ≥ (1/2) A₁ ρ⁴ p_{ν,N}.
Thus Lemma 3.5 is proved.
For the set of sparse vectors the following lemma holds.

Lemma 3.7. The following inequality holds:

L( ξX/√p, 1/2 ) ≤ 1 − p/(8µ₄).

Proof. It is enough to note that, by the Paley–Zygmund inequality, with ε = 1/2,

Pr{ |ξX − v| ≥ 1/2 } ≥ p (1 + v² − ε²)² / ( 4 E|X|⁴ (1 + v²)² ) ≥ p/(8µ₄).
Lemma 3.8. Let ζ₁,...,ζ_N denote independent identically distributed random variables such that Pr{ |ζ_j| ≤ λ_N } ≤ 1 − q_N for some λ_N > 0 and q_N ∈ (0,1). Then there exist constants c, C such that

Pr{ Σ_{j=1}^N ζ_j² ≤ C N q_N λ_N² } ≤ exp{−c N q_N}.

For a proof of this lemma see [5, Lemma 4.5]. We start with the estimation of ‖Xx‖₂ for a fixed x ∈ S^{(n−1)}.

Lemma 3.9. There exist positive absolute constants τ₀ and c₀ such that

Pr{ ‖Xx‖₂ ≤ τ₀√(Np) } ≤ exp{−c₀ Np}.

Proof of Lemma 3.9. The proof of this lemma may be found in [5, Lemma 4.1], but for the reader's convenience we repeat it here. Let ζ_j = Σ_{k=1}^n X_{jk} ξ_{jk} x_k, j = 1,...,N. For τ > 0 and any t we may write

Pr{ Σ_{j=1}^N ζ_j² ≤ τ²Np } = Pr{ τ²Np/2 − (1/2) Σ_{j=1}^N ζ_j² ≥ 0 } ≤ exp{Npτ²t²/2} Π_{j=1}^N E exp{−t²ζ_j²/2}.
Using e^{−t²/2} = E e^{itη}, where η is a standard Gaussian random variable, we obtain

Pr{ Σ_{j=1}^N ζ_j² < τ²Np } ≤ exp{Npτ²t²/2} Π_{j=1}^N E_{η_j} Π_{k=1}^n E_{ξ_{jk}X_{jk}} exp{ it ξ_{jk} X_{jk} x_k η_j }, (3.18)

where η_j, j = 1,...,N, denote i.i.d. standard Gaussian r.v.'s and E_Z denotes the expectation with respect to Z conditionally on all other r.v.'s. Take α = Pr{|η₁| ≤ C₁} for some absolute positive constant C₁ which will be chosen later. Then it follows from (3.18) that

Pr{ Σ_{j=1}^N ζ_j² < τ²Np } ≤ exp{t²τ²Np/2} Π_{j=1}^N ( α E_{η_j}[ Π_{k=1}^n E_{ξ_{jk}X_{jk}} exp{ it η_j x_k X_{jk} ξ_{jk} } | |η_j| ≤ C₁ ] + 1 − α ).

Note that for any α, x ∈ [0,1] and β ≤ α,

1 − α + αx ≤ max{ x^β, (β/α)^{β/(1−β)} }.

Furthermore, we have

| E_{ξ_{jk}X_{jk}} exp{ it ξ_{jk} X_{jk} x_k η_j } | ≤ exp{ −(p/2)(1 − |f_{jk}(t x_k η_j)|²) }, (3.19)

where f_{jk}(u) = E exp{iuX_{jk}}. Choose a constant M > 0 such that

sup_{j,k≥1} E|X_{jk}|² I{|X_{jk}| > M} ≤ 1/2.

Since 1 − cos x ≥ (11/24)x² for |x| ≤ 1, conditioning on the event |η_j| ≤ C₁, we get, for |t| ≤ 1/(MC₁),

1 − |f_{jk}(t x_k η_j)|² = E_{X̃_{kj}} ( 1 − cos(t x_k X̃_{kj} η_j) ) ≥ (11/24) x_k² t² η_j² E|X̃_{kj}|² I{|X̃_{kj}| ≤ M}. (3.20)

Here we denote by X̃_{kj} the symmetrization of the r.v. X_{kj}. It follows from (3.19), for |t| ≤ 1/(MC₁) and |η_j| ≤ C₁, that

| E_{ξ_{jk}X_{jk}} exp{ it ξ_{jk} X_{jk} x_k η_j } | ≤ exp{ −c p t² x_k² η_j² }. (3.21)

This implies that

| Π_{k=1}^n E_{ξ_{kj}X_{kj}} exp{ it η_j x_k ξ_{jk} X_{jk} } | ≤ exp{ −c p t² η_j² }. (3.22)

We may choose C₁ large enough such that the following inequality holds for |t| ≤ 1/(MC₁):

| E_{η_j}[ exp{−c p t² η_j²} | |η_j| ≤ C₁ ] | ≤ exp{ −c t² p/24 }. (3.23)

Then we obtain

Pr{ Σ_{j=1}^N ζ_j² ≤ τ²Np } ≤ exp{Npτ²t²/2} ( exp{−cβt²Np/24} + (β/α)^{Nβ/(1−β)} ). (3.24)

Furthermore, we may take C₁ sufficiently large such that α ≥ 4/5, and choose β = 2/5. We get

Pr{ Σ_{j=1}^N ζ_j² ≤ τ²Np } ≤ exp{Npτ²t²/2} exp{−ct²Np/60} + 2^{−2N/3}. (3.25)

For τ < min{ √c/√60, √(ln 2)/(√3 MC₁) }, we have, for |t| ≤ 1/(MC₁),

Pr{ Σ_{j=1}^N ζ_j² ≤ τ²Np } ≤ exp{ −ct²Np/120 }. (3.26)
This implies the claim. Thus the lemma is proved.
3.2. Compressible and Incompressible Vectors. First we prove an analog of Lemma 2.6 from [10].

Lemma 3.10. There exist positive absolute constants δ₀, τ₀, c₁ such that

Pr{ inf_{x∈Comp(δ_{0N},ρ₀)} ‖Xx‖₂ ≤ τ₀√(Np), ‖X‖ ≤ K√(Np) } ≤ exp{−c₁Np}, (3.27)

where δ_{0N} = δ₀p/(|log p| + 1) and ρ₀ = τ₀/(2K).

Proof. Let k = [nδ_{0N}]. Denote by N_η an η-net on S^{(k−1)} ∩ R^k, and choose η = τ₀/(2K). First we consider the set Sparse(k) of all sparse vectors x with |supp(x)| ≤ k. Using Lemma 3.9 and a union bound, we get

Pr{ inf_{x∈Sparse(δ_{0N})} ‖Xx‖₂ ≤ 2ρ₀√(Np) } ≤ C(n, k) |N_η| exp{−c₀Np},

so that

Pr{ inf_{x∈Sparse(δ_{0N})} ‖Xx‖₂ ≤ 2τ₀√(Np) } ≤ ( 4nδ_{0N}/√(2πnδ_{0N}(1 − δ_{0N})) ) · (1 + K/ρ₀)^{nδ_{0N}−1} · δ_{0N}^{−nδ_{0N}} (1 − δ_{0N})^{−n(1−δ_{0N})} · exp{−c₀Np}.

Simple calculations show

Pr{ inf_{x∈Sparse(δ_{0N})} ‖Xx‖₂ ≤ 2τ₀√(Np) } ≤ √( 2nδ_{0N}/((1 − δ_{0N})π) ) × exp{ nδ_{0N}( (1 − 1/(nδ_{0N})) log(1 + K/ρ₀) − log δ_{0N} − ((1 − δ_{0N})/δ_{0N}) log(1 − δ_{0N}) ) − c₀Np }.

If we choose δ_{0N} := δ₀p/(1 + |log p|) with a sufficiently small absolute constant δ₀, we get

Pr{ inf_{x∈Sparse(δ_{0N})} ‖Xx‖₂ ≤ 2τ₀√(Np) } ≤ exp{−c₁Np}.
Thus the Lemma is proved.
In what follows, we shall use a technique developed in Götze and Tikhomirov [5] which is based on the following lemmas.
Lemma 3.11. Let ρ, δ ∈ (0,1). Assume that x ∈ Incomp(δ, ρ). Then there exists a set σ₀(x) ⊂ {1,...,n} such that |σ₀(x)| ≥ Cnδρ²,

1/(2√n) ≤ |x_k| ≤ 1/√(nδ/2) for k ∈ σ₀(x),

and

Σ_{k∈σ₀(x)} |x_k|² ≥ ρ².
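The spread set σ₀(x) of Lemma 3.11 can be computed directly by thresholding the coordinates of x; the following sketch checks the conclusion on a typical (incompressible) random direction (illustrative only):

```python
import numpy as np

def spread_set(x, delta):
    """Indices k with 1/(2 sqrt(n)) <= |x_k| <= sqrt(2/(n*delta)),
    as in Lemma 3.11."""
    n = x.size
    lo, hi = 1 / (2 * np.sqrt(n)), np.sqrt(2 / (n * delta))
    return np.flatnonzero((np.abs(x) >= lo) & (np.abs(x) <= hi))

rng = np.random.default_rng(4)
n, delta, rho = 2000, 0.1, 0.3
x = rng.standard_normal(n)
x /= np.linalg.norm(x)            # a typical unit vector is incompressible
sigma0 = spread_set(x, delta)
print(sigma0.size, float(np.sum(x[sigma0] ** 2)))  # large set, most of the mass
```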
For a proof of this lemma see for instance [11, Lemma 3.4].

Lemma 3.12. Let x ∈ IC_ν for some ν = 0,...,L−1. Then there exist constants c₁ and c₂ such that for any 0 < τ ≤ τ₀,

Pr{ ‖Xx‖₂ ≤ τ√(Np) } ≤ exp{−c₁ Np_{ν+1,N}}.

Proof. We repeat the proof of Lemma 3.9 up to (3.20). Furthermore, by Lemma 3.11 there exists a set σ₀(x) such that

1/(2√n) ≤ |x_k| ≤ 1/√(nδ_{νN}/2) for k ∈ σ₀(x), and Σ_{k∈σ₀(x)} |x_k|² ≥ ρ². (3.28)
We may now write

Σ_{k=1}^n ( 1 − |f(t x_k X_{jk} η_j)|² ) ≥ Σ_{k∈σ₀(x)} ( 1 − |f(t x_k X_{jk} η_j)|² ).

Note that for k ∈ σ₀(x), |X_{jk}| ≤ M and |η_j| ≤ C we have

|t x_k X_{jk} η_j| ≤ |t| C M √2/√(Nδ_{νN}).

Taking t = κ√(Nδ_{νN}) with κ = 1/(CM√2), we get |t x_k X_{jk} η_j| ≤ 1, and

1 − |f(t x_k X_{jk} η_j)|² ≥ (11/24) t² x_k² η_j² E|X_{jk}|² I{|X_{jk}| ≤ M} ≥ (11/48) t² x_k² η_j².
Repeating now the last part of the proof of Lemma 3.9 and taking into account inequality (3.28), we obtain, for τ < ρ min{ √c/√60, √(ln 2)/(√3 MC₁) } and for |t| = κ√(Nδ_{νN}),

| Π_{k=1}^n E_{ξ_{jk}X_{jk}} exp{ it η_j x_k ξ_{jk} X_{jk} } | ≤ exp{ −cρ² p t² η_j² }, (3.29)

where c is an absolute constant as in (3.22). We may choose C₁ large enough such that the following inequality holds for |t| = κ√(Nδ_{νN}):

| E_{η_j}[ exp{−c p t² η_j²} | |η_j| ≤ C₁ ] | ≤ exp{ −c t² p/24 }. (3.30)

We use here that |t|p ≤ δ₀ by (3.2). Then we obtain

Pr{ Σ_{j=1}^N ζ_j² ≤ τ²Np } ≤ exp{Npτ²t²/2} ( exp{−cβt²Np/24} + (β/α)^{Nβ/(1−β)} ). (3.31)

Furthermore, we may take C₁ large enough such that α ≥ 4/5, and choose β = 2/5. We get

Pr{ Σ_{j=1}^N ζ_j² ≤ τ²Np } ≤ exp{Npτ²t²/2} exp{−ct²Np/60} + 2^{−2N/3}. (3.32)

For τ < min{ √c/√60, √(ln 2)/(√3 MC₁) }, we have, for |t| = κ√(Nδ_{νN}),

Pr{ Σ_{j=1}^N ζ_j² ≤ τ²Np } ≤ exp{ −ct²Np/120 }. (3.33)

This inequality implies that

Pr{ Σ_{j=1}^N ζ_j² ≤ τ²Np } ≤ exp{ −c ( ρ²N²κ²pδ_{νN} ∧ N )/120 }. (3.34)
Thus the lemma is proved.
Furthermore, we consider the sets defined as

C̃_ν := IC_{ν−1} ∩ C_ν, ν = 1,...,L.

Lemma 3.13. Under the conditions of Theorem 1.2 we have, for ν = 1,...,L,

Pr{ inf_{x∈C̃_ν} ‖Xx‖₂ ≤ τ√(Np) } ≤ exp{−cNp_{νN}}.
Proof. According to Lemma 3.12 we have, for any fixed x ∈ C̃_ν,

Pr{ ‖Xx‖₂ ≤ 2τ√(Np) } ≤ exp{−c₁ Np_{ν,N}}.

Consider an η-net N of C̃_ν with η = τ/K. Then the event { inf_{x∈C̃_ν} ‖Xx‖₂ ≤ τ√(Np) } implies

{ inf_{x∈N} ‖Xx‖₂ ≤ 2τ√(Np) }. (3.36)

Without loss of generality we may assume that δ_{LN} < 1. Using a union bound, we get

Pr{ inf_{x∈C̃_ν} ‖Xx‖₂ ≤ τ√(Np) } ≤ C(n, [nδ_{νN}]) |N| exp{−c₁ Np_{ν,N}}. (3.37)

Using Stirling's formula and a simple bound for the cardinality of an η-net, for some sufficiently small absolute constant α₀ > 0 (which does not depend on ν) and

δ_{νN} = α₀ p_{νN}/(|log p_{ν,N}| + 1), p_{νN} := Np δ_{ν−1,N},

we get

Pr{ inf_{x∈C̃_ν} ‖Xx‖₂ ≤ τ√(Np) } ≤ exp{−c̃₁ Np_{νN}}.

Thus Lemma 3.13 is proved.
Now we consider the case Np²/(|log p| + 1) > D for some sufficiently large constant D. Let x ∈ Incomp(δ_{0N}, ρ) and let σ(x) denote the set described in Lemma 3.11. Let

ζ_j = Σ_{k=1}^n x_k ξ_{jk} X_{jk}, j = 1,...,N.

We have

L( ζ_j, τ√p ) ≤ L( Σ_{k∈σ(x)} x_k ξ_{jk} X_{jk}, τ√p ).

Using a Berry–Esseen bound, we get

L( ζ_j, τ√p ) ≤ Cτ + C Σ_{k∈σ(x)} |x_k|³ p E|X_{jk}|³ / ( Σ_{k∈σ(x)} x_k² p )^{3/2} ≤ Cτ + Cµ₃ / ( ρ √(nδ_{0N}p) ).

Note that npδ_{0N} = yδ₀Np²/(1 + |ln p|). Choosing D sufficiently large, we have

L( ζ_j, τ√p ) ≤ 1 − b

for some constant b ∈ (0,1). By Lemma 3.8 we get

Pr{ ‖Xx‖₂ ≤ 2τ√(Np) } ≤ exp{−cN}

for τ ≤ τ₀ and some c > 0. Inequality (3.2) implies that there exists γ₀ > 0 such that

Pr{ inf_{x∈C₁∩Incomp(δ₀,ρ)} ‖Xx‖₂ ≤ τ√(Np) } ≤ exp{−cN}.
Note that

Comp(δ_{LN}, ρ) ⊂ C₀ ∪ ( ∪_{ν=1}^L C̃_ν ).

Using a union bound, we get

Pr{ inf_{x∈Comp(δ_{LN},ρ)} ‖Xx‖₂ ≤ τ√(Np) } ≤ exp{−cNp} + Σ_{ν=1}^{L−1} exp{−c(Np)^ν Nδ_{0,N}} ≤ exp{−cNp}. (3.38)

By Corollary 3.3, Comp(γ₀, ρ) ⊂ C_L. This implies that

inf_{x∈Incomp(γ₀,ρ)} ‖Xx‖₂ ≤ inf_{x∈Incomp(δ_{LN},ρ)} ‖Xx‖₂. (3.39)
In what follows we shall estimate the probability Pr{inf x∈Incomp(γ 0 ,ρ) Xx 2 ≤ τ √ Np}.
3.3. Incompressible Vectors. Using the decomposition of the unit sphere S^{(n−1)} = Comp ∪ Incomp, we split the invertibility problem into two subproblems, for compressible and for incompressible vectors:

Pr{ s_n(X) ≤ ε√(Np) } ≤ Pr{ inf_{x∈Comp} ‖Xx‖₂ ≤ ε√(Np) } + Pr{ inf_{x∈Incomp} ‖Xx‖₂ ≤ ε√(Np) }. (3.40)
A bound for the compressible vectors follows from inequality (3.38). It remains to find a lower bound for Xx 2 for incompressible vectors. Let η, η 1 , . . . , η N denote standard Gaussian random variables independent of X jk , ξ jk for 1 ≤ j ≤ N, 1 ≤ k ≤ n. We shall prove the following lemma.
Lemma 3.14. Let x ∈ IC(δ, ρ). Then there exists an absolute constant c₁ such that for any C > 0 the following inequality holds for t ≥ c₁µ₄/√(Npδ):

Pr{ ‖Xx‖₂ ≤ t√(Np) } ≤ ( 2t/√(t² + ρ²/2) )^N + ( 2c₀C exp{−C²/2} )^N. (3.41)
Proof. We may write

Pr{ ‖Xx‖₂ ≤ t√(Np) } = Pr{ Σ_{j=1}^N ζ_j² < t²Np }, (3.42)

where ζ_j = Σ_{k=1}^n X_{jk} ξ_{jk} x_k. Applying Markov's inequality, we get

Pr{ Σ_{j=1}^N ζ_j² < t²Np } ≤ e^N E exp{ −(1/(t²p)) Σ_{j=1}^N ζ_j² } = e^N Π_{j=1}^N E exp{ −ζ_j²/(t²p) }. (3.43)

We may rewrite the r.h.s. of (3.43) as follows:

Pr{ Σ_{j=1}^N ζ_j² < t²Np } ≤ e^N Π_{j=1}^N E exp{ i (1/(t√p)) ζ_j η_j }. (3.44)

Conditioning on η_j, we get

Pr{ Σ_{j=1}^N ζ_j² < t²Np } ≤ e^N Π_{j=1}^N E_{η_j} Π_{k=1}^n | E_{X_{jk}ξ_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} ξ_{jk} } |. (3.45)

By Lemma 3.11 there exists a set σ(x) such that for k ∈ σ(x) we have 1/(2√n) ≤ |x_k| ≤ √2/√(nδ), and |σ(x)| ≥ (1/2) yδρ²N. We may write the following inequality:

E_{η_j} Π_{k=1}^n | E_{X_{jk}ξ_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} ξ_{jk} } | ≤ E_{η_j} Π_{k∈σ(x)} | E_{X_{jk}ξ_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} ξ_{jk} } |. (3.46)

For any constant C we have

E_{η_j} Π_{k∈σ(x)} | E_{X_{jk}ξ_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} ξ_{jk} } | ≤ E_{η_j}[ Π_{k∈σ(x)} | E_{X_{jk}ξ_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} ξ_{jk} } | ; |η_j| ≤ C ] + Pr{ |η_j| > C }. (3.47)
Consider now k ∈ σ(x). Taking the expectation with respect to ξ_{jk} (conditioning on X_{jk} and η_j), we obtain

| E_{X_{jk}ξ_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} ξ_{jk} } | = | 1 + p ( E_{X_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} } − 1 ) |. (3.48)

Applying Taylor's formula to the characteristic function E_{X_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} }, we may write, on the event |η_j| ≤ C,

| 1 + p ( E_{X_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} } − 1 ) | ≤ | 1 + p ( −(1/(2t²p)) η_j² x_k² + ( E|X₁₁|³/(6t³p^{3/2}) ) |x_k|³ |η_j|³ ) |. (3.49)

Since E|X₁₁|³ ≤ E^{3/4}|X₁₁|⁴ = µ₄^{3/4} ≤ µ₄, for |η_j| ≤ C and

t ≥ Cµ₄/√(yNpδ), (3.50)

we have

|x_k| |η_j| E|X₁₁|³ / (3t√p) ≤ Cµ₄√2 / ( 3t√(yNδp) ) ≤ 1/2.
Taking into account this inequality, we get, for |η_j| ≤ C,

| 1 + p ( E_{X_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} } − 1 ) | ≤ exp{ −x_k² η_j²/(4t²) }. (3.51)

Since Σ_{k∈σ(x)} x_k² ≥ ρ², this inequality implies that

Π_{k=1}^n | E_{X_{jk}ξ_{jk}} exp{ i (1/(t√p)) η_j x_k X_{jk} ξ_{jk} } | I{|η_j| ≤ C} ≤ exp{ −ρ² η_j²/(4t²) }. (3.52)

From here it follows, for any C > 0, that

Pr{ Σ_{j=1}^N ζ_j² < t²Np } ≤ Π_{j=1}^N ( E exp{ −ρ² η_j²/(4t²) } + Pr{ |η_j| > C } ). (3.53)

There exists an absolute constant c₀ > 0 such that

Pr{ |η_j| > C } ≤ c₀C exp{−C²/2}. (3.54)

This inequality implies that

Pr{ Σ_{j=1}^N ζ_j² < t²Np } ≤ ( t/√(t² + ρ²/2) + c₀C exp{−C²/2} )^N ≤ ( 2t/√(t² + ρ²/2) )^N + ( 2c₀C exp{−C²/2} )^N. (3.55)
Thus, Lemma 3.14 is proved.
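The Gaussian identity behind the step from (3.53) to (3.55), namely E exp{−ρ²η²/(4t²)} = 1/√(1 + ρ²/(2t²)) = t/√(t² + ρ²/2), is easy to confirm by simulation (an independent sanity check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
eta = rng.standard_normal(2_000_000)
t, rho = 0.3, 0.8
mc = np.exp(-rho**2 * eta**2 / (4 * t**2)).mean()
closed = t / np.sqrt(t**2 + rho**2 / 2)
print(mc, closed)    # the Monte Carlo mean matches the closed form
```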
Proof of Theorem 1.2. We consider an ε-net N on the set of incompressible vectors IC(γ₀, ρ) with ε = t/(2K), where K > 0 is fixed. It is straightforward to check that

Pr{ inf_{x∈IC(γ₀,ρ)} ‖Xx‖₂ ≤ t√(Np), ‖X‖ ≤ K√(Np) } ≤ Pr{ inf_{x∈N} ‖Xx‖₂ ≤ 2t√(Np) }. (3.59)

Applying a union bound, we get

Pr{ inf_{x∈N} ‖Xx‖₂ ≤ 2t√(Np) } ≤ |N| sup_{x∈IC(γ₀,ρ)} Pr{ ‖Xx‖₂ ≤ 2t√(Np) }. (3.60)

By [10, Proposition 2.1], we have |N| ≤ n(1 + 2/ε)^{n−1}. Then, applying the result of Lemma 3.14, we get, for t ≥ c₁µ₄/√(Nγ₀p),

Pr{ inf_{x∈IC(γ₀,ρ)} ‖Xx‖₂ ≤ t√(Np), ‖X‖ ≤ K√(Np) } ≤ |N| ( (2t/√(t² + ρ²/2))^N + (2c₀C exp{−C²/2})^N ) ≤ yN (1 + 4K/t)^{n−1} ( (2t/√(t² + ρ²/2))^N + (2c₀C exp{−C²/2})^N ). (3.61)

It is easy to see that, for any 0 < t ≤ τ₀,

Pr{ inf_{x∈IC(δ,ρ)} ‖Xx‖₂ ≤ t√(Np) } ≤ Pr{ inf_{x∈IC(δ,ρ)} ‖Xx‖₂ ≤ τ₀√(Np) }. (3.62)

Without loss of generality we may assume that τ₀ ≤ 4K. Taking into account both that N ≤ e^N and that y < 1, we may rewrite inequality (3.61) with t = τ₀ in the form

Pr{ inf_{x∈IC(γ₀,ρ)} ‖Xx‖₂ ≤ τ₀√(Np), ‖X‖ ≤ K√(Np) } ≤ ( (5K/τ₀)^y · (4e√2/ρ) τ₀ )^N + ( (5K/τ₀)^y · 2e c₀C exp{−C²/2} )^N,

where we used 1 + 4K/τ₀ ≤ 5K/τ₀ and √(4τ₀² + ρ²/2) ≥ ρ/√2. Since y < 1, the first term on the right-hand side is of order τ₀^{(1−y)N}; hence, for τ₀ sufficiently small and C sufficiently large, the right-hand side does not exceed e^{−N/2}, so that

Pr{ inf_{x∈IC(γ₀,ρ)} ‖Xx‖₂ ≤ t√(Np), ‖X‖ ≤ K√(Np) } ≤ e^{−N/2}. (3.67)

Combining this bound with (3.38) and (3.39) for the compressible vectors and with the bound on Pr{ ‖X‖ > K√(Np) }, the claim of Theorem 1.2 follows.
Edge rigidity and universality of random regular graphs of intermediate degree. Geometric and Functional Analysis. Roland Bauerschmidt, Jiaoyang Huang, Antti Knowles, Horng-Tzer Yau, 10.1007/s00039-020-00538-030Roland Bauerschmidt, Jiaoyang Huang, Antti Knowles, Horng-Tzer Yau. Edge rigidity and uni- versality of random regular graphs of intermediate degree. Geometric and Functional Analysis 30, 693-769, (2020), DOI: 10.1007/s00039-020-00538-0.
Eigenvector statistics of sparse random matrices. Paul Bourgade, Jiaoyang Huang, Horng-Tzer Yau, 10.1214/17-EJP81arXiv:1609.09022Electron. J. Probab. 22Paul Bourgade, Jiaoyang Huang, Horng-Tzer Yau. Eigenvector statistics of sparse random ma- trices. Electron. J. Probab. 22: 1-38 (2017). DOI: 10.1214/17-EJP81, eprint arXiv:1609.09022
A necessary and sufficient condition for edge universality at the largest singular values of covariance matrices. Xiucai Ding, Fan Yang, Ann. Appl. Probab. 283Xiucai Ding, Fan Yang. A necessary and sufficient condition for edge universality at the largest singular values of covariance matrices. Ann. Appl. Probab., 28(3): 1679-1738, 2018.
Spectral Statistics of Erdös -Renyi Graphs I: Local Semicircular Law. Laszlo Erdös, Antti Knowles, Horng-Tzer Yau, Jun Yin, 10.1214/11-AOP734The Annals of Probability. 413BLaszlo Erdös, Antti Knowles, Horng-Tzer Yau, Jun Yin. Spectral Statistics of Erdös -Renyi Graphs I: Local Semicircular Law. The Annals of Probability, 2013, Vol. 41, No. 3B, 2279-2375 DOI: 10.1214/11-AOP734.
On the circular law. Friedrich Götze, Alexander N Tikhomirov, Annals of Probability. 38Friedrich Götze, Alexander N. Tikhomirov. On the circular law. Annals of Probability, 2010, vol. 38, 1444-1491.
Moment inequalities for linear and nonlinear statistics. Friederich Götze, Alexey A Naumov, Alexander N Tikhomirov, Teor. Veroyatnost. i Primenen., v. 651Friederich Götze, Alexey A. Naumov, Alexander N. Tikhomirov. Moment inequalities for linear and nonlinear statistics. Teor. Veroyatnost. i Primenen., v.65, issue 1, p. 3-22, 2020.
Jong Yun Hwang and Ji Oon Lee. Local Law and Tracy-Widom Limit for Sparse Sample Covariance Matrices. Bernoulli 26(3): 2400-2435 (2020).
Alexander Litvak, Alain Pajor, Mark Rudelson, Nicole Tomczak-Jaegermann. Smallest singular value of random matrices and geometry of random polytopes. Adv. Math. 195 (2005), 491-523.
Singularity of sparse Bernoulli matrices. Alexander E Litvak, Konstantin E Tikhomirov, 10.1215/00127094-2021-0056Duke Mathematical Journal. 2022Alexander E. Litvak, Konstantin E. Tikhomirov. Singularity of sparse Bernoulli matrices, Duke Mathematical Journal, 2022, 1135-1233 (1 April 2022). DOI: 10.1215/00127094-2021-0056.
The smallest singular value of a random rectangular matrix. Mark Rudelson, Roman Vershynin, 10.1002/cpa.20294Communications on Pure and Applied Mathematics. 6212Mark Rudelson, Roman Vershynin. The smallest singular value of a random rectangular matrix. Communications on Pure and Applied Mathematics 62(12), 2009, pp.1707-1739, DOI: 10.1002/cpa.20294.
The Littlewood -Offord problem and invertibility of random matrices. Mark Rudelson, Roman Vershynin, Advances in Mathematics. 218Mark Rudelson, Roman Vershynin. The Littlewood -Offord problem and invertibility of random matrices. Advances in Mathematics, 218(2008), 600-633.
Sparse circular law under minimal assumptions. Geometric and Functional Analysis. Mark Rudelson, Konstantin Tikhomirov, 29Mark Rudelson, Konstantin Tikhomirov. Sparse circular law under minimal assumptions. Geo- metric and Functional Analysis, 29, pages 561-637(2019).
The expected norm of random matrices. Yoav Seginer, Combinatorics. Probability and Computing. 9Yoav Seginer. The expected norm of random matrices. Combinatorics. Probability and Computing (2000) 9, 149-166.
Konstantin E Tikhomirov, Singularity of random Bernoulli matrices Annals of mathematics. 191Konstantin E. Tikhomirov. Singularity of random Bernoulli matrices Annals of mathematics, vol. 191, pp 592-639.
The smallest singular value of a random rectangular matrix with no moment assumptions on entries. Konstantin E Tikhomirov, Israel Journal of Mathematics. 212Konstantin E. Tikhomirov. The smallest singular value of a random rectangular matrix with no moment assumptions on entries. Israel Journal of Mathematics volume 212, pages 289-314 (2016).
Friedrich Götze, Faculty of Mathematics, Bielefeld University, Bielefeld, Germany. Email address: [email protected]
Alexander N. Tikhomirov, Institute of Physics and Mathematics, Komi Science Center of Ural Branch of RAS, Syktyvkar, Russia. Email address: [email protected]
| [] |
[
"Raman spectroscopic determination of the length, strength, compressibility, Debye temperature, elasticity, and force constant of the C-C bond in graphene",
"Raman spectroscopic determination of the length, strength, compressibility, Debye temperature, elasticity, and force constant of the C-C bond in graphene"
] | [
"X X Yang \nInstitute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina\n",
"J W Li \nInstitute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina\n",
"Z F Zhou \nInstitute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina\n",
"Y Wang \nSchool of Information and Electronic Engineering\nHunan University of Science and Technology\n411201XiangtanChina\n",
"L W Yang \nInstitute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina\n",
"W T Zheng \nDepartment of materials Science\nJilin University\n130012Changchun ChangchunChina\n",
"Chang Q Sun \nInstitute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina\n\nSchool of Electrical and Electronic Engineering\nNanyang Technological University\n639798SingaporeSingapore\n"
] | [
"Institute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina",
"Institute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina",
"Institute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina",
"School of Information and Electronic Engineering\nHunan University of Science and Technology\n411201XiangtanChina",
"Institute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina",
"Department of materials Science\nJilin University\n130012Changchun ChangchunChina",
"Institute for Quantum Engineering\nFaculty of Materials and Optoelectronic Physics\nKey Laboratory of Low-Dimensional Materials and Application Technologies\nMicro-Nano Energy Technology\nXiangtan University\n411105HunanChina",
"School of Electrical and Electronic Engineering\nNanyang Technological University\n639798SingaporeSingapore"
] | [] | From the perspective of bond relaxation and vibration, we have reconciled the Raman shifts of graphene under the stimuli of the number-of-layer, uni-axial-strain, pressure, and temperature in terms of the response of the length and strength of the representative bond of the entire specimen to the applied stimuli. Theoretical unification of the measurements clarifies that: (i) the opposite trends of Raman shifts due to number-of-layer reduction indicate that the G-peak shift is dominated by the vibration of a pair of atoms while the D- and the 2D-peak shifts involve z neighbors of a specific atom; (ii) the tensile strain-induced phonon softening and phonon-band splitting arise from the asymmetric response of the C 3v bond geometry to the C 2v uni-axial bond elongation; (iii) the thermal-softening of the phonons originates from bond expansion and weakening; and (iv) the pressure-stiffening of the phonons results from bond compression and work hardening. Reproduction of the measurements has led to quantitative information about the referential frequencies from which the Raman frequencies shift, the length, energy, force constant, Debye temperature, compressibility, elastic modulus of the C-C bond in graphene, which is of instrumental importance to the understanding of the unusual behavior of graphene. | 10.1039/c1nr11280e | [
"https://export.arxiv.org/pdf/1109.3959v1.pdf"
] | 205,796,395 | 1109.3959 | 0c965beb1bbb35f97b1fbbba3f64c63f617e34a1 |
Raman spectroscopic determination of the length, strength, compressibility, Debye temperature, elasticity, and force constant of the C-C bond in graphene

X. X. Yang [1], J. W. Li [1], Z. F. Zhou [1], Y. Wang [2], L. W. Yang [1], W. T. Zheng [3], Chang Q. Sun [1,4]

[1] Key Laboratory of Low-Dimensional Materials and Application Technologies, Institute for Quantum Engineering, Faculty of Materials and Optoelectronic Physics, and Micro-Nano Energy Technology, Xiangtan University, Hunan 411105, China
[2] School of Information and Electronic Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
[3] Department of Materials Science, Jilin University, Changchun 130012, China
[4] School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
1 Corresponding author.

Keywords: graphene, Raman, bond, lattice dynamics
From the perspective of bond relaxation and vibration, we have reconciled the Raman shifts of graphene under the stimuli of the number-of-layer, uni-axial-strain, pressure, and temperature in terms of the response of the length and strength of the representative bond of the entire specimen to the applied stimuli. Theoretical unification of the measurements clarifies that: (i) the opposite trends of Raman shifts due to number-of-layer reduction indicate that the G-peak shift is dominated by the vibration of a pair of atoms while the D- and the 2D-peak shifts involve z neighbors of a specific atom; (ii) the tensile strain-induced phonon softening and phonon-band splitting arise from the asymmetric response of the C 3v bond geometry to the C 2v uni-axial bond elongation; (iii) the thermal-softening of the phonons originates from bond expansion and weakening; and (iv) the pressure-stiffening of the phonons results from bond compression and work hardening. Reproduction of the measurements has led to quantitative information about the referential frequencies from which the Raman frequencies shift, and the length, energy, force constant, Debye temperature, compressibility, and elastic modulus of the C-C bond in graphene, which is of instrumental importance to the understanding of the unusual behavior of graphene.
Introduction
Overwhelming contributions have been made in recent years, establishing a huge database on the lattice dynamics of the single- and the few-layer graphene and their nanoribbons (GNRs) under externally applied stimuli. The stimuli include the number-of-layer (n), [1,2,3,4] uni-axial compressive and tensile strains (ε), [5,6,7] pressure (P), [8] temperature (T), [9,10] defect density and location, [11] substrate interaction, [12,13] hydrogenation, [14,15] dopant, [16] incident photon polarization, [17] edge conditions, [18,19] etc. A large graphene sheet manifests two Raman phonon modes: i) as the standard first-order Raman process, the G band (~1580 cm−1) was suggested to arise from the in-plane vibration of the sp2 carbon network; [20] ii) the 2D band (~2680 cm−1) was believed to be a second-order process of double resonant Raman feature. [21] In the presence of undercoordinated defect or edge atoms, a defect-induced D band at frequencies around 1345 cm−1 can be resolved, with an intensity that varies with edge conditions. [18,19] The Raman shifts of graphene are very sensitive to the applied stimuli. The D and 2D bands undergo a redshift when the number-of-layer of the graphene is reduced, and the Raman frequencies change with the energy of the incident radiation. [3,22] Under 514.5 nm radiation, the D mode shifts from 1367 to 1344 cm−1 and the 2D mode from 2720 to 2680 cm−1 when the bulk graphite evolves into the monolayer graphene. [4] In contrast, a blueshift happens to the G mode, which shifts from 1582 to 1587 cm−1 when the n is reduced from 20 to one. [3,4] When the n is increased from a few to multiple, the Raman peaks turn from the dominance of the monolayer component to that of the bulk component. [3] The opposite n-dependent trends indicate that the G mode and the D/2D modes are governed by different mechanisms, though their origins remain unclear.
When the graphene is under a uni-axial tensile strain [5] or heating, [23] the Raman peaks shift to lower frequencies. When pressure is increased, the Raman frequency shifts up sublinearly. [8] The uni-axial tensile strain can soften and split the G and 2D bands, and the extent of the band splitting depends on the magnitude and the relative direction of the strain with respect to the crystal orientation. [7,24] Under compressive strain, the Raman shift extrapolates the trend of the tensile strain. [25] Remarkably, the spectral intensity of the D band is one order higher at the armchair edge than at the zigzag edge of the GNR. [18,19] In order to describe the effect of P and T on the Raman shifts in general, empirical quadratic functions are often used, [26,27]
ω(T) = ω(0) + c_1(T − T_0) + c_2(T − T_0)^2   (thermal softening)
ω(P) = ω(0) + aP + bP^2   (pressure hardening)

where ω(0) is the reference frequency at the ambient conditions. The Grüneisen parameter, γ = −∂Lnω/∂Lnε, has also been employed to describe the strain effect. [5] Recent density-functional theory calculations [24] suggested that the strain activates the processes of "two-phonon double-resonance scattering", which are responsible for the strain-induced phonon band splitting and softening. Although the given empirical models could fit the measurements independently, a theoretically consistent formulation of the measurements remains challenging. In particular, the number-of-layer effect on the Raman shift remains theoretically unexplored.
Consistent insight into the mechanism behind the multiple-factor-stimulated Raman shift and, most strikingly, the extraction of quantitative information about the bonding identities from the sophisticated measurements are highly demanded and should be what the sophisticated experiments provide.
A physical model should meet the following criteria and be able to:
i) reproduce the measurements with meaningful parameters;
ii) provide consistent insight into the physical mechanism behind observations;
iii) extract quantitative information from the measurements; and, iv) establish the correlation between the measurements under various stimuli such as size, strain, pressure and temperature in the present course.
It is our thought that the external stimuli activate a certain set of intrinsic parameters that govern the Raman shift. The vibration frequency should depend on the stimuli in a hidden form, instead. The relationship of the "vibration-intrinsic parameter-stimuli" is to be established. The objective of this contribution is to show that incorporating our original bond order-length-strength (BOLS) correlation theory [28,29,30,31] into the Raman spectroscopy has enabled us to reconcile the effects of the number-of-layer, pressure, strain, and temperature on the Raman shifts of graphene. From the perspective of the local bond averaging (LBA) approach, [32] we focus on the formulation of the Raman shifts as a function of the order, length and strength of the representative bond of the entire specimen and their response to the applied stimuli. Agreement between the modeling predictions and the measurements has led to consistent insight into the mechanism behind these fascinations, revealing quantitative information on the referential frequency ω_x(1) and its bulk shift, the binding energy E_b, atomic cohesive energy E_coh, binding energy density E_den, Debye temperature θ_D, elastic modulus B, force constant k, compressibility β, and the effective coordination number (CN, or z) for the few-layer graphene and their bond lengths and energies, which is beyond the scope of available approaches.
Principles
BOLS correlation
Extended from the "atomic coordination-radius" correlation premise of Goldschmidt, [33] Pauling [34] and Feibelman [35] and experimental evidence [36,37,38,39,40] to include the bond energy response to the spontaneous contraction of the bonds between undercoordinated atoms, the BOLS correlation indicates that the shorter and stronger bonds between undercoordinated atoms cause local densification and quantum entrapment of bonding electrons and binding energy, which modulate the local atomic cohesive energy, the binding energy density, the Hamiltonian of the entire specimen and the relevant properties such as mechanical strength, thermal stability, lattice dynamics, photonic, magnetic and dielectric properties associated with atomic undercoordination. [28] Numerically, the BOLS correlation is expressed
as

C_z = d_z/d_b = 2/{1 + exp[(12 − z)/(8z)]}   (bond contraction)
E_z = C_z^{−m} E_b   (bond strengthening)   (1)
The subscripts z and b denote an atom with z coordination neighbors and in the bulk as a standard, respectively. The bond contraction coefficient C_z varies only with the effective z of the atom of concern, regardless of the nature of the bond or the solid dimension. The index m = 2.56 is the bond nature indicator of carbon. [30] Using the length of 0.154 nm for the C-C bond in diamond and 0.142 nm in graphite, one can readily derive the effective CN for the bulk graphite as z_g = 5.335, according to the bond contraction coefficient (eq 1). For the C atom in the bulk diamond, the effective CN is 12 instead of 4 because the diamond structure is formed by an interlock of two fcc unit cells. By the relation E_z = C_z^{−m}E_b and the known atomic cohesive energy of diamond, 7.37 eV, [41] the single C-C bond energy in diamond is E_b = 7.37/12 = 0.615 eV, and it is E_3 = 1.039 eV in the monolayer graphene of z = 3. The cohesive energy per atom in graphene is 3.11 eV/atom.
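The numbers quoted above follow directly from eq (1). The sketch below (Python; a minimal illustration written for this text, not part of the original paper) evaluates C_z, solves the effective CN of bulk graphite from the 0.142/0.154 length ratio by bisection, and recovers the graphene bond energy and atomic cohesive energy:

```python
import math

def C(z):
    """BOLS bond-contraction coefficient C_z = d_z/d_b of eq (1)."""
    return 2.0 / (1.0 + math.exp((12.0 - z) / (8.0 * z)))

m = 2.56          # bond-nature indicator of carbon
d_b = 0.154       # C-C bond length in diamond (nm)
E_b = 7.37 / 12   # single C-C bond energy in diamond (eV): 7.37 eV over 12 neighbors

# Effective CN of bulk graphite: solve C(z) = 0.142/0.154 (C is increasing in z)
target = 0.142 / 0.154
lo, hi = 2.0, 12.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if C(mid) < target:
        lo = mid
    else:
        hi = mid
z_g = 0.5 * (lo + hi)

# Monolayer graphene (z = 3): bond length, bond energy, atomic cohesive energy
d_3 = d_b * C(3)
E_3 = E_b * C(3) ** (-m)
E_coh = 3 * E_3
```

Running this reproduces the values stated in the text: z_g ≈ 5.335, d_3 ≈ 0.125 nm, E_3 ≈ 1.039 eV and an atomic cohesive energy of about 3.11 eV/atom.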
Theoretical reproduction of the elastic modulus enhancement, [30,42,43] the melting point depression of the single-walled carbon nanotube (SWCNT), [30,44] the C 1s binding energy shift of the graphene edge, graphene, graphite, and diamond, [45,46] the width dependence of the band gap expansion of GNRs, [47] and the Dirac-Fermi polaron generation and hydrogenation [29] have confirmed consistently that the C-C bond at the graphene edge contracts by 30% from 0.154 to 0.107 nm with a 152% bond energy gain. [30,42,43] For the 3-coordinated GNR interior atoms, the C-C bond contracts by 18.5% to 0.125 nm with a 68% increase of bond energy. [42] The Young's modulus of the SWCNT was determined to be 2.595 TPa with respect to the bulk modulus of 865 GPa. The effective wall thickness of the SWCNT was determined to be 0.142 nm. The CN-reduction-induced bond contraction is in good accordance with the supershort Cr-Cr dimer distance of 0.180 nm compared with that in the bulk of 0.254 ~ 0.270 nm. [48,49] These findings agree well with what was discovered by Girit et al [50] in their transmission electron microscopic study of the thermodynamic behavior of graphene. They found that breaking a C-C bond of the 2-coordinated carbon atom near a vacancy requires 7.50 eV per bond, which is 32% higher than the energy (5.67 eV/bond) required for breaking one bond of a 3-coordinated carbon atom in suspended graphene. These findings provide further evidence for the BOLS prediction of the shorter and stronger bonds between undercoordinated carbon atoms.
The Raman shifts
It is emphasized that the solution to the Hamiltonian of a vibration system is a Fourier series with multiple terms whose frequencies are folds of that of the primary mode. [51] Therefore, the frequency of the secondary 2D mode should be twofold that of the primary D mode. This fact may clarify the origin of the 2D mode, commonly referred to as the double resonant Raman process. Any perturbation to the Hamiltonian, such as the interlayer Van der Waals force, the dipole-dipole interaction, or a nonlinear effect, may cause the folded frequencies to deviate from the ideal values. The fact that the number-of-layer reduction induces the D peak to shift from 1367 to 1344 cm−1 and the 2D peak from 2720 to 2680 cm−1 is right within this expectation.
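As a quick arithmetic check of the frequency-folding argument (using only the peak positions quoted above), twice the D frequency lies within about 0.5% of the measured 2D frequency, the small residual being the perturbation-induced deviation:

```python
# Peak positions (cm^-1) quoted in the text for 514.5 nm excitation
D_mono, twoD_mono = 1344.0, 2680.0   # monolayer graphene
D_bulk, twoD_bulk = 1367.0, 2720.0   # bulk graphite

# Deviation of the 2D frequency from exactly twice the D frequency
dev_mono = 2 * D_mono - twoD_mono    # 8 cm^-1
dev_bulk = 2 * D_bulk - twoD_bulk    # 14 cm^-1
```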
The opposite trends of the Raman shifts due to the change of the number-of-layer indicate that the G mode is different from the D/2D modes in origin; therefore, one cannot expect to unify them simultaneously using an identical model. On the other hand, the applied strain, pressure, temperature, or atomic-CN variation can modulate the length and energy of the involved bonds, or their representative, and hence the phonon frequencies, in terms of bond relaxation and vibration. Band splitting is expected to happen if the applied uni-axial strain mismatches the C_3v group symmetry of graphene. The extent of band splitting under strain depends on the extent of the mismatch between the crystal geometry and the strain.
Analytical solutions
Generally, one can measure the Raman resonance frequency as
ω_x = ω_x(0) + Δω_x,   x = D, 2D, G,

where ω_x(0) is the reference point from which the Raman shift Δω_x proceeds under the applied stimuli. The ω_x(0) may vary with the frequency of the incident radiation and substrate conditions but not the nature and the trends induced by the applied stimuli. By expanding the interatomic potential in a Taylor series around its equilibrium and considering the effective atomic CN z, we can derive the vibration frequency shift of the harmonic system,
u(r) = u(d_z) + (1/2!)(∂^2u/∂r^2)|_{r=d_z} x^2 + (1/3!)(∂^3u/∂r^3)|_{r=d_z} x^3 + ... ≅ u(d_z) + (1/2)μω^2 x^2,   x = r − d_z,

with μω^2 = (∂^2u/∂r^2)|_{r=d_z}. As the first-order approximation, the lattice vibration frequency can be detected as the Raman shift Δω_x(z, d_z, E_z) from the reference point ω_x(1, d_b, E_b), which depends functionally on the order z, length d_z, and energy E_z of the representative bond for the entire specimen and the reduced mass μ = m_1m_2/(m_1 + m_2) of the dimer atoms of the representative bond, with

Δω_x(z, d_z, E_z) = ω_x(z, d_z, E_z) − ω_x(1, d_b, E_b) ∝ [(∂^2u/∂r^2)|_{r=d_z}/μ]^{1/2} ∝ (E_z/μ)^{1/2}/d_z × { z   (D and 2D modes); 1   (G mode) }.   (2)
The number-of-layer (coordination number) reduction induced D/2D mode redshift and G mode blueshift suggest that the G mode is dominated by the interaction between two neighboring atoms while, as the double phonon resonance, the D/2D modes involve all the z neighbors of a specific atom. According to the BOLS, the E_z^{1/2}/d_z increases when the z is reduced, and hence a blueshift happens, which is the case of the G mode; however, if the z is involved, the zE_z^{1/2}/d_z drops with z, which is right the case of the D and 2D modes under the reduction of the number-of-layer. This clarification may deepen our insight into the bond origin of the Raman active modes not only in graphene but also in TiO_2 [52] with the size-induced abnormal blueshift of the 141 cm−1 vibration.
The Raman shift Δω_x(z, d_z, E_z) is a hidden function of the stimuli T and P. For convenience, we may rename ω_x(0) as ω_x(1). Incorporating the variables of atomic coordination, strain, temperature, and pressure (z, ε, T, P) into expression (2), we have the general form of the relative Raman shift,

Δω_x(z, ε, T, P)/Δω_x(z_0, 0, T_0, P_0) = [ω_x(z, ε, T, P) − ω_x(1)]/[ω_x(z_0, 0, T_0, P_0) − ω_x(1)]
= (z/z_0)[d(z_0, 0, T_0, P_0)/d(z, ε, T, P)][E(z, ε, T, P)/E(z_0, 0, T_0, P_0)]^{1/2},   (3)

where (the z/z_0 factor applies to the D/2D modes only)

d(z, ε, T, P) = d_b C_z (1 + ε)(1 + ∫_{T_0}^{T} α(t)dt)(1 − ∫_{P_0}^{P} β(p)dp)   (bond relaxation)
E(z, T, P) = E_b C_z^{−m}(1 + Δ_J)   (bond energy perturbation)

T_0 and P_0 are the references at the ambient conditions; α and β are the thermal expansion coefficient and the compressibility. The Δ_J (J = T, P) is the energy perturbation given explicitly in eq (8).
Number-of-layer and strain dependence
Taking z_g = 5.335 for the bulk graphite as a reference, we can derive from eqs (2) and (3) the reference frequencies ω_x(1). Letting

Δω_x(z)/Δω_x(z_g) = (z/z_g)(C_z/C_{z_g})^{−(m/2+1)}   (D and 2D modes)
Δω_x(z)/Δω_x(z_g) = (C_z/C_{z_g})^{−(m/2+1)}   (G mode),

we have,

ω_x(z) = ω_x(1) + [ω_x(z_g) − ω_x(1)] Δω_x(z)/Δω_x(z_g).   (4)
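The opposite z-dependent trends follow directly from these ratios. The sketch below (Python; an illustration assuming the reconstructed form of eq (4) with m = 2.56 and z_g = 5.335) evaluates both factors and then solves a reference frequency from the two quoted D-mode measurements; the solved value is for illustration only and need not coincide with the paper's Table 1:

```python
import math

def C(z):
    """BOLS bond-contraction coefficient of eq (1)."""
    return 2.0 / (1.0 + math.exp((12.0 - z) / (8.0 * z)))

m, z_g = 2.56, 5.335
expo = -(m / 2 + 1)                     # from omega ∝ sqrt(E_z)/d_z = C_z^{-(m/2+1)}

# Relative shift factors f(z) = Δω(z)/Δω(z_g) at z = 3 (monolayer)
f_G = (C(3) / C(z_g)) ** expo           # G mode: a dimer, no z factor -> > 1 (blueshift)
f_D = (3 / z_g) * f_G                   # D/2D modes: all z neighbors -> < 1 (redshift)

# Illustrative solve of the D-mode reference from ω_D(3) = 1344, ω_D(z_g) = 1367:
# ω_D(3) = w1 + (ω_D(z_g) - w1) * f_D
w1_D = (1344.0 - 1367.0 * f_D) / (1.0 - f_D)
```

With these inputs, f_G ≈ 1.33 (G stiffens upon layer-number reduction) while f_D ≈ 0.75 (D/2D soften), reproducing the opposite trends discussed above.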
From the matching to the number-of-layer dependence, we can derive the reference frequencies ω_x(1). Similarly, the strain effect is given as,

Δω_x(z, ε)/Δω_x(z, 0) = [ω_x(z, ε) − ω_x(1)]/[ω_x(z, 0) − ω_x(1)] = (1 + ε)^{−1}(1 − k′ε^2)^{1/2},  with k′ = kd_z^2/(2E_z) = const.   (5)
In order to reflect the asymmetric responses of the C_3v bond geometry to the applied C_2v strain, we introduced a strain coefficient λ bounded by 0 ≤ λ ≤ 1. The ε′ = λε applies to the branch responding less to the applied strain. By definition, we can derive the Grüneisen parameter from (5),

γ(ε) = −∂LnΔω_x(z, ε)/∂ε = (1 + ε)^{−1} + k′ε(1 − k′ε^2)^{−1}.   (6)
From matching the strain dependence, we can derive the force constant k of the C-C bond and the strain coefficient λ, without needing the Grüneisen parameter, which is in fact not a constant.
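The strain response can be sketched numerically (Python; based on the reconstructed form of eq (5) with the fitted k′ = 0.30; the 2% strain and the λ values 0.31 and 1.0 are taken from the discussion of Figure 2 below as an illustration). Tension softens both branches, the λ < 1 branch less, which produces the band splitting; compression extrapolates to a blueshift:

```python
def strain_factor(eps, kp=0.30):
    """Reconstructed eq (5): Δω(ε)/Δω(0) = (1 + ε)^-1 (1 - k'·ε²)^(1/2)."""
    return (1.0 - kp * eps * eps) ** 0.5 / (1.0 + eps)

eps = 0.02                           # 2% uni-axial tensile strain (illustrative)
upper = strain_factor(0.31 * eps)    # branch responding less (λ = 0.31)
lower = strain_factor(1.00 * eps)    # branch responding fully (λ = 1.0)
```

Both factors fall below unity under tension (redshift), and their difference is the band splitting; strain_factor(−0.02) exceeds unity, i.e., the compressive side stiffens.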
Pressure and temperature dependence
Likewise, using the approximation 1 + x ≅ exp(x) at x << 1, we can formulate the thermal and pressure effects, [53]

Δω_x(z, T, P)/Δω_x(z, T_0, P_0) ≅ exp(∫_{P_0}^{P} β(p)dp − ∫_{T_0}^{T} α(t)dt)(1 − Δ_T + Δ_P)^{1/2}.   (7)

The thermally- and mechanically-induced energy perturbations Δ_T and Δ_P follow the relations, [32]

Δ_T = [1/E_coh(z)] ∫_0^T C_v(t)dt,  with the two-dimensional Debye specific heat C_v(t) = 4R(t/θ_D)^2 ∫_0^{θ_D/t} x^3 e^x (e^x − 1)^{−2} dx
Δ_P = −(1/E_z) ∫_{V_0}^{V} p(v)dv = −(V_0/E_z) ∫_1^x p(x′)dx′,  x = V/V_0   (non-linear),

with p(x) given by the Birch-Mürnaghan (BM) equation of state,

p(x) = (3B_0/2)(x^{−7/3} − x^{−5/3})[1 + (3/4)(B_0′ − 4)(x^{−2/3} − 1)].   (8)
The Δ_T is the integral of the specific heat reduced by the bond energy in the two-dimensional Debye approximation. When the measuring temperature T is higher than θ_D, the two-dimensional specific heat C_v approaches a constant of 2R (R is the ideal gas constant). The atomic cohesive energy E_coh = zE_z and the θ_D are the only adjustable parameters in calculating the Δ_T. The Δ_P is calculated based on the integral of the Birch-Mürnaghan (BM) equation, [54,55] or the x(p).
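Both perturbation integrals in eq (8) are straightforward to evaluate numerically. The sketch below (Python; a minimal illustration with assumed parameters θ_D = 540 K, B_0 = 865 GPa and B_0′ = 4, the first two quoted elsewhere in this text) verifies that the two-dimensional Debye specific heat approaches 2R at T >> θ_D and scales as T² at low T, and that the BM equation with these inputs implies a compressibility of order 10⁻³ GPa⁻¹:

```python
import math

R = 8.314  # ideal gas constant, J mol^-1 K^-1

def cv_2d(T, theta_D=540.0, n=2000):
    """2-D Debye specific heat: C_v = 4R (T/θ_D)² ∫₀^{θ_D/T} x³ eˣ (eˣ-1)⁻² dx,
    evaluated with a midpoint rule (the integrand tends to x as x -> 0)."""
    y = theta_D / T
    h = y / n
    s = sum(((i - 0.5) * h) ** 3 * math.exp((i - 0.5) * h)
            / math.expm1((i - 0.5) * h) ** 2 for i in range(1, n + 1))
    return 4.0 * R * (T / theta_D) ** 2 * s * h

def p_BM(x, B0=865.0, B0p=4.0):
    """Birch-Mürnaghan equation of state; x = V/V0, pressure in GPa."""
    return 1.5 * B0 * (x ** (-7.0 / 3) - x ** (-5.0 / 3)) \
        * (1.0 + 0.75 * (B0p - 4.0) * (x ** (-2.0 / 3) - 1.0))

# Linear-response estimate of the compressibility from a 1% volume change
beta_est = (1.0 - 0.99) / p_BM(0.99)   # GPa^-1
```

With these assumed inputs, beta_est comes out near 1.1×10⁻³ GPa⁻¹, of the same order as the compressibility fitted from the pressure-dependent Raman shift reported below.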
Experimental and calculation procedures
The well-established Raman database as functions of the number-of-layer, [3,22] compressive [5,25] and tensile [25] strain, temperature, [56] and pressure [8] forms the basis of the present analysis.
Results and discussion
Number-of-layer dependence
With the given Raman frequencies of the 2D peak shifting from 2720 to 2680 cm−1 and the D peak from 1367 to 1344 cm−1 when the graphite (z_g) turns into the monolayer (z = 3) graphene, [3,22,57] and the G mode shifting from 1582 to 1587 cm−1, [3,4] we can calibrate the z-dependent relative shift of the possible modes. Figure 1 shows the modeling reproduction of the z-dependent Raman frequencies of (a) the D/2D modes [3,5,25] and (b) the G mode. The inset in (b) shows the original experimental ω-n data for the G mode. [3,4] Panel (c) is the z-n transformation derived from (a) and (b). It is seen that when the n is greater than 6, the z reaches and then maintains almost the bulk graphite value of 5.335. The consistency between the predictions and the measurements of the z-dependent Raman shifts and the z-n transformation function for the three modes evidences the essentiality and appropriateness of the proposed mechanisms for the lattice vibration in graphene.
ω_x(z) = ω_x(1) + [ω_x(z_g) − ω_x(1)](z/z_g)(C_z/C_{z_g})^{−(m/2+1)}   (D and 2D, cm−1)
ω_x(z) = ω_x(1) + [ω_x(z_g) − ω_x(1)](C_z/C_{z_g})^{−(m/2+1)}   (G, cm−1),   (9)

with the reference frequencies ω_x(1) (x = D, 2D, G) determined by matching the measured monolayer (z = 3) and bulk (z = z_g) frequencies.
Strain-induced red shifting and band splitting
Combining eqs (5) and (9), we have the joint z- and ε-dependent Raman shifts:

ω_x(z, ε) = ω_x(1) + [ω_x(z_g, 0) − ω_x(1)](z/z_g)(C_z/C_{z_g})^{−(m/2+1)}(1 + ε)^{−1}(1 − k′ε^2)^{1/2}   (cm−1; for the G mode the z/z_g factor is dropped).   (10)

Figure 2 shows (a) the uni-axial compressive [5,25] and tensile [25] strain induced D/2D mode red shifting without band splitting and (b) the tensile strain induced 2D mode red shifting and band splitting. [24] For all the D/2D and G modes, the reduced force constant k′ = kd_z^2/(2E_z) = 0.30 has been derived, corresponding to k = 6.283 N/m for graphene (z = 3).
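Converting the dimensionless k′ into an absolute force constant is a one-line evaluation (Python; using the z = 3 bond length and energy derived earlier; the small difference from the quoted 6.283 N/m reflects rounding of d_z and E_z):

```python
eV = 1.602176634e-19   # J per eV (exact SI value)

kp = 0.30              # reduced force constant k' = k·d_z²/(2E_z), fitted above
d3 = 0.125e-9          # m, C-C bond length in graphene (z = 3)
E3 = 1.039 * eV        # J, C-C bond energy in graphene (z = 3)

k  = 2.0 * kp * E3 / d3 ** 2   # ≈ 6.4 N/m; the text quotes 6.283 N/m
k0 = 1.5 * 6.283               # C-C bond force constant k0 = 3k/2 ≈ 9.42 N/m
```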
The inset in (a) illustrates the asymmetric response of the three bonds of C_3v group symmetry, denoted 1, 2, and 3, to the uni-axial tensile strain. There are two extreme situations: at θ = 0°, where the strain is along bond 2, ε_1 = ε_3 = λε_2 < ε_2; at θ = 30°, where the strain is perpendicular to bond 3, ε_1 = ε_2 > ε_3 ≈ 0. The ε_2 reaches its maximum at θ = 0°. The bond configuration symmetry allows us to focus on the angle θ, ranging from 0° to 30°, between a specific bond and the strain.
There should be a branch retaining the original frequency, as ε_3 ≈ 0 at θ = 30°, in panel (a); panel (b) should correspond to θ = 0° with band splitting.

Figure 2 BOLS reproduction of (a) the uni-axial compressive [5,25] and tensile [25] strain induced Raman shift of the D/2D bands and (b) the tensile strain induced 2D band red shifting and band splitting. [24] The inset in (a) illustrates the anisotropic response of the C_3v bonds to the uni-axial strain. There are two extreme situations: at θ = 0°, ε_1 = ε_3 = λε_2 < ε_2; at θ = 30°, ε_1 = ε_2 > ε_3 ≈ 0.
In panel (b), the = 0.31 for the upper and = 1.0 for the lower branches. This result means that the bonds labeled 1 and 3 in the inset are elongated by 31% of that of the bond 2 if the stain is along bond 2. As the changes with the relative direction between the strain and the crystal orientation, any possible extent of splitting and frequency variation with strain can be reproduced, which could further explain how the "two-phonon double-resonance" proceeds under the given circumstances.
= =2 0 /3. Hence,
the C-C bond force constant 0
=3 0 /2 ~ 9.424 N/m.
Pressure and temperature dependence
The matching to the measured T-dependent Raman shift of the 2D mode in Figure 3a turns out that D = 540 K, with the given atomic cohesive energy of 3. is proportional to T 2 for the two-dimensional system at very low temperatures.
These results imply that the θ D determines the width of the shoulder and the 1/E coh and the thermal expansion coefficient determine the slope of the curve at high temperatures. The match to the measured P-dependent Raman shift in Figure 3b gives rise to the compressibility of = 1.14510 -3 (GPa -1 ) and = 7.6310 -5 (GPa -2 ) and the energy density of 320 eV/nm 3 . Figure 3 BOLS reproduction of the measured (a) temperature [56] and (b) pressure [8] dependent
Raman shift of the 2D mode of graphene and CNT gives rise to the Debye temperature of 504 K with the given atomic cohesive energy of 3.11 eV/atom, and the compressibility and energy density as given in Table 1. [29] confirmed that the Dirac-Fermi polarons preferentially generate at the zigzag-edge of graphene [59] and at graphite surface atomic vacancies [60] because of the longer and uniform 3d distance between the dangling bond electrons along the edge. As illustrated in Figure 4, the dangling bond electrons at the edges of the armchair-and the reconstructed zigzag graphene (5 and 7 atomic rings) tend to form quasi-triple-bond between the edge atoms of shorter d distance. The isolation and polarization of the unpaired dangling bond electrons at the zigzag edge by the locally and deeply entrapped core and bonding electrons may scatter the incident radiation substantially and hence lowers the Raman reflectivity of the D band at the zigzag edge compared with that at the arm-chaired edge. [18,19]
Conclusion
We have formulated, for the first time, the number-of-layer, uniaxial-strain, pressure, and temperature dependent Raman shifts of graphene in a unified form, based on the BOLS correlation, in terms of the response of the length and energy of the representative bond to the applied stimuli. Numerical reproduction of the measurements clarified that: (i) the number-of-layer induced D and 2D redshift and the G mode blueshift are ruled by different mechanisms resulting from the undercoordination-induced bond strain and bond strength gain; the D/2D vibration involves all the z neighbors of a C atom while the G mode involves only a dimer; (ii) the strain-induced phonon softening and band splitting arise from the anisotropic bond elongation and bond weakening; (iii) the thermal softening originates from bond expansion and bond weakening due to vibration; and (iv) the mechanical stiffening results from bond compression and bond strengthening due to mechanical work hardening. Reproduction of the measurements has led to quantitative information, as summarized in Table 1, on the referential frequencies for each mode from which the Raman shifts proceed, the bond length, bond energy, atomic cohesive energy, binding energy density, force constant, Debye temperature, compressibility, elastic modulus of graphene, and the effective coordination numbers for the few-layer graphene, which is of instrumental importance to the understanding of the unusual behavior of graphene. These findings and understandings demonstrate the essentiality of the proposed approach, which empowers Raman spectroscopy immensely in gaining quantitative information about the dynamics of the representative bond of a specimen.
to be obtained from matching theory to the measurements. The reduced mass of the dimer remains a constant unless chemical adsorption or isotopes are involved. Therefore, the Raman shift provides a powerful tool for detecting any change of the order, length, strength, and reduced mass of the representative bond of the specimen. Here we use proportional relations based on dimensionality analysis, as we do not need the exact values of the hidden constants in the numerical expressions when seeking the relative change of the concerned quantities. What we are concerned with are the bonding origins of the Raman shifts and the quantitative information extracted from the sophisticated measurements.
by the applied stimuli. The summation and the product proceed over all the possible Raman modes. The α(t) and β(p) are, respectively, the thermal expansion coefficient and the compressibility. The k(ε) is the effective force constant and η(t) the specific heat of the representative bond. These expressions indicate that mechanical work hardening by compression or by a compressive strain will shorten and strengthen the C-C bond, while thermal vibration or a tensile strain will elongate and weaken it. Atomic CN reduction shortens and strengthens the C-C bond, according to the BOLS correlation. The generalized form indicates that we can consider all the stimuli either individually or collectively, depending on the experimental conditions.
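A minimal linearized sketch of this generalized multiplicative response (an assumption: the integrals over α(t) and β(p) are reduced to α·ΔT and β·P, and all names are illustrative; the numbers are those given in the text and Table 1):

```python
d0 = 0.142       # nm, C-C bond length in bulk graphite (given in the text)
alpha = 9.0e-6   # K^-1, thermal expansion coefficient (Table 1)
beta = 1.145e-3  # GPa^-1, linear compressibility (Table 1)

def bond_length(cz=1.0, strain=0.0, dT=0.0, P=0.0):
    """Bond length under coordination contraction (cz), mechanical strain,
    heating by dT (K) and hydrostatic pressure P (GPa): each stimulus
    rescales the reference length independently (multiplicative response)."""
    return d0 * cz * (1.0 + strain) * (1.0 + alpha * dT) * (1.0 - beta * P)

print(bond_length(dT=300.0) > d0)  # heating elongates the bond -> True
print(bond_length(P=10.0) < d0)    # compression shortens it -> True
```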
between the n and the effective z of the n-layered graphene.
effective force constant of all the C3v bonds of a given atom. Practically, one measures the average strain of the entire specimen rather than that of an individual bond with force constant k0. Because of the C3v symmetry of graphene, an applied uniaxial or biaxial strain differentiates the actual strains of the three bonds. The magnitude of the strain and the strain energy of each bond vary with the relative direction between the strain and the crystal orientation. It is therefore not surprising that mechanical strain induces phonon band splitting.
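For orientation only, a naive geometric projection of a uniaxial strain onto the three C3v bonds (a simplification that ignores Poisson contraction and bond bending, so it yields a ratio of 0.25 rather than the fitted 0.31; all names are illustrative):

```python
import math

def bond_strains(eps, theta_deg):
    """Project a uniaxial strain eps applied at angle theta_deg onto the
    three C3v bonds lying at 0, 120 and 240 degrees: eps_i = eps*cos^2(a),
    where a is the angle between the strain axis and bond i."""
    strains = []
    for phi_deg in (0.0, 120.0, 240.0):
        a = math.radians(phi_deg - theta_deg)
        strains.append(eps * math.cos(a) ** 2)
    return strains

# Strain applied along the bond at 0 degrees ("bond 2" of the inset):
s = bond_strains(1.0, 0.0)
ratio = s[1] / s[0]      # fraction of strain picked up by bonds 1 and 3
print(round(ratio, 2))   # 0.25 in this naive model vs the fitted 0.31
```

The mismatch with the fitted 0.31 is expected: the fit absorbs Poisson contraction and bond-bending contributions that the bare cos² projection leaves out.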
V0 is the initial volume of a bond. The variables in P are the binding energy density E_den = E_z/V0 and the compressibility β and β'. The x(P) is another form of the equation of state in terms of the nonlinear compressibility. Matching the BM equation to the x(P) or the measured x-P curve, one can derive the nonlinear compressibility β and β', the bulk modulus B0 and its first-order derivative B0'. Substituting the integrals (8) into (7), we can reproduce the P- and T-dependent Raman shift with the Θ_D, α, E_den, and the compressibility derived from the x(P) relation with the known E_coh.
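The matching step can be illustrated by numerically inverting the third-order Birch-Murnaghan equation of state with the fitted B0 = 690 GPa and B0' = 5 from Table 1 (a sketch; at P → 0 the slope −dx/dP must recover 1/B0):

```python
def bm_pressure(x, B0=690.0, B0p=5.0):
    """Third-order Birch-Murnaghan P(x) in GPa, with x = V/V0."""
    f7, f5, f2 = x ** (-7.0 / 3.0), x ** (-5.0 / 3.0), x ** (-2.0 / 3.0)
    return 1.5 * B0 * (f7 - f5) * (1.0 + 0.75 * (B0p - 4.0) * (f2 - 1.0))

def x_of_p(p, lo=0.5, hi=1.0):
    """Bisection inversion of bm_pressure (P decreases as x grows)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bm_pressure(mid) > p:
            lo = mid      # still too compressed: root lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Zero-pressure compressibility recovered from the inverted x(P):
p = 0.01  # GPa, small enough to probe the linear regime
kappa = (1.0 - x_of_p(p)) / p
print(abs(kappa * 690.0 - 1.0) < 0.01)  # kappa ~ 1/B0 -> True
```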
enabled the verification of the derived formulations and expectations. In the numerical calculations, the known effective bond length d_g = 0.142 nm and z_g = 5.335 for bulk graphite, z = 3 for the single-layer graphene, and m = 2.56 for carbon were taken as input parameters. We assumed the experimental database to be sufficiently accurate; errors in the measurements will limit the accuracy of the derived quantities but not the nature and the trends of the observations. We first calculated the ω-z curves and calibrated them with the known vibration frequencies of the bulk graphite and the monolayer graphene. This calibration leads to the quantification of the referential frequencies ω(1). Based on the z-dependent ω-z curve, we can determine the correlation between the effective atomic CN and the number-of-layer n of the few-layer graphene for all the possible modes. Likewise, matching to the strain-dependent Raman shifts and the strain-induced band splitting, we can determine the force constant and the effective strain of the different branches of the splitting band. Matching to the T- and P-dependence, we can obtain the Θ_D, E_coh, E_den, α, and β values without involving any other adjustable parameters. We have thus unified, for the first time, the Raman shifts of the various modes under the considered stimuli, which has not been realized by any other approach.
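The z-dependence rests on the BOLS bond-contraction coefficient, C_z = 2/{1 + exp[(12 − z)/(8z)]} (the standard BOLS form of ref [28]; it is not reproduced in this excerpt, so treat the exact scaling below as an assumption), with the relative Raman shift scaling as z·C_z^−(m/2+1) for m = 2.56:

```python
import math

M = 2.56  # bond nature index for carbon (given in the text)

def cz(z):
    """BOLS bond-contraction coefficient C_z (standard BOLS form)."""
    return 2.0 / (1.0 + math.exp((12.0 - z) / (8.0 * z)))

def relative_shift(z):
    """Unnormalized z-dependence of the Raman shift: z * C_z^-(m/2+1)."""
    return z * cz(z) ** (-(M / 2.0 + 1.0))

# Undercoordination contracts the bond: C_3 < C_{5.335} < 1, and the
# shift from the referential omega(1) shrinks as layers are removed.
print(round(cz(3.0), 3), round(cz(5.335), 3))
```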
Figure 1 BOLS reproduction of the z dependence of the (a) D/2D modes [3,22] and the (b) G mode Raman frequency, with derivation of the referential vibration frequency of each. The inset in (b) shows the original experimental ω-n data for the G mode. [3,4] Panel (c) shows the correlation function between the atomic CN and the number-of-layer. For n > 6, the z approaches and maintains almost the bulk graphite value of 5.335.
The reduced force constant k' = 0.3k0 and the effective strain of the slow-shifted branch in (b) is ε' = 0.31ε. There should be a constant branch in (a) if θ = 30°. The 2D mode in panel (a) appears to correspond to the θ = 30° case needing a constant branch, because ε₃ ~ 0, and (b) to the θ = 0 situation. Panel (c) and eq (6) show the Grüneisen parameters for the diverged branches of the 2D mode in (b).
From the C3v bond configuration shown as inset in Figure 2a, and the derived effective force constant 6.283 N/m, we can estimate the force constant of the C-C bond in the single-layer graphene. Defining the C-C bond force constant as k0, the bonds labeled 1 and 3 are approximated as in parallel, with the resultant k13 = 2k0. This resultant bond connects with bond 2 in series, and therefore the resultant force constant of the three bonds is k123.
11 eV/atom. The Θ_D is about 1/3 of the melting point, 1605 K, of the SWCNT. [30] At T ~ Θ_D/3, the Raman shift turns gradually from the nonlinear to the linear form as the temperature is increased. The slow decrease of the Raman shift at very low temperatures arises from the
Figure 4 Density functional theory calculated preferential generation of the DFPs with alternative directions (different colors) of spins at the zigzag edge and atomic vacancy of graphene. The DFPs are recognized as the isolation and polarization of the dangling bond electrons by the densely entrapped bonding electrons of the undercoordinated edge atoms that are separated by 3d along the zigzag edge uniformly. The shorter d separation of the dangling bond electrons forms quasi-triple-bonds without being polarized along the armchair edge. [29]
the frequency measured at the ambient temperature and under the atmospheric pressure. The terms Δω_e(T) and Δω_d(T) represent contributions from, respectively, the thermal expansion and the coupling of anharmonic phonons of multiple branches. In the P effect, the freely-adjustable parameters a and b are involved. A Grüneisen parameter,
From the dimensionality analysis, d²u(r)/dr²|_{r=d} is proportional to E_z/d². Equaling the vibration energy of the bond, μ(Δω)²x²/2, to the third term in the Taylor series of the interatomic potential u(r) and omitting the higher-order terms, one has (Δω)² ∝ (1/μ) d²u(r)/dr²|_{r=d} ∝ E_z/(μd²), so that the Raman shift from the referential frequency scales as

Δω = ω(z) − ω(1) ∝ (z/d)(E_z/μ)^{1/2},

with μ the reduced mass of the vibrating dimer.
Table 1 Instrumental information derived from the reproduction of the z-, ε-, T-, and P-dependent Raman shift of monolayer graphene.

Stimuli         | Quantities                              | Values                  | Refs
Number-of-layer | ω_x(1) (cm⁻¹), x = G, 2D, D             | 1566.7, 2562.6, 1276.8  | -
                | ω_x(z_g) − ω_x(1) (cm⁻¹), x = G, 2D, D  | 16.0, 157.4, 90.2       | -
Strain          | k (N m⁻¹)                               | 6.283                   | -
Temperature     | E_coh (eV/atom)                         | 3.11                    | -
                | Θ_D (K)                                 | 540                     | T_m = 1605 [30]
                | α (10⁻⁶ K⁻¹)                            | 9.0                     | -
Pressure        | E_den (eV/nm³)                          | 320                     | -
                | β/β' (10⁻³ GPa⁻¹/GPa⁻²)                 | 1.145/0.0763            | -
                | B0/B0' (GPa/-)                          | 690/5                   | 704/1 [58]
4.4 Edge discriminative Raman reflectivity
Recent progress
Acknowledgements
Financial support from the NSF (Nos. 50832001 and 11172254) of China and the MOE (RG15/09), Singapore, is gratefully acknowledged.
Probing Layer Number and Stacking Order of Few-Layer Graphene by Raman Spectroscopy. Y Hao, Y Wang, L Wang, Z Ni, Z Wang, R Wang, C K Koo, Z Shen, Jtl Thong, Small. 6Hao Y, Wang Y, Wang L, Ni Z, Wang Z, Wang R, Koo CK, Shen Z, and Thong JTL, Probing Layer Number and Stacking Order of Few-Layer Graphene by Raman Spectroscopy. Small, 2010; 6: 195-200.
Vibrational properties of graphene and graphene layers. H Wang, Y Wang, X Cao, M Feng, Lan G , Journal of Raman Spectroscopy. 40Wang H, Wang Y, Cao X, Feng M, and Lan G, Vibrational properties of graphene and graphene layers. Journal of Raman Spectroscopy, 2009; 40: 1791-1796.
Spatially Resolved Raman Spectroscopy of Single-and Few-Layer Graphene. D Graf, F Molitor, K Ensslin, C Stampfer, A Jungen, C Hierold, L Wirtz, Nano Letters. 7Graf D, Molitor F, Ensslin K, Stampfer C, Jungen A, Hierold C, and Wirtz L, Spatially Resolved Raman Spectroscopy of Single-and Few-Layer Graphene. Nano Letters, 2007; 7: 238-242.
Raman Scattering from High-Frequency Phonons in Supported n-Graphene Layer Films. A Gupta, G Chen, P Joshi, S Tadigadapa, Eklund Pc, Nano Letters. 6Gupta A, Chen G, Joshi P, Tadigadapa S, and Eklund PC, Raman Scattering from High- Frequency Phonons in Supported n-Graphene Layer Films. Nano Letters, 2006; 6: 2667-2673.
Uniaxial strain in graphene by Raman spectroscopy: G peak splitting, Gruneisen parameters, and sample orientation. Tmg Mohiuddin, A Lombardo, R R Nair, A Bonetti, G Savini, R Jalil, N Bonini, D M Basko, C Galiotis, N Marzari, K S Novoselov, A K Geim, A C Ferrari, Phys Rev B. 79205433Mohiuddin TMG, Lombardo A, Nair RR, Bonetti A, Savini G, Jalil R, Bonini N, Basko DM, Galiotis C, Marzari N, Novoselov KS, Geim AK, and Ferrari AC, Uniaxial strain in graphene by Raman spectroscopy: G peak splitting, Gruneisen parameters, and sample orientation. Phys Rev B, 2009; 79: 205433.
Raman Mapping Investigation of Graphene on Transparent Flexible Substrate: The Strain Effect. T Yu, Z Ni, C Du, Y You, Y Wang, Z Shen, The Journal of Physical Chemistry C. 112Yu T, Ni Z, Du C, You Y, Wang Y, and Shen Z, Raman Mapping Investigation of Graphene on Transparent Flexible Substrate: The Strain Effect. The Journal of Physical Chemistry C, 2008; 112: 12602-12605.
Probing Strain-Induced Electronic Structure Change in Graphene by Raman Spectroscopy. M Huang, H Yan, T F Heinz, J Hone, Nano Letters. 10Huang M, Yan H, Heinz TF, and Hone J, Probing Strain-Induced Electronic Structure Change in Graphene by Raman Spectroscopy. Nano Letters, 2010; 10: 4074-4079.
High-pressure Raman spectroscopy of graphene. J E Proctor, E Gregoryanz, K S Novoselov, M Lotya, J N Coleman, Physical Review B. 80Proctor JE, Gregoryanz E, Novoselov KS, Lotya M, Coleman JN, and Halsall MP, High-pressure Raman spectroscopy of graphene. Physical Review B, 2009; 80: 073408-073412.
Raman nanometrology of graphene: Temperature and substrate effects. I Calizo, S Ghosh, W Bao, F Miao, Ning Lau, C Balandin, A A , Solid State Communications. 149Calizo I, Ghosh S, Bao W, Miao F, Ning Lau C, and Balandin AA, Raman nanometrology of graphene: Temperature and substrate effects. Solid State Communications, 2009; 149: 1132- 1135.
Temperature effects on the Raman spectra of graphenes: dependence on the number of layers and doping. J L Dattatray, U Maitra, L S Panchakarla, U V Waghmare, Cnr Rao, J. Phys.: Condens. Matter. 2355303Dattatray JL, Maitra U, Panchakarla LS, Waghmare UV, and Rao CNR, Temperature effects on the Raman spectra of graphenes: dependence on the number of layers and doping. J. Phys.: Condens. Matter, 2011; 23: 055303.
Martins Ferreira EH, Moutinho MVO, Stavale F, Lucchese MM, Capaz RB, Achete CA, and Jorio A, Evolution of the Raman spectra from single-, few-, and many-layer graphene with increasing disorder. Physical Review B, 2010; 82: 125429-125438.
Probing the Intrinsic Properties of Exfoliated Graphene: Raman Spectroscopy of Free-Standing Monolayers. S Berciaud, S Ryu, L E Brus, Heinz Tf, Nano Letters. 9Berciaud S, Ryu S, Brus LE, and Heinz TF, Probing the Intrinsic Properties of Exfoliated Graphene: Raman Spectroscopy of Free-Standing Monolayers. Nano Letters, 2008; 9: 346-352.
The effect of substrates on the Raman spectrum of graphene. I Calizo, W Bao, F Miao, C N Lau, A A Balandin, Applied Physics Letters. 91Calizo I, Bao W, Miao F, Lau CN, and Balandin AA, The effect of substrates on the Raman spectrum of graphene. Applied Physics Letters, 2007; 91: 201904.
Thickness-Dependent Reversible Hydrogenation of Graphene Layers. Z Luo, T Yu, K J Kim, Z Ni, Y You, S Lim, Z Shen, S Wang, Lin J , ACS Nano. 3Luo Z, Yu T, Kim KJ, Ni Z, You Y, Lim S, Shen Z, Wang S, and Lin J, Thickness-Dependent Reversible Hydrogenation of Graphene Layers. ACS Nano, 2009; 3: 1781-1788.
Selective Etching of Graphene Edges by Hydrogen Plasma. L M Xie, L Y Jiao, H J Dai, J. Am. Chem. Soc. 132Xie LM, Jiao LY, and Dai HJ, Selective Etching of Graphene Edges by Hydrogen Plasma. J. Am. Chem. Soc., 2010; 132: 14751-14753.
Control of Electronic Structure of Graphene by Various Dopants and Their Effects on a Nanogenerator. H J Shin, W M Choi, D Choi, G H Han, S M Yoon, H K Park, S W Kim, Y W Jin, S Y Lee, J M Kim, J Y Choi, Y H Lee, J. Am. Chem. Soc. 132Shin HJ, Choi WM, Choi D, Han GH, Yoon SM, Park HK, Kim SW, Jin YW, Lee SY, Kim JM, Choi JY, and Lee YH, Control of Electronic Structure of Graphene by Various Dopants and Their Effects on a Nanogenerator. J. Am. Chem. Soc., 2010; 132: 15603-15609.
Influence of the Atomic Structure on the Raman Spectra of Graphite Edges. L G Cancado, M A Pimenta, Bra Neves, Mss Dantas, Physical Review Letters. 93Cancado LG, Pimenta MA, Neves BRA, Dantas MSS, and Jorio A, Influence of the Atomic Structure on the Raman Spectra of Graphite Edges. Physical Review Letters, 2004; 93: 247401- 247405.
Raman Scattering at Pure Graphene Zigzag Edges. B Krauss, Nemes Incze, P Skakalova, V Biro, L P Klitzing, K V Smet, J H , Nano Letters. 10Krauss B, Nemes Incze P, Skakalova V, Biro LP, Klitzing KV, and Smet JH, Raman Scattering at Pure Graphene Zigzag Edges. Nano Letters, 2010; 10: 4544-4548.
Edge chirality determination of graphene by Raman spectroscopy. Y You, Z Ni, T Yu, Z Shen, Applied Physics Letters. 93You Y, Ni Z, Yu T, and Shen Z, Edge chirality determination of graphene by Raman spectroscopy. Applied Physics Letters, 2008; 93: 163112-163115.
Studying disorder in graphite-based systems by Raman spectroscopy. M A Pimenta, G Dresselhaus, M S Dresselhaus, L G Cancado, A Jorio, R Saito, Physical Chemistry Chemical Physics. 9Pimenta MA, Dresselhaus G, Dresselhaus MS, Cancado LG, Jorio A, and Saito R, Studying disorder in graphite-based systems by Raman spectroscopy. Physical Chemistry Chemical Physics, 2007; 9: 1276-1290.
Graphite, and Carbon Nanotubes by Raman Spectroscopy. M S Dresselhaus, A Jorio, R Saito, Characterizing Graphene, Annual Review of Condensed Matter Physics. 1Dresselhaus MS, Jorio A, and Saito R, Characterizing Graphene, Graphite, and Carbon Nanotubes by Raman Spectroscopy. Annual Review of Condensed Matter Physics, 2010; 1: 89- 108.
Probing Graphene Edges via Raman Scattering. A K Gupta, T J Russin, H R Gutierrez, Eklund Pc, ACS Nano. 3Gupta AK, Russin TJ, Gutierrez HR, and Eklund PC, Probing Graphene Edges via Raman Scattering. ACS Nano, 2008; 3: 45-52.
Temperature Dependence of the Raman Spectra of Graphene and Graphene Multilayers. I Calizo, A A Balandin, W Bao, F Miao, C N Lau, Nano Letters. 7Calizo I, Balandin AA, Bao W, Miao F, and Lau CN, Temperature Dependence of the Raman Spectra of Graphene and Graphene Multilayers. Nano Letters, 2007; 7: 2645-2649.
Strain-Dependent Splitting of the Double-Resonance Raman Scattering Band in Graphene. D Yoon, Y-W Son, H Cheong, Phys. Rev. Lett. 106155502Yoon D, Son Y-W, and Cheong H, Strain-Dependent Splitting of the Double-Resonance Raman Scattering Band in Graphene. Phys. Rev. Lett., 2011; 106: 155502.
A Close Look at Fundamental Parameters through Biaxial Straining. F Ding, H Ji, Y Chen, A Herklotz, K Dorr, Y Mei, A Rastelli, O G Schmidt, Stretchable Graphene, Nano Letters. 10Ding F, Ji H, Chen Y, Herklotz A, Dorr K, Mei Y, Rastelli A, and Schmidt OG, Stretchable Graphene: A Close Look at Fundamental Parameters through Biaxial Straining. Nano Letters, 2010; 10: 3453-3458.
Raman Modes of 6H Polytype of Silicon Carbide to Ultrahigh Pressures: A Comparison with Silicon and Diamond. J Liu, Y K Vohra, Physical Review Letters. 72Liu J and Vohra YK, Raman Modes of 6H Polytype of Silicon Carbide to Ultrahigh Pressures: A Comparison with Silicon and Diamond. Physical Review Letters, 1994; 72: 4105-4108.
Temperature dependence of Raman scattering in ZnO. R Cuscó, Alarcón Lladó, E Ibáñez, J Artús, L Jiménez, J Wang, B Callahan, M J , Physical Review B. 75Cuscó R, Alarcón Lladó E, Ibáñez J, Artús L, Jiménez J, Wang B, and Callahan MJ, Temperature dependence of Raman scattering in ZnO. Physical Review B, 2007; 75: 165202-165213.
Size dependence of nanostructures: Impact of bond order deficiency. Progress in Solid State Chemistry. C Q Sun, 35Sun CQ, Size dependence of nanostructures: Impact of bond order deficiency. Progress in Solid State Chemistry, 2007; 35: 1-159.
Discriminative generation and hydrogen modulation of the Dirac-Fermi polarons at graphene edges and atomic vacancies. X Zhang, W T Zheng, J-L Kuo, C Q Sun, Carbon. 49Zhang X, Zheng WT, Kuo J-L, and Sun CQ, Discriminative generation and hydrogen modulation of the Dirac-Fermi polarons at graphene edges and atomic vacancies. Carbon, 2011; 49: 3615- 3621.
Dimension, strength, and chemical and thermal stability of a single C-C bond in carbon nanotubes. C Q Sun, H L Bai, B K Tay, S Li, Jiang Ey, J. Phys. Chem. B. 107Sun CQ, Bai HL, Tay BK, Li S, and Jiang EY, Dimension, strength, and chemical and thermal stability of a single C-C bond in carbon nanotubes. J. Phys. Chem. B, 2003; 107: 7544-7546.
Coordination-Resolved C-C Bond Length and the C 1s Binding Energy of Carbon Allotropes and the Effective Atomic Coordination of the Few-Layer Graphene. C Q Sun, Y Sun, Y G Nie, Y Wang, J S Pan, G Ouyang, L K Pan, Z Sun, J. Phys. Chem. C. 113 Sun CQ, Sun Y, Nie YG, Wang Y, Pan JS, Ouyang G, Pan LK, and Sun Z, Coordination-Resolved C-C Bond Length and the C 1s Binding Energy of Carbon Allotropes and the Effective Atomic Coordination of the Few-Layer Graphene. J. Phys. Chem. C, 2009; 113: 16464-16467.
Thermo-mechanical behavior of low-dimensional systems: The local bond average approach. C Q Sun, Prog. Mater Sci. 54Sun CQ, Thermo-mechanical behavior of low-dimensional systems: The local bond average approach. Prog. Mater Sci., 2009; 54: 179-307.
Crystal structure and chemical correlation. V M Goldschmidt, Berichte Der Deutschen Chemischen Gesellschaft. 60Goldschmidt VM, Crystal structure and chemical correlation. Berichte Der Deutschen Chemischen Gesellschaft, 1927; 60: 1263-1296.
Atomic radii and interatomic distances in metals. L Pauling, J. Am. Chem. Soc. 69Pauling L, Atomic radii and interatomic distances in metals. J. Am. Chem. Soc., 1947; 69: 542- 553.
Relaxation of hcp(0001) surfaces: A chemical view. P J Feibelman, Phys Rev B. 53Feibelman PJ, Relaxation of hcp(0001) surfaces: A chemical view. Phys Rev B, 1996; 53: 13740- 13746.
The effect of gold particle size on Au-Au bond length and reactivity toward oxygen in supported catalysts. J T Miller, A J Kropf, Y Zha, J R Regalbuto, L Delannoy, C Louis, E Bus, J A Van Bokhoven, J. Catal. 240Miller JT, Kropf AJ, Zha Y, Regalbuto JR, Delannoy L, Louis C, Bus E, and van Bokhoven JA, The effect of gold particle size on Au-Au bond length and reactivity toward oxygen in supported catalysts. J. Catal., 2006; 240: 222-234.
Coordination-dependent surface atomic contraction in nanocrystals revealed by coherent diffraction. W J Huang, R Sun, J Tao, L D Menard, R G Nuzzo, J M Zuo, Nat. Mater. 7Huang WJ, Sun R, Tao J, Menard LD, Nuzzo RG, and Zuo JM, Coordination-dependent surface atomic contraction in nanocrystals revealed by coherent diffraction. Nat. Mater., 2008; 7: 308- 313.
Structure of gold nanoparticles suspended in water studied by x-ray diffraction and computer simulations. V Petkov, Y Peng, G Williams, B H Huang, D Tomalia, Y Ren, Phys Rev B. 72195402Petkov V, Peng Y, Williams G, Huang BH, Tomalia D, and Ren Y, Structure of gold nanoparticles suspended in water studied by x-ray diffraction and computer simulations. Phys Rev B, 2005; 72: 195402.
Yang DQ and Sacher E, Initial- and final-state effects on metal cluster/substrate interactions, as determined by XPS: copper clusters on Dow Cyclotene and highly oriented pyrolytic graphite. Appl. Surf. Sci., 2002; 195: 187-195.
Size dependence of core and valence binding energies in Pd nanoparticles: Interplay of quantum confinement and coordination reduction. I Aruna, B R Mehta, L K Malhotra, S M Shivaprasad, J. Appl. Phys. 104Aruna I, Mehta BR, Malhotra LK, and Shivaprasad SM, Size dependence of core and valence binding energies in Pd nanoparticles: Interplay of quantum confinement and coordination reduction. J. Appl. Phys., 2008; 104: 064308-064305.
Introduction to Solid State Physics. C Kittel, Wiley57New YorkKittel C, Introduction to Solid State Physics. Wiley: New York,1996: pp 57.
E W Wong, P E Sheehan, C M Lieber, Nanobeam Mechanics: Elasticity, Strength, and Toughness of Nanorods and Nanotubes. 277Wong EW, Sheehan PE, and Lieber CM, Nanobeam Mechanics: Elasticity, Strength, and Toughness of Nanorods and Nanotubes. Science, 1997; 277: 1971-1975.
Bending and buckling of carbon nanotubes under large strain. M R Falvo, G J Clary, R M Taylor, Chi V , Brooks Jr, F P Washburn, S Superfine, R , Nature. 389Falvo MR, Clary GJ, Taylor RM, Chi V, Brooks Jr FP, Washburn S, and Superfine R, Bending and buckling of carbon nanotubes under large strain. Nature, 1997; 389: 582-584.
Surface Superstructure of Carbon Nanotubes on Highly Oriented Pyrolytic Graphite Annealed at Elevated Temperatures. A Bai, F Seiji, Y Kiyoshi, Y Masamichi, Jpn. J. Appl. Phys. 37Bai A, Seiji F, Kiyoshi Y, and Masamichi Y, Surface Superstructure of Carbon Nanotubes on Highly Oriented Pyrolytic Graphite Annealed at Elevated Temperatures. Jpn. J. Appl. Phys. , 1998; 37: 3809-3811.
Surface-bulk core-level splitting in graphite. T Balasubramanian, J N Andersen, L Wallden, Physical Review B. 64Balasubramanian T, Andersen JN, and Wallden L, Surface-bulk core-level splitting in graphite. Physical Review B, 2001; 64: 205420-205423.
Scanning Photoemission Microscopy of Graphene Sheets on SiO2. K Kim, H Lee, J H Choi, Y S Youn, J Choi, H Lee, T H Kang, M C Jung, H J Shin, H J Lee, S Kim, Kim B , Advanced Materials. 20Kim K, Lee H, Choi JH, Youn YS, Choi J, Lee H, Kang TH, Jung MC, Shin HJ, Lee HJ, Kim S, and Kim B, Scanning Photoemission Microscopy of Graphene Sheets on SiO2. Advanced Materials, 2008; 20: 3589-3591.
Underneath the fascinations of carbon nanotubes and graphene nanoribbons. W T Zheng, C Q Sun, Energy & Environmental Science. 4Zheng WT and Sun CQ, Underneath the fascinations of carbon nanotubes and graphene nanoribbons. Energy & Environmental Science, 2011; 4: 627-655.
The shortest metal-metal bond yet: Molecular and electronic structure of a dinuclear chromium diazadiene complex. K A Kreisel, Gpa Yap, O Dmitrenko, C R Landis, K H Theopold, J. Am. Chem. Soc. 12914162Kreisel KA, Yap GPA, Dmitrenko O, Landis CR, and Theopold KH, The shortest metal-metal bond yet: Molecular and electronic structure of a dinuclear chromium diazadiene complex. J. Am. Chem. Soc., 2007; 129: 14162-+.
After 155 years, a crystalline chromium carboxylate with a supershort Cr-Cr bond. F A Cotton, E A Hillard, C A Murillo, H C Zhou, J. Am. Chem. Soc. 122Cotton FA, Hillard EA, Murillo CA, and Zhou HC, After 155 years, a crystalline chromium carboxylate with a supershort Cr-Cr bond. J. Am. Chem. Soc., 2000; 122: 416-417.
Graphene at the Edge: Stability and Dynamics. C Ö Girit, J C Meyer, R Erni, M D Rossell, C Kisielowski, L Yang, C H Park, M F Crommie, M L Cohen, S G Louie, Science. 323Girit CÖ, Meyer JC, Erni R, Rossell MD, Kisielowski C, Yang L, Park CH, Crommie MF, Cohen ML, Louie SG, and Zettl A, Graphene at the Edge: Stability and Dynamics. Science, 2009; 323: 1705-1708.
Han WG and Zhang CT, A theory of nonlinear stretch vibrations of hydrogen-bonds. J. Phys.: Condens. Matter, 1991; 3: 27-35.
Strain engineering of the elasticity and the Raman shift of nanostructured TiO2. X Liu, L Pan, Z Sun, C Q Sun, J. Appl. Phys. Liu X, Pan L, Sun Z, and Sun CQ, Strain engineering of the elasticity and the Raman shift of nanostructured TiO2. J. Appl. Phys., 2011; accepted on 16/07/2011.
Atomistic Origin of the Thermally Driven Softening of Raman Optical Phonons in Group III Nitrides. M X Gu, L K Pan, Au Yeung, T C Tay, B K Sun, C Q , The Journal of Physical Chemistry C. 111Gu MX, Pan LK, Au Yeung TC, Tay BK, and Sun CQ, Atomistic Origin of the Thermally Driven Softening of Raman Optical Phonons in Group III Nitrides. The Journal of Physical Chemistry C, 2007; 111: 13606-13610.
. F Birch, Finite Elastic Strain of Cubic Crystals. Physical Review. 71Birch F, Finite Elastic Strain of Cubic Crystals. Physical Review, 1947; 71: 809-824.
The Compressibility of Media under Extreme Pressures. F D Murnaghan, Proceedings of the National Academy of Sciences of the United States of America. 30Murnaghan FD, The Compressibility of Media under Extreme Pressures. Proceedings of the National Academy of Sciences of the United States of America, 1944; 30: 244-247.
Low-Temperature Raman Spectroscopy of Individual Single-Wall Carbon Nanotubes and Single-Layer Graphene. L Zhang, Z Jia, L Huang, S O'brien, Yu Z , The Journal of Physical Chemistry C. 112Zhang L, Jia Z, Huang L, O'Brien S, and Yu Z, Low-Temperature Raman Spectroscopy of Individual Single-Wall Carbon Nanotubes and Single-Layer Graphene. The Journal of Physical Chemistry C, 2008; 112: 13893-13900.
Double Resonant Raman Scattering in Graphite. C Thomsen, S Reich, Physical Review Letters. 85Thomsen C and Reich S, Double Resonant Raman Scattering in Graphite. Physical Review Letters, 2000; 85: 5214-5217.
Elastic properties of carbon nanotubes under hydrostatic pressure. S Reich, C Thomsen, P Ordejon, Physical Review B. 65Reich S, Thomsen C, and Ordejon P, Elastic properties of carbon nanotubes under hydrostatic pressure. Physical Review B, 2002; 65: 153407-153411.
Electronic structures of graphene edges and nanographene. T Enoki, Y Kobayashi, K I Fukui, Int. Rev. Phys. Chem. 26Enoki T, Kobayashi Y, and Fukui KI, Electronic structures of graphene edges and nanographene. Int. Rev. Phys. Chem., 2007; 26: 609-645.
Missing Atom as a Source of Carbon Magnetism. M M Ugeda, I Brihuega, F Guinea, Gómez-Rodríguez Jm , Phys. Rev. Lett. 10496804Ugeda MM, Brihuega I, Guinea F, and Gómez-Rodríguez JM, Missing Atom as a Source of Carbon Magnetism. Phys. Rev. Lett., 2010; 104: 096804.
| [] |
[
"Simultaneous multiple angular displacement estimation precision enhanced by the intramode correlation",
"Simultaneous multiple angular displacement estimation precision enhanced by the intramode correlation"
] | [
"Shoukang Chang \nSchool of Physics\nMOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter\nShaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices\nXi'an Jiaotong University\n710049People's Republic of China\n",
"Wei Ye \nSchool of Information Engineering\nNanchang Hangkong University\n330063NanchangChina\n",
"Xuan Rao \nSchool of Information Engineering\nNanchang Hangkong University\n330063NanchangChina\n",
"Huan Zhang \nSchool of Physics\nSun Yat-sen University\n510275GuangzhouChina\n",
"Liqing Huang \nSchool of Physics\nMOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter\nShaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices\nXi'an Jiaotong University\n710049People's Republic of China\n",
"Mengmeng Luo \nDepartment of Physics\nXi'an Jiaotong University City College\nXi'an 710018China\n",
"Yuetao Chen \nSchool of Physics\nMOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter\nShaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices\nXi'an Jiaotong University\n710049People's Republic of China\n",
"Shaoyan Gao \nSchool of Physics\nMOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter\nShaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices\nXi'an Jiaotong University\n710049People's Republic of China\n"
] | [
"School of Physics\nMOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter\nShaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices\nXi'an Jiaotong University\n710049People's Republic of China",
"School of Information Engineering\nNanchang Hangkong University\n330063NanchangChina",
"School of Information Engineering\nNanchang Hangkong University\n330063NanchangChina",
"School of Physics\nSun Yat-sen University\n510275GuangzhouChina",
"School of Physics\nMOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter\nShaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices\nXi'an Jiaotong University\n710049People's Republic of China",
"Department of Physics\nXi'an Jiaotong University City College\nXi'an 710018China",
"School of Physics\nMOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter\nShaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices\nXi'an Jiaotong University\n710049People's Republic of China",
"School of Physics\nMOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter\nShaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices\nXi'an Jiaotong University\n710049People's Republic of China"
] | [] | The angular displacement estimation is one of the significant branches of quantum parameter estimation. However, most studies have focused on single-angular displacement estimation, while multiple angular displacement estimation in ideal and noisy scenarios remains elusive. In this paper, we investigate simultaneous multiple angular displacement estimation based on orbital angular momentum (OAM), inputting (d + 1)-mode NOON-like states as the probe state. By revealing the role of the intramode correlation of the probe state, we give a reasonable explanation for the corresponding quantum Cramér-Rao bound (QCRB) behaviors with and without photon losses. Our analyses suggest that the QCRB for multiple angular displacement estimation is always positively related to the intramode correlation, with the multimode entangled squeezed vacuum state showing the best performance compared to the other probe state. More importantly, the robustness of multiple angular-displacement estimation systems can be strengthened by increasing the OAM quantum number. | null | [
"https://export.arxiv.org/pdf/2210.16831v1.pdf"
] | 253,237,972 | 2210.16831 | e5cebf4dcadd28815e89671043c16684124c8e3b |
Simultaneous multiple angular displacement estimation precision enhanced by the intramode correlation
30 Oct 2022
Shoukang Chang
School of Physics
MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter
Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices
Xi'an Jiaotong University
710049People's Republic of China
Wei Ye
School of Information Engineering
Nanchang Hangkong University
330063NanchangChina
Xuan Rao
School of Information Engineering
Nanchang Hangkong University
330063NanchangChina
Huan Zhang
School of Physics
Sun Yat-sen University
510275GuangzhouChina
Liqing Huang
School of Physics
MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter
Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices
Xi'an Jiaotong University
710049People's Republic of China
Mengmeng Luo
Department of Physics
Xi'an Jiaotong University City College
Xi'an 710018China
Yuetao Chen
School of Physics
MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter
Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices
Xi'an Jiaotong University
710049People's Republic of China
Shaoyan Gao
School of Physics
MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of Condensed Matter
Shaanxi Province Key Laboratory of Quantum Information and Quantum Optoelectronic Devices
Xi'an Jiaotong University
710049People's Republic of China
PACS numbers: 03.67.-a, 05.30.-d, 42.50.Dv, 03.65.Wj
The angular displacement estimation is one of the significant branches of quantum parameter estimation. However, most studies have focused on single-angular-displacement estimation, while multiple angular displacement estimation in ideal and noisy scenarios remains elusive. In this paper, we investigate simultaneous multiple angular displacement estimation based on an orbital angular momentum (OAM), using (d + 1)-mode NOON-like states as the probe state. By revealing the role of the intramode correlation of the probe state, we give a reasonable explanation for the corresponding quantum Cramér-Rao bound (QCRB) behaviors with and without photon losses. Our analyses suggest that the QCRB for multiple angular displacement estimation always benefits from a larger intramode correlation; in particular, the multimode entangled squeezed vacuum state shows the best performance compared with the other probe states considered. More importantly, the robustness of multiple angular-displacement estimation systems can be strengthened by increasing the OAM quantum number.
I. INTRODUCTION
Quantum parameter estimation provides a feasible way to estimate physical quantities that cannot be measured directly, with an accuracy beyond its classical counterpart [1][2][3][4]. As a specific example, in phase-estimation systems the use of quantum resources, such as nonclassical and entangled states, can make the phase sensitivity beat the so-called shot-noise limit, even approaching the renowned Heisenberg limit [5][6][7]. In general, the precision limit of quantum parameter estimation is quantified by the quantum Cramér-Rao bound (QCRB), which is inversely proportional to the quantum Fisher information (QFI) [2,8] and has been extensively studied and used, especially in quantum single- (or multi-) phase estimation.
Originally, a conventional model for studying quantum parameter estimation is the phase estimation problem [9]. In particular, taking advantage of optical interferometers, such as a Mach-Zehnder interferometer [10][11][12] and an SU(1,1) interferometer [13][14][15][16], early investigations of phase estimation paid attention to single-phase estimation, since it can be easily realized both theoretically and experimentally [5,11,15]. More strikingly, single-phase estimation with the QCRB in the presence of noisy environments, e.g., photon loss [17][18][19], phase diffusion [20,21], and thermal noise [22,23], can be tackled using the variational method [17,20] proposed by Escher, greatly promoting the practical applications of quantum metrology [24][25][26]. On the other hand, extending toward multiple phase estimation with the QCRB has attracted considerable interest more recently, leading to potential applications [27][28][29][30][31][32][33][34] such as quantum-enhanced sensor networks [29][30][31][32] and optical imaging [33,34]. Moreover, in order to improve the precision of multiple-phase estimation, multimode NOON (or NOON-like) states [35][36][37][38][39], generalized entangled coherent states [40] and multimode Gaussian states [41] have been considered, even in the presence of a noisy environment [42][43][44][45]. More interestingly, by using correlated quantum states, the simultaneous estimation of multiple phases can show a significant advantage, scaling as O(d) with the number of phase shifts d over the optimal individual strategy [35], although this O(d) advantage fades away in photon-loss scenarios [45]. Further, in order to find a saturable QCRB in multiparameter estimation, the necessary and sufficient conditions for projective measurements to saturate the QFI for multiple phase estimation with pure probe states have been derived [46].
In addition to phase-estimation systems, angular displacement estimation based on an orbital angular momentum (OAM) has been one of the important branches of parameter estimation, particularly because the OAM quantum number l, which is theoretically unbounded, can give rise to an unbounded increase in the estimation precision [47][48][49]. Although OAM values as high as 10010 quanta have been demonstrated experimentally [50], this value is not truly unbounded owing to the limited aperture of optical systems [47,50,51]. As a result, other methods have to be found to improve angular displacement estimation. For instance, to show increased performance of angular displacement estimation, the use of entangled photon states [49] and twisted N00N states [47] has been considered. Apart from the aforementioned methods of generating the probe states, Magaña-Loaiza et al. presented a quantum-improved sensitive estimation of angular rotations based on a sort of weak-value amplification [52]. More dramatically, in ideal and realistic scenarios, Zhang et al. suggested a super-resolved angular displacement estimation protocol using a Sagnac interferometer together with parity measurement [53]. Even so, it should be noticed that the studies mentioned above focus on single-angular-displacement estimation, whereas the multiple angular displacement estimation problem in ideal and noisy environments has not been studied before. Therefore, in this paper, we present the derivation of the QCRB for multiple angular displacement estimation with and without photon losses when using (d + 1)-mode NOON-like states [including the multimode NOON state (MNOONS), the multimode entangled coherent state (MECS), the multimode entangled squeezed vacuum state (MESVS) and the multimode entangled squeezed coherent state (MESCS)] as the probe states.
Our results show that the QCRB for the multiple angular displacement estimation, in both the ideal and photon-loss cases, decreases as the intramode correlation of the probe state grows, with the MESVS exhibiting the best performance compared with the other probe states. More interestingly, the OAM quantum number l can be profitably used to strengthen the robustness of multiple angular displacement estimation systems.
The rest of this paper is arranged as follows. In Sec. II, we first describe the QCRB for the multiple angular displacement estimation with d independent angular displacements in the ideal scenario, and then focus on the behaviors of the QCRB for four specific probe states. In Sec. III, we consider the effects of photon losses on the multiple angular displacement estimation precision, and analyze the corresponding QCRB with the four probe states. Finally, conclusions are presented in the last section.
II. THE QCRB FOR THE MULTIPLE ANGULAR DISPLACEMENT ESTIMATION IN THE IDEAL SCENARIO
In an ideal case, let us begin by describing the QCRB for the simultaneous estimation of d independent angular displacements, whose schematic diagram is shown in Fig. 1. To be more specific, we first take a balanced (d + 1)-mode entangled pure state as the probe state, which can be defined as [36]
$$|\Psi\rangle = \bar{N}\sum_{m=0}^{d}|0\rangle_0|0\rangle_1|0\rangle_2\cdots|\psi\rangle_m\cdots|0\rangle_d, \qquad (1)$$
where $\bar{N} = \left[(1+d)\left(1+d\,|\langle\psi|0\rangle|^2\right)\right]^{-1/2}$ is the normalization factor. According to Eq. (1), this probe state is a superposition of d + 1 multimode quantum states, each carrying an arbitrary single-mode quantum state $|\psi\rangle_m$ on the mth mode and the vacuum on all other modes. It should be mentioned that, when $|\psi\rangle_m$ is, respectively, the Fock state $|N\rangle_m$, the coherent state $|\alpha\rangle_m$, the squeezed vacuum state $|r_1\rangle_m$ or the squeezed coherent state $|\beta, r_2\rangle_m$, one obtains the MNOONS $|\Psi_N\rangle$, the MECS $|\Psi_\alpha\rangle$, the MESVS $|\Psi_{r_1}\rangle$ or the MESCS $|\Psi_{\beta,r_2}\rangle$, which will serve as the probe states when analyzing the behaviors of the QCRB in the following sections.
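As a quick numerical sanity check (ours, not from the paper), the normalization factor in Eq. (1) can be verified directly from the overlap structure of the state; the truncated coherent state below is an illustrative choice for $|\psi\rangle$:

```python
import numpy as np
from math import factorial

def noon_like_norm(psi, d):
    """Norm of the unnormalized sum_m |0...psi_m...0>, using its overlap
    structure: (d+1) diagonal terms <psi|psi> plus d(d+1) cross terms
    <psi|0><0|psi>."""
    psi = np.asarray(psi, dtype=complex)
    s = np.vdot(psi, psi).real            # <psi|psi> (=1 up to truncation)
    v = abs(psi[0]) ** 2                  # |<0|psi>|^2
    return np.sqrt((d + 1) * s + d * (d + 1) * v)

# Illustrative |psi>: a truncated coherent state with alpha = 1
alpha, cutoff, d = 1.0, 30, 15
ns = np.arange(cutoff)
psi = np.exp(-alpha**2 / 2) * alpha**ns / np.sqrt(
    np.array([factorial(k) for k in ns], dtype=float))

N_bar = 1.0 / noon_like_norm(psi, d)      # normalization factor of Eq. (1)
# Closed form from Eq. (1): N_bar = [(1+d)(1+d|<psi|0>|^2)]^{-1/2}
N_closed = ((1 + d) * (1 + d * abs(psi[0]) ** 2)) ** -0.5
assert np.isclose(N_bar, N_closed)
```

The check only needs single-mode overlaps, so no exponentially large multimode Hilbert space has to be built.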
Subsequently, the generated probe state $|\Psi\rangle$ is sent through d + 1 spiral phase plates (SPPs), which introduce the OAM degree of freedom, and then through d + 1 Dove prisms (DPs), which generate the d independent angular displacements $\theta_m$ to be estimated (here $\theta_0 = 0$ is viewed as the reference beam). The corresponding evolution operator can be expressed as
$$\hat{U}_\theta = \exp\left(i\sum_{m=1}^{d}2l\hat{n}_m\theta_m\right), \qquad (2)$$
where $l$ is the OAM quantum number, and $\hat{n}_m = \hat{a}_m^{\dagger}\hat{a}_m$ and $\theta_m$ denote the photon-number operator and the angular displacement on mode m, respectively. After the interaction between the probe state and the evolution operator $\hat{U}_\theta$, the resulting state becomes $|\Psi_\theta\rangle = \hat{U}_\theta|\Psi\rangle$, so that the QCRB for the multiple angular displacement estimation in an ideal scenario is given by [35][36][37][38]
$$|\delta\theta|^2 \geq |\delta\theta|^2_{\mathrm{QCRB}} = \mathrm{Tr}\left[F^{-1}\right], \qquad (3)$$
where $F^{-1}$ represents the inverse of the $d \times d$ quantum Fisher information matrix (QFIM). Generally speaking, the QCRB for the multiple angular displacement estimation is not achievable. Nevertheless, for the unitary angular displacement process, i.e., $|\Psi_\theta\rangle = \hat{U}_\theta|\Psi\rangle$, the QCRB of pure quantum states can be saturated if the probe state $|\Psi\rangle$ satisfies [40,54]
$$\langle\Psi|\left[i\frac{\partial\hat{U}_\theta^{\dagger}}{\partial\theta_j}\hat{U}_\theta,\; i\frac{\partial\hat{U}_\theta^{\dagger}}{\partial\theta_m}\hat{U}_\theta\right]|\Psi\rangle = \langle\Psi|\left[2l\hat{n}_j, 2l\hat{n}_m\right]|\Psi\rangle = 0, \quad \forall j, m, \qquad (4)$$
where $\hat{n}_j$ and $\hat{n}_m$ are the photon-number operators on modes j and m. Since $\hat{n}_j$ and $\hat{n}_m$ are Hermitian and mutually commuting operators, i.e., $[\hat{n}_j, \hat{n}_m] = 0$, $\forall j, m$, the saturation condition always holds for the probe state $|\Psi\rangle$. Thus, the elements of the QFIM are given by
$$F_{jm} = 16l^2\,\mathrm{Cov}\left(\hat{n}_j, \hat{n}_m\right), \qquad (5)$$
where $\mathrm{Cov}(\hat{n}_j,\hat{n}_m) = \langle\hat{n}_j\hat{n}_m\rangle - \langle\hat{n}_j\rangle\langle\hat{n}_m\rangle$ is the covariance between the photon-number operators $\hat{n}_j$ and $\hat{n}_m$, and the average $\langle\cdot\rangle$ is taken with respect to the probe state $|\Psi\rangle$. Combining Eqs. (1) and (5), the QFIM can be calculated as
$$F = 16l^2\left(\langle\hat{n}_m^2\rangle I - \langle\hat{n}_m\rangle^2\tilde{I}\right), \qquad (6)$$
where $I$ is the $d \times d$ identity matrix and $\tilde{I}$ represents the matrix with elements $\tilde{I}_{jm} = 1$ for all j and m. Upon substituting Eq. (6) into Eq. (3), the analytical expression of the QCRB for the multiple angular displacement estimation with the probe state $|\Psi\rangle$ of Eq. (1) can finally be derived as
$$|\delta\theta|^2_{\mathrm{QCRB}} = \frac{d}{16l^2\left(\bar{n}_m^2 g_m^{(2)} + \bar{n}_m\right)}\left[1 + \frac{1}{g_m^{(2)} + \bar{n}_m^{-1} - d}\right], \qquad (7)$$
where $\bar{n}_m = \langle\hat{n}_m\rangle$ denotes the average photon number of the probe state $|\Psi\rangle$ on mode m, and $g_m^{(2)} = \langle\hat{a}_m^{\dagger}\hat{a}_m^{\dagger}\hat{a}_m\hat{a}_m\rangle/\bar{n}_m^2$ is the second-order coherence function, which characterizes the intramode correlation [55]. Generally speaking, the smaller the value of the QCRB, the more precise the parameter estimation. Notably, according to Eq. (7), the QCRB decreases as the intramode correlation $g_m^{(2)}$ increases. That is to say, the intramode correlation contributes to the enhancement of the multiple angular displacement estimation precision.
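Eq. (7) can be cross-checked against the matrix form of Eq. (6): build F from assumed mode moments, invert it numerically, and compare the trace with the closed form. A sketch with illustrative numbers (ours, not the paper's code):

```python
import numpy as np

def qcrb_closed(n_bar, g2, d, l):
    """Closed form of Eq. (7)."""
    return d / (16 * l**2 * (n_bar**2 * g2 + n_bar)) * (
        1 + 1 / (g2 + 1 / n_bar - d))

def qcrb_matrix(n_bar, g2, d, l):
    """Tr[F^{-1}] with F = 16 l^2 (<n^2> I - <n>^2 I~) from Eq. (6)."""
    n2 = g2 * n_bar**2 + n_bar            # <n^2> recovered from g2
    F = 16 * l**2 * (n2 * np.eye(d) - n_bar**2 * np.ones((d, d)))
    return np.trace(np.linalg.inv(F))

# Illustrative moments, chosen so that F is positive definite (g2 + 1/n > d)
n_bar, g2, d, l = 0.3, 13.0, 15, 2
assert np.isclose(qcrb_matrix(n_bar, g2, d, l), qcrb_closed(n_bar, g2, d, l))
```

The agreement follows from the rank-one structure of F, whose eigenvalues are $16l^2(\langle\hat{n}^2\rangle - d\langle\hat{n}\rangle^2)$ (once) and $16l^2\langle\hat{n}^2\rangle$ (d - 1 times).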
To clearly see the behaviors of the QCRB for the multiple angular displacement estimation, here we take four specific probe states into account: the MNOONS $|\Psi_N\rangle$, the MECS $|\Psi_\alpha\rangle$, the MESVS $|\Psi_{r_1}\rangle$, and the MESCS $|\Psi_{\beta,r_2}\rangle$ [see Appendix A for more details]. Without loss of generality, we assume that the amplitudes $\alpha$ ($\beta$) of the coherent states and the squeezing parameters $r_1$ ($r_2$) are real numbers, and fix the total mean photon number $\bar{N}$ of the above four multimode entangled states [see Eq. in Ref. [36]]. In this case, Fig. 2(a) shows the QCRB for the four multimode entangled states as a function of the total mean photon number $\bar{N}$, for fixed values of l = 2 and d = 15. It is shown that the value of the QCRB for the given multimode entangled states rapidly decreases with increasing $\bar{N}$. Moreover, at the same total mean photon number $\bar{N}$, the MESVS (red line) shows the lowest QCRB value, followed by the MESCS (green line), the MECS (blue line) and the MNOONS (black line), which means that using the MESVS as the probe state achieves the highest estimation precision. The reason for this phenomenon is that the intramode correlation of the MESVS is the strongest in comparison with the other multimode probe states, as shown in Fig. 2(b). In this sense, it is also demonstrated that the intramode correlation effectively improves the multiple angular displacement estimation precision.
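The ordering in Fig. 2(b) can be made plausible with a small Fock-space computation (our sketch): for a single-mode squeezed vacuum, $g^{(2)} = 3 + 1/\bar{n}$, well above the coherent-state value $g^{(2)} = 1$, consistent with the MESVS carrying the strongest intramode correlation.

```python
import numpy as np
from math import cosh, sinh, tanh

def g2_and_mean(c):
    """g^(2) = <n(n-1)>/<n>^2 and <n> from Fock amplitudes c_n."""
    p = np.abs(np.asarray(c, dtype=complex)) ** 2
    n = np.arange(len(p))
    n_bar = (p * n).sum()
    return (p * n * (n - 1)).sum() / n_bar**2, n_bar

# Squeezed-vacuum Fock amplitudes via the standard two-photon recursion
r, K = 0.8, 60                            # squeezing, cutoff at 2K photons
c = np.zeros(2 * K + 1)
c[0] = 1.0 / np.sqrt(cosh(r))
for k in range(K):
    c[2 * k + 2] = c[2 * k] * tanh(r) * np.sqrt((2 * k + 1) / (2 * k + 2))

g2_sv, n_bar = g2_and_mean(c)
assert np.isclose(n_bar, sinh(r) ** 2)    # <n> = sinh^2(r)
assert np.isclose(g2_sv, 3 + 1 / n_bar)   # squeezed vacuum: g2 = 3 + 1/n
assert g2_sv > 1                          # coherent state has g2 = 1
```

The recursion avoids overflowing factorials; only the moduli of the amplitudes matter for the photon-number moments.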
On the other hand, we also consider the effects of both the number of independent angular displacements d and the OAM quantum number l on the QCRB, as depicted in Fig. 3. It is clearly seen from Fig. 3(a) that, for fixed parameters l = 2 and $\bar{N}$ = 5, the QCRB for the four probe states increases with increasing d, meaning that as the number of independent angular displacements d increases, the multiple angular displacement estimation precision becomes worse. This behavior results from the fact that the QCRB grows with the number of independent angular displacements d, as given in Eq. (7). Even so, as we can see in Fig. 3(b), for fixed parameters d = 15 and $\bar{N}$ = 5, the QCRB for the four probe states becomes smaller and smaller as the OAM quantum number l increases. This reflects, to some extent, that increasing l can effectively improve the multiple angular displacement estimation precision. More importantly, it is seen from Fig. 3 that, compared with the other probe states, the MESVS still maintains the highest estimation precision.
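The trends in Fig. 3 follow directly from Eq. (7); a quick numerical scan (ours, with illustrative mode moments) confirms that the bound grows monotonically with d and falls as $1/l^2$:

```python
import numpy as np

def qcrb(n_bar, g2, d, l):
    """Closed form of Eq. (7)."""
    return d / (16 * l**2 * (n_bar**2 * g2 + n_bar)) * (
        1 + 1 / (g2 + 1 / n_bar - d))

n_bar, g2 = 0.3, 30.0                      # illustrative mode moments
by_d = [qcrb(n_bar, g2, d, l=2) for d in range(2, 21)]
assert all(a < b for a, b in zip(by_d, by_d[1:]))    # worse as d grows
by_l = [qcrb(n_bar, g2, d=15, l=l) for l in range(1, 6)]
assert all(a > b for a, b in zip(by_l, by_l[1:]))    # better as l grows
assert np.isclose(by_l[0] / by_l[1], 4.0)            # explicit 1/l^2 scaling
```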
III. THE QCRB FOR THE MULTIPLE ANGULAR DISPLACEMENT ESTIMATION WITH PHOTON LOSSES
In real-life scenarios, an inevitable interaction between the probe-state system S and its surrounding environment E always exists, greatly degrading the parameter-estimation performance. In general, there are various such interactions, e.g., photon loss, phase diffusion, and thermal noise. For the sake of simplicity, here we only pay attention to how photon losses affect the multiple angular displacement estimation precision (a fictitious beam splitter (BS) with transmissivity $\eta_m$ is used to simulate the photon-loss process, see Fig. 4). In addition, it should be noted that the probe state interacts with the d + 1 DPs to generate the d independent angular displacements $\theta_m$ in the photon-loss environment, which is no longer a unitary evolution. This also means that, for the multiple angular displacement estimation with photon losses, the method used to derive the QCRB in Eq. (7) cannot be directly employed. Fortunately, with the assistance of a variational method, Yue et al. derived the general form of the QCRB of multiphase estimation systems in the photon-loss case [45]. By extending that work [45], in this section we utilize the variational method to study the effects of photon losses on the multiple angular displacement estimation precision (see Fig. 4); a brief review of this variational approach is given in the following.
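The fictitious-BS picture amounts to binomial photon loss: each photon survives independently with probability $\eta_m$, so the mean photon number is scaled by $\eta_m$. A minimal sketch of this map (ours) acting on a photon-number distribution:

```python
import numpy as np
from math import comb

def binomial_loss(p, eta):
    """Map a photon-number distribution through a BS of transmissivity eta."""
    q = np.zeros_like(p)
    for n, pn in enumerate(p):
        for k in range(n + 1):             # keep k of the n photons
            q[k] += pn * comb(n, k) * eta**k * (1 - eta)**(n - k)
    return q

p = np.array([0.1, 0.2, 0.3, 0.4])         # toy distribution over n = 0..3
q = binomial_loss(p, eta=0.7)
n = np.arange(len(p))
assert np.isclose(q.sum(), 1.0)            # still a distribution
assert np.isclose((q * n).sum(), 0.7 * (p * n).sum())   # <n> -> eta <n>
```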
When given an initial (d + 1)-mode probe state $|\Psi\rangle_S$ in the probe system S and an initial state $|0\rangle_E$ of the photon-loss environment, it is essential to enlarge both the probe-system space S and the environment space E, so that the probe state $|\Psi\rangle_S$ in the enlarged system-environment space S + E undergoes the unitary evolution $\hat{U}_{S+E}(\theta)$, which can be expressed as [45]
$$|\Psi(\theta)\rangle_{S+E} = \hat{U}_{S+E}(\theta)|\Psi\rangle_S|0\rangle_E = \sum_{k}\hat{\Pi}_k(\theta)|\Psi\rangle_S|k\rangle_E, \qquad (8)$$
where $\hat{U}_{S+E}(\theta) = \otimes_{m=0}^{d}\hat{U}^{m}_{S+E}(\theta_m)$ is the unitary evolution operator, $|0\rangle_E = \otimes_{m=0}^{d}|0\rangle_{E_m}$ is the initial state of the environment, $|k\rangle_E = \otimes_{m=0}^{d}|k_m\rangle_{E_m}$ is an orthogonal basis of the environment, and $\hat{\Pi}_k(\theta) = \otimes_{m=0}^{d}\hat{\Pi}_{k_m}(\theta_m)$ is the Kraus operator, with the variational parameters $\delta_m$ ($\delta_m = 0$ and $-1$ correspond, respectively, to the photon losses occurring before and after the d + 1 DPs) and $\eta_m$ quantifying the strength of the photon losses. In practice, such a photon-loss strength can often be regarded as the transmissivity of a fictitious beam splitter, as seen in Fig. 4. Among them, $\eta_m = 0$ and $1$ indicate the complete-absorption and lossless cases, respectively. In this situation, the QCRB for the multiple angular displacement estimation under photon losses turns out to be [45]
$$|\delta\theta|^2_{\mathrm{QCRBL}} = \max_{\hat{\Pi}_k(\theta)}\mathrm{Tr}\left[C_Q^{-1}\left(\theta, \hat{\Pi}_k(\theta)\right)\right], \qquad (10)$$
where $C_Q(\theta, \hat{\Pi}_k(\theta))$ is the QFIM for the enlarged system-environment space S + E, and the matrix elements of $C_Q(\theta, \hat{\Pi}_k(\theta))$ are expressed as
$$C_{Q_{jm}}\left(\theta, \hat{\Pi}_k(\theta)\right) = 4\left(\langle\hat{\Lambda}_{jm}\rangle - \langle\hat{\Gamma}_j\rangle\langle\hat{\Gamma}_m\rangle\right), \qquad (11)$$
with
$$\hat{\Gamma}_m = i\sum_{k_m}\frac{d\hat{\Pi}^{\dagger}_{k_m}(\theta_m)}{d\theta_m}\hat{\Pi}_{k_m}(\theta_m), \quad \hat{\Lambda}_{jm} = \begin{cases}\displaystyle\sum_{k_m}\frac{d\hat{\Pi}^{\dagger}_{k_m}(\theta_m)}{d\theta_m}\frac{d\hat{\Pi}_{k_m}(\theta_m)}{d\theta_m}, & j = m,\\[2mm] \hat{\Gamma}_j\hat{\Gamma}_m, & j \neq m.\end{cases} \qquad (12)$$
Upon substituting Eq. (9) into Eq. (12), one can further obtain
$$\hat{\Gamma}_m = 2l\chi_m\hat{n}_m, \quad \hat{\Lambda}_{jm} = \begin{cases}4l^2\left(\chi_m^2\hat{n}_m^2 + \gamma_m\hat{n}_m\right), & j = m,\\ \hat{\Gamma}_j\hat{\Gamma}_m, & j \neq m,\end{cases} \qquad (13)$$
with $\chi_m = 1 - (1 + \delta_m)(1 - \eta_m)$ and $\gamma_m = \eta_m(1 - \eta_m)(1 + \delta_m)^2$.
For the sake of calculation, here we only consider the specific case $\eta_m = \eta$ and $\delta_m = \delta$ for all m. Thus, based on Eqs. (11) and (13), one can derive the lower bound of the QCRB for the multiple angular displacement estimation, i.e.,
$$\mathrm{Tr}\left[C_Q^{-1}\right] = \frac{(d-1)\bar{N}^{-2}}{16l^2\sigma} + \frac{\bar{N}^{-2}}{16l^2\left(\sigma - d\bar{N}^2\chi^2\langle\psi|\hat{n}|\psi\rangle^2\right)}, \qquad (14)$$
where $\sigma = \chi^2\langle\psi|\hat{n}^2|\psi\rangle + \gamma\langle\psi|\hat{n}|\psi\rangle$. To further simplify the calculation, we also assume that $d \gg 1$, so that the second term is negligible compared with the first term in Eq. (14); hence
$$\mathrm{Tr}\left[C_Q^{-1}\right] \approx \frac{(d-1)\bar{N}^{-2}}{16l^2\sigma}. \qquad (15)$$
In order to maximize $\mathrm{Tr}[C_Q^{-1}]$, the optimal value of $\delta$ can be easily calculated as
$$\delta_{\mathrm{opt}} = \frac{\langle\psi|\hat{n}^2|\psi\rangle}{(1-\eta)\langle\psi|\hat{n}^2|\psi\rangle + \eta\langle\psi|\hat{n}|\psi\rangle} - 1. \qquad (16)$$
Therefore, substituting Eq. (16) into Eq. (15), one can obtain the explicit expression of the QCRB for the multiple angular displacement estimation in the presence of photon losses, i.e.,
$$|\delta\theta|^2_{\mathrm{QCRBL}} = \frac{d-1}{16l^2\bar{n}_m}\left[\frac{1-\eta}{\eta} + \frac{1}{1+\bar{n}_m g_m^{(2)}}\right]. \qquad (17)$$
From Eq. (17), it is clear that the QCRB again decreases as the intramode correlation $g_m^{(2)}$ increases; that is, the intramode correlation still benefits the estimation precision even in the presence of photon losses.
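Eq. (17) can be reproduced numerically (our check, with illustrative single-mode moments): scan the variational parameter $\delta$ in the $d \gg 1$ bound of Eq. (15) and compare the maximum with the closed form:

```python
import numpy as np

eta, d, l = 0.7, 15, 2
N2 = 0.05                     # N_bar^2, weight of |psi> on a single mode
n1, n2 = 2.0, 7.0             # illustrative <psi|n|psi>, <psi|n^2|psi>

def trace_bound(delta):
    """(d-1) N_bar^{-2} / (16 l^2 sigma(delta)), per Eqs. (13)-(15)."""
    chi = 1 - (1 + delta) * (1 - eta)
    gamma = eta * (1 - eta) * (1 + delta) ** 2
    sigma = chi**2 * n2 + gamma * n1
    return (d - 1) / (N2 * 16 * l**2 * sigma)

best = max(trace_bound(x) for x in np.linspace(-1.0, 2.0, 300001))

n_m = N2 * n1                              # mode mean photon number
g2 = (N2 * n2 - n_m) / n_m**2              # intramode correlation g^(2)
closed = (d - 1) / (16 * l**2 * n_m) * (
    (1 - eta) / eta + 1 / (1 + n_m * g2))  # Eq. (17)
assert np.isclose(best, closed, rtol=1e-6)
```

The maximizing $\delta$ found by the scan matches $\delta_{\mathrm{opt}}$ of Eq. (16) (here $\delta_{\mathrm{opt}} = 1$ for the chosen moments).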
Next, in order to analyze the effects of photon losses on the QCRB, let us consider the four probe resources, i.e., the MNOONS $|\Psi_N\rangle$, the MECS $|\Psi_\alpha\rangle$, the MESVS $|\Psi_{r_1}\rangle$, and the MESCS $|\Psi_{\beta,r_2}\rangle$ [one can refer to Appendix B for the expressions of the QCRB for these probe states]. For given values of $\bar{N}$ = 5, d = 15 and l = 2, we plot the QCRB as a function of the photon-loss strength $\eta$ for the four probe resources, as depicted in Fig. 5(a). As we can see, the value of the QCRB for these probe states increases rapidly as $\eta$ decreases, implying that the accuracy of the multiple angular displacement estimation is greatly affected by photon losses. In spite of this, the QCRB for the MESVS (red dashed line) still shows the best performance even in the presence of photon losses, followed by the MESCS, the MECS and the MNOONS. Moreover, in order to compare the gap between the ideal and photon-loss cases, for fixed parameters $\eta$ = 0.7, d = 15 and l = 2, we also show the QCRB as a function of the mean photon number $\bar{N}$ for the given probe resources, i.e., the MNOONS (black lines), the MECS (blue lines), the MESVS (red lines), and the MESCS (green lines), as pictured in Fig. 5(b). It is clearly seen that, although the QCRB for the MNOONS performs worse than that for the other probe states with and without photon losses, its gap between the ideal and photon-loss cases is the smallest. This means that applying the MNOONS to multiple angular displacement estimation systems is more robust against photon losses than the other probe resources under the same conditions. More interestingly, for these probe resources, both the QCRB and the gap between the cases with and without photon losses can be further reduced as the mean photon number $\bar{N}$ increases, implying that increasing the mean photon number $\bar{N}$ of the probe states is a highly effective way to enhance the multiple angular displacement estimation performance.
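The rapid degradation in Fig. 5(a) is visible directly in Eq. (17): the loss term $(1-\eta)/\eta$ blows up as $\eta \to 0$. A one-line scan (ours, illustrative numbers):

```python
import numpy as np

def qcrb_loss(n_m, g2, d, l, eta):
    """Closed form of Eq. (17)."""
    return (d - 1) / (16 * l**2 * n_m) * (
        (1 - eta) / eta + 1 / (1 + n_m * g2))

etas = np.linspace(0.1, 0.99, 50)
vals = [qcrb_loss(0.2, 20.0, 15, 2, eta) for eta in etas]
assert all(a > b for a, b in zip(vals, vals[1:]))   # bound worsens as eta drops
assert vals[0] > 10 * vals[-1]                      # and does so quickly
```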
On the other hand, we also examine the influences of both d and l on the QCRB under photon losses for given parameters $\eta$ = 0.7 and $\bar{N}$ = 5, as pictured in Fig. 6. Analogous to the ideal case, under photon losses the multiple angular displacement estimation precision becomes worse (more precise) as d (l) increases, and the MESVS still maintains the highest estimation precision, even beyond the ideal case of the MNOONS. In addition, we also notice that, for the same probe state, e.g., the MESVS, the gap between the ideal and photon-loss scenarios increases (decreases) as d (l) increases, implying that decreasing d (or increasing l) can not only improve the multiple angular displacement estimation precision, but also enhance the robustness against photon losses. However, for different probe states, such as the MESVS and the MECS shown in Fig. 6(a), these gaps cannot be directly visualized and compared. For this reason, to intuitively quantify and visualize the gaps for the four probe states, we define the robustness against photon losses as
$$R = |\delta\theta|^2_{\mathrm{QCRBL}} - |\delta\theta|^2_{\mathrm{QCRB}}. \qquad (18)$$
From Eq. (18), the smaller the value of R, the smaller the gap between $|\delta\theta|^2_{\mathrm{QCRBL}}$ and $|\delta\theta|^2_{\mathrm{QCRB}}$, and hence the stronger the robustness against photon losses. To see this point, Fig. 7 shows R as a function of d and l for the four probe resources, for given parameters $\eta$ = 0.7 and $\bar{N}$ = 5. Visually, for the given probe resources, R decreases with l and grows with d; that is, the robustness improves as l increases and degrades as d increases. In particular, it is more interesting that the MNOONS presents the best robustness, followed by the MECS, the MESCS, and the MESVS, which is completely opposite to the ordering of their multiple angular displacement estimation precisions. That is to say, the QCRB for the MNOONS shows the worst performance with and without photon losses compared with the other probe resources, but using the MNOONS in multiple angular displacement estimation systems offers the best robustness.

Furthermore, it is worth mentioning that, compared with the other probe resources, the robustness of the MESVS against photon losses is relatively poor, but it can gradually approach the robustness of the MNOONS as l increases, which also means that the OAM quantum number l can be profitably used to enhance the robustness of multiple angular displacement estimation systems.
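Using the closed forms (7) and (17), the gap R of Eq. (18) can be evaluated directly (our sketch with illustrative moments); since both bounds scale as $1/l^2$, R shrinks fourfold each time l doubles:

```python
import numpy as np

def qcrb(n_m, g2, d, l):                   # Eq. (7)
    return d / (16 * l**2 * (n_m**2 * g2 + n_m)) * (
        1 + 1 / (g2 + 1 / n_m - d))

def qcrb_loss(n_m, g2, d, l, eta):         # Eq. (17)
    return (d - 1) / (16 * l**2 * n_m) * (
        (1 - eta) / eta + 1 / (1 + n_m * g2))

n_m, g2, d, eta = 0.2, 20.0, 15, 0.7
R = [qcrb_loss(n_m, g2, d, l, eta) - qcrb(n_m, g2, d, l) for l in (1, 2, 4)]
assert R[0] > R[1] > R[2] > 0              # robustness improves with l
assert np.isclose(R[0] / R[1], 4.0)        # inherited 1/l^2 scaling
```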
IV. CONCLUSIONS
In summary, we have revealed an important factor, namely the intramode correlation of the probe state, which affects the multiple angular displacement estimation precision with and without photon losses. This finding offers a reasonable explanation for the multiple angular displacement estimation performance with (d + 1)-mode NOON-like probe states. The results show that using the MESVS as the probe state is more beneficial for obtaining the highest estimation precision than the other multimode probe states, which results from the fact that the intramode correlation of the MESVS is the strongest. We have also considered the effects of photon losses on the multiple angular displacement estimation precision by means of the variational method. The results suggest that the accuracy of the multiple angular displacement estimation is greatly affected by photon losses, but the QCRB for the MESVS still shows the best estimation performance compared with the other probe states. More interestingly, different from multiphase estimation systems, we can also regulate the OAM quantum number l to effectively improve the robustness and precision of multiple angular displacement estimation.
FIG. 1: Schematic diagram of multiple angular displacement estimation with d angular displacements, where a given probe state $|\Psi\rangle$ can be read out after passing through the d + 1 spiral phase plates (SPPs) and Dove prisms (DPs).

FIG. 2: (Color online) The QCRB for the multiple angular displacement estimation as a function of the total mean photon number $\bar{N}$ with several different probe states, i.e., the MNOONS (black line), the MECS (blue line), the MESVS (red line), and the MESCS (green line), at fixed parameters of l = 2 and d = 15.

FIG. 3: (Color online) The QCRB for the multiple angular displacement estimation as a function of (a) the independent angular-displacement number d with l = 2 and $\bar{N}$ = 5, and of (b) the OAM quantum number l with d = 15 and $\bar{N}$ = 5, when given several different probe states, i.e., the MNOONS (black line), the MECS (blue line), the MESVS (red line), and the MESCS (green line).

FIG. 4: (Color online) Schematic diagram of multiple angular displacement estimation with d angular displacements under the photon losses occurring at both ends of the d + 1 DPs. Here we use a fictitious beam splitter (BS) with a transmissivity $\eta_m$ to simulate the photon-loss process.

FIG. 5: (Color online) The QCRB for the multiple angular displacement estimation as a function of (a) the photon-loss strength $\eta$ with l = 2, d = 15, and $\bar{N}$ = 5, and of (b) the mean photon number $\bar{N}$ with l = 2, d = 15 and $\eta$ = 0.7, when inputting the MNOONS (black lines), the MECS (blue lines), the MESVS (red lines), and the MESCS (green lines). The dashed and solid lines correspond to the photon-loss and ideal cases, respectively.

FIG. 6: (Color online) The QCRB for the multiple angular displacement estimation as a function of (a) the number of independent angular displacements d with $\eta$ = 0.7, l = 2 and $\bar{N}$ = 5, and of (b) the OAM quantum number l with $\eta$ = 0.7, d = 15 and $\bar{N}$ = 5, when inputting the MNOONS (black lines), the MECS (blue lines), the MESVS (red lines) and the MESCS (green lines). The dashed and solid lines correspond to the photon-loss and ideal cases, respectively.

FIG. 7: (Color online) The R for the multiple angular displacement estimation as a function of (a) the number of independent angular displacements d with $\eta$ = 0.7, l = 2 and $\bar{N}$ = 5, and of (b) the OAM quantum number l with $\eta$ = 0.7, d = 10 and $\bar{N}$ = 5, when inputting the MNOONS (black solid line), the MECS (blue dot-dashed line), the MESVS (red dashed line) and the MESCS (green dot line).

Acknowledgments

We sincerely thank Prof. Yu
[1] J. S. Sidhu, Y. K. Ouyang, E. T. Campbell, and P. Kok, Tight Bounds on the Simultaneous Estimation of Incompatible Parameters, Phys. Rev. X 11, 011028 (2021).
[2] A. Z. Goldberg, L. L. Sánchez-Soto, and H. Ferretti, Intrinsic Sensitivity Limits for Multiparameter Quantum Metrology, Phys. Rev. Lett. 127, 110501 (2021).
[3] P. Horodecki, Ł. Rudnicki, and K. Życzkowski, Five Open Problems in Quantum Information Theory, PRX Quantum 3, 010101 (2022).
[4] S. K. Chang, W. Ye, X. Rao, H. Zhang, L. Q. Huang, M. M. Luo, Y. T. Chen, S. Y. Gao, and L. Y. Hu, Evaluating the quantum Ziv-Zakai bound for phase estimation in noisy environments, Opt. Express 30, 24207 (2022).
[5] P. M. Anisimov, G. M. Raterman, A. Chiruvelli, W. N. Plick, S. D. Huver, H. Lee, and J. P. Dowling, Quantum Metrology with Two-Mode Squeezed Vacuum: Parity Detection Beats the Heisenberg Limit, Phys. Rev. Lett. 104, 103602 (2010).
[6] V. Giovannetti, S. Lloyd, and L. Maccone, Quantum Metrology, Phys. Rev. Lett. 96, 010401 (2006).
[7] X. M. Lu and X. G. Wang, Incorporating Heisenberg's Uncertainty Principle into Quantum Multiparameter Estimation, Phys. Rev. Lett. 126, 120503 (2021).
[8] S. L. Braunstein and C. M. Caves, Statistical Distance and the Geometry of Quantum States, Phys. Rev. Lett. 72, 3439 (1994).
[9] W. Górecki and R. Demkowicz-Dobrzański, Multiple-Phase Quantum Interferometry: Real and Apparent Gains of Measuring All the Phases Simultaneously, Phys. Rev. Lett. 128, 040504 (2022).
[10] B. Yurke, S. L. McCall, and J. R. Klauder, SU(2) and SU(1,1) interferometers, Phys. Rev. A 33, 4033 (1986).
[11] J. Joo, W. J. Munro, and T. P. Spiller, Quantum Metrology with Entangled Coherent States, Phys. Rev. Lett. 107, 083601 (2011).
[12] H. Zhang, W. Ye, C. P. Wei, Y. Xia, S. K. Chang, Z. Y. Liao, and L. Y. Hu, Improved phase sensitivity in a quantum optical interferometer based on multiphoton catalytic two-mode squeezed vacuum states, Phys. Rev. A 103, 013705 (2021).
[13] L. L. Guo, Y. F. Yu, and Z. M. Zhang, Improving the phase sensitivity of an SU(1,1) interferometer with photon-added squeezed vacuum light, Opt. Express 26, 29099-29109 (2018).
[14] G. S. Agarwal and L. Davidovich, Quantifying quantum amplified metrology via Fisher information, Phys. Rev. Res. 4, L012014 (2022).
[15] W. Du, J. Kong, G. Z. Bao, P. Y. Yang, J. Jia, S. Ming, C. H. Yuan, J. F. Chen, Z. Y. Ou, M. W. Mitchell, and W. P. Zhang, SU(2)-in-SU(1,1) Nested Interferometer for High Sensitivity, Loss-Tolerant Quantum Metrology, Phys. Rev. Lett. 128, 033601 (2022).
[16] S. K. Chang, W. Ye, H. Zhang, L. Y. Hu, J. H. Huang, and S. Q. Liu, Improvement of phase sensitivity in an SU(1,1) interferometer via a phase shift induced by a Kerr medium, Phys. Rev. A 105, 033704 (2022).
[17] B. M. Escher, R. L. de Matos Filho, and L. Davidovich, General framework for estimating the ultimate precision limit in noisy quantum-enhanced metrology, Nat. Phys. 7, 406-411 (2011).
[18] A. Monras and M. G. A. Paris, Optimal Quantum Estimation of Loss in Bosonic Channels, Phys. Rev. Lett. 98, 160401 (2007).
[19] H. Zhang, W. Ye, C. P. Wei, C. J. Liu, Z. Y. Liao, and L. Y. Hu, Improving phase estimation using number-conserving operations, Phys. Rev. A 103, 052602 (2021).
[20] B. M. Escher, L. Davidovich, N. Zagury, and R. L. de Matos Filho, Quantum Metrological Limits via a Variational Approach, Phys. Rev. Lett. 109, 190404 (2012).
[21] M. G. Genoni, S. Olivares, and M. G. A. Paris, Optical Phase Estimation in the Presence of Phase Diffusion, Phys. Rev. Lett. 106, 153603 (2011).
[22] C. N. Gagatsos, B. A. Bash, S. Guha, and A. Datta, Bounding the quantum limits of precision for phase estimation with loss and thermal noise, Phys. Rev. A 96, 062306 (2017).
Maximal quantum Fisher information for phase estimation without initial parity. X Yu, X Zhao, L Y Shen, Y Y Shao, J Liu, X G Wang, Opt. Express. 26X. Yu, X. Zhao, L. Y. Shen, Y. Y. Shao, J. Liu, and X. G. Wang, Maximal quantum Fisher information for phase es- timation without initial parity, Opt. Express 26, 16292- 16302 (2018).
Quantum sensing. C L Degen, F Reinhard, P Cappellaro, Rev. Mod. Phys. 8935002C. L. Degen, F. Reinhard, and P. Cappellaro, Quantum sensing, Rev. Mod. Phys. 89, 035002 (2017).
Quantum metrology with nonclassical states of atomic ensembles. L Pezzè, A Smerzi, M K Oberthaler, R Schmied, P Treutlein, Rev. Mod. Phys. 9035005L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Quantum metrology with nonclassical states of atomic ensembles, Rev. Mod. Phys. 90, 035005 (2018).
Quantumenhanced measurements without entanglement. D Braun, G Adesso, F Benatti, R Floreanini, U Marzolino, M W Mitchell, S Pirandola, Rev. Mod. Phys. 9035006D. Braun, G. Adesso, F. Benatti, R. Floreanini, U. Mar- zolino, M. W. Mitchell, and S. Pirandola, Quantum- enhanced measurements without entanglement, Rev. Mod. Phys. 90, 035006 (2018).
Improvement of self-referenced continuous-variable quantum key distribution with quantum photon catalysis. W Ye, H Zhong, D Liao, L Y Huang, Y Hu, Guo, Opt. Express. 2717186W. Ye, H. Zhong, Q Liao, D. Huang, L. Y. Hu, and Y. Guo, Improvement of self-referenced continuous-variable quantum key distribution with quantum photon catalysis, Opt. Express 27, 17186 (2019).
Discrete modulation continuous-variable quantum key distribution based on quantum catalysis. W Ye, Y Guo, Y Xia, H Zhong, J Z Zhang, L Y Ding, Hu, Acta Phys. Sin. 6960301W. Ye, Y. Guo, Y. Xia, H Zhong, H. Zhang, J. Z. Ding, L. Y. Hu, Discrete modulation continuous-variable quantum key distribution based on quantum catalysis, Acta Phys. Sin. 69, 060301 (2020).
Quantum Metrological Power of Continuous-Variable Quantum Networks. H Kwon, Y Lim, L Jiang, H Jeong, C H Oh, Phys. Rev. Lett. 128180503H. Kwon, Y. Lim, L. Jiang, H. Jeong, and C. H. Oh, Quan- tum Metrological Power of Continuous-Variable Quantum Networks, Phys. Rev. Lett. 128, 180503 (2022).
Entangled Sensor-Networks for Dark-Matter Searches. A J Brady, C Gao, R Harnik, Z Liu, Z S Zhang, Q T Zhuang, PRX Quantum. 330333A. J. Brady, C. Gao, R. Harnik, Z. Liu, Z. S. Zhang, and Q. T. Zhuang, Entangled Sensor-Networks for Dark-Matter Searches, PRX Quantum 3, 030333 (2022).
Multiparameter Estimation in Networked Quantum Sensors. T J Proctor, P A Knott, J A Dunningham, Phys. Rev. Lett. 12080501T. J. Proctor, P. A. Knott, and J. A. Dunningham, Mul- tiparameter Estimation in Networked Quantum Sensors, Phys. Rev. Lett. 120, 080501 (2018).
Evaluating the Holevo Cramér-Rao Bound for Multiparameter Quantum Metrology. F Albarelli, J F Friel, A Datta, Phys. Rev. Lett. 123200503F. Albarelli, J. F. Friel, and A. Datta, Evaluating the Holevo Cramér-Rao Bound for Multiparameter Quantum Metrol- ogy, Phys. Rev. Lett. 123, 200503 (2019).
Quantum Theory of Superresolution for Two Incoherent Optical Point Sources. M Tsang, R Nair, X M Lu, Phys. Rev. X. 631033M. Tsang, R. Nair, and X. M. Lu, Quantum Theory of Su- perresolution for Two Incoherent Optical Point Sources, Phys. Rev. X 6, 031033 (2016).
Interferometric superlocalization of two incoherent optical point sources. R Nair, M Tsang, Opt. Express. 24R. Nair and M. Tsang, Interferometric superlocalization of two incoherent optical point sources, Opt. Express 24, 3684-3701 (2016).
Walmsley, Quantum Enhanced Multiple Phase Estimation. P C Humphreys, M Barbieri, A Datta, I A , Phys. Rev. Lett. 11170403P. C. Humphreys, M. Barbieri, A. Datta, and I. A. Walms- ley, Quantum Enhanced Multiple Phase Estimation, Phys. Rev. Lett. 111, 070403 (2013).
Quantum multiparameter estimation with generalized balanced multimode NOON like states. L Zhang, K W C Chan, Phys. Rev. A. 9532321L. Zhang, and K. W. C. Chan, Quantum multiparameter estimation with generalized balanced multimode NOON like states, Phys. Rev. A 95, 032321 (2017).
Quantum enhanced multiple-phase estimation with multi-mode N00N states. S Hong, J Rehman, Y S Kim, Y W Cho, S W Lee, H Jung, S Moon, S W Han, H T Lim, Nat. Commun. 125211S. Hong, J. Rehman, Y. S. Kim, Y. W. Cho, S. W. Lee, H. Jung, S. Moon, S. W. Han, and H. T. Lim, Quantum en- hanced multiple-phase estimation with multi-mode N00N states, Nat. Commun. 12, 5211 (2021).
Distributed phase estimation and networked quantum sensors with W-type quantum probes. Y Maleki, M S Zubairy, Phys. Rev. A. 10532428Y. Maleki, and M. S. Zubairy, Distributed phase estimation and networked quantum sensors with W-type quantum probes, Phys. Rev. A 105, 032428 (2022).
Quantum metrological bounds for vector parameters. Y R Zhang, H Fan, Phys. Rev. A. 9043818Y. R. Zhang and H. Fan, Quantum metrological bounds for vector parameters, Phys. Rev. A 90, 043818 (2014).
Quantum multiparameter metrology with generalized entangled coherent state. J Liu, X M Lu, Z Sun, X G Wang, J. Phys. A: Math. Theor. 49115302J. Liu, X. M. Lu, Z. Sun, and X. G. Wang, Quantum mul- tiparameter metrology with generalized entangled coher- ent state, J. Phys. A: Math. Theor. 49, 115302 (2016).
Gaussian systems for quantum-enhanced multiple phase estimation. C N Gagatsos, D Branford, A Datta, Phys. Rev. A. 9442342C. N. Gagatsos, D. Branford, and A. Datta, Gaussian sys- tems for quantum-enhanced multiple phase estimation, Phys. Rev. A 94, 042342 (2016).
Probe Incompatibility in Multiparameter Noisy Quantum Metrology. F Albarelli, R Demkowicz-Dobrzański, Phys. Rev. X. 1211039F. Albarelli, and R. Demkowicz-Dobrzański, Probe Incom- patibility in Multiparameter Noisy Quantum Metrology, Phys. Rev. X 12, 011039 (2022).
Reaching for the quantum limits in the simultaneous estimation of phase and phase diffusion. M Szczykulska, T Baumgratz, A Datta, Quantum Sci. Technol. 244004M. Szczykulska, T. Baumgratz, and A. Datta, Reaching for the quantum limits in the simultaneous estimation of phase and phase diffusion, Quantum Sci. Technol. 2, 044004 (2017).
Quantum Asymmetry and Noisy Multimode Interferometry. F Albarelli, M Mazelanik, M Lipka, A Streltsov, M Parniak, R Demkowicz-Dobrzański, Phys. Rev. Lett. 128240504F. Albarelli, M. Mazelanik, M. Lipka, A. Streltsov, M. Par- niak, and R. Demkowicz-Dobrzański, Quantum Asymme- try and Noisy Multimode Interferometry, Phys. Rev. Lett. 128, 240504 (2022).
Quantum-enhanced metrology for multiple phase estimation with noise. J D Yue, Y R Zhang, H Fan, Sci. Rep. 45933J. D. Yue, Y. R. Zhang, and H. Fan, Quantum-enhanced metrology for multiple phase estimation with noise, Sci. Rep. 4, 5933 (2014).
Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases. L Pezzè, M A Ciampini, N Spagnolo, P C Humphreys, A Datta, I A Walmsley, M Barbieri, F Sciarrino, A Smerzi, Phys. Rev. Lett. 119130504L. Pezzè, M. A. Ciampini, N. Spagnolo, P. C. Humphreys, A. Datta, I. A. Walmsley, M. Barbieri, F. Sciarrino, and A. Smerzi, Optimal Measurements for Simultaneous Quan- tum Estimation of Multiple Phases, Phys. Rev. Lett. 119, 130504 (2017).
Photonic Angular Superresolution Using Twisted N00N States. M Hiekkamäki, F Bouchard, R Fickler, Phys. Rev. Lett. 127263601M. Hiekkamäki, F. Bouchard, and R. Fickler, Photonic An- gular Superresolution Using Twisted N00N States, Phys. Rev. Lett. 127, 263601 (2021).
Photonic polarization gears for ultra-sensitive angular measurements. V Ambrosio, N Spagnolo, L D Re, S Slussarenko, Y Li, L C Kwek, L Marrucci, S P Walborn, L Aolita, F Sciarrino, Nat. Commun. 42432V. D'Ambrosio, N. Spagnolo, L. D. Re, S. Slussarenko, Y. Li, L. C. Kwek, L. Marrucci, S. P. Walborn, L. Aolita, and F. Sciarrino, Photonic polarization gears for ultra-sensitive angular measurements, Nat. Commun. 4, 2432 (2013).
Supersensitive measurement of angular displacements using entangled photons. A Kumar Jha, G S Agarwal, R W Boyd, Phys. Rev. A. 8353829A. Kumar Jha, G. S. Agarwal, and R. W. Boyd, Super- sensitive measurement of angular displacements using entangled photons, Phys. Rev. A 83, 053829 (2011).
Quantum entanglement of angular momentum states with quantum numbers up to 10,010. R Fickler, G Campbell, B Buchler, P Koy Lam, A Zeilinger, Proc. Natl. Acad. Sci. U.S.A. 11313642R. Fickler, G. Campbell, B. Buchler, P. Koy Lam, and A. Zeilinger, Quantum entanglement of angular momentum states with quantum numbers up to 10,010, Proc. Natl. Acad. Sci. U.S.A. 113, 13642 (2016).
Divergence of an orbital-angular momentum-carrying beam upon propagation. M J Padgett, F M Miatto, M P J Lavery, A Zeilinger, R W Boyd, New J. Phys. 1723011M. J. Padgett, F. M. Miatto, M. P. J. Lavery, A. Zeilinger, and R. W. Boyd, Divergence of an orbital-angular momentum-carrying beam upon propagation, New J. Phys. 17, 023011 (2015).
Amplification of Angular Rotations Using Weak Measurements. O S Magaña-Loaiza, M Mirhosseini, B Rodenburg, R W Boyd, Phys. Rev. Lett. 112200401O. S. Magaña-Loaiza, M. Mirhosseini, B. Rodenburg, and R. W. Boyd, Amplification of Angular Rotations Us- ing Weak Measurements, Phys. Rev. Lett. 112, 200401 (2014).
Super-resolved angular displacement estimation based upon a Sagnac interferometer and parity measurement. J D Zhang, Z J Zhang, L Z Cen, J Y Hu, Y Zhao, Opt. Express. 284320J. D. Zhang, Z. J. Zhang, L. Z. Cen, J. Y. Hu, and Y. Zhao, Super-resolved angular displacement estimation based upon a Sagnac interferometer and parity measurement, Opt. Express 28, 4320 (2020).
A new approach to the Cramér-Rao-type bound of the pure-state model. K Matsumoto, J. Phys. A: Math. Gen. 353111K. Matsumoto, A new approach to the Cramér-Rao-type bound of the pure-state model, J. Phys. A: Math. Gen. 35, 3111 (2002).
Physical resources for optical phase estimation. J Sahota, N Quesada, D F V James, Phys. Rev. A. 9433817J. Sahota, N. Quesada, and D. F. V. James, Physical re- sources for optical phase estimation, Phys. Rev. A 94, 033817 (2016).
| [] |
Broadband X-ray spectral analysis of the ULX NGC 1313 X-1 using JeTCAF: Origin of the ULX bubble

B P
Indian Institute of Astrophysics, II Block, Koramangala, Bangalore 560034, India
Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, 00-716 Warsaw, Poland

arXiv:2304.12731v1 [astro-ph.HE], 25 Apr 2023. DOI: 10.1088/1538-3873/accf35

Keywords: accretion, accretion discs -- stars: black holes -- ISM: bubbles -- ISM: jets and outflows -- X-rays: individual: NGC 1313 X-1

ABSTRACT

NGC 1313 X-1 is a mysterious ultra-luminous X-ray (ULX) source whose X-ray powering mechanism and the bubble-like structure surrounding it are topics of intense study. Here, we perform an X-ray spectroscopic study of the source using joint XMM-Newton and NuSTAR observations taken during 2012-2017. The combined spectra cover the 0.3-20 keV energy band. We use the accretion-ejection-based JeTCAF model for the spectral analysis. The model-fitted disc mass accretion rate varies from 4.6 to 9.6 Ṁ_Edd and the halo mass accretion rate from 4.0 to 6.1 Ṁ_Edd, with a dynamic Comptonizing corona of average size ∼ 15. The data fitting is carried out for different black hole (BH) mass values. The goodness of fit and the uncertainties in the model parameters improve when using higher BH masses, with the most probable mass of the compact object being 133 ± 33 M_⊙. We have estimated the mass outflow rate, its velocity and power, and the age of the inflated bubble surrounding the source. Our estimated bubble morphology is in accord with the observed optical bubble and with winds found through high-resolution X-ray spectroscopy, suggesting that the bubble is expanded by outflows originating from the central source. Finally, we conclude that super-Eddington accretion onto a nearly intermediate-mass BH may power a ULX when the accretion efficiency is low, though the efficiency increases when the jet/outflow is taken into account, in agreement with numerical simulations in the literature.
INTRODUCTION
Ultra-luminous X-ray sources (ULXs) are point-like sources with isotropic luminosities exceeding 10^39 erg s^-1. To date, a few hundred ULXs are known (Swartz et al. 2004; Walton et al. 2011). A large number of ULXs are located in star-forming galaxies and are associated with young stellar populations (Swartz et al. 2009; Poutanen et al. 2013). However, their powering mechanism is not yet well understood. So far, different scenarios have been proposed to explain various observational features, including the luminosity of ULXs.
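The ULX threshold can be put in context against the Eddington limit, L_Edd ≈ 1.26 × 10^38 (M/M_⊙) erg s^-1 (the textbook value for pure hydrogen, not a number from this paper). The minimal sketch below shows why sources above 10^39 erg s^-1 challenge ordinary stellar-mass accretors:

```python
# Eddington luminosity L_Edd = 4*pi*G*M*m_p*c / sigma_T
# ~ 1.26e38 (M / M_sun) erg/s for pure hydrogen.

def eddington_luminosity(mass_msun):
    """Eddington luminosity in erg/s for a given mass in solar masses."""
    return 1.26e38 * mass_msun

L_ULX = 1e39  # erg/s, conventional ULX threshold

# Minimum BH mass for which the ULX threshold is still sub-Eddington:
m_min = L_ULX / 1.26e38
print(f"L_Edd(10 Msun) = {eddington_luminosity(10):.2e} erg/s")
print(f"A 1e39 erg/s source is super-Eddington below ~{m_min:.1f} Msun")
```

So an isotropically emitting source at the ULX threshold is already super-Eddington for any accretor below roughly 8 M_⊙, which is what motivates the scenarios discussed next.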
The first of these involves super-Eddington accretion (with or without beaming) onto stellar-mass black holes (StMBHs; Gilfanov et al. 2004; Poutanen et al. 2007; King 2009). A key feature predicted by theory and simulations for this type of accretion (Poutanen et al. 2007; Takeuchi et al. 2013; Kobayashi et al. 2018, and references therein), and also found in observations (Middleton et al. 2015, and references therein), is the presence of a strong optically thick wind, which covers the inner region of the disc and collimates the radiation. While these models give clues to understanding the super-Eddington accretion regime to some extent, many questions about this regime and its connection with ULXs remain open. For instance: (1) to what degree is the emission beamed (e.g. King et al. 2001; Jiang et al. 2014; Mushtukov et al. 2021, and references therein)? (2) what fraction of the energy is carried away by outflows? (3) what are the mechanical and radiative feedback induced by ULXs? and (4) what is the exact accretion flow geometry allowing these objects to reach such high luminosities? Conversely, if the StMBH has a highly magnetized accretion disc, then even sub-Eddington accretion can power some ULXs (Mondal & Mukhopadhyay 2019).

Corresponding author: Santanu Mondal
The second scenario is sub-Eddington accretion onto so-called intermediate-mass black holes (IMBHs; Colbert & Mushotzky 1999; Miller et al. 2003, and references therein). This accretion regime is typical of Galactic black hole binaries (GBHBs), so such ULXs could show similar accretion properties (Kaaret et al. 2001; Miller et al. 2003).
However, these IMBHs may also accrete in the super-Eddington regime and power some ULXs (Mondal et al. 2022). For instance, by studying Chandra observations of the Antennae galaxies, King et al. (2001) proposed that certain conditions on the stellar companion and the binary orbit would allow the possibility that individual ULXs harbor extremely massive black holes (MBHs); such massive BHs can also grow through rapid mass accretion onto ∼ 100 M_⊙ BHs left after the death of the earliest known Pop-III stars (Greene et al. 2020, for a review).
However, while the above two scenarios are generally accepted, the discovery of X-ray pulsations in one ULX (Bachetti et al. 2014) showed that neutron stars (NSs) can also attain super-Eddington luminosities. Following that discovery, a few more pulsating ULXs (PULXs; Fürst et al. 2016; Israel et al. 2017; Carpano et al. 2018; Sathyaprakash et al. 2019; Rodríguez Castillo et al. 2020) have been identified, along with the possible confirmation of another NS ULX through the detection of a cyclotron resonance produced by a strong magnetic field (Brightman et al. 2018). These discoveries and findings suggest that NS ULXs may dominate the ULX population. Yet, there is still some debate on the underlying powering mechanism for such extreme luminosities.
NGC 1313 X-1 (hereafter ULX-1) is located in the starburst galaxy NGC 1313 at a distance of 4.13 Mpc (Méndez et al. 2002). The galaxy also hosts other prominent luminous sources; however, ULX-1 can be well isolated from them, suffers little background contamination, and is in proximity to the Earth (z ∼ 0.00157). This provides a unique opportunity to obtain observationally rich information. ULX-1 has been extensively studied in the spectro-temporal domain in the literature. Feng & Kaaret (2006) studied the spectral evolution of both ULX sources (X-1 and X-2) using simple power-law (PL) continuum and multi-color disc (MCD) models within the energy range of XMM-Newton, before ULX-2 was identified as a likely pulsar (Sathyaprakash et al. 2019). Recently, Walton et al. (2020) analysed combined multi-instrument XMM-Newton+Chandra+NuSTAR spectra of ULX-1 to study its broadband spectral variability using a three-component disc model. A variability analysis was conducted between different energy bands to understand the causal connection between different emission regions in the accretion disc. Gladstone et al. (2009) reported a spectral cutoff at ∼ 5 keV. For the first time, Bachetti et al. (2013) studied ULX-1 using joint XMM-Newton and NuSTAR data and suggested a spectral break above 10 keV, with the BH accreting at a near-Eddington rate. Along with the continuum spectral variability, emission and absorption lines have also been observed in ULX-1 (Walton et al. 2016; Pinto et al. 2020). Kara et al. (2020) attempted to explain the timing properties as originating from beamed relativistic outflows. Very recently, a shock-ionized bubble has been identified around ULX-1 using MUSE spectroscopic studies, which suggests the presence of outflows from ULX-1. A similar bubble structure was reported earlier by Pakull & Grisé (2008) in other ULX systems.
Several studies in the literature shed light on the mass of the central compact object in ULX-1. Those findings reported two possibilities for the BH mass: one from the stellar-mass regime up to the higher end of the StMBH range (Miller et al. 2003; Soria 2007; Bachetti et al. 2013), and the other in the IMBH range (Miller et al. 2003; Fabian et al. 2004; Wang et al. 2004; Dewangan et al. 2010; Jang et al. 2018; Kobayashi et al. 2019). Quasi-periodic oscillation studies also suggested a mass in the IMBH range (Pasham et al. 2015; Huang 2019). Overall, the mass of ULX-1 has been reported over a very large range, from as low as 11 M_⊙ to as high as 7000 M_⊙. Therefore, the type of the central compact object is not known to date, and resolving these differing conclusions requires an extensive study of the central object.
Here, we highlight some of the observed signatures and evidence that lead us to consider ULX-1 a likely BH accretor: (1) the color-color diagram in Pintore et al. (2017) shows that ULX-1 is situated at the centre of the plot while extending towards softer ratios; moreover, they suggested that ULX-1 might not host a NS accretor, which is supported in the first place by the non-detection of pulsations to date. (2) Walton et al. (2020) carried out an extensive pulsation search with both XMM-Newton and NuSTAR data of ULX-1, but did not detect any signal above the 3σ confidence level; a similar conclusion was drawn by Doroshenko et al. (2015). The non-detection of pulsations could be due to limited statistics, a low pulse period, or variable pulsations, which could be remedied with additional observations; it is also possible that the signal is washed out by scattering in the wind. (3) According to Gúrpide et al. (2021), a BH accretor can swallow any excess radiation in its vicinity, in the process stabilising the outflowing radiation; thus, the absence of large variability in the hard energy range disfavors the presence of a NS accretor. (4) A dipole field strength of ≤ 10^10 G calculated for ULX-1 from propeller state transitions (Middleton et al. 2022) is quite low compared to some PULXs. Therefore, we carry out the rest of the broadband X-ray analysis of ULX-1 considering it a BH candidate.
To capture this rich accretion behavior, several authors have employed combined disc-corona models in their studies. These models successfully fit the spectra and extract corona properties. However, most of them are based solely on radiative-transfer mechanisms and disregard the physical origin of the corona and the changes in its properties (optical depth, size, temperature, etc.). This motivates us to use a model that self-consistently treats the disc, a dynamic corona, and the mass outflow in a single picture.
According to the Two Component Advective Flow (TCAF) solution (Chakrabarti & Titarchuk 1995), the accretion disc has two components: a standard, high-viscosity, optically thick Keplerian disc, and a hot, low-viscosity, optically thin sub-Keplerian flow. The second component moves faster and forms the inner dynamic corona after passing through a hydrodynamic shock (Fukue 1987; Chakrabarti 1989; Mondal & Chakrabarti 2013, and references therein). In the post-shock region (the dynamic corona), also known as the CENBOL (CENtrifugal BOundary Layer), matter piles up, and soft photons from the Keplerian disc are upscattered to hard X-rays by inverse Comptonisation. This model does not include the effects of the jet/mass outflow, which is believed to originate from the base of the dynamic corona (Chakrabarti 1999). Very recently, Mondal & Chakrabarti (2021) implemented the jet/mass outflow in the TCAF (JeTCAF) solution to examine its effect on the emission spectra. A cartoon diagram of the model is shown in Figure 1.
The JeTCAF model has six parameters, including the BH mass if the mass of the central compact object is not known. These parameters are: (1) the mass of the BH (M_BH), (2) the Keplerian mass accretion rate (ṁ_d), (3) the sub-Keplerian mass accretion rate (ṁ_h), (4) the size of the dynamic corona, i.e. the location of the shock (X_s), (5) the shock compression ratio (R = post-shock density / pre-shock density), and (6) the jet/mass outflow collimation factor (f_col = solid angle subtended by the outflow / solid angle subtended by the inflow). One can therefore estimate the outflow opening angle from this parameter and, based on the opening angle, infer whether the outflow is collimated or not.
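To illustrate how a collimation factor translates into an opening angle, the sketch below makes two simplifying assumptions that are ours, not the paper's: the outflow fills two opposite cones, and the inflow subtends the full 4π sphere. Each cone of half-opening angle θ subtends Ω = 2π(1 − cos θ), so f_col = 4π(1 − cos θ)/4π gives θ = arccos(1 − f_col):

```python
import math

def half_opening_angle_deg(f_col):
    """Half-opening angle (degrees) of two opposite outflow cones whose
    combined solid angle is f_col * 4*pi (illustrative geometry only)."""
    # Two cones: Omega_out = 2 * 2*pi*(1 - cos(theta)) = f_col * 4*pi
    return math.degrees(math.acos(1.0 - f_col))

# For a fitted range f_col ~ 0.6-0.8 the cones are wide, i.e. under this
# assumption the outflow is not strongly collimated:
for f in (0.6, 0.7, 0.8):
    print(f"f_col = {f}: theta ~ {half_opening_angle_deg(f):.1f} deg")
```

With a smaller inflow solid angle the inferred cones would be narrower; the sketch only shows the direction of the scaling, not the model's actual geometry.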
In this paper, we aim to analyze the joint XMM-Newton and NuSTAR data, fitting them with the JeTCAF model to understand the accretion-ejection properties of ULX-1. The recent discovery of an optical bubble further motivated us to estimate the jet/mass outflow properties of this system using the JeTCAF model. In addition, as the mass of the central BH is still under debate, our study also sheds some light on its possible value. In the next section, we discuss the observations and data analysis procedures. In section 3, we discuss the model-fitted results along with estimates of different accretion-ejection flow quantities; we also discuss some limitations of the model, X-ray data analysis of ULXs, and the model dependence of the results. Finally, we draw our brief conclusions.
OBSERVATION AND DATA REDUCTION
We used all available joint XMM-Newton and NuSTAR (Harrison et al. 2013) observations of ULX-1 taken during 2012 to 2017. The log of observations is given in Table 1. The XMM-Newton data were reprocessed using the Science Analysis System (SAS) version 19.1.0, following the standard procedures given in the SAS data analysis threads. As a first step, the epproc routine was executed to generate the calibrated and concatenated event lists. The data were filtered for background particle flaring by selecting a source-free region in the neighbourhood of ULX-1. We then viewed the filtered image using the DS9 software to select the source and background regions: an extraction region of radius 30" centred on ULX-1, together with a nearby background region free of any sources. The source and background spectra were produced by restricting patterns to singles and doubles, followed by the generation of the "rmf" and "arf" files using the standard SAS tasks. Finally, we rebinned the spectra to have a minimum of 35 counts in each bin. For each epoch, we used the XMM-Newton data in the 0.3-10 keV energy range, since the data are noisy above 10 keV. The NuSTAR data were extracted using the standard NUSTARDAS software, running the pipeline tasks to produce cleaned event lists and to generate the spectra. The data were grouped with a minimum of 35 counts in each bin. For each epoch, we used the NuSTAR data in the 3-20 keV energy range, as the data are noisy above 20 keV.
https://www.cosmos.esa.int/web/xmm-newton/sas-threads
https://heasarc.gsfc.nasa.gov/docs/nustar/analysis/

We used XSPEC (Arnaud 1996) version 12.11.0 for the spectral analysis. Each epoch of the joint observation was fitted using the JeTCAF model in the total energy range 0.3-20 keV, together with a single neutral hydrogen absorption column (the TBABS model). We used the abundances of Wilms et al. (2000) and the cross-sections of Verner et al. (1996) in our analysis, and χ² statistics to assess the goodness of fit.
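The χ² statistic compares the counts in each spectral bin with the model prediction, weighted by the per-bin uncertainty; grouping to ≥ 35 counts per bin makes the Gaussian-error approximation underlying χ² reasonable. A toy sketch with made-up numbers (not the actual ULX-1 spectra):

```python
import numpy as np

def reduced_chi2(data, model, sigma, n_free_params):
    """Chi-square per degree of freedom for binned spectral data."""
    chi2 = np.sum(((data - model) / sigma) ** 2)
    dof = data.size - n_free_params
    return chi2 / dof

# Toy example: 100 bins with ~50 counts each, so sigma ~ sqrt(N) is a
# fair approximation and chi-square statistics apply.
rng = np.random.default_rng(0)
model = np.full(100, 50.0)                # folded model counts per bin
data = rng.poisson(model).astype(float)   # simulated observed counts
sigma = np.sqrt(model)

print(f"chi2/dof = {reduced_chi2(data, model, sigma, 6):.2f}")  # ~1 for a good fit
```

A reduced χ² near 1 indicates an acceptable fit; values well above 1 (as for epoch E below) flag residual structure the model does not capture.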
RESULTS AND DISCUSSIONS
Spectral Fitting
All epochs of data in the 0.3-20 keV range are fitted using the JeTCAF model, first with the mass of the BH as a free parameter (hereafter model M1) and then with its value fixed to 10, 30, and 100 M_⊙, which we denote models M2, M3, and M4 respectively. All other model parameters are left free to vary during fitting, including the model normalization ("Norm"). We fixed the cross-calibration constant for the EPIC detectors of the XMM-Newton satellite to 1 and left it free for NuSTAR. This accounts for residual cross-calibration between XMM-Newton and NuSTAR and for possible mismatches due to the observations not being strictly simultaneous. The cross-normalization constant between the NuSTAR and XMM-Newton spectra lies between 1.05 ± 0.04 and 1.20 ± 0.07 for all epochs using model M1; the other models (M2-M4) give a similar range of values. Figure 2 shows the M1 model fits to the data. The spectra in epochs A2 and A4 look alike, as do A3 and A5, while A1 appears intermediate between those two shapes. It is therefore possible that the source passed through the same spectral states during those epochs; we discuss this later. The best-fit M1 model parameters are given in Table 2. Figure 3 shows the variation of the M1 model parameters with MJD: from top to bottom, the rows show the mass of the black hole, the mass accretion rates, the shock compression ratio, the size of the dynamic corona, and the jet/mass outflow collimation factor. The black hole mass obtained from the fit varies between 100 and 166 M_⊙, with an average of 133 ± 33 M_⊙, marked in the top panel by the red solid line, with blue dashed lines giving the uncertainty of the mass estimate. The disc mass accretion rate varies in the super-Eddington regime between ∼ 4.6 and 9.6 Ṁ_Edd, and the halo accretion rate is also super-Eddington, ∼ 4.0 to 6.1 Ṁ_Edd.

https://heasarc.gsfc.nasa.gov/xanadu/xspec/
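To put the Eddington-scaled rates in physical units, one can use Ṁ_Edd = L_Edd/(η c²). The sketch below assumes a fiducial radiative efficiency η = 0.1 (our choice for illustration; the paper argues ULX efficiencies may be lower) and the average fitted mass of 133 M_⊙:

```python
# Convert an Eddington-scaled accretion rate to physical units,
# assuming a fiducial radiative efficiency eta = 0.1 (illustrative only).
C = 2.998e10          # speed of light, cm/s
M_SUN = 1.989e33      # solar mass, g
SEC_PER_YR = 3.156e7  # seconds per year

def mdot_edd_gs(mass_msun, eta=0.1):
    """Eddington accretion rate in g/s: Mdot_Edd = L_Edd / (eta * c^2)."""
    l_edd = 1.26e38 * mass_msun  # erg/s
    return l_edd / (eta * C**2)

m_bh = 133.0                     # average fitted mass, in M_sun
mdot_edd = mdot_edd_gs(m_bh)     # ~1.9e20 g/s
for mdot in (4.6, 9.6):          # fitted disc-rate range, Eddington units
    rate_msun_yr = mdot * mdot_edd * SEC_PER_YR / M_SUN
    print(f"mdot_d = {mdot} Mdot_Edd -> {rate_msun_yr:.1e} Msun/yr")
```

Under these assumptions the fitted disc rates correspond to a few times 10^-5 M_⊙ yr^-1; a lower assumed efficiency would raise Ṁ_Edd and the physical rates proportionally.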
The size of the dynamic corona (shock location) varies between 13 and 17, and the shock compression ratio changes significantly, in the range 3.2-5.2. The f_col value is moderately high, fluctuating between 0.6 and 0.8. We kept the hydrogen column density (N_H) parameter free during the fitting and obtained values of 0.17-0.28, consistent with other works in the literature. Overall, the parameters show two parallel profiles, during 2012-2014 and again in 2017: it is likely that the accretion flow behaviour and spectral properties returned in 2017, after 3 years, to those of the earlier period. This could be verified if we had continuous observations of the source. The reduced χ² (χ²_r) value obtained from the fit is ∼ 1 for all epochs except epoch E, where χ²_r ∼ 1.4.
To further check the goodness of the spectral fit and to verify the mass of the BH, we fit the data using models M2-M4. We notice that for model M2 the fit is relatively poor (χ²_red ∼ 1.2 − 1.9) and the uncertainties on the parameters are high. The fit improves for model M3, with χ²_red ∼ 1.1 − 1.4. A similar goodness of fit is obtained using model M4; moreover, the uncertainties on the model parameters improve as the M_BH parameter value increases. Furthermore, the parameters in models M1 and M4 agree within the error bars, showing a convergence in the spectral fitting parameters; the JeTCAF parameters therefore seem robust. All model parameter values, the goodness of fit and the parameter uncertainties are given in Table 3. Therefore, based on the mass-dependence study and the robustness of the parameters, it can be said that ULX1 harbours a black hole with a mass near the lower end of the intermediate-mass range. However, a long-term, daily-cadence spectro-timing study may give a robust estimation with smaller uncertainties.
In Figure 4, we show the comparison of the BH mass obtained from the model fit in this work (the magenta line with error bar) with estimations using different models in the literature. We note that, as the luminosity is a product of the accretion efficiency (η_acc), M_BH, and the mass accretion rate, the overall luminosity may scale up/down depending on the individual parameter values. Thereby, it is likely to have a degeneracy in the results, which might be the scenario in M_BH estimation using phenomenological scaling relations. On the contrary, the shape of an observed spectrum is distinctive; therefore direct fitting of the spectrum using the M_BH and accretion rate parameters can minimize the degeneracy to some extent, which is the case in the JeTCAF model. Here, we are simultaneously solving a series of equations and finally obtaining the spectrum. A noticeable change in accretion rate changes the spectral shape, which may then not fit the observed spectrum with good statistics. Also, comparing the parameter values in Tables 2 and 3 shows that they converge for higher M_BH values with lower uncertainties. This may indicate that the estimated model parameters are minimally degenerate.
Considering the model-fitted (from Table 2) M_BH and the total mass inflow rate (ṁ_in = ṁ_d + ṁ_h), the accretion luminosity can be estimated to be 3.2-5.4 ×10⁴¹ erg s⁻¹. However, the observed luminosity (L_obs) obtained from the fit is ∼ 10⁴⁰ erg s⁻¹. From these two luminosities, the accretion efficiency (η_acc) can be estimated to be 0.02. This value is low compared to the 0.1 often used in the literature, which is unlikely to be the same for different systems. However, numerical simulations of ULX sources showed that η_acc can be as low as 0.002 (Narayan et al. 2017). Therefore, a nearly IMBH accreting in the super-Eddington regime can power a ULX at L ≤ 10⁴⁰ erg s⁻¹ when the accretion efficiency is low.
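As a quick arithmetic check (numbers as quoted above; not part of the original analysis), the efficiency follows directly from the ratio of the two luminosities:

```python
L_obs = 1e40                # observed luminosity [erg/s]
L_acc = (3.2e41, 5.4e41)    # estimated accretion-luminosity range [erg/s]

# efficiency = observed / accretion luminosity, at both ends of the range
eta_acc = [L_obs / L for L in L_acc]
print([round(e, 3) for e in eta_acc])   # eta_acc ~ 0.02-0.03
```

The value of ∼0.02 quoted in the text corresponds to the upper end of the accretion-luminosity range.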
Outflow properties and the ULX bubble
In this section, we use the model-fitted parameters to estimate different physical quantities of the mass outflows. The mass outflow to inflow ratio is estimated using the following relation (Chakrabarti 1999),

$R_{\dot m} = f_{\rm col}\,\frac{R}{4}\,f^{3/2}\,\exp\!\left(\frac{3}{2}-f\right), \qquad (1)$
where f = R²/(R−1). Our estimated R_ṁ (in per cent) for epochs A1 to A5 are 12.4±2.2, 15.6±1.8, 20.6±4.2, 12.0±2.3, and 18.1±3.0 respectively, for the model-fitted R and f_col in Table 2. In epoch A3, the higher outflow ratio and the smaller dynamic corona size indicate that a significant amount of thermal energy has been taken away by the outflows, and the corona cooled down. It is possible that during this epoch the source was in the intermediate state, as the shock compression ratio is in agreement with the theoretical range suggested in the model (Chakrabarti 1999). In addition to the above estimation, and considering ṁ_in in the post-shock region or the corona, the jet/outflow rate is written as R_ṁ ṁ_in. Thereby, the jet/outflow power (P_j) can be estimated using Equation 2, where η is the jet/outflow efficiency and M ⊙ is the mass of the Sun. The values obtained for P_j across epochs A1 to A5 are (6.5±1.7, 5.0±1.0, 9.0±3.8, 6.5±1.8 and 7.8±1.8) × 10⁴⁰ erg s⁻¹ respectively. However, as we do not know η beforehand, different values of it give different P_j. Gúrpide et al. (2022) calculated the disc outflow power using nebula expansion rates and reported that the observed bubble has a power of ∼10⁴⁰ erg s⁻¹. Therefore, to match the observed estimate, η has to be ∼ 0.1 − 0.2.
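The percentages quoted above can be reproduced from the relation in Equation 1. A minimal check in Python, using the epoch A1 and A3 values of R and f_col from Table 2 (a cross-check, not part of the original analysis):

```python
import math

def outflow_ratio(R, f_col):
    """Mass outflow-to-inflow ratio (Chakrabarti 1999, Eq. 1):
    R_mdot = f_col * (R/4) * f**1.5 * exp(1.5 - f), with f = R^2 / (R - 1)."""
    f = R**2 / (R - 1.0)
    return f_col * (R / 4.0) * f**1.5 * math.exp(1.5 - f)

# Epoch A1: R = 5.1, f_col = 0.77  ->  ~12.4 per cent
# Epoch A3: R = 3.2, f_col = 0.60  ->  ~20.6 per cent
print(round(100 * outflow_ratio(5.1, 0.77), 1))
print(round(100 * outflow_ratio(3.2, 0.60), 1))
```

Both values land on the tabulated 12.4% and 20.6%, so the ratio depends only on the shock compression ratio R and the collimation factor f_col.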
$P_{\rm j} = \eta\,1.3\times 10^{38}\,\frac{M_{\rm BH}}{M_\odot}\,\dot m_{\rm in}\ \ {\rm erg\ s^{-1}}, \qquad (2)$
In addition, we have estimated an outflowing solid angle of ∼ 1.6π for the observed epochs using the inflow geometry and the f_col parameter. Such a wide outflowing solid angle implies that the mass outflow is uncollimated, which shaped the observed bubble. We have further estimated the mass outflow velocity (v_j), which varies as $\sqrt{T_{\rm shk}}$, as the shock drives the outflow in the JeTCAF model, where T_shk is the shock temperature (the proton temperature). T_shk is estimated using the relation $T_{\rm shk} = m_p c^2 (R-1)/[2 R^2 k_B (X_s-1)]$. Here m_p and k_B are the proton mass and the Boltzmann constant respectively. The calculated T_shk varies between 5.9 − 8.6 × 10¹⁰ K. Then, equating the thermal energy with the kinetic energy of protons at the jet-launching region, which is the CENBOL, v_j comes out to be between 0.1c and 0.2c. This is in accord with the results found by Walton et al. (2016). This velocity corresponds to absorption lines which originate from the inner regions of the disc (Pinto et al. 2020). Therefore, our estimated mass outflowing angle and velocity agree with previous observational results.
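The shock-temperature and velocity estimates above can be sketched numerically. The snippet below uses CGS constants and the epoch A1 values (R = 5.1, X_s = 14.9) from Table 2, assuming the T_shk relation quoted above and thermal–kinetic energy equipartition for the protons:

```python
import math

M_P_C2 = 1.503e-3   # proton rest energy m_p c^2 [erg]
K_B    = 1.381e-16  # Boltzmann constant [erg/K]
C      = 2.998e10   # speed of light [cm/s]
M_P    = 1.673e-24  # proton mass [g]

def shock_temperature(R, X_s):
    """T_shk = m_p c^2 (R - 1) / (2 R^2 k_B (X_s - 1)), with X_s in r_g."""
    return M_P_C2 * (R - 1.0) / (2.0 * R**2 * K_B * (X_s - 1.0))

def outflow_speed(T_shk):
    """v_j from (3/2) k_B T_shk = (1/2) m_p v_j^2."""
    return math.sqrt(3.0 * K_B * T_shk / M_P)

T1 = shock_temperature(5.1, 14.9)   # epoch A1 -> ~6e10 K
print(f"T_shk = {T1:.2e} K, v_j = {outflow_speed(T1) / C:.2f} c")
```

Running the same expressions over the other epochs in Table 2 reproduces the quoted 5.9−8.6 × 10¹⁰ K range and 0.1c−0.2c velocities to within rounding.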
Using the above outflow quantities, we further estimate the age of the bubble (t_age), considering a free expansion of the shocked outflowing material through the ambient medium (Weaver et al. 1977), which is given by Equation 3.
We assume that the bubble is expanding through the neutral medium with a mean molecular weight μ = 1.38, and thus ρ = μ m_H n_ISM, where n_ISM is the hydrogen number density. The values R_b = 134 pc and n_ISM = 0.6 cm⁻³ are taken from Gúrpide et al. (2022). Considering the other jet quantities from the JeTCAF model fit (see Table 2), Equation 3 gives the age of the bubble in the range ∼ 3.3 − 6.5 × 10⁵ yr, in agreement with the range suggested by Gúrpide et al. (2022). We note that the mechanical power in the denominator of Equation 3 differs from the jet power estimated using Equation 2, as the former is the total power, both mechanical and thermal.
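A rough cross-check of these time scales can be made by inverting the standard Weaver et al. (1977) energy-driven wind-bubble expansion law, R_b ≈ 0.76 (L t³/ρ)^{1/5}; this specific functional form is an assumption here, so the result is indicative only:

```python
PC  = 3.086e18   # parsec [cm]
M_H = 1.673e-24  # hydrogen-atom mass [g]
YR  = 3.156e7    # year [s]

R_b = 134.0 * PC          # bubble radius (Gurpide et al. 2022)
rho = 1.38 * M_H * 0.6    # rho = mu * m_H * n_ISM, mu = 1.38, n_ISM = 0.6 cm^-3
L_j = 6.5e40              # mechanical power [erg/s], epoch A1 value from Eq. 2

# invert R_b = 0.76 (L t^3 / rho)^{1/5}  ->  t = [(R_b / 0.76)^5 * rho / L]^{1/3}
t_age = ((R_b / 0.76) ** 5 * rho / L_j) ** (1.0 / 3.0)
print(f"t_age ~ {t_age / YR:.1e} yr")   # a few times 1e5 yr
```

With the assumed form and the A1 jet power, the age comes out at a few ×10⁵ yr, the same order as the 3.3−6.5 × 10⁵ yr range quoted in the text.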
Hence we report that a nearly IMBH accreting at super-Eddington rates is able to explain the observational features and the different time scales of formation and evolution of the ULX-1 bubble. ULX-1 is a suspected BH candidate, as discussed in Section 1; however, what has not been previously reported is an estimation of the mass accretion by the central IMBH and of the flow geometry using physical models. In principle, an IMBH can accrete at super-Eddington rates. Though the existence of such IMBHs is in dispute, and many proposed candidates are not widely accepted as definitive, these IMBHs might be necessary to explain the large gap in mass between StMBHs and SMBHs. The strongest observational evidence for the existence of IMBHs was presented by Farrell et al. (2009) in the edge-on spiral galaxy ESO 243-49. Recently, gravitational-wave studies have reported a BH of mass 150 M ⊙ (Abbott et al. 2020). Other studies have also shown evidence that SMBHs can accrete above their Eddington limit (Du et al. 2015; Liu et al. 2021, and references therein). In XMM-Newton spectral studies, Jin et al. (2016) found evidence of super-Eddington accretion onto RX J1140.1+0307, an active galactic nucleus whose mass lies in the IMBH range (≤ 10⁶ M ⊙).
Limitations and Directions for Improvements
The JeTCAF model fitted accretion parameters show that ULX−1 is a super-Eddington accretor harboring a nearly IMBH. Such super-Eddington accretion flows would lead to the formation of a strong wind perpendicular to the disc surface (Shakura & Sunyaev 1973). The radiation pressure in this accretion regime may drive the wind (King & Begelman 1999), which can carry a large amount of mass from the disc. Likewise, the outflowing wind may also carry a significant amount of energy and angular momentum depending on the physical processes depositing them to the wind (for radiatively inefficient flow, Blandford & Begelman 1999).
Moreover, extracting information from the observations in the X-ray band is limited by our line of sight; therefore, testing the degree of anisotropy of the X-ray emission remains challenging (see Middleton et al. 2021). Strong anisotropy is predicted by several theoretical studies in the super-Eddington accretion regime (Shakura & Sunyaev 1973; Poutanen et al. 2007; Narayan et al. 2017), which is still a poorly understood accretion regime. Further, the present model does not include the disc inclination and spin parameters, which may affect the anisotropy; these effects are beyond the scope of the present work. Thus the current estimates of the model parameter values and the related physical quantities (subsection 3.2) may change in detailed modeling, while the parameter profiles should remain unchanged.
CONCLUSION
We have conducted a joint XMM-Newton+NuSTAR analysis of the well-known ULX NGC 1313 X-1, which shows evidence for a BH at its center. We have used the JeTCAF model to study the observed features of the accretion-ejection system. Our key findings are listed below:
• The mass accretion rates returned from the JeTCAF model fits to the data are super-Eddington, which is consistent with the earlier findings (section 1) that the ULX-1 is a super-Eddington accretor.
• The mass outflow to inflow ratio estimated is ∼ 12% − 21%, with an outflowing solid angle of ∼ 1.6π. Such a wide angle may indicate that the outflow is uncollimated, which shaped the observed bubble, in agreement with optical observations.
• The possible BH mass returned from the data fitting is 133 ± 33 M ⊙, averaged over all observations. This implies that ULX-1 harbors a nearly IMBH at its center. We redid the fitting keeping the BH mass fixed to 10, 30, and 100 M ⊙ and checked the consistency of the goodness of fit and of the uncertainties of the model parameters. We find that a BH mass > 30 M ⊙ returns a good fit; however, the uncertainty in the model parameters improves at higher BH mass values.
• Super-Eddington accretion onto an IMBH can power a ULX at L ≤ 10⁴⁰ erg s⁻¹ if the accretion efficiency is low, ∼ 0.02; however, the efficiency increases (to ∼ 0.1 − 0.2) when the jet/outflow is taken into account, consistent with numerical simulations in the literature (Narayan et al. 2017).
• The JeTCAF model fitted parameters can explain the observed power of the recently discovered bubble around ULX-1 (Gúrpide et al. 2022), its age, and the wind launching velocity estimated from high-resolution spectroscopy (Pinto et al. 2020).
• According to the possibilities discussed in King (2004), an IMBH can behave like a ULX when it accretes mass from a large mass reservoir at a high accretion rate, ≳ 10⁻⁶ M ⊙ yr⁻¹, consistent with our mass accretion rates. Thus, a fraction of the ULXs discovered could be hosted by IMBHs.
The above conclusions are drawn from the X-ray spectral fitting using a physically motivated non-magnetic accretion-ejection based JeTCAF model. In the present model scenario, it emerges as a possibility of powering ULXs (or at least some ULXs) by super-Eddington accretion onto nearly IMBHs, which can also explain the ULX bubble properties. However, analysis of a large sample of ULXs is needed to further support this possibility. As discussed, some physical processes are required to implement in the modelling to further constrain the accretion-ejection parameters. TAR Data Analysis Software ( ) jointly developed by the ASI Science Data Center (ASDC), Italy and the California Institute of Technology (Caltech), USA. This work is based on observations obtained with XMM-Newton, an European Science Agency (ESA) science mission with instruments and contributions directly funded by ESA Member States and NASA. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by NASA/Goddard Space Flight Center.
DATA AVAILABILITY
Data used for this work are publicly available in NASA's HEASARC archive. The JeTCAF model is currently not available in XSPEC; however, we are open to collaborating with the community. Presently, we run the source code in XSPEC as a local model; it will be made freely available in the near future.
Figure 1. Illustration of the JeTCAF model. The blue and red colors show the Keplerian and sub-Keplerian flows respectively. The brown colored region is the inflated, hot CENBOL region. The yellow color shows the ejection of the jet. The zig-zag arrows show the scattering of disc photons by different Comptonizing mediums. The figure is adapted from Mondal & Chakrabarti (2021).
Figure 2. Spectral fitting for all five epochs with the JeTCAF model. The upper segment shows the spectra while the bottom segment shows the ratio. The black-colored data correspond to XMM-Newton EPIC-PN and the red-colored data to NuSTAR. The slight misalignment between XMM-Newton and NuSTAR data is due to residual cross-calibration and possibly the non-perfect simultaneity of the observations.
Figure 3. Variation of the best-fitted JeTCAF model parameters with MJD.
Figure 4. A comparison of the BH mass estimated in this work with other works in the literature. Red, green, and blue colored data points represent mass estimates above 1000 M ⊙, between 1000 M ⊙ and 100 M ⊙, and less than 100 M ⊙ respectively. The X-axis indicates the models used to estimate the mass. The magenta point with error bar represents the mass estimated in this work.
Table 1. Observation log of joint XMM-Newton and NuSTAR data.

Epoch | XMM-Newton ObsID | NuSTAR ObsID | Date | MJD
A1 | 0803990601 | 30302016010 | 2017-12-09 | 58096
A2 | 0803990101 | 30302016002 | 2017-06-14 | 57918
A3 | 0794580601 | 90201050002 | 2017-03-29 | 57841
A4 | 0742590301 | 80001032002 | 2014-07-05 | 56843
A5 | 0693850501 | 30002035002 | 2012-12-16 | 56277
Table 2. The broadband spectral parameters of NGC 1313 X-1 when fitted with the JeTCAF model. M_BH, ṁ_d, ṁ_h, X_s, R, f_col and Norm are the mass of the black hole, the disc and halo mass accretion rates, the location of the shock (size of the corona), the shock compression ratio, the jet collimation factor, and the model normalization respectively. N_H is the hydrogen column density along the LOS.

Epoch | M_BH (M ⊙) | ṁ_d (Ṁ_Edd) | ṁ_h (Ṁ_Edd) | X_s (r_g) | R | f_col | Norm | N_H (×10²² cm⁻²) | χ²/dof
A1 | 163.2 ± 21.2 | 4.61 ± 0.44 | 3.96 ± 0.26 | 14.9 ± 2.1 | 5.1 ± 0.3 | 0.77 ± 0.06 | 0.33 ± 0.08 | 0.28 ± 0.02 | 267/256
A2 | 100.1 ± 7.1 | 6.11 ± 0.71 | 4.22 ± 0.26 | 16.9 ± 1.1 | 4.4 ± 0.2 | 0.69 ± 0.05 | 0.57 ± 0.19 | 0.27 ± 0.02 | 347/304
A3 | 128.5 ± 18.5 | 9.55 ± 1.86 | 5.21 ± 0.79 | 15.0 ± 2.6 | 3.2 ± 0.8 | 0.60 ± 0.06 | 0.25 ± 0.06 | 0.22 ± 0.04 | 219/211
A4 | 166.3 ± 26.4 | 4.58 ± 0.51 | 3.95 ± 0.28 | 14.7 ± 2.3 | 5.2 ± 0.3 | 0.79 ± 0.07 | 0.32 ± 0.09 | 0.28 ± 0.02 | 259/236
A5 | 106.0 ± 8.1 | 8.90 ± 0.86 | 6.05 ± 0.30 | 13.3 ± 2.1 | 3.7 ± 0.4 | 0.61 ± 0.06 | 0.18 ± 0.01 | 0.17 ± 0.01 | 367/270
Table 3. The broadband spectral parameters of NGC 1313 X-1 fitted with the JeTCAF model, keeping the M_BH parameter fixed to 10, 30, and 100 M ⊙.

Epoch | M_BH (fixed, M ⊙) | ṁ_d (Ṁ_Edd) | ṁ_h (Ṁ_Edd) | X_s (r_g) | R | f_col | Norm | N_H (×10²² cm⁻²) | χ²/dof
A1 | 10 | 5.18 ± 1.36 | 4.11 ± 0.75 | 16.9 ± 1.1 | 5.2 ± 0.6 | 0.77 ± 0.18 | 4.38 ± 0.76 | 0.16 ± 0.01 | 300/257
A2 | 10 | 6.57 ± 0.77 | 3.89 ± 0.35 | 23.4 ± 1.8 | 4.5 ± 0.3 | 0.76 ± 0.06 | 5.22 ± 0.38 | 0.20 ± 0.01 | 354/305
A3 | 10 | 10.01 ± 1.75 | 3.61 ± 0.34 | 21.2 ± 4.8 | 1.9 ± 0.7 | 0.99 ± 0.51 | 2.66 ± 1.29 | 0.13 ± 0.02 | 307/212
A4 | 10 | 4.21 ± 1.57 | 3.28 ± 0.89 | 18.4 ± 2.1 | 5.5 ± 0.8 | 0.99 ± 0.37 | 3.94 ± 1.33 | 0.17 ± 0.02 | 277/237
A5 | 10 | 4.97 ± 0.44 | 3.27 ± 0.20 | 24.4 ± 2.7 | 4.9 ± 0.2 | 0.97 ± 0.10 | 2.05 ± 0.18 | 0.12 ± 0.01 | 512/271
A1 | 30 | 5.19 ± 0.94 | 3.85 ± 0.40 | 17.8 ± 1.6 | 5.0 ± 0.4 | 0.79 ± 0.10 | 1.66 ± 0.18 | 0.23 ± 0.01 | 272/257
A2 | 30 | 6.16 ± 0.64 | 3.84 ± 0.30 | 21.1 ± 3.6 | 4.5 ± 0.2 | 0.75 ± 0.05 | 1.86 ± 0.23 | 0.24 ± 0.02 | 346/305
A3 | 30 | 10.72 ± 3.67 | 3.86 ± 1.87 | 22.3 ± 6.7 | 2.0 ± 0.6 | 0.97 ± 0.30 | 0.92 ± 0.59 | 0.18 ± 0.04 | 252/212
A4 | 30 | 4.01 ± 0.82 | 3.30 ± 0.76 | 18.1 ± 4.2 | 5.5 ± 0.5 | 0.99 ± 0.26 | 1.39 ± 0.37 | 0.23 ± 0.03 | 266/237
A5 | 30 | 7.84 ± 0.66 | 4.53 ± 0.48 | 22.9 ± 4.7 | 4.0 ± 0.5 | 0.73 ± 0.10 | 0.77 ± 0.10 | 0.16 ± 0.02 | 383/271
A1 | 100 | 4.94 ± 0.49 | 3.88 ± 0.39 | 17.9 ± 2.9 | 5.0 ± 0.4 | 0.78 ± 0.10 | 0.58 ± 0.12 | 0.28 ± 0.02 | 267/257
A2 | 100 | 6.10 ± 0.71 | 4.22 ± 0.26 | 16.9 ± 1.0 | 4.4 ± 0.2 | 0.69 ± 0.04 | 0.57 ± 0.04 | 0.27 ± 0.07 | 347/305
A3 | 100 | 11.82 ± 2.54 | 4.54 ± 1.03 | 21.8 ± 4.9 | 2.1 ± 0.4 | 0.84 ± 0.24 | 0.30 ± 0.17 | 0.22 ± 0.02 | 239/212
A4 | 100 | 4.12 ± 0.72 | 3.57 ± 0.34 | 16.4 ± 2.1 | 5.4 ± 0.3 | 0.89 ± 0.11 | 0.49 ± 0.10 | 0.27 ± 0.01 | 260/237
A5 | 100 | 8.91 ± 0.88 | 6.05 ± 0.36 | 13.3 ± 1.4 | 3.7 ± 0.3 | 0.61 ± 0.02 | 0.19 ± 0.01 | 0.17 ± 0.02 | 369/271
[Figure 3 plot residue removed: panel axes show M_BH, ṁ_d, ṁ_h, X_s, R, and f_col against Observation day (MJD 56500-58000).]
ACKNOWLEDGEMENTS

We thank the referee for making constructive comments and suggestions that improved the quality of the manuscript. We gratefully acknowledge the Ramanujan Fellowship research grant (file # RJF/2020/000113) by SERB, DST, Govt. of India for this work. This research has made use of the NuS-
. R Abbott, T D Abbott, S Abraham, 10.1103/PhysRevLett.125.101102PhRvL. 125101102Abbott, R., Abbott, T. D., Abraham, S., et al. 2020, PhRvL, 125, 101102, doi: 10.1103/PhysRevLett.125.101102
K A Arnaud, The First Ten Years. G. H. Jacoby & J. Barnes10117Arnaud, K. A. 1996, Astronomical Society of the Pacific Conference Series, Vol. 101, XSPEC: The First Ten Years, ed. G. H. Jacoby & J. Barnes, 17
. M Bachetti, V Rana, D J Walton, 10.1088/0004-637X/778/2/163ApJ. 778163Bachetti, M., Rana, V., Walton, D. J., et al. 2013, ApJ, 778, 163, doi: 10.1088/0004-637X/778/2/163
. M Bachetti, F A Harrison, D J Walton, 10.1038/nature13791Nature. 514202Bachetti, M., Harrison, F. A., Walton, D. J., et al. 2014, Nature, 514, 202, doi: 10.1038/nature13791
. R D Blandford, M C Begelman, 10.1046/j.1365-8711.1999.02358.xMNRAS. 3031Blandford, R. D., & Begelman, M. C. 1999, MNRAS, 303, L1, doi: 10.1046/j.1365-8711.1999.02358.x
. M Brightman, F A Harrison, F Fürst, 10.1038/s41550-018-0391-6Nature Astronomy. 2312Brightman, M., Harrison, F. A., Fürst, F., et al. 2018, Nature Astronomy, 2, 312, doi: 10.1038/s41550-018-0391-6
. S Carpano, F Haberl, C Maitra, G Vasilopoulos, 10.1093/mnrasl/sly030MNRAS. 47645Carpano, S., Haberl, F., Maitra, C., & Vasilopoulos, G. 2018, MNRAS, 476, L45, doi: 10.1093/mnrasl/sly030
. S Chakrabarti, L G Titarchuk, 10.1086/176610ApJ. 455623Chakrabarti, S., & Titarchuk, L. G. 1995, ApJ, 455, 623, doi: 10.1086/176610
. S K Chakrabarti, 10.1086/168125ApJ. 347365Chakrabarti, S. K. 1989, ApJ, 347, 365, doi: 10.1086/168125
. A&A. 351185-. 1999, A&A, 351, 185. https://arxiv.org/abs/astro-ph/9910014
. E J M Colbert, R F Mushotzky, 10.1086/307356ApJ. 51989Colbert, E. J. M., & Mushotzky, R. F. 1999, ApJ, 519, 89, doi: 10.1086/307356
. D Debnath, S K Chakrabarti, S Mondal, 10.1093/mnrasl/slu024MNRAS. 440121Debnath, D., Chakrabarti, S. K., & Mondal, S. 2014, MNRAS, 440, L121, doi: 10.1093/mnrasl/slu024
. G C Dewangan, R Misra, A R Rao, R E Griffiths, 10.1111/j.1365-2966.2010.16893.xMNRAS. 407291Dewangan, G. C., Misra, R., Rao, A. R., & Griffiths, R. E. 2010, MNRAS, 407, 291, doi: 10.1111/j.1365-2966.2010.16893.x
. V Doroshenko, A Santangelo, L Ducci, 10.1051/0004-6361/201425225A&A. 57922Doroshenko, V., Santangelo, A., & Ducci, L. 2015, A&A, 579, A22, doi: 10.1051/0004-6361/201425225
. P Du, C Hu, K.-X Lu, 10.1088/0004-637X/806/1/22ApJ. 80622Du, P., Hu, C., Lu, K.-X., et al. 2015, ApJ, 806, 22, doi: 10.1088/0004-637X/806/1/22
. G Fabbiano, A Zezas, S S Murray, 10.1086/321397ApJ. 5541035Fabbiano, G., Zezas, A., & Murray, S. S. 2001, ApJ, 554, 1035, doi: 10.1086/321397
. A C Fabian, R R Ross, J M Miller, 10.1111/j.1365-2966.2004.08322.xMNRAS. 355359Fabian, A. C., Ross, R. R., & Miller, J. M. 2004, MNRAS, 355, 359, doi: 10.1111/j.1365-2966.2004.08322.x
. S A Farrell, N A Webb, D Barret, O Godet, J M Rodrigues, 10.1038/nature08083Nature. 46073Farrell, S. A., Webb, N. A., Barret, D., Godet, O., & Rodrigues, J. M. 2009, Nature, 460, 73, doi: 10.1038/nature08083
. H Feng, P Kaaret, 10.1086/508613ApJL. 65075Feng, H., & Kaaret, P. 2006, ApJL, 650, L75, doi: 10.1086/508613
. J Fukue, PASJ. 39309Fukue, J. 1987, PASJ, 39, 309
. F Fürst, D J Walton, F A Harrison, 10.3847/2041-8205/831/2/L14ApJL. 83114Fürst, F., Walton, D. J., Harrison, F. A., et al. 2016, ApJL, 831, L14, doi: 10.3847/2041-8205/831/2/L14
. M Gilfanov, H J Grimm, R Sunyaev, 10.1016/j.nuclphysbps.2004.04.065Nuclear Physics B Proceedings Supplements. 132369Gilfanov, M., Grimm, H. J., & Sunyaev, R. 2004, Nuclear Physics B Proceedings Supplements, 132, 369, doi: 10.1016/j.nuclphysbps.2004.04.065
. J C Gladstone, T P Roberts, C Done, 10.1111/j.1365-2966.2009.15123.xMonthly Notices of the Royal Astronomical Society. 3971836Gladstone, J. C., Roberts, T. P., & Done, C. 2009, Monthly Notices of the Royal Astronomical Society, 397, 1836, doi: 10.1111/j.1365-2966.2009.15123.x
. J E Greene, J Strader, L C Ho, 10.1146/annurev-astro-032620-021835ARA&A. 58257Greene, J. E., Strader, J., & Ho, L. C. 2020, ARA&A, 58, 257, doi: 10.1146/annurev-astro-032620-021835
. A Gúrpide, O Godet, F Koliopanos, N Webb, J F Olive, 10.1051/0004-6361/202039572A&A. 649Gúrpide, A., Godet, O., Koliopanos, F., Webb, N., & Olive, J. F. 2021, A&A, 649, A104, doi: 10.1051/0004-6361/202039572
. A Gúrpide, M Parra, O Godet, T Contini, J.-F Olive, arXiv:2201.09333arXiv e-printsGúrpide, A., Parra, M., Godet, O., Contini, T., & Olive, J.-F. 2022, arXiv e-prints, arXiv:2201.09333. https://arxiv.org/abs/2201.09333
. F A Harrison, W W Craig, F E Christensen, 10.1088/0004-637X/770/2/103ApJ. 770103Harrison, F. A., Craig, W. W., Christensen, F. E., et al. 2013, ApJ, 770, 103, doi: 10.1088/0004-637X/770/2/103
C Y Huang, Contributions of the Astronomical Observatory Skalnate Pleso. 49Huang, C. Y. 2019, Contributions of the Astronomical Observatory Skalnate Pleso, 49, 7. https://arxiv.org/abs/1901.01480
. G L Israel, A Belfiore, L Stella, 10.1126/science.aai8635Science. 355Israel, G. L., Belfiore, A., Stella, L., et al. 2017, Science, 355, 817, doi: 10.1126/science.aai8635
. I Jang, M Gliozzi, S Satyapal, L Titarchuk, 10.1093/mnras/stx2178MNRAS. 473136Jang, I., Gliozzi, M., Satyapal, S., & Titarchuk, L. 2018, MNRAS, 473, 136, doi: 10.1093/mnras/stx2178
. Y.-F Jiang, J M Stone, S W Davis, 10.1088/0004-637X/796/2/106ApJ. 796106Jiang, Y.-F., Stone, J. M., & Davis, S. W. 2014, ApJ, 796, 106, doi: 10.1088/0004-637X/796/2/106
. C Jin, C Done, M Ward, 10.1093/mnras/stv2319MNRAS. 455691Jin, C., Done, C., & Ward, M. 2016, MNRAS, 455, 691, doi: 10.1093/mnras/stv2319
. P Kaaret, A H Prestwich, A Zezas, 10.1046/j.1365-8711.2001.04064.xMNRAS. 32129Kaaret, P., Prestwich, A. H., Zezas, A., et al. 2001, MNRAS, 321, L29, doi: 10.1046/j.1365-8711.2001.04064.x
. E Kara, C Pinto, D J Walton, 10.1093/mnras/stz3318MNRAS. 4915172Kara, E., Pinto, C., Walton, D. J., et al. 2020, MNRAS, 491, 5172, doi: 10.1093/mnras/stz3318
. A R King, 10.1111/j.1745-3933.2008.00594.xdoi: 10.1111/j.1745-3933.2008.00594.xMNRAS. 34741MNRASKing, A. R. 2004, MNRAS, 347, L18, doi: 10.1111/j.1365-2966.2004.07403.x -. 2009, MNRAS, 393, L41, doi: 10.1111/j.1745-3933.2008.00594.x
. A R King, M C Begelman, 10.1086/312126ApJL. 519169King, A. R., & Begelman, M. C. 1999, ApJL, 519, L169, doi: 10.1086/312126
. A R King, M B Davies, M J Ward, G Fabbiano, M Elvis, 10.1086/320343ApJL. 552109King, A. R., Davies, M. B., Ward, M. J., Fabbiano, G., & Elvis, M. 2001, ApJL, 552, L109, doi: 10.1086/320343
. H Kobayashi, K Ohsuga, H R Takahashi, 10.1093/pasj/psx157PASJ. 7022Kobayashi, H., Ohsuga, K., Takahashi, H. R., et al. 2018, PASJ, 70, 22, doi: 10.1093/pasj/psx157
. S B Kobayashi, K Nakazawa, K Makishima, 10.1093/mnras/stz2139MNRAS. 489366Kobayashi, S. B., Nakazawa, K., & Makishima, K. 2019, MNRAS, 489, 366, doi: 10.1093/mnras/stz2139
. H Liu, B Luo, W N Brandt, 10.3847/1538-4357/abe37fApJ. 910103Liu, H., Luo, B., Brandt, W. N., et al. 2021, ApJ, 910, 103, doi: 10.3847/1538-4357/abe37f
. B Méndez, M Davis, J Moustakas, 10.1086/341168AJ. 124213Méndez, B., Davis, M., Moustakas, J., et al. 2002, AJ, 124, 213, doi: 10.1086/341168
. M Middleton, A Gúrpide, D J Walton, 10.1093/mnras/stac3380MNRAS. Middleton, M., Gúrpide, A., & Walton, D. J. 2022, MNRAS, doi: 10.1093/mnras/stac3380
. M J Middleton, D J Walton, A Fabian, 10.1093/mnras/stv2214MNRAS. 3134Middleton, M. J., Walton, D. J., Fabian, A., et al. 2015, MNRAS, 454, 3134, doi: 10.1093/mnras/stv2214
. M J Middleton, D J Walton, W Alston, 10.1093/mnras/stab1280MNRAS. 5061045Middleton, M. J., Walton, D. J., Alston, W., et al. 2021, MNRAS, 506, 1045, doi: 10.1093/mnras/stab1280
. J M Miller, G Fabbiano, M C Miller, A C Fabian, 10.1086/368373ApJL. 58537Miller, J. M., Fabbiano, G., Miller, M. C., & Fabian, A. C. 2003, ApJL, 585, L37, doi: 10.1086/368373
. S Mondal, S K Chakrabarti, 10.3847/1538-4357/ac14c2doi: 10.3847/1538-4357/ac14c2MNRAS. 43141ApJMondal, S., & Chakrabarti, S. K. 2013, MNRAS, 431, 2716, doi: 10.1093/mnras/stt361 -. 2021, ApJ, 920, 41, doi: 10.3847/1538-4357/ac14c2
. S Mondal, S K Chakrabarti, D Debnath, 10.1007/s10509-014-2008-6Ap&SS. 353223Mondal, S., Chakrabarti, S. K., & Debnath, D. 2014, Ap&SS, 353, 223, doi: 10.1007/s10509-014-2008-6
. S Mondal, B Palit, S K Chakrabarti, 10.1007/s12036-022-09881-0Journal of Astrophysics and Astronomy. 4390Mondal, S., Palit, B., & Chakrabarti, S. K. 2022, Journal of Astrophysics and Astronomy, 43, 90, doi: 10.1007/s12036-022-09881-0
. T Mondal, B Mukhopadhyay, 10.1093/mnrasl/sly165MNRAS. 48224Mondal, T., & Mukhopadhyay, B. 2019, MNRAS, 482, L24, doi: 10.1093/mnrasl/sly165
. A A Mushtukov, S Portegies Zwart, S S Tsygankov, D I Nagirner, J Poutanen, 10.1093/mnras/staa3809MNRAS. 5012424Mushtukov, A. A., Portegies Zwart, S., Tsygankov, S. S., Nagirner, D. I., & Poutanen, J. 2021, MNRAS, 501, 2424, doi: 10.1093/mnras/staa3809
. R Narayan, A Sądowski, R Soria, 10.1093/mnras/stx1027MNRAS. 4692997Narayan, R., Sądowski, A., & Soria, R. 2017, MNRAS, 469, 2997, doi: 10.1093/mnras/stx1027
A Population Explosion: The Nature & Evolution of X-ray Binaries in Diverse Environments. M W Pakull, F Grisé, 10.1063/1.2945062American Institute of Physics Conference Series. R. M. Bandyopadhyay, S. Wachter, D. Gelino, & C. R. Gelino1010Pakull, M. W., & Grisé, F. 2008, in American Institute of Physics Conference Series, Vol. 1010, A Population Explosion: The Nature & Evolution of X-ray Binaries in Diverse Environments, ed. R. M. Bandyopadhyay, S. Wachter, D. Gelino, & C. R. Gelino, 303-307, doi: 10.1063/1.2945062
. D R Pasham, S B Cenko, A Zoghbi, 10.1088/2041-8205/811/1/L11ApJL. 81111Pasham, D. R., Cenko, S. B., Zoghbi, A., et al. 2015, ApJL, 811, L11, doi: 10.1088/2041-8205/811/1/L11
. C Pinto, M J Middleton, A C Fabian, 10.1038/nature17417Nature. 533Pinto, C., Middleton, M. J., & Fabian, A. C. 2016, Nature, 533, 64, doi: 10.1038/nature17417
. C Pinto, D J Walton, E Kara, 10.1093/mnras/staa118MNRAS. 4924646Pinto, C., Walton, D. J., Kara, E., et al. 2020, MNRAS, 492, 4646, doi: 10.1093/mnras/staa118
. F Pintore, L Zampieri, L Stella, 10.3847/1538-4357/836/1/113ApJ. 836113Pintore, F., Zampieri, L., Stella, L., et al. 2017, ApJ, 836, 113, doi: 10.3847/1538-4357/836/1/113
. J Poutanen, S Fabrika, A F Valeev, O Sholukhova, J Greiner, 10.1093/mnras/stt487MNRAS. 432506Poutanen, J., Fabrika, S., Valeev, A. F., Sholukhova, O., & Greiner, J. 2013, MNRAS, 432, 506, doi: 10.1093/mnras/stt487
. J Poutanen, G Lipunova, S Fabrika, A G Butkevich, P Abolmasov, 10.1111/j.1365-2966.2007.11668.xMNRAS. 3771187Poutanen, J., Lipunova, G., Fabrika, S., Butkevich, A. G., & Abolmasov, P. 2007, MNRAS, 377, 1187, doi: 10.1111/j.1365-2966.2007.11668.x
. Rodríguez Castillo, G A Israel, G L Belfiore, A , 10.3847/1538-4357/ab8a44ApJ. 89560Rodríguez Castillo, G. A., Israel, G. L., Belfiore, A., et al. 2020, ApJ, 895, 60, doi: 10.3847/1538-4357/ab8a44
. R Sathyaprakash, T P Roberts, D J Walton, 10.1093/mnrasl/slz086MNRAS. 48835Sathyaprakash, R., Roberts, T. P., Walton, D. J., et al. 2019, MNRAS, 488, L35, doi: 10.1093/mnrasl/slz086
. N I Shakura, R A Sunyaev, A&A. 24337Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
. R Soria, 10.1007/s10509-007-9599-0Ap&SS. 311Soria, R. 2007, Ap&SS, 311, 213, doi: 10.1007/s10509-007-9599-0
. D A Swartz, K K Ghosh, A F Tennant, K Wu, 10.1086/422842ApJS. 154519Swartz, D. A., Ghosh, K. K., Tennant, A. F., & Wu, K. 2004, ApJS, 154, 519, doi: 10.1086/422842
. D A Swartz, A F Tennant, R Soria, 10.1088/0004-637X/703/1/159ApJ. 703159Swartz, D. A., Tennant, A. F., & Soria, R. 2009, ApJ, 703, 159, doi: 10.1088/0004-637X/703/1/159
. S Takeuchi, K Ohsuga, S Mineshige, 10.1093/pasj/65.4.88PASJ. 65Takeuchi, S., Ohsuga, K., & Mineshige, S. 2013, PASJ, 65, 88, doi: 10.1093/pasj/65.4.88
. D A Verner, G J Ferland, K T Korista, D G Yakovlev, 10.1086/177435ApJ. 465487Verner, D. A., Ferland, G. J., Korista, K. T., & Yakovlev, D. G. 1996, ApJ, 465, 487, doi: 10.1086/177435
. D J Walton, T P Roberts, S Mateos, V Heard, 10.1111/j.1365-2966.2011.19154.xMNRAS. 4161844Walton, D. J., Roberts, T. P., Mateos, S., & Heard, V. 2011, MNRAS, 416, 1844, doi: 10.1111/j.1365-2966.2011.19154.x
. D J Walton, M J Middleton, C Pinto, 10.3847/2041-8205/826/2/L26ApJL. 82626Walton, D. J., Middleton, M. J., Pinto, C., et al. 2016, ApJL, 826, L26, doi: 10.3847/2041-8205/826/2/L26
. D J Walton, C Pinto, M Nowak, 10.1093/mnras/staa1129MNRAS. 4946012Walton, D. J., Pinto, C., Nowak, M., et al. 2020, MNRAS, 494, 6012, doi: 10.1093/mnras/staa1129
. Q D Wang, Y Yao, W Fukui, S N Zhang, R Williams, 10.1086/420970ApJ. 609113Wang, Q. D., Yao, Y., Fukui, W., Zhang, S. N., & Williams, R. 2004, ApJ, 609, 113, doi: 10.1086/420970
. R Weaver, R Mccray, J Castor, P Shapiro, R Moore, 10.1086/155692ApJ. 218377Weaver, R., McCray, R., Castor, J., Shapiro, P., & Moore, R. 1977, ApJ, 218, 377, doi: 10.1086/155692
. J Wilms, A Allen, R Mccray, 10.1086/317016ApJ. 542914Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914, doi: 10.1086/317016
| [] |
[
"A Systems Thinking for Cybersecurity Modeling",
"A Systems Thinking for Cybersecurity Modeling"
] | [
"Dingyu Yan [email protected] \nInstitute of Information Engineering\nSchool of Cyber Security\nState Key Laboratory of Information Security\nChinese Academy of Sciences\nBeijingChina\n\nUniversity of Chinese Academy of Sciences\nBeijingChina\n"
] | [
"Institute of Information Engineering\nSchool of Cyber Security\nState Key Laboratory of Information Security\nChinese Academy of Sciences\nBeijingChina",
"University of Chinese Academy of Sciences\nBeijingChina"
] | [] | Solving cybersecurity issues requires a holistic understanding of components, factors, structures and their interactions in cyberspace, but conventional modeling approaches view the field of cybersecurity by their boundaries so that we are still not clear to cybersecurity and its changes. In this paper, we attempt to discuss the application of systems thinking approaches to cybersecurity modeling. This paper reviews the systems thinking approaches and provides the systems theories and methods for tackling cybersecurity challenges, regarding relevant fields, associated impact factors and their interactions. Moreover, an illustrative example of systems thinking frameworks for cybersecurity modeling is developed to help broaden the mind in methodology, theory, technology and practice. This article concludes that systems thinking can be considered as one of the powerful tools of cybersecurity modeling to find, characterize, understand, evaluate and predict cybersecurity. | null | [
"https://arxiv.org/pdf/2001.05734v1.pdf"
] | 210,701,125 | 2001.05734 | c2bad905d6983cfa64bf43ad5282bd094b61c288 |
A Systems Thinking for Cybersecurity Modeling
Dingyu Yan [email protected]
Institute of Information Engineering
School of Cyber Security
State Key Laboratory of Information Security
Chinese Academy of Sciences
BeijingChina
University of Chinese Academy of Sciences
BeijingChina
A Systems Thinking for Cybersecurity Modeling
Index Terms-Cybersecurity Modeling, Science of Cybersecurity, Systems Thinking, Holistic Approach
Solving cybersecurity issues requires a holistic understanding of the components, factors, structures and interactions in cyberspace, but conventional modeling approaches view the field of cybersecurity only within their own boundaries, so our understanding of cybersecurity and its changes remains unclear. In this paper, we discuss the application of systems thinking approaches to cybersecurity modeling. We review systems thinking approaches and present systems theories and methods for tackling cybersecurity challenges, with regard to the relevant fields, the associated impact factors and their interactions. Moreover, an illustrative example of a systems thinking framework for cybersecurity modeling is developed to help broaden the mind in methodology, theory, technology and practice. This article concludes that systems thinking can be considered one of the powerful tools of cybersecurity modeling for finding, characterizing, understanding, evaluating and predicting cybersecurity.
I. INTRODUCTION
Ever since the concept of "cyberspace" was clearly defined, the boundary of security has extended to every real-world domain related to digital technology rather than only the virtual environment created by computer networks [1]. Though several works are already dedicated to cybersecurity modeling [2] [3], we still have only a vague understanding of cybersecurity. Firstly, it is difficult to exploit and evaluate the synergies among defensive measures that enhance cybersecurity [4]. Despite large investments in the security field by nations, enterprises and individuals, we cannot know whether these defensive measures really work against cyberattacks. Secondly, the security community lacks the capacity to measure the current security situation comprehensively and precisely; there is still no unified, widely accepted evaluation system or set of metrics for cybersecurity modeling. Thirdly, systemic components, factors and their interactions are often ignored or omitted in existing models [5]. The multiple relationships and interactions among components greatly increase the difficulty of cybersecurity modeling and analysis.
To better understand the essential characteristics of cybersecurity and resolve its challenges, several studies attempt to apply systems thinking approaches to the cybersecurity field [6] [7]. Systems thinking is considered to offer a novel and comprehensive perspective that reveals the entire process of cybersecurity as a system. With these approaches, researchers also aim to establish a conceptual framework for measuring and evaluating the cybersecurity system and its constituents [8] [9], such as defensive measures, human factors and security policy. The typical goals of systems thinking for cybersecurity modeling are: (1) discovering the multiple impact factors and their interacting effects; (2) investigating fundamental laws in cybersecurity; (3) exploring theoretical and practical solutions to specific security issues; (4) evaluating effective attack weapons and defense measures in specific scenarios.
This article mainly argues for the importance of systems thinking in cybersecurity modeling. Firstly, we analyze the primary characteristics of cybersecurity and the challenges of cybersecurity modeling in Section II. Then, Section III introduces systems thinking and explores how it can be applied to cybersecurity modeling. Finally, we give an example of a systems framework for cybersecurity modeling in Section IV.
II. CYBERSECURITY AS A COMPLEX SYSTEM

A. Characteristics of Cybersecurity
Cyberspace can be considered the ultimate complex adaptive system of interconnected heterogeneous components [10] [11], such as multiple types of networks, devices and stakeholders, intertwined with human behavioral and technical factors, as shown schematically in Fig. 1. In modeling this complex cybersecurity landscape, the four following characteristics are unavoidable.
Complexity. Complexity is the most prominent characteristic of cybersecurity and has infiltrated every part of the cybersecurity landscape [12] [13]. First of all, complexity in cybersecurity is embodied in the diversity of security issues and the multiplicity of influence factors. Cybersecurity is a complex interdisciplinary area covering multiple fields, such as society, economics, politics and information technology. Cybersecurity issues stem from these fields and are affected by combinations of factors. For example, Flame, an advanced persistent threat that targeted Middle Eastern countries, is considered a highly sophisticated and well-planned nation-state cyberattack driven by military and political motives. Moreover, the interrelationships between components in cyberspace are extremely complicated: each component can interact with others in each field. In particular, as the center of cyberspace, the human is the interface between the natural environment, human society and information technology. Components of these three fields, such as social distance and network architecture, clearly alter individual behavior and strategy; in turn, participants can influence the dynamic change of these components.
Unpredictability. Because of the interactions among components and the joint effects of multiple types of factors, cybersecurity as a whole exhibits unpredictability [11]. First, the behavior, actions and strategies of participants in cyberspace, whether adversaries or users, are irrational, unpredictable and nonuniform [14]. Notably, smart hackers hide themselves by abandoning conventional attack technologies and methods, so they are challenging to attribute definitively. Second, vulnerabilities and malfunctions in systems and protocols are sometimes imperceptible, which makes it difficult for the defender to evaluate system security and analyze defense effectiveness quantitatively. Third, the cybersecurity system can exhibit unpredictable emergence [15]. Defined in terms of system-level patterns, emergence in cybersecurity refers to new properties and macroscopic phenomena that result from the interactions of components at the microscopic level. Thus, it is difficult to evaluate the effectiveness of a specific attack or defense technology on the whole of cyberspace.
Dynamics. To further understand the issues associated with cybersecurity, one must be knowledgeable about the evolution of each component [16]. On the one hand, the state of every component, and each interaction between every two components, changes dynamically over time. Especially for participants in cyberspace, behavior and strategy may be dynamic and inconsistent [14]. On the other hand, the dynamics of cybersecurity are a prerequisite of system emergence [17]: the same inputs and environmental conditions do not always guarantee the same output.
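The dynamical view can be made concrete with a standard SIR-style compartment model of malware spread, of the kind used in cyber-epidemic studies such as [17]. This is an illustrative sketch, not a model from this paper: the infection rate, patch rate and initial conditions are arbitrary example values.

```python
# Illustrative SIR-style malware-spread dynamics (Euler integration).
# beta (infection rate) and gamma (patch/recovery rate) are arbitrary
# example values, not taken from this paper.
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200, dt=1.0):
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(steps):
        new_infections = beta * s * i * dt   # susceptible hosts compromised
        recoveries = gamma * i * dt          # infected hosts patched/cleaned
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_infected = max(i for _, i, _ in history)
print(f"peak infected fraction: {peak_infected:.3f}")
```

Replacing the fixed rates with state-dependent or stochastic ones is exactly what makes real cyber dynamics hard to predict.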
Asymmetry. In cybersecurity, there is always an asymmetry between the attacker and the defender [18], presented in the following three aspects. First, the attacker is positive and proactive, while the defender is passive and reactive [19]. In general, the attacker makes ample preparations in advance, such as vulnerability scanning, intelligence gathering and weaponization, none of which are visible to the defender. Moreover, the defender must protect all possible points of the protected object at all times, while the attacker can break through a meticulous defense disposition with just one valid vulnerability. Second, the defender's evaluation of defense effectiveness is often faulty. As mentioned above, because it is difficult to estimate the effects of a specific defense technology or method on cybersecurity as a whole, the defender fails to measure defense effectiveness comprehensively and accurately. Third, the cost of an attack is less than that of defense. Defense requires an enormous investment of money, labor and resources, whether in researching new defense technologies or establishing early warning mechanisms for large-scale cyberattacks. Even so, these defense methods and technologies cannot guarantee complete resistance to cyberattacks.
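The "defend every point, attack one point" asymmetry can be quantified with a back-of-the-envelope calculation (the per-point compromise probability below is an arbitrary illustrative value): if each of N exposed points independently falls with probability p, the chance that at least one falls is 1 - (1 - p)^N, which approaches certainty quickly as N grows.

```python
# Illustrative attacker/defender asymmetry: breach probability grows
# quickly with the number of independently attackable points.
# p is the per-point compromise probability (an arbitrary example value).
def breach_probability(n_points, p=0.05):
    """P(at least one of n_points is compromised), assuming independence."""
    return 1.0 - (1.0 - p) ** n_points

for n in (1, 10, 50, 100):
    print(f"{n:3d} points -> breach probability {breach_probability(n):.3f}")
```

Even with only a 5% chance per point, a defender covering 100 points faces a near-certain breach, while the attacker needs a single success.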
B. Challenges of Cybersecurity Modeling
TABLE I
SYSTEMS THINKING THEORIES AND METHODS

Theories:
- Systems Theory [7] [20]: an interdisciplinary methodology which employs several systems approaches to investigate system structure, understand complex phenomena and solve the relevant problems.
- Game Theory [21] [22]: attempts to explain the interacting strategies of players with respect to the utilities of the other rational players; in security game models, the attacker and defender act as the players.
- Cybernetics [23] [24]: a broad study of both living and non-living systems guided by principles of feedback, control and communication.
- Catastrophe Theory: a mathematical theory for explaining abrupt changes and discontinuities of state (e.g., server crash, defense failure).
- Behaviorism [25] [26] [27]: a learning approach which focuses on human behavior in cyberspace.

Methods:
- Dynamics [16] [28]: a system methodology technique for modeling system problems through the stocks, flows and feedback loops that affect the behavior of the entire system over time.
- Network Analysis [29] [30]: both a theoretical approach and a methodological tool for understanding interactions between actors, exploring the effects of network structure and studying the relevant factors.
- Agent-based Model [31]: a way to model or simulate a complex system constituted by autonomous, interacting agents (e.g., individuals, groups); heterogeneity in agents' strategy decisions and complicated interactions between agents can produce unpredicted results for the system as a whole.

At present, cybersecurity modeling is still in its infancy. The existing models and methods are limited to technical security studies, aiming to address specific technical problems with particular technologies and approaches [32]. For example, the modern cryptographic scheme is to solve the problem of
preventing the malicious party from obtaining private information; confidentiality, integrity and availability are the core aspects of cryptography [33]. Based on where a security technique works, these technical studies are often classified into three classes: application-based, host-based and network-based. For instance, code injection is an application-based topic, and the firewall belongs to both network-based and host-based security study. Though abundant technical works have made significant contributions to the field of information security, several typical challenges remain for technical study: (1) the formalized description of technical problems in cybersecurity; (2) a unified pattern for the quantitative and qualitative analysis of security technologies; (3) coupling theoretical guidance with security technology and practice.
Generally speaking, cybersecurity study should involve all the relevant factors in the fields of politics, society, economy and culture, covering theory, technology and practice. Fig. 1 demonstrates that cybersecurity is a complex system in which multiple components, factors and environments interact and intertwine; cybersecurity modeling is therefore broader than mere technical study. Recently, researchers have become interested in human behavioral factors in cybersecurity. The Data Breach Investigations Report (DBIR) from Verizon [34] reported that the human factor continues to be a major issue, accounting for most incidents in enterprises. In cybersecurity, humans act as both developers and users of security products, and as both adversaries and victims in cyber attack-defense [35]. For example, game theory, including the static game, dynamic game and Bayesian game, is often applied to investigate the interaction between the attacker and the defender [21] [22]. In the Stackelberg Security Game (SSG), where a leader makes a decision first and a follower then reacts subject to the leader's action, the defender plays the role of the leader and the attacker acts as the follower. However, many theoretical and technical difficulties still need to be tackled in characterizing individual behavior in cybersecurity. For example, human cognitive bias, gambler psychology, and the heterogeneity, dynamics and uncertainty of individual strategy decisions can make conventional methods difficult, or even invalid, for investigating the role of human factors in cybersecurity [14].
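As an illustration of the SSG setup described above (with made-up payoff numbers, not data from any cited work), a minimal sketch can enumerate discretized defender coverage vectors, let the attacker best-respond to each, and keep the coverage that maximizes the defender's utility:

```python
# Minimal Stackelberg security game sketch: the defender (leader) commits
# to coverage probabilities over targets; the attacker (follower) observes
# the commitment and attacks the target with the highest expected payoff.
# Targets and payoffs are invented illustrative values.
import itertools

# name: (defender payoff if covered, defender payoff if uncovered,
#        attacker payoff if covered, attacker payoff if uncovered)
targets = {
    "web_server": (2, -5, -3, 6),
    "database":   (1, -8, -4, 9),
    "mail_host":  (1, -2, -1, 3),
}

def attacker_best_response(coverage):
    def attacker_payoff(name):
        _, _, a_cov, a_unc = targets[name]
        return coverage[name] * a_cov + (1 - coverage[name]) * a_unc
    return max(targets, key=attacker_payoff)

def defender_payoff(coverage, attacked):
    d_cov, d_unc, _, _ = targets[attacked]
    return coverage[attacked] * d_cov + (1 - coverage[attacked]) * d_unc

# Brute force over coverage vectors in steps of 0.1, with one defensive
# resource in total (coverage probabilities summing to at most 1).
grid = [i / 10 for i in range(11)]
best = None
for cov in itertools.product(grid, repeat=len(targets)):
    if sum(cov) > 1.0 + 1e-9:
        continue
    coverage = dict(zip(targets, cov))
    attacked = attacker_best_response(coverage)
    value = defender_payoff(coverage, attacked)
    if best is None or value > best[0]:
        best = (value, coverage, attacked)

value, coverage, attacked = best
print("coverage:", coverage, "-> attacker hits", attacked,
      "; defender utility", round(value, 2))
```

Real SSG solvers use linear programming and defender-favoring tie-breaking rather than grid search, but the leader-commits/follower-reacts structure is the same.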
Cybersecurity study also needs to address cyber-physical security issues, such as industrial control systems, laws and regulations, and cybercrime. Playing an essential role in financial services, power grids, transportation and medical systems, industrial control systems are often selected as targets for cyberattacks [36], especially advanced persistent threat attacks. These cyberattacks tend to disrupt the order of nations and cause public panic and disorder [37]. For example, Stuxnet, a sophisticated malware with four 0-day exploits targeting the Windows system and one targeting SCADA, delayed the progress of the Iranian nuclear program [38]. In December 2015, a cyberattack on the Ukraine power grid by the BlackEnergy group took place, and about 225 thousand customers lost power before Christmas [39]. Beyond the fields mentioned above, other specialists strive for a systematic set of cybersecurity metrics to define, measure and quantify cybersecurity [4] [40].
III. APPLY SYSTEMS THINKING TO CYBERSECURITY MODELING
A. What is Systems Thinking
Systems thinking is a holistic approach intended to analyze how the parts of a system interact and how emergent behavior arises in the whole entity [41]. Unlike reductionist thinking, which treats the world from a static, simple and one-sided perspective, this holistic thinking emphasizes the complexity, dynamism and entirety of the system, as well as the interconnected and multifaceted relationships between system components [42] [43]. Systems thinking arose in the early 20th century and has since been applied in diverse research fields, such as public health, environmental protection, urban management and international relations. Nowadays, a small number of researchers are attempting to bring systems thinking to the cybersecurity study [6] [7].
In our opinion, sound and resilient cybersecurity modeling needs to consider systems thinking approaches at this stage. On the one hand, systems thinking for cybersecurity does not treat only a particular area of cyberspace, but addresses the security of the whole entity. This holistic approach is more readily able to identify and understand the cybersecurity system, describe the interactions among cyberspace components, predict the evolution of cybersecurity realistically and help us address cybersecurity issues effectively [6]. Systems thinking broadens the scope of cybersecurity study to integrate people, environment, government and other vital aspects. On the other hand, unlike traditional enumerative and analytical methods, which focus on linear and static causality from an individual perspective, systems thinking emphasizes the complexity in the interactions of the constituents of the cyber system. Although conventional approaches have made significant achievements in network security technology and cryptology, e.g., detection and prevention technology and public key cryptography [33], they are not enough to depict, characterize and predict cybersecurity issues and their evolution. In the systems perspective, the purpose of cybersecurity modeling is to improve the whole security situation of cyberspace rather than just dealing with a specific technological challenge. This requires us to apply systems thinking to gain insight into cybersecurity from a holistic perspective, rather than merely complementing the conventional approaches in some of their deficiencies. Thus, this radical shift in cybersecurity modeling is requisite.
B. Systems Theories and Methods
Cybersecurity modeling is a scientific way to make cybersecurity and its related activities easier to represent, define, quantify and understand. For proper research, in our view, the model is as significant as the experimentation and the analysis of results [44]. Thus, one of the difficulties in systems thinking for cybersecurity modeling is deciding which theories and scientific methods are most applicable in a cybersecurity model, with respect to different research scenarios and purposes. Systems thinking provides a logical way to view cybersecurity, from the guidance of systems theories, such as systems theory, cybernetics and game theory, to the assistance and analysis of relevant scientific methods, such as network analysis, dynamics and agent-based modeling, from a particular perspective. Table I briefly lists a few typical theories and scientific methods often used in cybersecurity models. Many theories in systems thinking offer a contemplative and rational set of ideas and principles from a specific perspective, while a wide range of scientific methods are applied to establish system models, understand the interactions among multiple actors, analyze phenomena, find explanations and predict the future. The theories and methods chosen in a given piece of research thus need to match the specific complex cybersecurity issue being tackled.
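To illustrate how the methods in Table I can combine, the following toy sketch couples network analysis with an agent-based model: heterogeneous host agents on a random contact network, where infected hosts attack their neighbors and each host patches with its own probability. All parameters, the network construction and the agent rules are invented for illustration.

```python
# Toy agent-based model on a random network: heterogeneous host agents,
# infection attempts along edges, per-host patch probabilities.
# All parameters are invented illustrative values.
import random

random.seed(7)
N, EDGE_P, ATTACK_P, STEPS = 60, 0.08, 0.35, 30

# Random (Erdos-Renyi style) contact network.
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < EDGE_P:
            neighbors[i].add(j)
            neighbors[j].add(i)

# Heterogeneity: each host has its own patch (recovery) probability.
patch_p = {i: random.uniform(0.02, 0.3) for i in range(N)}
infected = {0}  # patient zero

for _ in range(STEPS):
    newly_infected, cured = set(), set()
    for host in infected:
        for nb in neighbors[host]:
            if nb not in infected and random.random() < ATTACK_P:
                newly_infected.add(nb)  # attack succeeds along this edge
        if random.random() < patch_p[host]:
            cured.add(host)             # this host patches itself
    infected = (infected | newly_infected) - cured

print(f"infected hosts after {STEPS} steps: {len(infected)} of {N}")
```

Because outcomes depend on the random network and per-host parameters, repeated runs with different seeds diverge, which is the kind of emergent, run-to-run variability the agent-based entry in Table I describes.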
C. Relevant Stakeholders in Cybersecurity
At the center of the cybersecurity system, stakeholders are involved in all aspects of cybersecurity. They are not only the ties between all sub-systems in cybersecurity, but also the driving force of cybersecurity. One of the vital aspects of systems thinking for cybersecurity modeling revolves around who the relevant stakeholders in cybersecurity are and how these stakeholders interact.
Not all stakeholders in cyberspace need to be considered in cybersecurity modeling. Table II lists four typical stakeholder clusters. Relevant stakeholders may include government agencies; academia and standardization organizations; information security enterprises and private sectors; and infrastructures and users. Each group of stakeholders can be considered a sub-system that has its own role in cybersecurity.
IV. AN EXAMPLE OF SYSTEMS FRAMEWORK FOR CYBERSECURITY MODELING
Currently, cybersecurity researchers usually establish cybersecurity models based on the research fields with which they are familiar. In this section, we provide a typical systems framework for cybersecurity modeling in Fig. 2. This schematic framework outlines five essential elements:

• Real World includes both the physical and virtual aspects of cyberspace. It not only includes the embodiment of the concepts, parameters and equations of the mathematical model, but also provides the observations for empirical models, such as data, events and cases.
• Mathematical Modeling is a theoretical approach to translating the behavior of the cybersecurity system into exact formulations using mathematical concepts and language. The mathematical model aims to represent what the real-world cybersecurity problem is and how the cybersecurity system evolves.
• Empirical Modeling is a study approach built from observations of the cybersecurity system, obtained by measuring system outputs such as relevant data, security events or cyberattack cases. Its goals include finding empirical rules or characterizations of the real observations, depicting the current network security situation and estimating probable future trends.
• Inference refers to the process of drawing conclusions through a series of analytical methods and tools. Driven by a specific issue in cybersecurity, inference is a purposive action which aims to find the optimal methods and solutions for real cybersecurity.
• Practice is the process of studying, developing and implementing the real cybersecurity solution under the guidance of the analytic results. It belongs to the technical side of the study, aiming to convert the theoretical analysis into real cybersecurity techniques or tools and then apply them in real cyberspace scenarios.
Mathematical models and empirical models are two significant aspects of cybersecurity modeling. A mathematical model is an abstraction or simplification of a real-world cybersecurity system and scenario, and mathematical modeling is the process of performing this abstraction and simplification with various mathematical tools. Notably, both the mathematical modeling and its subsequent analysis rest on the hypotheses underlying the basic framework of the model (Step (1)) [5]. One of the main challenges in this modeling is then to find an appropriate mathematical language in which to establish this framework, including the mathematical equations, variables, functions and so on [9]. Empirical modeling mainly depends on empirical data about cybersecurity, such as security events, cyberattack cases and experimental results, observed and obtained from real-world cybersecurity (Step (2)) [45]. Without a specific theory and mathematical equations, such a model is difficult to subject to theoretical analysis. However, the hypothesized laws and equations of mathematical models describe idealized system-level or network-level situations and often fail to apply to the complex cyber-level situation, so the empirical model is considered a highly feasible approach in cybersecurity modeling. Although these two modeling approaches seem different owing to their different perspectives, they can complement each other (Step (3)): empirical data are a significant source for mathematical modeling, while the mathematical model, in turn, can help refine the empirical model.
Mathematical modeling and empirical modeling help translate the complex cybersecurity environment into a descriptive model that is easier for researchers to define, understand and reason about with the relevant knowledge. Next, researchers need to analyze the cybersecurity models further and explore solutions to the specific cybersecurity issue. Deduction (Step (4)) always begins from the assumptions, axioms and equations of the mathematical models; its conclusions follow with certainty provided the premises match observations of the real world. By contrast, inductive inference (Step (5)) is derived directly from empirical observations of the real world. For example, an efficient defense measure against a current typical cyberattack is no guarantee against a new one [9]. Based on these inferences, solutions or tools for the specific cybersecurity issue (Step (6)) are proposed and then implemented to tackle real-world cybersecurity (Step (7)).
As mentioned in Section II, it is very difficult to predict the effects of a solution derived from analytic inference and its implementation, because real-world cybersecurity is complex and subtle. Therefore, a complete cybersecurity model needs feedback from cyberspace (Step (8)) [23], which validates the analytic inference methods (Step (9)) and finally improves both the mathematical and the empirical modeling (Steps (10) and (11)). The cybersecurity model and its analytic inference can guide practice in cyberspace; in turn, the feedback from cybersecurity practice can verify the validity of the analysis and improve the current models.
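The feedback steps (8)-(11) can be caricatured in code: a closed loop in which a detector's threshold is continually adjusted using feedback from observed misses and false alarms, in the spirit of the cybernetic view in [23]. The synthetic score distributions and the tuning rule below are illustrative assumptions, not part of the framework in Fig. 2.

```python
# Toy closed-loop sketch of the feedback idea: a detection threshold is
# repeatedly adjusted from observed outcomes rather than fixed once by
# the initial model. Scores and rates are synthetic illustrative values.
import random

random.seed(1)

def observe_event():
    """Synthetic world: benign scores are low, attack scores are high."""
    is_attack = random.random() < 0.2
    score = random.gauss(0.7 if is_attack else 0.3, 0.1)
    return is_attack, score

threshold, step = 0.5, 0.01
misses = false_alarms = 0
for _ in range(2000):
    is_attack, score = observe_event()
    flagged = score >= threshold
    if is_attack and not flagged:        # feedback: missed attack
        threshold -= step                # loosen the detector
        misses += 1
    elif flagged and not is_attack:      # feedback: false alarm
        threshold += step                # tighten the detector
        false_alarms += 1

print(f"tuned threshold: {threshold:.2f} "
      f"(misses={misses}, false alarms={false_alarms})")
```

The loop settles near the threshold where the cost of misses balances the cost of false alarms, which is the practical payoff of closing Steps (8)-(11) instead of trusting the initial model.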
V. CONCLUSION
Systems thinking allows us to approach cybersecurity modeling from a holistic and rational perspective. On the one hand, it provides a conceptual blueprint or framework for cybersecurity in which components, factors and environments are integrated dynamically. Cybersecurity modeling with a systems approach helps us better understand and characterize properties of cybersecurity such as unpredictability, complexity, emergence, asymmetry and dynamics, which are often ignored in most current cybersecurity studies. On the other hand, through a set of analytic methods and real tools spanning modeling, inference and practice, systems thinking offers an innovative and universal roadmap for solving specific cybersecurity problems, so that we can not only obtain theoretical conclusions and the corresponding real-world solutions, but also validate the analytic conclusions and then improve the theoretical models. Although we highlight in this paper that systems thinking should be a necessary foundation for cybersecurity modeling, cybersecurity modeling with systems thinking still has a long way to go.
Fig. 1. The main components, impact factors and their interactions in cybersecurity.

Fig. 2. An illustrative example of the systems thinking approach for cybersecurity modeling.
TABLE II
RELEVANT STAKEHOLDERS IN CYBERSECURITY

- Government (ministries, law enforcement, regulatory agencies): the policy system, from the perspective of government.
- Academia (universities, research institutes): the research system, from the perspective of security researchers.
- Private sectors (information security enterprises, computer and network companies): the market system, from the perspective of providers of security services and products.
- Infrastructure (internet service providers, urban managers): the urban management system, from the perspective of infrastructure managers and planners.
Defining cybersecurity. D Craigen, N Diakun-Thibault, R Purse, Technology Innovation Management Review. 410D. Craigen, N. Diakun-Thibault, and R. Purse, "Defining cybersecurity," Technology Innovation Management Review, vol. 4, no. 10, 2014.
Cyber-attack modeling analysis techniques: An overview. H Al-Mohannadi, Q Mirza, A Namanya, I Awan, A Cullen, J Disso, 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops. IEEEH. Al-Mohannadi, Q. Mirza, A. Namanya, I. Awan, A. Cullen, and J. Disso, "Cyber-attack modeling analysis techniques: An overview," in 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW). IEEE, 2016, pp. 69-76.
Sure: A modeling and simulation integration platform for evaluation of secure and resilient cyber-physical systems. X Koutsoukos, G Karsai, A Laszka, H Neema, B Potteiger, P Volgyesi, Y Vorobeychik, J Sztipanovits, Proceedings of the IEEE. 1061X. Koutsoukos, G. Karsai, A. Laszka, H. Neema, B. Potteiger, P. Volgyesi, Y. Vorobeychik, and J. Sztipanovits, "Sure: A modeling and simulation integration platform for evaluation of secure and resilient cyber-physical systems," Proceedings of the IEEE, vol. 106, no. 1, pp. 93-112, 2017.
An improved quantitative evaluation method for network security. R.-R Xi, X.-C Yun, Y.-Z Zhang, Z.-Y Hao, Chinese Journal of Computers. 384R.-R. Xi, X.-C. Yun, Y.-Z. Zhang, and Z.-Y. Hao, "An improved quantitative evaluation method for network security," Chinese Journal of Computers, vol. 38, no. 4, pp. 749-758, 2015.
Sok: Science, security and the elusive goal of security as a scientific pursuit. C Herley, P C Van Oorschot, 2017 IEEE Symposium on Security and Privacy (SP). IEEEC. Herley and P. C. Van Oorschot, "Sok: Science, security and the elusive goal of security as a scientific pursuit," in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 99-120.
Systems thinking for safety and security. W Young, N Leveson, Proceedings of the 29th Annual Computer Security Applications Conference. the 29th Annual Computer Security Applications ConferenceACMW. Young and N. Leveson, "Systems thinking for safety and security," in Proceedings of the 29th Annual Computer Security Applications Conference. ACM, 2013, pp. 1-8.
Cyber safety: A systems thinking and systems theory approach to managing cyber security risks. H M Salim, Massachusetts Institute of Technology. Ph.D. dissertationH. M. Salim, "Cyber safety: A systems thinking and systems theory approach to managing cyber security risks," Ph.D. dissertation, Massachusetts Institute of Technology, 2014.
E , Foundational Cybersecurity Research: Improving Science, Engineering, and Institutions. National Academies PressMedicineE. National Academies of Sciences, Medicine et al., Foundational Cybersecurity Research: Improving Science, Engineering, and Institutions. National Academies Press, 2017.
Practicing a science of security: a philosophy of science perspective. J M Spring, T Moore, D Pym, Proceedings of the 2017 New Security Paradigms Workshop. the 2017 New Security Paradigms WorkshopACMJ. M. Spring, T. Moore, and D. Pym, "Practicing a science of security: a philosophy of science perspective," in Proceedings of the 2017 New Security Paradigms Workshop. ACM, 2017, pp. 1-18.
Cyberspace: The ultimate complex adaptive system. P W PhisterJr, OFFICE OF THE ASSISTANT SECRETARY OF DEFENSE WASHINGTON DC COMMAND AND , Tech. Rep. P. W. Phister Jr, "Cyberspace: The ultimate complex adaptive system," OFFICE OF THE ASSISTANT SECRETARY OF DEFENSE WASHINGTON DC COMMAND AND , Tech. Rep., 2011.
Insights from nature for cybersecurity. E Rzeszutko, W Mazurczyk, Health security. 132E. Rzeszutko and W. Mazurczyk, "Insights from nature for cybersecurity," Health security, vol. 13, no. 2, pp. 82-87, 2015.
Mathematical challenges in cybersecurity. D M Dunlavy, B Hendrickson, T G Kolda, Sandia Report. D. M. Dunlavy, B. Hendrickson, and T. G. Kolda, "Mathematical challenges in cybersecurity," Sandia Report, February, 2009.
Complexity science challenges in cybersecurity. R Armstrong, J Mayo, F Siebenlist, Sandia National Laboratories SAND Report. R. Armstrong, J. Mayo, and F. Siebenlist, "Complexity science challenges in cybersecurity," Sandia National Laboratories SAND Report, 2009.
Towards a human factors ontology for cyber security. A Oltramari, D S Henshel, M Cains, B Hoffman, STIDS. A. Oltramari, D. S. Henshel, M. Cains, and B. Hoffman, "Towards a human factors ontology for cyber security." in STIDS, 2015, pp. 26-33.
Emergent behavior in cybersecurity. S Xu, arXiv:1502.05102arXiv preprintS. Xu, "Emergent behavior in cybersecurity," arXiv preprint arXiv:1502.05102, 2015.
arXiv:1502.05100Cybersecurity dynamics. arXiv preprint--, "Cybersecurity dynamics," arXiv preprint arXiv:1502.05100, 2015.
Dynamical model for individual defence against cyber epidemic attacks. D Yan, F Liu, Y Zhang, K Jia, Iet Information Security. 136D. Yan, F. Liu, Y. Zhang, and K. Jia, "Dynamical model for individual defence against cyber epidemic attacks," IET Information Security, vol. 13, no. 6, pp. 541-551, 2019.
The challenge of cyber attack deterrence. K Geers, Computer Law & Security Review. 263K. Geers, "The challenge of cyber attack deterrence," Computer Law & Security Review, vol. 26, no. 3, pp. 298-303, 2010.
Moving target defense:state of the art and characteristics. G L Cai, B S Wang, H U Wei, T Z Wang, Frontiers of Information Technology and Electronic Engineering. 1711G. L. Cai, B. S. Wang, H. U. Wei, and T. Z. Wang, "Moving target defense:state of the art and characteristics," Frontiers of Information Technology and Electronic Engineering, vol. 17, no. 11, pp. 1122-1153, 2016.
Cybersecurity: Challenges from a systems, complexity, knowledge management and business intelligence perspective. S M Tisdale, Issues in Information Systems. 163S. M. Tisdale, "Cybersecurity: Challenges from a systems, complexity, knowledge management and business intelligence perspective." Issues in Information Systems, vol. 16, no. 3, 2015.
Game theory meets network security and privacy. M H Manshaei, Q Zhu, T Alpcan, T Bacşar, J.-P Hubaux, ACM Computing Surveys (CSUR). 45325M. H. Manshaei, Q. Zhu, T. Alpcan, T. Bacşar, and J.-P. Hubaux, "Game theory meets network security and privacy," ACM Computing Surveys (CSUR), vol. 45, no. 3, p. 25, 2013.
Game theory for cyber security and privacy. C T Do, N H Tran, C Hong, C A Kamhoua, K A Kwiat, E Blasch, S Ren, N Pissinou, S S Iyengar, ACM Computing Surveys (CSUR). 50230C. T. Do, N. H. Tran, C. Hong, C. A. Kamhoua, K. A. Kwiat, E. Blasch, S. Ren, N. Pissinou, and S. S. Iyengar, "Game theory for cyber security and privacy," ACM Computing Surveys (CSUR), vol. 50, no. 2, p. 30, 2017.
Application of cybernetics and control theory for a new paradigm in cybersecurity. M D Adams, S D Hitefield, B Hoy, M C Fowler, T C Clancy, arXiv:1311.0257arXiv preprintM. D. Adams, S. D. Hitefield, B. Hoy, M. C. Fowler, and T. C. Clancy, "Application of cybernetics and control theory for a new paradigm in cybersecurity," arXiv preprint arXiv:1311.0257, 2013.
A cybernetics paradigms framework for cyberspace: Key lens to cybersecurity. T Vinnakota, 2013 IEEE International Conference on Computational Intelligence and Cybernetics (CYBERNETICSCOM). T. Vinnakota, "A cybernetics paradigms framework for cyberspace: Key lens to cybersecurity," in 2013 IEEE International Conference on Computational Intelligence and Cybernetics (CYBERNETICSCOM).
. IEEE. IEEE, 2013, pp. 85-91.
The role of psychology in enhancing cybersecurity. B K Wiederhold, B. K. Wiederhold, "The role of psychology in enhancing cybersecurity," 2014.
The human factor in cybersecurity: Robust & intelligent defense. J L Marble, W F Lawless, R Mittu, J Coyne, M Abramson, C Sibley, Cyber Warfare. SpringerJ. L. Marble, W. F. Lawless, R. Mittu, J. Coyne, M. Abramson, and C. Sibley, "The human factor in cybersecurity: Robust & intelligent defense," in Cyber Warfare. Springer, 2015, pp. 173-206.
Gender difference and employees' cybersecurity behaviors. M Anwar, W He, I Ash, X Yuan, L Li, L Xu, Computers in Human Behavior. 69M. Anwar, W. He, I. Ash, X. Yuan, L. Li, and L. Xu, "Gender dif- ference and employees' cybersecurity behaviors," Computers in Human Behavior, vol. 69, pp. 437-443, 2017.
Active cyber defense dynamics exhibiting rich phenomena. R Zheng, W Lu, S Xu, Proceedings of the 2015 Symposium and Bootcamp on the Science of Security. the 2015 Symposium and Bootcamp on the Science of SecurityACM2R. Zheng, W. Lu, and S. Xu, "Active cyber defense dynamics exhibiting rich phenomena," in Proceedings of the 2015 Symposium and Bootcamp on the Science of Security. ACM, 2015, p. 2.
Role of network topology in cybersecurity. R J La, 53rd IEEE Conference on Decision and Control. IEEER. J. La, "Role of network topology in cybersecurity," in 53rd IEEE Conference on Decision and Control. IEEE, 2014, pp. 5290-5295.
. R E Pino, SpringerR. E. Pino, Network science and cybersecurity. Springer, 2014.
Intelligent cybersecurity agents. J M Such, N Criado, L Vercouter, M Rehak, IEEE Intelligent Systems. 315guest editors' introductionJ. M. Such, N. Criado, L. Vercouter, and M. Rehak, "Intelligent cyber- security agents [guest editors' introduction]," IEEE Intelligent Systems, vol. 31, no. 5, pp. 3-7, 2016.
G B White, E A Fisch, U W Pooch, Computer system and network security. CRC pressG. B. White, E. A. Fisch, and U. W. Pooch, Computer system and network security. CRC press, 2017.
Cryptography and network security: principles and practice. W Stallings, Pearson Upper Saddle RiverW. Stallings, Cryptography and network security: principles and prac- tice. Pearson Upper Saddle River, 2017.
Verizon, data breach investigations report. Verizon. (2018) 2018 data breach investigations report.
Characterizing the optimal attack strategy decision in cyber epidemic attacks with limited resources. D Yan, F Liu, Y Zhang, K Jia, Y Zhang, International Conference on Science of Cyber Security. SpringerD. Yan, F. Liu, Y. Zhang, K. Jia, and Y. Zhang, "Characterizing the optimal attack strategy decision in cyber epidemic attacks with limited resources," in International Conference on Science of Cyber Security. Springer, 2018, pp. 65-80.
The cybersecurity landscape in industrial control systems. S Mclaughlin, C Konstantinou, X Wang, L Davi, A.-R Sadeghi, M Maniatakos, R Karri, Proceedings of the IEEE. 1045S. McLaughlin, C. Konstantinou, X. Wang, L. Davi, A.-R. Sadeghi, M. Maniatakos, and R. Karri, "The cybersecurity landscape in industrial control systems," Proceedings of the IEEE, vol. 104, no. 5, pp. 1039- 1057, 2016.
Modeling an information-based advanced persistent threat attack on the internal network. D Yan, F Liu, K Jia, ICC 2019-2019 IEEE International Conference on Communications (ICC). IEEED. Yan, F. Liu, and K. Jia, "Modeling an information-based advanced persistent threat attack on the internal network," in ICC 2019-2019 IEEE International Conference on Communications (ICC). IEEE, 2019, pp. 1-7.
Advanced persistent threat (apt) beyond the hype. M Ask, P Bondarenko, J E Rekdal, A Nordbø, P Bloemerus, D Piatkivskyi, IMT4582 Network Security at GjoviN University CollegeProject ReportM. Ask, P. Bondarenko, J. E. Rekdal, A. Nordbø, P. Bloemerus, and D. Piatkivskyi, "Advanced persistent threat (apt) beyond the hype," Project Report in IMT4582 Network Security at GjoviN University College, 2013.
The 2015 ukraine blackout: Implications for false data injection attacks. G Liang, S R Weller, J Zhao, F Luo, Z Y Dong, IEEE Transactions on Power Systems. 99G. Liang, S. R. Weller, J. Zhao, F. Luo, and Z. Y. Dong, "The 2015 ukraine blackout: Implications for false data injection attacks," IEEE Transactions on Power Systems, vol. PP, no. 99, pp. 1-1, 2016.
A survey on systems security metrics. M Pendleton, R Garcia-Lebron, J H Cho, S Xu, Acm Computing Surveys. 49462M. Pendleton, R. Garcia-Lebron, J. H. Cho, and S. Xu, "A survey on systems security metrics," Acm Computing Surveys, vol. 49, no. 4, p. 62, 2016.
A definition of systems thinking: a systems approach. R D Arnold, J P Wade, Procedia Computer Science. 44R. D. Arnold and J. P. Wade, "A definition of systems thinking: a systems approach," Procedia Computer Science, vol. 44, pp. 669-678, 2015.
Systems thinking for health systems strengthening. World Health Organization. D , De Savigny, T Adam, D. De Savigny and T. Adam, Systems thinking for health systems strengthening. World Health Organization, 2009.
Systems thinking in combating infectious diseases. S Xia, X.-N Zhou, J Liu, Infectious diseases of poverty. 6144S. Xia, X.-N. Zhou, and J. Liu, "Systems thinking in combating infectious diseases," Infectious diseases of poverty, vol. 6, no. 1, p. 144, 2017.
Simulations in cybersecurity: A review of cognitive modeling of network attackers, defenders, and users. V D Veksler, N Buchler, B E Hoffman, D N Cassenti, C Sample, S Sugrim, Frontiers in psychology. 9691V. D. Veksler, N. Buchler, B. E. Hoffman, D. N. Cassenti, C. Sample, and S. Sugrim, "Simulations in cybersecurity: A review of cognitive modeling of network attackers, defenders, and users," Frontiers in psychology, vol. 9, p. 691, 2018.
Science of security: Combining theory and measurement to reflect the observable. C Herley, P C Van Oorschot, IEEE Security & Privacy. 161C. Herley and P. C. Van Oorschot, "Science of security: Combining theory and measurement to reflect the observable," IEEE Security & Privacy, vol. 16, no. 1, pp. 12-22, 2018.
Superstrings with Intrinsic Torsion

Jerome P. Gauntlett, Dario Martelli, Daniel Waldram

Department of Physics, Queen Mary, University of London, Mile End Rd, London E1 4NS, U.K.

11 Jun 2003. arXiv: hep-th/0302158; DOI: 10.1103/physrevd.69.086002.

Abstract: We systematically analyse the necessary and sufficient conditions for the preservation of supersymmetry for bosonic geometries of the form ℝ^{1,9−d} × M_d, in the common NS-NS sector of type II string theory and also type I/heterotic string theory. The results are phrased in terms of the intrinsic torsion of G-structures and provide a comprehensive classification of static supersymmetric backgrounds in these theories. Generalised calibrations naturally appear since the geometries always admit NS or type I/heterotic fivebranes wrapping calibrated cycles. Some new solutions are presented. In particular we find d = 6 examples with a fibred structure which preserve N = 1, 2, 3 supersymmetry in type II and include compact type I/heterotic geometries.

[Passages displaced here by extraction; they belong to Section 1 and Appendix B of the paper.]

(From Section 1:) [...] fibration determined by an Abelian SU(3) instanton (i.e. a holomorphic gauge field satisfying the Donaldson-Uhlenbeck-Yau equation). More generally, we will determine the most general static supersymmetric geometries of the form ℝ^1 × M_9 preserving any number of Killing spinors ǫ+. If there is one Killing spinor the geometry will have a Spin(7) structure, but now in d = 9. Additional Killing spinors lead to additional Spin(7) structures, or equivalently a G-structure where G is the maximal common subgroup of them embedded in SO(9). The G-structures that arise are still given by the groups as in figure 1, but now in d = 9. We will show that the most general geometries consist of a number of flat directions non-trivially fibred over manifolds M_d that possess G-structures in the canonical dimension. The fibration is determined by Abelian generalised [...]. Another purpose of this paper is to present the new and the known results in a uniform way.

In particular, as emphasised in [7], the expression for the three-form can always be expressed in terms of the G-structure in a way related to "generalised calibrations" [17,18]. Specifically we always have an expression of the form

    *H = e^{2Φ} d( e^{−2Φ} Ξ ),    (1.5)

where Ξ is an invariant form which specifies, at least partially, the G-structure. Generalised calibrations extend the original definition of a calibration form to cases where the background has non-vanishing fluxes. In particular a generalised calibration form, here Ξ, is no longer closed and its exterior derivative is related to the flux, here H (and the dilaton Φ), as in (1.5). The physical significance of generalised calibrated cycles is that they minimise the energy functional of a brane wrapping the cycle in the presence of the fluxes.

The reason that (1.5) might have been anticipated is as follows. First one notes that the type of geometries under discussion arise as solutions describing NS fivebranes wrapping supersymmetric cycles in manifolds of special holonomy, including the full back-reaction of the brane on the geometry. To see this first recall that the geometry of an unwrapped NS fivebrane is a product of ℝ^{1,5} along the world-volume of the fivebrane with a transverse four-dimensional space with non-vanishing H and Φ. In addition, we know that a probe fivebrane with world-volume ℝ^{1,5−p} × Σ_p will be supersymmetric if Σ_p is a calibrated p-cycle in some special holonomy background. When we go beyond the probe approximation and consider the back reaction of the fivebrane on the geometry, we thus expect a geometry of the form ℝ^{1,5−p} × M_{p+4} with non-vanishing H and Φ. This is precisely the type of geometry we are considering. Now, on physical grounds, we know that we can always add a second probe brane without breaking supersymmetry provided it is wrapping a cycle calibrated by the same calibration form.

(From Appendix B:) It is useful to determine the relationships between the ten six-forms J^A ∧ J^B ∧ J^C for the Sp(2)-structure in d = 8. A general six-form, which is Hodge-dual to a two-form, corresponds to the Sp(2) representations in the decomposition 28 → 10 + 3(5) + 3(1). As the six-forms of interest are constructed from Sp(2)-singlets, they must correspond to the three singlets in the decomposition, and hence there must be seven relationships amongst the ten six-forms. The eigenvalues of ǫ^(a) under (γ^{1256}, γ^{1357}, γ^{1458}) are (+1, −1, −1), (−1, −1, +1) and (−1, +1, −1) for a = 1, 2, 3 respectively, and the three two-forms of the Sp(2) triplet are

    J^1 = e^{12} + e^{34} + e^{56} + e^{78},
    J^2 = e^{14} + e^{23} + e^{58} + e^{67},
    J^3 = e^{13} + e^{42} + e^{57} + e^{86}.

Each almost complex structure J^A, as an SU(4)-structure, has a corresponding (4,0)-form Ω^A, and each spinor ǫ^(a) also defines a corresponding Spin(7)-structure. An SU(2) × SU(2)-structure in d = 8 is defined by a pair of orthogonal SU(2)-structures, which we can write as two triplets of almost complex structures (J^A, J′^A): the J^A are given by combinations self-dual on the (a) index, and the J′^A by the corresponding anti-self-dual combinations. Explicitly,

    J^1 = e^{12} + e^{34},   J′^1 = e^{56} + e^{78},
    J^2 = e^{14} + e^{23},   J′^2 = e^{58} + e^{67},
    J^3 = e^{13} + e^{42},   J′^3 = e^{57} + e^{86}.
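The Sp(2) triplet J^A written out above behaves like a quaternionic triplet of almost complex structures on ℝ^8. A small numerical sketch, with index-to-matrix conventions of our own choosing (each two-form is treated as the matrix (J^A)_{mn}), checking that each squares to −1 and that, with these conventions, J^1 J^2 = J^3:

```python
import numpy as np

def two_form(pairs, dim=8):
    """Matrix of a two-form sum of e^{ab} terms: J[a,b] = +1 for each listed
    (a, b) pair (1-indexed), antisymmetrised."""
    J = np.zeros((dim, dim))
    for a, b in pairs:
        J[a - 1, b - 1] = 1.0
        J[b - 1, a - 1] = -1.0
    return J

# the Sp(2) triplet quoted above
J1 = two_form([(1, 2), (3, 4), (5, 6), (7, 8)])   # e^12 + e^34 + e^56 + e^78
J2 = two_form([(1, 4), (2, 3), (5, 8), (6, 7)])   # e^14 + e^23 + e^58 + e^67
J3 = two_form([(1, 3), (4, 2), (5, 7), (8, 6)])   # e^13 + e^42 + e^57 + e^86

I8 = np.eye(8)
for J in (J1, J2, J3):
    assert np.allclose(J @ J, -I8)    # each is an almost complex structure

assert np.allclose(J1 @ J2, J3)       # quaternion algebra: J1 J2 = J3 ...
assert np.allclose(J2 @ J1, -J3)      # ... and the triplet anticommutes
```

The same helper applied to the SU(2) × SU(2) forms (J^A, J′^A) shows that the two triplets act on orthogonal four-dimensional blocks.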
Introduction
Supersymmetric backgrounds of string/M-theory with non-vanishing fluxes are currently an active area of study for at least two reasons. Firstly, they provide a framework for searching for new models with attractive phenomenology and secondly, they appear in generalisations of the AdS/CFT correspondence. For both applications a detailed mathematical understanding of the kinds of geometry that can arise is important for further elucidating physical results. Such an understanding can also lead to new methods for constructing explicit examples.
Here we will analyse supersymmetric geometries of the common NS-NS sector of type IIA and IIB supergravity. That is, we consider non-vanishing dilaton Φ and three-form H but with all R-R fields and fermions set to zero. The closely related type I and heterotic geometries which allow in addition non-trivial gauge fields will also be considered. Let us introduce the basic conditions. A type II geometry will preserve supersymmetry if and only if there is at least one ǫ + or ǫ − satisfying
∇±_M ǫ± ≡ ( ∇_M ± (1/8) H_{MNP} Γ^{NP} ) ǫ± = 0,    ( Γ^M ∂_M Φ ± (1/12) H_{MNP} Γ^{MNP} ) ǫ± = 0,    (1.1)
where for type IIB (respectively IIA) ǫ± are two Majorana-Weyl spinors of Spin(1,9) of the same (respectively opposite) chirality and ∇ is the Levi-Civita connection. Geometrically ∇± are connections with totally anti-symmetric torsion given by ±(1/2)H. Locally the three-form is given by H = dB and hence satisfies the Bianchi identity

dH = 0.    (1.2)
For heterotic/type I string theory, the bosonic field content also includes a gauge field A, with field strength F, in the adjoint of E₈ × E₈ or SO(32)/ℤ₂. We choose conventions where a geometry preserves supersymmetry if there is at least one spinor ǫ+ satisfying (1.1) and, in addition, the gaugino variation vanishes,

Γ^{MN} F_{MN} ǫ+ = 0.    (1.3)

The Bianchi identity reads

dH = 2α′ ( Tr F ∧ F − tr R ∧ R ),    (1.4)

where the second term on the right hand side is the leading string correction to the supergravity expression. The equations of motion for these conventions can be found in Appendix A.
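The torsion term ±(1/8)H_{MNP}Γ^{NP} in (1.1) is valued in spin(d): for Hermitian gamma matrices and totally antisymmetric H it is anti-Hermitian, which is why ∇±-parallel spinors have constant norm (a fact used in section 2 below). A small numerical sketch in d = 4 rather than d = 10, with a gamma-matrix basis of our own choosing:

```python
import numpy as np
from itertools import permutations

# a Hermitian gamma-matrix basis for the d = 4 Euclidean Clifford algebra
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
gamma = [np.kron(s1, I2), np.kron(s2, I2), np.kron(s3, s1), np.kron(s3, s2)]

# Clifford algebra {Gamma^M, Gamma^N} = 2 delta^{MN}
for M in range(4):
    for N in range(4):
        anti = gamma[M] @ gamma[N] + gamma[N] @ gamma[M]
        assert np.allclose(anti, 2.0 * (M == N) * np.eye(4))

def Gamma2(N, P):
    """Antisymmetrised product Gamma^{NP} = (1/2)[Gamma^N, Gamma^P]."""
    return 0.5 * (gamma[N] @ gamma[P] - gamma[P] @ gamma[N])

# a random totally antisymmetric H_{MNP}
rng = np.random.default_rng(0)
raw = rng.standard_normal((4, 4, 4))
H = np.zeros((4, 4, 4))
for p in permutations(range(3)):
    sign = np.linalg.det(np.eye(3)[list(p)])   # parity of the permutation
    H += sign * np.transpose(raw, p)

# the torsion piece of nabla^+_M is anti-Hermitian, so for a parallel spinor
# eps one gets d(eps^dagger eps) = 0, i.e. constant norm
for M in range(4):
    T = sum(0.125 * H[M, N, P] * Gamma2(N, P)
            for N in range(4) for P in range(4))
    assert np.allclose(T, -T.conj().T)
```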
The geometries we consider here will be of the form ℝ^{1,9−d} × M_d, and hence with H, Φ only non-vanishing on M_d. When d = 9 the analysis covers the most general static geometries. As is well known, for the special case when H = Φ = 0, the necessary and sufficient condition for the preservation of supersymmetry is that M_d admits at least one covariantly constant spinor and hence has special holonomy. Apart from the trivial case of flat space this gives rise to the possibilities presented in figure 1. These manifolds are all Ricci-flat and hence they automatically also solve the supergravity equations of motion.¹

Figure 1: The special holonomy groups in their canonical dimensions, with the arrows of the original diagram denoting embeddings as subgroups: Spin(7) ⊃ SU(4) ⊃ Sp(2) ⊃ SU(2) × SU(2) in d = 8; G₂ in d = 7; SU(3) in d = 6.

Note that figure 1 only presents the minimal "canonical" dimension d of the manifold in order for it to have the corresponding special holonomy. It is also possible to have manifolds of higher dimension with the same special holonomy group: when H = Φ = 0, after going to the covering space, the resulting geometries are simply direct products of special holonomy manifolds in the canonical dimensions given in figure 1 with one or more flat directions. The analysis of a set of necessary and sufficient conditions for the preservation of supersymmetry in certain cases where H and Φ are non-zero was initiated some time ago in [2] (see also [3,4]). In general, from the first condition in (1.1), it is necessary that there is at least one spinor which is covariantly constant with respect to one of the connections ∇± with totally anti-symmetric torsion, ∇+ say. This is equivalent to requiring that ∇+ has holonomy given by one of the groups in figure 1. As we discuss in more detail below this implies the existence of various invariant forms on M_d satisfying certain differential constraints.

¹ Note that there are also higher order corrections to the equations of motion that give rise to tadpoles for type IIA in d = 8 and IIB in d = 6 (via F-theory) [1]. The tadpoles can often be cancelled by the addition of spacetime-filling strings or D3-branes, respectively. Here we shall not explicitly refer to these corrections further.
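The chain of embeddings in figure 1 comes with two pieces of standard representation-theory bookkeeping that recur in tables 1 and 2 below: the group dimensions shrink along each inclusion, while the number of singlets in the decomposition of the eight-dimensional spinor representation — which counts the ∇+-covariantly-constant spinors in d = 8 — grows. A short sketch with the standard values entered by hand:

```python
# (group, dim G, singlets in the 8_s of Spin(8)); standard values
chain = [
    ("Spin(7)",      21, 1),
    ("SU(4)",        15, 2),
    ("Sp(2)",        10, 3),
    ("SU(2)xSU(2)",   6, 4),
]

def dim_spin(n):          # dim so(n)
    return n * (n - 1) // 2

def dim_su(n):
    return n * n - 1

# cross-check the dimensions against the Lie-algebra formulas
assert chain[0][1] == dim_spin(7)
assert chain[1][1] == dim_su(4)
assert chain[2][1] == dim_spin(5)        # Sp(2) ~ Spin(5)
assert chain[3][1] == 2 * dim_su(2)

# dimensions strictly decrease along the inclusions ...
dims = [d for _, d, _ in chain]
assert dims == sorted(dims, reverse=True)

# ... while the number of invariant spinors strictly increases
sings = [s for _, _, s in chain]
assert sings == sorted(sings)

# the lower-dimensional structures of figure 1: G_2 (dim 14) fixes one spinor
# of Spin(7) in d = 7, SU(3) (dim 8) fixes two spinors of Spin(6) in d = 6
assert dim_su(3) == 8 and dim_spin(7) > 14 > dim_su(3)
```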
The second equation in (1.1) then imposes additional conditions on the forms. Finally, one
shows that the existence of such a set of forms with constraints is in fact sufficient for the existence of one or more solutions to the supersymmetry conditions (1.1).
It is also important to know what extra conditions are required in order that the geometry solves the equations of motion. By analysing the integrability conditions of (1.1), it was proved in [5] for the entire class of geometries under consideration, that it is only necessary to impose the Bianchi identity (1.2). Note, it was actually shown that one needs to impose the Bianchi identity for H and the H equation of motion. However, the expression for H implied by supersymmetry, to be discussed below and given in (1.5), implies that the H equation of motion is automatically satisfied so only the Bianchi identity is required.
Recently it has been appreciated that the necessary and sufficient conditions derived in [2], which just analysed the SU(n) cases in d = 2n, can also be phrased in terms of G-structures, and this has allowed a number of generalisations [7,8,9,10,5]. Similar ideas have been used to analyse other supergravity solutions in [11,12,13,14,15]. The invariant forms on M_d define the G-structure, while the differential conditions correspond to restricting the class of the intrinsic torsion of the G-structure. We will briefly review some aspects of G-structures later, but we refer to, e.g., [6] for further details. The necessary and sufficient conditions for the G₂ in d = 7 [7,8,9] and Spin(7) in d = 8 [10] cases have also been analysed from this point of view. Thus, when only ∇+, say, has special holonomy G, we now have a fairly complete set of results, assuming that M_d has the canonical dimension for G as given in figure 1. We shall review all known cases including the results of [2]. Note the SU(3) case was also recently reviewed in detail from the new perspective of intrinsic torsion in [16]. One new result of this paper will be to analyse the remaining two cases in d = 8 when ∇+ has holonomy Sp(2) or SU(2) × SU(2).
One can also ask what happens when M d does not have the canonical dimension for G. For example, we might consider geometries of the form Ê 1,2 × M 7 , with M 7 admitting two Killing spinors leading to M 7 having an SU(3) structure corresponding to ∇ + having SU(3) holonomy. In the case that H = Φ = 0, as already noted, this would necessarily imply that M 7 is a direct product of a flat direction with a Calabi-Yau three-fold. When H, Φ = 0, however, we will show that the geometries can be more general than simply the direct product of a flat direction with a six manifold M 6 with SU(3) structure of the type derived in [2]. In particular, the flat direction can be non-trivially fibred over M 6 with the same calibration form Ξ as the original probe brane. This implies that as we switch on the back reaction, Ξ should still be a calibrating form, though now, since H and Φ are non-zero, it is a generalised calibration. In other words, if the original probe brane wraps a cycle calibrated by a calibration form Ξ, the final geometry M p+4 should admit the corresponding generalised calibration form, that is Ξ satisfying (1.5).
In table 1 we summarise these geometries. [Only the final row of the table survives extraction:]

(1,0)   Spin(7)   Spin(8)   Spin(7)   Cayley in Spin(7)

Table 1: G-structures when ∇+ has special holonomy.
It is interesting to note that the more general geometries in d = 9 mentioned above, with a number of flat directions fibred over M d , have a fascinating interpretation in this regard. In particular, the flat directions correspond to directions along the world-volume of the fivebrane wrapping a flat direction, and so it is surprising that supersymmetry does not require the fibration to be trivial. Note that this interpretation is mirrored in the refined version of (1.5) for the flux that one obtains in d = 9:
*H = e^{2Φ} d( e^{−2Φ} Ξ ∧ K^1 ∧ · · · ∧ K^{9−d} )    (1.6)
where Ξ, K_i (partly) determine the G-structure, with K_i one-forms corresponding to the flat directions of the fivebrane.
The fact that the geometries all satisfy calibration conditions of the form (1.5) connects with a simple vanishing theorem for compact backgrounds [29,30]. The theorem is reproduced in the special supersymmetric sub-case as a consequence of the calibration condition (1.5) and the Bianchi identity. This is a reflection of the general result [30,5] that the equations of motion are implied by the preservation of supersymmetry and the Bianchi identity. One has
∫_{M_d} e^{−2Φ} H ∧ *H = ∫_{M_d} H ∧ d( e^{−2Φ} Ξ ) = − ∫_{M_d} e^{−2Φ} dH ∧ Ξ.    (1.8)
The simplest case [5] is when dH = 0 (as is true for any type II background). We then have H = dΦ = 0 by the same positivity argument as above. (This simplifies and extends an earlier vanishing theorem that was given for the SU(n) cases only in [19].) When both ∇± have special holonomy, the two Killing spinors ǫ± each define a different structure with groups G±. Equivalently, together they define a single structure with group G which is the maximal common subgroup of the two embedded in SO(d), and this is also listed in table 2. It is noteworthy that from the wrapped fivebrane perspective, in all cases this minimal G-structure is the same as the holonomy of the initial special holonomy manifold that one started with. Since both ǫ± are required to define the G-structure, unlike the G±-structures, it is not covariantly constant with respect to a connection with totally anti-symmetric torsion.

Table 2: G-structures in type II theories when both ∇± have special holonomy.

dim(M)  N_IIB  N_IIA  Hol(∇+)   Hol(∇−)   G-structure  calibrated cycle
4       (1,1)  (2,0)  SU(2)     SU(2)     {1}          point in ℝ^4
6       2      2      SU(3)     SU(3)     SU(2)        Kähler-2 in CY_2
7       2      2      G_2       G_2       SU(3)        SLAG-3 in CY_3
8       (2,2)  (4,0)  SU(4)     SU(4)     SU(3)        Kähler-4 in CY_3
8       (4,0)  (2,2)  SU(4)     SU(4)     SU(2)^2      Kähler-2 × Kähler-2 in CY_2 × CY′_2
8       (3,0)  (2,1)  SU(4)     Spin(7)   Sp(2)        C-LAG-4 in HK_2
8       (2,0)  (1,1)  Spin(7)   Spin(7)   SU(4)        SLAG-4 in CY_4
8       (1,1)  (2,0)  Spin(7)   Spin(7)   G_2          coassociative in G_2
The particular class of geometries with ∇ ± each having G 2 holonomy, with a common SU(3) subgroup was analysed in detail in [5]. The necessary and sufficient conditions on the SU (3) structure in order that the geometry preserves supersymmetry were presented. This case is associated with fivebranes wrapping SLAG three-cycles in manifolds with SU (3) holonomy. It was also shown that the three-form flux can be expressed as a generalised calibration associated with a (3,0) form, as expected for a special Lagrangian cycle. This result again refines that of (1.5) in a way expected from physical considerations. Here we shall extend the analysis of [5] to cover all cases discussed in [7]. Table 2 lists the geometries associated with fivebranes wrapping calibrated cycles. Note that explicit solutions corresponding to three more cases were discussed in [20]: ∇ + has Sp (2) holonomy, while ∇ − has Spin (7), SU (4) or Sp(2) holonomy. They correspond to fivebranes wrapping certain quaternionic planes in Ê 8 . Such calibrations are linear and it is plausible that the solutions found in [20] are the most general kinds of solution. In any case, we will not consider these cases further in this paper.
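The integration by parts behind (1.8) rests on the graded Leibniz rule d(H ∧ α) = dH ∧ α − H ∧ dα for a three-form H, together with Stokes' theorem on compact M_d. A toy check of the rule and of d² = 0, using a hand-rolled exact exterior algebra with polynomial coefficients (our own construction, not from the paper):

```python
from itertools import combinations

# exterior algebra on R^n with polynomial coefficients, exact integer arithmetic:
# a polynomial is {exponent-tuple: coeff}; a p-form is {sorted index-tuple: polynomial}
n = 5

def pmul(f, g):
    out = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = tuple(a + b for a, b in zip(ef, eg))
            out[e] = out.get(e, 0) + cf * cg
    return out

def padd(f, g, k=1):
    out = dict(f)
    for e, c in g.items():
        out[e] = out.get(e, 0) + k * c
    return {e: c for e, c in out.items() if c != 0}

def pdiff(f, m):                      # partial derivative w.r.t. x_m
    out = {}
    for e, c in f.items():
        if e[m]:
            e2 = e[:m] + (e[m] - 1,) + e[m + 1:]
            out[e2] = out.get(e2, 0) + c * e[m]
    return out

def sort_sign(idx):                   # sort indices, tracking permutation sign
    idx = list(idx)
    if len(set(idx)) != len(idx):
        return None, 0
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return tuple(idx), sign

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            key, s = sort_sign(ia + ib)
            if s:
                prod = pmul(ca, cb)
                if s < 0:
                    prod = {e: -c for e, c in prod.items()}
                out[key] = padd(out.get(key, {}), prod)
    return out

def ext_d(a):                         # exterior derivative
    out = {}
    for idx, c in a.items():
        for m in range(n):
            key, s = sort_sign((m,) + idx)
            if s:
                out[key] = padd(out.get(key, {}), pdiff(c, m), k=s)
    return out

def form_sub(a, b):
    keys = set(a) | set(b)
    return {k: padd(a.get(k, {}), b.get(k, {}), k=-1) for k in keys}

def is_zero(a):
    return all(not c for c in a.values())

def mono(m):                          # the coordinate function x_m
    e = [0] * n
    e[m] = 1
    return {tuple(e): 1}

# H: a 3-form, alpha: a 1-form, both with polynomial coefficients
H = {idx: pmul(mono(idx[0]), mono(idx[2])) for idx in combinations(range(n), 3)}
alpha = {(m,): padd(mono((m + 1) % n), mono(m)) for m in range(n)}

# graded Leibniz rule for a three-form: d(H ^ a) = dH ^ a - H ^ da
lhs = ext_d(wedge(H, alpha))
rhs = form_sub(wedge(ext_d(H), alpha), wedge(H, ext_d(alpha)))
assert is_zero(form_sub(lhs, rhs))

# and d^2 = 0, consistent with dH = 0 for H = dB
assert is_zero(ext_d(ext_d(H)))
```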
The geometries listed in table 2 are all in their "canonical" dimension. We will argue that they can be generalised to d = 9, as before, by adding a number of flat directions. In order that both ǫ + and ǫ − Killing spinors survive, the fibration must be given by a generalised instanton with respect to the common G-structure.
It is natural to wonder if supersymmetric geometries admitting both ǫ + and ǫ − Killing spinors are necessarily of the type given in [21].
In this paper we will not explicitly present many detailed proofs since the arguments follow the same lines as those in [7,5], and also because we do not want to obscure the main results. The plan of the rest of the paper is as follows. In section 2 we review G-structures and their intrinsic torsion. In section 3 we discuss the geometries summarised in table 1.
We also comment on the additional constraints arising in type I/heterotic string theory.
Section 4 analyses the general supersymmetric geometries in d = 9 when one of the connections ∇ ± has special holonomy, which generalises the geometries of
G-structures in canonical dimension
It will be useful first to recall some aspects of the classification of G-structures (for further details see e.g. [6]). The spinors ǫ± are globally well defined since ∇±ǫ± = 0 implies they have constant norm, which we take to be unity, ǭ±ǫ± = 1, and so are nowhere vanishing. This necessarily defines a G-structure with G ⊂ Spin(d). In this section we will summarise the definition of the structures and how the generic intrinsic torsion is encoded in each case. We will only consider the structures in their canonical dimensions: Spin(7) in d = 8, G₂ in d = 7, etc. It is straightforward to generalise to the case that the structure is in a higher dimension (for an example, see appendix E of [13]). In the following sections we then turn to the particular necessary and sufficient conditions on the structure for supersymmetry.
SU (n)-structure in d = 2n: The structure is completely specified by a real two-form J of maximal rank and a complex n-form Ω satisfying
J ∧ Ω = 0, Ω ∧ Ω̄ = i^{n(n+2)} (2^n / n!) Jⁿ, (2.1)
where J n is defined using the wedge product. Together these define a metric g d and an
orientation chosen as vol = Jⁿ/n!. Raising an index on J using this metric defines an almost complex structure satisfying J² = −𝟙. With respect to this almost complex structure, Ω is an (n, 0)-form while the two-form J is of type (1,1). Furthermore the metric g_d is almost Hermitian. Note that the almost complex structure is actually determined solely by the choice of Ω and is independent of the two-form J.
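As a concrete illustration (not taken from the paper), the compatibility and normalisation conditions (2.1) can be checked on flat Cⁿ with the standard structure dz_k = e^{2k−1} + i e^{2k}, J = Σ_k e^{2k−1} ∧ e^{2k}, Ω = dz_1 ∧ · · · ∧ dz_n. The following sketch implements a toy exterior algebra and verifies the n = 2 case:

```python
import math

def wedge(a, b):
    """Wedge product; forms stored as {strictly increasing index tuple: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue  # repeated one-form vanishes
            sign, lst = 1, list(idx)
            for i in range(len(lst)):
                for j in range(i + 1, len(lst)):
                    if lst[i] > lst[j]:
                        sign = -sign
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

n = 2
# dz_k = e^{2k-1} + i e^{2k};  J = sum_k e^{2k-1} ^ e^{2k};  Omega = dz_1 ^ ... ^ dz_n
dz = [{(2 * k + 1,): 1, (2 * k + 2,): 1j} for k in range(n)]
J = {(2 * k + 1, 2 * k + 2): 1 for k in range(n)}

Omega = dz[0]
for k in range(1, n):
    Omega = wedge(Omega, dz[k])
Omega_bar = {key: v.conjugate() for key, v in Omega.items()}

Jn = J
for _ in range(n - 1):
    Jn = wedge(Jn, J)

lhs = wedge(Omega, Omega_bar)
coeff = (1j) ** (n * (n + 2)) * 2 ** n / math.factorial(n)
rhs = {key: coeff * v for key, v in Jn.items()}

assert wedge(J, Omega) == {}                      # J ^ Omega = 0
assert set(lhs) == set(rhs)
assert all(abs(lhs[key] - rhs[key]) < 1e-12 for key in lhs)
```

For n = 2 both sides come out as 4 e¹²³⁴, in agreement with i^{n(n+2)} 2ⁿ/n! = 2 and J²/2! = e¹²³⁴.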
For generic SU(n) structures, the intrinsic torsion decomposes into five modules W_i [6,23,24]. Consider for instance SU(4). The adjoint representation of Spin(8) decomposes as 28 → 1 + 6 + 6̄ + 15, where 15 is the adjoint representation of SU(4), and so the remaining representations correspond to su(4)^⊥. The one-form representation Λ¹ decomposes as 8 → 4 + 4̄. We then have

T ∈ Λ¹ ⊗ su(n)^⊥ = W_1 ⊕ W_2 ⊕ W_3 ⊕ W_4 ⊕ W_5, (2.2)

where for SU(4) the corresponding representations of the W_i are given by

(4 + 4̄) × (1 + 6 + 6̄) = (4 + 4̄) + (20 + 20̄) + (20 + 20̄) + (4 + 4̄) + (4 + 4̄). (2.3)

For n = 2 and n = 3 the corresponding representations are

(2 + 2̄) × (1 + 1 + 1) = (2 + 2̄) + (2 + 2̄) + (2 + 2̄),
(3 + 3̄) × (1 + 3 + 3̄) = (1 + 1) + (8 + 8) + (6 + 6̄) + (3 + 3̄) + (3 + 3̄), (2.4)
respectively. In particular, for n = 2 the modules W 1 and W 3 are absent. For n = 3 note that the W 1 and W 2 modules can be further decomposed into real modules W ± 1 and W ± 2 as discussed in detail in [24].
Each component of the intrinsic torsion W i ∈ W i can be given in terms of the exterior derivative of J or Ω, or in one case both. Generically, we have the decompositions
dJ ∈ W 1 ⊕ W 3 ⊕ W 4 , dΩ ∈ W 1 ⊕ W 2 ⊕ W 5 .
(2.5)
Explicitly, since J is a (1,1)-form, dJ has a (3,0) piece and a (2,1) piece (plus the complex conjugates). The former defines an irrep of SU(n) and gives the W_1 component of T. The latter splits into a primitive piece dJ^{(2,1)}_0, giving W_3, and a piece captured by a (1,0)-form, giving W_4, which can be written as
W_4 ≡ J ⌟ dJ. (2.6)
The same expression appears in characterising any almost Hermitian metric and is known as the Lee form (of J). Here we have introduced the notation ω ⌟ ν, which contracts a p-form ω into an (n + p)-form ν via
(ω ⌟ ν)_{i₁...i_n} = (1/p!) ω^{j₁...j_p} ν_{j₁...j_p i₁...i_n}. (2.7)
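On flat space, with forms stored as totally antisymmetric arrays (so upper and lower indices coincide), the contraction (2.7) is a one-line tensordot. The following illustrative sketch checks, e.g., that e¹² ⌟ e¹²³ = e³ and that J ⌟ (J ∧ J) = 2J in d = 4:

```python
import math
import numpy as np
from itertools import permutations

def antisymmetrize(T):
    p = T.ndim
    out = np.zeros_like(T)
    for perm in permutations(range(p)):
        sign = 1
        for i in range(p):
            for j in range(i + 1, p):
                if perm[i] > perm[j]:
                    sign = -sign
        out = out + sign * np.transpose(T, perm)
    return out / math.factorial(p)

def wedge(a, b):
    p, q = a.ndim, b.ndim
    c = math.factorial(p + q) / (math.factorial(p) * math.factorial(q))
    return c * antisymmetrize(np.multiply.outer(a, b))

def contract(w, nu):
    """(w ⌟ nu)_{i1...} = (1/p!) w^{j1..jp} nu_{j1..jp i1...}, flat metric."""
    p = w.ndim
    return np.tensordot(w, nu, axes=(list(range(p)), list(range(p)))) / math.factorial(p)

d = 4
e = [np.eye(d)[i] for i in range(d)]           # one-forms e^1..e^4 (0-indexed)
e12 = wedge(e[0], e[1])
e123 = wedge(e12, e[2])
assert np.allclose(contract(e12, e123), e[2])  # e^{12} ⌟ e^{123} = e^3

J = np.zeros((d, d))
J[0, 1], J[2, 3] = 1, 1
J = J - J.T                                    # J = e^{12} + e^{34}
assert np.allclose(contract(J, wedge(J, J)), 2 * J)
```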
Similarly, since Ω is an (n, 0)-form, dΩ has a (n, 1) piece plus a (n − 1, 2) piece. Let us first consider n > 2. Again the former defines an irrep, which gives W_5 and can be written as a Lee form for either Re Ω or, equivalently, Im Ω
W_5 ≡ (1/4)(Ω̄ ⌟ dΩ + Ω ⌟ dΩ̄) = Re Ω ⌟ d(Re Ω) = Im Ω ⌟ d(Im Ω), n > 2. (2.8)
The second line is obtained by noting that Ω ⌟ dΩ = 0 for n > 2. In general, the (n − 1, 2) piece of dΩ splits into a primitive piece dΩ^{(n−1,2)}_0, giving W_2, plus another piece that encodes the same W_1 component of T as dJ^{(3,0)}, due to the second compatibility condition in (2.1). Note that for SU(3), W^±_{1,2} can be defined as the real and imaginary parts of W_{1,2}, respectively. For SU(2), as noted, the classes W_1 and W_3 are absent. In this case W_5 is still given by the first line of (2.8), while W_2 is defined by
W_2 = (1/4)(Ω ⌟ dΩ + Ω̄ ⌟ dΩ̄). (2.9)
Recall that we have SU (n)-holonomy if all the components of the intrinsic torsion vanish.
In this case the manifold is Calabi-Yau. Clearly this occurs if and only if dJ = dΩ = 0. It will be useful to note two further cases. First, the almost complex structure is integrable if and only if W_1 = W_2 = 0. Secondly, we note that under a conformal transformation of the SU(n)-structure, such that J → e^{2f} J and Ω → e^{nf} Ω, which implies that the metric scales as g → e^{2f} g, the classes W_1, W_2 and W_3 are invariant, as is the combination

(2n − 2) W_5 + (−1)^{n+1} 2^{n−2} n W_4. (2.10)

Spin(7)-structure in d = 8: The structure is specified by a four-form Ψ, which in a standard local frame can be written as

Ψ = e^{1238} + e^{1458} + e^{1678} + e^{2468} − e^{2578} − e^{3478} − e^{3568} + e^{4567} + e^{2367} + e^{2345} + e^{1357} − e^{1346} − e^{1256} − e^{1247}, (2.11)

where the e^m define a local frame and e^{mnpq} = e^m ∧ e^n ∧ e^p ∧ e^q. The structure defines a metric g_8 = (e^1)² + · · · + (e^8)² and an orientation which we take to be vol = e^1 ∧ · · · ∧ e^8, implying *Ψ = Ψ.
The adjoint representation of SO(8) decomposes under Spin(7) as 28 → 7+21, where 21
is the adjoint representation of Spin(7). One then finds that the intrinsic torsion decomposes into two modules [25]

T ∈ Λ¹ ⊗ spin(7)^⊥ = W_1 ⊕ W_2, 8 × 7 = 8 + 48. (2.12)
The components W i of T in W i are given in terms of the exterior derivative dΨ as, again decomposing into Spin (7) representations,
dΨ ∈ Λ⁵ ≅ W_1 ⊕ W_2, 56 → 8 + 48. (2.13)
In particular the W_1 component in the 8 representation is given by

W_1 ≡ Ψ ⌟ dΨ. (2.14)

Given the definition (2.11) one has a number of standard identities, which will be useful in what follows. We have

Ψ_{m1m2m3p} Ψ_{n1n2n3}{}^p = 6 δ^{m1m2m3}_{n1n2n3} + 9 Ψ_{[m1m2}{}^{[n1n2} δ_{m3]}{}^{n3]},
Ψ_{m1m2p1p2} Ψ_{n1n2}{}^{p1p2} = 12 δ^{m1m2}_{n1n2} + 4 Ψ_{m1m2}{}^{n1n2},
Ψ_{mp1p2p3} Ψ_n{}^{p1p2p3} = 42 δ_m{}^n. (2.15)

G₂-structure in d = 7: The structure is specified by a three-form φ. This defines a metric g_7 = (e^1)² + · · · + (e^7)² and an orientation vol = e^1 ∧ · · · ∧ e^7. Explicitly, in a standard local frame,

φ = e^{123} + e^{145} + e^{167} + e^{246} − e^{257} − e^{347} − e^{356}, (2.16)

so that

*φ = e^{4567} + e^{2367} + e^{2345} + e^{1357} − e^{1346} − e^{1256} − e^{1247}. (2.17)

The adjoint representation of SO(7) decomposes as 21 → 7 + 14, where 14 is the adjoint representation of G₂. The intrinsic torsion then decomposes into four modules [26],
T ∈ Λ¹ ⊗ g₂^⊥ = W_1 ⊕ W_2 ⊕ W_3 ⊕ W_4, 7 × 7 = 1 + 14 + 27 + 7. (2.18)
The components of T in each module W_i are encoded in terms of dφ and d*φ, which decompose as

dφ ∈ Λ⁴ ≅ W_1 ⊕ W_3 ⊕ W_4, 35 → 1 + 27 + 7,
d*φ ∈ Λ⁵ ≅ W_2 ⊕ W_4, 21 → 14 + 7. (2.19)
Note that the W 4 component in the 7 representation appears in both dφ and d * φ. It is the Lee form, given by
W_4 ≡ φ ⌟ dφ = − *φ ⌟ d*φ.
(2.20)
The W_1 component in the singlet representation can be written as [27]

W_1 ≡ *(φ ∧ dφ). (2.21)
Again there are a number of useful identities given the definition (2.16). We have

*φ_{m1m2m3p} *φ^{n1n2n3p} = 6 δ^{n1n2n3}_{m1m2m3} + 9 *φ_{[m1m2}{}^{[n1n2} δ_{m3]}{}^{n3]} − φ_{m1m2m3} φ^{n1n2n3},
*φ_{m1m2p1p2} *φ^{n1n2p1p2} = 8 δ^{n1n2}_{m1m2} + 2 *φ_{m1m2}{}^{n1n2},
*φ_{mp1p2p3} *φ^{np1p2p3} = 24 δ^n_m, (2.22)

while

φ_{m1m2p} φ^{n1n2p} = 2 δ^{n1n2}_{m1m2} + *φ_{m1m2}{}^{n1n2},
φ_{mp1p2} φ^{np1p2} = 6 δ^n_m, (2.23)

and

φ_{m1m2p} *φ^{n1n2n3p} = 6 φ_{[m1}{}^{[n1n2} δ_{m2]}{}^{n3]},
φ_{mp1p2} *φ^{n1n2p1p2} = 4 φ_m{}^{n1n2}. (2.24)
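These contraction identities, together with the Spin(7) identities (2.15) for the Cayley form Ψ = φ ∧ e^8 + *φ, can be verified numerically. The sketch below uses one standard frame convention for φ (which may differ from the paper's by signs or relabelling of the frame), and the Hodge dual is computed with the flat Levi-Civita symbol:

```python
import math
import numpy as np
from itertools import permutations

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def set_comp(T, idx, val):
    # fill every index permutation of a totally antisymmetric tensor
    for p in permutations(range(len(idx))):
        T[tuple(idx[i] for i in p)] = perm_sign(p) * val

def antisym_axes(T, axes):
    out = np.zeros_like(T)
    for p in permutations(range(len(axes))):
        perm = list(range(T.ndim))
        for i, ax in enumerate(axes):
            perm[ax] = axes[p[i]]
        out = out + perm_sign(p) * np.transpose(T, perm)
    return out / math.factorial(len(axes))

# G2 three-form in a standard (0-indexed) frame convention:
# phi = e^{123}+e^{145}+e^{167}+e^{246}-e^{257}-e^{347}-e^{356}
phi = np.zeros((7,) * 3)
for idx, v in [((0, 1, 2), 1), ((0, 3, 4), 1), ((0, 5, 6), 1), ((1, 3, 5), 1),
               ((1, 4, 6), -1), ((2, 3, 6), -1), ((2, 4, 5), -1)]:
    set_comp(phi, idx, v)

eps7 = np.zeros((7,) * 7)
for p in permutations(range(7)):
    eps7[p] = perm_sign(p)
psi = np.einsum('abcdmnp,mnp->abcd', eps7, phi) / 6.0     # psi = *phi

I7 = np.eye(7)
D2 = 0.5 * (np.einsum('ac,bd->abcd', I7, I7) - np.einsum('ad,bc->abcd', I7, I7))

# (2.23) and the last two identities of (2.22)
assert np.allclose(np.einsum('abp,cdp->abcd', phi, phi), 2 * D2 + psi)
assert np.allclose(np.einsum('apq,bpq->ab', phi, phi), 6 * I7)
assert np.allclose(np.einsum('abpq,cdpq->abcd', psi, psi), 8 * D2 + 2 * psi)
assert np.allclose(np.einsum('apqr,bpqr->ab', psi, psi), 24 * I7)
# (2.24)
mixed = np.einsum('abp,cdep->abcde', phi, psi)
rhs = antisym_axes(antisym_axes(np.einsum('acd,be->abcde', phi, I7), [0, 1]), [2, 3, 4])
assert np.allclose(mixed, 6 * rhs)
assert np.allclose(np.einsum('mpq,abpq->mab', phi, psi), 4 * phi)

# Cayley four-form Psi = phi ^ e^8 + *phi and the last two Spin(7) identities (2.15)
Psi = np.zeros((8,) * 4)
for a in range(7):
    for b in range(7):
        for c in range(7):
            Psi[a, b, c, 7] = phi[a, b, c]
            Psi[a, b, 7, c] = -phi[a, b, c]
            Psi[a, 7, b, c] = phi[a, b, c]
            Psi[7, a, b, c] = -phi[a, b, c]
Psi[:7, :7, :7, :7] = psi
I8 = np.eye(8)
D2_8 = 0.5 * (np.einsum('ac,bd->abcd', I8, I8) - np.einsum('ad,bc->abcd', I8, I8))
assert np.allclose(np.einsum('abpq,cdpq->abcd', Psi, Psi), 12 * D2_8 + 4 * Psi)
assert np.allclose(np.einsum('apqr,bpqr->ab', Psi, Psi), 42 * I8)
```

Here δ with multiple indices is weight-one antisymmetrised, e.g. 2δ^{ab}_{cd} = δ^a_c δ^b_d − δ^a_d δ^b_c, which is the convention the asserts test.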
Sp(n)-structures in d = 4n: The structure is specified by three almost complex structures J A with A = 1, 2, 3 satisfying the algebra
J^A · J^B = −δ^{AB} 𝟙 + ε^{ABC} J^C. (2.25)
Together these define a metric g d . Lowering one index with this metric on each almost complex structure gives a set of maximal rank two-forms J A . Note that the Sp(n)-structure could be equally well defined in terms of these forms. We also have a natural orientation given by vol = (J A ) 2n /(2n)! for any J A .
For n = 1 recall that Sp(1) ≅ SU(2) and this case has already been considered above. We can make the correspondence by identifying J ≡ J^3 and Ω ≡ J^2 + iJ^1. In more detail, first note that one can define nine Lee-forms L^{AB} ≡ J^A ⌟ dJ^B, but for SU(2) only the diagonal Lee-forms are independent, since J^A · L^{AB} is independent of A for each B. The three classes of intrinsic torsion defined above are, from the SU(2) point of view, given by W_2 = ½(L^{22} − L^{11}), W_4 = L^{33} and W_5 = ½(L^{11} + L^{22})
. Note that the almost complex structure J 3 is integrable if and only if L 11 − L 22 = 0 and similarly for J 1 and J 2 [28].
The only other case of interest in the context of this paper is Sp(2). The adjoint representation of SO(8) decomposes under Sp(2) as 28 → 3(1) + 3(5) + 10, where 10 is the adjoint representation of Sp(2). One then finds that the intrinsic torsion decomposes into 9 different Sp(2) modules
T ∈ Λ¹ ⊗ sp(2)^⊥ = ⊕_{i=1}^{9} W_i,
(4 + 4) × (3(1) + 3(5)) = 6(4 + 4) + 3(16 + 16), (2.26)
where the notation takes into account that while the torsion is real, the representations 4 and 16 are pseudo-real. One can show that all the components of T in W_i are specified in terms of the exterior derivatives dJ^A. Thus the Sp(2) manifold has Sp(2) holonomy if and only if dJ^A = 0. In general, six of the nine Lee forms L^{AB} ≡ J^A ⌟ dJ^B are linearly independent (this is actually true for any Sp(n)-structure), and these precisely correspond to the six (4 + 4) representations appearing in (2.26). To be more precise, one can show that

L^{12} + L^{21} = J^3 · (L^{11} − L^{22}), L^{31} + L^{13} = J^2 · (L^{33} − L^{11}), L^{23} + L^{32} = J^1 · (L^{22} − L^{33}), (2.27)

and hence six independent Lee-forms are given by L^{11}, L^{22} and L^{33}, together with L^{12} − L^{21}, L^{31} − L^{13} and L^{23} − L^{32}. (Note that similar definitions of the independent Lee forms in the case of almost quaternionic manifolds are given in [31].) One also notes the relation

*(J^A ∧ J^B ∧ dJ^C) = J^A · L^{BC} + J^B · L^{AC}. (2.28)
SU(2) × SU(2)-structure in d = 8: The structure is specified by two sets of three almost complex structures J^A and J′^B, each set separately satisfying the quaternion algebra (2.25), with the two sets mutually annihilating,

J^A · J′^B = 0. (2.30)
Again these define a metric. Lowering one index on the almost complex structures gives six half-maximal rank two-forms. We also have a natural eight-dimensional orientation given by vol ∧ vol ′ where vol = (J A ) 2 /2 and vol ′ = (J ′B ) 2 /2 for any A and B.
Following the usual prescription, decomposing the adjoint representation of SO(8) into SU(2) × SU(2) representations to give (su(2) ⊕ su(2))^⊥, one finds 28 different real modules:

T ∈ Λ¹ ⊗ (su(2) ⊕ su(2))^⊥ = ⊕_{i=1}^{28} W_i,
((2 + 2̄, 1) + (1, 2 + 2̄)) × (6(1, 1) + (2 + 2̄, 2 + 2̄)) = 10(2 + 2̄, 1) + 10(1, 2 + 2̄) + 4(3, 2 + 2̄) + 4(2 + 2̄, 3). (2.31)
Since the SU(2)-structures are orthogonal, we necessarily have an almost product structure Π. This is a tensor Π^m{}_n satisfying Π · Π = 𝟙. It can be written in terms of the complex structures as Π = J^A · J^A − J′^B · J′^B for any A and B. Equivalently, it can be written as the product of the two commuting almost complex structures J^± = J^A ± J′^B. As discussed in appendix C, generically the almost product structure is not integrable.
Geometries with ǫ + Killing spinors in canonical dimension
We now consider generic supersymmetric type II geometries (M d , g d , Φ, H) when only one of the connections ∇ ± has special holonomy. For definiteness we choose it to be ∇ + . The different possible holonomies are the usual groups given in figure 1. In this section we will only consider geometries with ∇ + having special holonomy in its minimal canonical dimension:
the cases are listed in table 1. Our aim is to summarise the known cases in a uniform way as well as to present new results on the two remaining cases, Sp(2) and SU (2) × SU (2). At the end of the section we will also discuss the generalisations needed for the heterotic/type I string theories.
The basic technique to derive the results of this and subsequent sections is to construct tensors from bi-linears in the Killing spinor ǫ + , which characterise the structure. Differential constraints on the structure are obtained from the vanishing of the dilatino and gravitino variations. The expression for the three-form H as a generalised calibration, that we are emphasising, can easily be obtained using the method of [7]. We will not present any details of these calculations in this section, for reasons of clarity. Note, however, that the next section will contain some representative calculations.
SU (n)-geometries in d = 2n: We start with the case where ∇ + has SU (n) holonomy in d = 2n first considered in the case of heterotic/type I theories in [2]. The necessary and sufficient conditions for preservation of supersymmetry are that the manifold M 2n has an SU(n) structure satisfying the differential conditions
d(e −2Φ Ω) = 0, d(e −2Φ * J) = 0, (3.1)
with the flux given in terms of the structure, in each case, by [7]
*H = −e^{2Φ} d(e^{−2Φ}) for SU(2),
*H = −e^{2Φ} d(e^{−2Φ} J) for SU(3),
*H = −e^{2Φ} d(e^{−2Φ} ½ J ∧ J) for SU(4). (3.2)

These conditions on J and Ω are equivalent to those in [2] (after setting the gauge field to zero). In particular, as we discuss below, they imply that J is integrable. As a result, the expression for H can be rewritten in the form, as given in [2],
H = i(∂ − ∂̄)J, (3.3)

where d = ∂ + ∂̄. (Note that this corrects a sign in the corresponding expression in [2].³)
However, it is the form (3.2) that naturally generalises to other cases.
In particular we note that the expression for the three-form flux is that of a generalised Kähler calibration. This is physically reasonable since we expect geometries with flux should arise as solutions describing fivebranes wrapping supersymmetric cycles, as discussed in detail in [7]. For instance, in the SU (4) case, geometries with non-zero flux with ∇ + having SU (4)holonomy correspond to a fivebrane wrapped on a Kähler four-cycle in a Calabi-Yau four-fold.
Such branes are calibrated by 1 2 J ∧ J which is precisely the generalised calibration appearing in the expression for H. Similarly, the SU (3) geometries correspond to fivebranes wrapping Kähler two-cycles in CY three-folds which are calibrated by J. The solutions found in [39] are of this type (see [40] for an explicit discussion). Finally the slightly degenerate SU (2) case corresponds to a fivebrane wrapping a point in a CY two-fold, i.e., the fivebrane is transverse to the CY 2 . Such configurations are calibrated by the unit function.
The conditions on the SU(n) structure (3.1) can be rephrased in terms of the classification of intrinsic torsion. The first condition in (3.1) implies that W_1 = W_2 = 0, and hence the almost complex structure is in fact integrable (as pointed out in [2]). Thus for SU(3) and SU(4) the intrinsic torsion lies in W_3 ⊕ W_4 ⊕ W_5. For SU(2), since W_1 and W_3 are always absent, we have T ∈ W_4 ⊕ W_5. In all cases the second condition in (3.1) is equivalent to the statement that the Lee-form is exact and related to Φ, namely W_4 = 2dΦ. The first condition also implies that the Lee form for Ω is similarly proportional to dΦ, with W_5 = (−1)^n 2^{n−2} W_4.
For SU (3), this was first noticed in [16].
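The proportionality to (n − 2)W_4 quoted in the next paragraph follows directly by substituting the relation W_5 = (−1)^n 2^{n−2} W_4 into the conformally invariant combination (2.10):

```latex
(2n-2)\,W_5 + (-1)^{n+1}\,2^{n-2}\,n\,W_4
  = (-1)^{n}\,2^{n-2}\bigl[(2n-2) - n\bigr]\,W_4
  = (-1)^{n}\,2^{n-2}\,(n-2)\,W_4 .
```

This vanishes identically for n = 2, so only in that case is there no obstruction to the structure being conformal to a Calabi-Yau one.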
Note that this relation implies that under a conformal transformation the invariant combination (2.10) is proportional to (n − 2)W_4. Thus only when n = 2 is it possible to have geometries that are conformal to Calabi-Yau n-folds, as noticed by [2]. In this case W_5 = W_4 = 2dΦ with W_2 = 0. The general form of these geometries in ten dimensions is thus given by

ds² = ds²(ℝ^{1,5}) + e^{2Φ} ds̃², ∇² e^{2Φ} = 0, (3.4)

with H given as in (3.2) and ds̃² the metric on CY₂. This is just the usual fivebrane solution transverse to CY₂. The possibility of conformally CY₂ geometries was considered in [2] but here we claim the stronger result that it is in fact necessary.

³ To see this one must take into account that our convention for the definition of H has the opposite sign (and a factor of two) to that in [2].
It is worth emphasising that if Φ = constant then the leading order equations of motion imply H = 0 and in addition F = 0 for the heterotic/type I case (see for example (A.4b)).
Thus, for instance, the solutions presented in [16] based on the Iwasawa manifold, although supersymmetric, do not solve the leading order equations of motion. In general, solutions with H ≠ 0 and Φ non-constant must have W_4 ≠ 0 and W_5 ≠ 0. Similar comments apply to other cases considered below.
Spin (7)-geometries in d = 8: Now consider the case when ∇ + has Spin (7) holonomy. The only condition on the Spin (7) structure is that the Lee-form is again exact [10]
W 1 = 12dΦ (3.5)
with flux given by [7]
*H = −e^{2Φ} d(e^{−2Φ} Ψ). (3.6)
These geometries preserve a single chiral spinor of Spin (8). As in the SU (n) case we can understand these geometries and conditions in terms of wrapped branes. They arise as solutions for fivebranes wrapping Cayley four-cycles in manifolds with Spin(7) holonomy and the expression for H indeed corresponds to a generalised calibration for such a cycle.
It is interesting to note that if we perform a conformal transformation g̃ ≡ e^{−6Φ/7} g, then the corresponding Spin(7)-structure defining g̃ has vanishing Lee-form, and hence has intrinsic torsion just in the class W_2 [10]. One might entertain the idea of solutions that are conformal to a Spin(7) holonomy manifold, i.e. with g̃ having Spin(7) holonomy. While such a geometry, with non-vanishing flux, certainly admits Killing spinors, we cannot solve the Bianchi identity dH = 0 with non-zero flux. To see this observe that the geometry has the form

g = e^{6Φ/7} g̃, H_{mnp} = −(1/3) Ψ̃_{mnp}{}^q ∇̃_q(e^{6Φ/7}). (3.7)

The expression for dH contains both the 35 and 1 representations of Spin(7). The singlet is proportional to ∇̃²(e^{6Φ/7}) while the 35 corresponds to the trace-free part of ∇̃_l ∇̃_p(e^{6Φ/7}).
We thus conclude that dH = 0 implies that Φ = constant which in turn implies H = 0.
G 2 -geometries in d = 7: Next consider the case when ∇ + has G 2 holonomy. These geometries preserve a single d = 7 spinor. The necessary conditions were derived in [7,8,9] and sufficiency was proved in [8,9]. This case was discussed in detail from the point of view of this paper in [5]. The conditions placed on the G 2 structure are given by
φ ∧ dφ = 0, d(e −2Φ * φ) = 0, (3.8)
which means that the intrinsic torsion lies in W 3 ⊕W 4 in the representations 27+7. Moreover it implies that the Lee form is again exact with W 4 = −6dΦ. The flux is given by [7]
* H = e 2Φ d(e −2Φ φ). (3.9)
It is worth noting that these geometries are special cases of integrable G 2 -structures in which one can introduce a G 2 Dolbeault cohomology [27].
These backgrounds arise as solutions describing fivebranes wrapped on associative threecycles in manifolds of G 2 holonomy. This is reflected in the expression for the flux which is the condition on a generalised calibration for such a cycle. Solutions of this type were presented in [41,42,5] (see [5] for an explicit demonstration of [41]).
If we perform a conformal transformation g̃ ≡ e^{−Φ} g, then the corresponding G₂-structure has vanishing Lee-form, and hence has intrinsic torsion just in the class W_3 [9]. In particular one can consider an ansatz for solutions that are conformal to a G₂-holonomy manifold:

g = e^{Φ} g̃, H_{mnp} = −(1/2) *φ̃_{mnp}{}^q ∇̃_q(e^{Φ}). (3.10)
However, as in the Spin(7) case, (3.7), the Bianchi identity dH = 0 implies that Φ is constant and hence H = 0.
Sp(2)-geometries in d = 8: Next consider the case when ∇ + has Sp(2) holonomy. Such
geometries are examples of manifolds known as hyper-Kähler with torsion. A discussion of these geometries can be found, for example, in [43] and also [31]. The dilaton further constrains the geometry in the following way. The conditions on the structure are given by those for the SU(4) case for each complex structure
d(e −2Φ Ω A ) = 0, for A = 1, 2, 3, d(e −2Φ * J A ) = 0, for A = 1, 2, 3, (3.11)
with the flux being given by
* H = −e 2Φ d e −2Φ 1 2 J A ∧ J A , for A = 1, 2, 3. (3.12)
These geometries preserve three chiral d = 8 spinors with the same chirality. The conditions (3.11) imply that the parts of dJ^A transforming in the two 16's are independent of A. In addition the 12 4's are determined by the dilaton. The "diagonal" Lee-forms are all equal, L^{11} = L^{22} = L^{33} = 2dΦ, and hence the off-diagonal Lee-forms L^{AB}, A ≠ B, are anti-symmetric, with L^{12} = −2J^3 · dΦ, L^{31} = −2J^2 · dΦ and L^{23} = −2J^1 · dΦ.
It is worth noting that this case arises when fivebranes wrap quaternionic planes in ℝ^8, that is cycles that are complex with respect to all three complex structures. It was shown in [44] that these are linear. In [20] solutions were written down for these configurations and it is plausible that they are the most general, once the Bianchi identity is imposed.
SU(2) × SU(2)-geometries in d = 8:
Finally consider the case when ∇ + has SU(2) × SU (2) holonomy. The conditions on the structure are
d(e^{−2Φ} J^A ∧ vol′) = 0, d(e^{−2Φ} J′^B ∧ vol) = 0, d(e^{−2Φ} J^A ∧ J′^B) = 0, (3.13)
where, e.g., vol = (J^A ∧ J^A)/2 for each A, while the flux is given by

*H = −e^{2Φ} d(e^{−2Φ} vol + e^{−2Φ} vol′). (3.14)
These geometries preserve four chiral d = 8 spinors, all with the same chirality. As discussed in Appendix C, the almost product structure defined by Π = (J^A + J′^B) · (J^A − J′^B)
is not integrable. This is because the mixed components H ija and H abi , using the notation of Appendix C, are generically non-zero. A notable subclass of solutions, with integrable products, is given by those corresponding to two orthogonal fivebranes intersecting in a string, one fivebrane wrapping CY 2 and the other CY ′ 2 in CY 2 × CY ′ 2 . Such solutions are discussed for instance in [45].
Let us now consider the modifications required for heterotic/type I string theory. In addition to g_d, H, Φ, the bosonic field content also includes a gauge field A, with field strength F, in the adjoint of E₈ × E₈ or SO(32)/ℤ₂. In order to preserve supersymmetry we require the expressions in (1.1) for ǫ⁺ only, and thus the cases described in table 1 and the above discussion are equally applicable to the heterotic/type I theories. In addition, preservation of supersymmetry requires the vanishing of the gaugino variation (1.3)
Γ^{MN} F_{MN} ǫ⁺ = 0. (3.15)
For each case in table 1, since ǫ + is a singlet of the special holonomy group G of ∇ + this is satisfied, breaking no further supersymmetry, if the two-form F , considered as the adjoint of SO(d), lies within the adjoint of G.
For the Spin (7) case we therefore need to consider F to be a Spin (7) instanton satisfying
F mn = − 1 2 Ψ mn pq F pq ,(3.16)
while for G 2 we need
F mn = − 1 2 * φ mn pq F pq . (3.17)
For the SU (n) cases, we require
F_{mn} = −(1/2) (½ J ∧ J)_{mn}{}^{pq} F_{pq}, (3.18)
which, in complex coordinates, is equivalent to
J^{αβ̄} F_{αβ̄} = F_{αβ} = F_{ᾱβ̄} = 0. (3.19)
That is we need a holomorphic gauge field on a holomorphic vector bundle satisfying the Donaldson-Uhlenbeck-Yau equation, as noticed in [2]. For the Sp(2)-case we require that the gauge field satisfies (3.18) for all three complex structures, or equivalently,
F_{mn} = J^A{}_m{}^p J^A{}_n{}^q F_{pq}, no sum on A, (3.20)
which are the same as the BPS equations of [46]. For SU (2) 2 , with self-dual complex structures, the gauge fields must describe an anti-self-dual instanton for each of the SU (2) structures. This can be written as
F mn = − 1 2 vol mn pq F pq = − 1 2 vol ′ mn pq F pq . (3.21)
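A four-dimensional toy version of the conditions (3.20) and (3.21) can be checked directly: with the three complex structures J^A of flat hyper-Kähler ℝ⁴, the two-forms satisfying F_{mn} = J^A{}_m{}^p J^A{}_n{}^q F_{pq} for every A are exactly the anti-self-dual ones. An illustrative numerical sketch (not from the paper), with the J^A realised as quaternion left multiplications:

```python
import numpy as np

# Quaternionic triple of complex structures on R^4 (left multiplication by i, j, k)
J1 = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)
J2 = np.array([[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]], float)
J3 = np.array([[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]], float)

def E(a, b):
    F = np.zeros((4, 4))
    F[a, b], F[b, a] = 1.0, -1.0
    return F

# self-dual and anti-self-dual bases of two-forms (orientation e^{1234})
SD  = [E(0, 1) + E(2, 3), E(0, 2) - E(1, 3), E(0, 3) + E(1, 2)]
ASD = [E(0, 1) - E(2, 3), E(0, 2) + E(1, 3), E(0, 3) - E(1, 2)]

# F_{mn} = J^A_m{}^p J^A_n{}^q F_{pq} for all A  <=>  F anti-self-dual
for F in ASD:
    for J in (J1, J2, J3):
        assert np.allclose(J @ F @ J.T, F)
for F in SD:
    assert any(not np.allclose(J @ F @ J.T, F) for J in (J1, J2, J3))
```

The J^A themselves span the self-dual two-forms, so the invariant (i.e. (1,1) with respect to each J^A) two-forms make up the commuting su(2) factor, the anti-self-dual forms.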
Note that in all cases the instanton condition can be written as
* F = Ξ ∧ F (3.22)
where Ξ is the invariant form entering the generalised calibration expression for the flux * H = e 2Φ d(e −2Φ Ξ).
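Returning to the Spin(7) case, one can check directly that (3.16) selects the adjoint of the holonomy group: the map F ↦ −½ Ψ · F on two-forms has eigenvalue +1 on the 21 (the adjoint of Spin(7)) and −3 on the 7, so the instanton condition picks out exactly the 21-dimensional eigenspace. A numerical sketch, using one standard frame convention for Ψ (which may differ from (2.11) by signs or relabelling):

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def set_comp(T, idx, val):
    # fill every index permutation of a totally antisymmetric tensor
    for p in permutations(range(len(idx))):
        T[tuple(idx[i] for i in p)] = perm_sign(p) * val

# Cayley four-form in a standard (0-indexed) frame convention
Psi = np.zeros((8,) * 4)
for idx, v in [((0, 1, 2, 7), 1), ((0, 3, 4, 7), 1), ((0, 5, 6, 7), 1),
               ((1, 3, 5, 7), 1), ((1, 4, 6, 7), -1), ((2, 3, 6, 7), -1),
               ((2, 4, 5, 7), -1), ((3, 4, 5, 6), 1), ((1, 2, 5, 6), 1),
               ((1, 2, 3, 4), 1), ((0, 2, 4, 6), 1), ((0, 2, 3, 5), -1),
               ((0, 1, 4, 5), -1), ((0, 1, 3, 6), -1)]:
    set_comp(Psi, idx, v)

pairs = [(m, n) for m in range(8) for n in range(m + 1, 8)]   # basis of Λ², dim 28
M = np.zeros((28, 28))
for i, (m, n) in enumerate(pairs):
    for j, (a, b) in enumerate(pairs):
        M[i, j] = -Psi[m, n, a, b]          # matrix of F ↦ -(1/2) Ψ_{mn}{}^{pq} F_{pq}

ev = np.linalg.eigvalsh(M)
assert np.isclose(ev, 1.0).sum() == 21      # adjoint of Spin(7): instanton solutions
assert np.isclose(ev, -3.0).sum() == 7      # the complementary 7
```

The eigenvalues follow from the second identity in (2.15), which implies M² + 2M − 3 = 0, while tr M = 0 fixes the multiplicities 21 and 7.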
As shown in [5] the equations of motion of type I supergravity are automatically satisfied if one imposes the modified Bianchi identity for H
dH = 2α ′ Tr F ∧ F. (3.23)
In type I/heterotic string theory the Bianchi identity is modified by higher order corrections
dH = 2α ′ (Tr F ∧ F − tr R ∧ R) (3.24)
which allows solutions with dH = 0 as for the type II theories.
We noted above for the Spin(7) case that the ansatz (3.7) preserves Killing spinors but does not solve the Bianchi identity dH = 0, and hence the equations of motion, for non-vanishing H, Φ. It is interesting to ask whether there are heterotic solutions solving dH = 2α′ Tr F ∧ F. Indeed, when g̃ is flat such solutions have already been found [32].
Similarly heterotic solutions for d = 7 that are conformal to flat space were found in [33].
It would be interesting to construct heterotic solutions where g is conformal to a non-flat Spin(7)- or G₂-holonomy manifold.
General geometries with ǫ + Killing spinors
In the previous section, we gave the necessary and sufficient conditions for preservation of supersymmetry for a geometry of the form ℝ^{1,9−d} × M_d when ∇⁺ has special holonomy in the corresponding canonical number of dimensions, Spin(7) in d = 8, G₂ in d = 7 and so on.
The analysis for ∇⁻ is simply obtained by taking H → −H. More generally one can ask for the generic static supersymmetric background of the form ℝ × M₉ preserving some number of supersymmetries. In this section, we give a complete analysis of this question when the spinors are all of the same type and show that in addition to recovering the results of the previous section we find more general classes of geometries. As before, for definiteness we take the Killing spinors to be all of the type ǫ⁺ satisfying ∇⁺ǫ⁺ = 0. In the next section we turn to the case where some Killing spinors satisfy ∇⁺ǫ⁺ = 0 and some ∇⁻ǫ⁻ = 0.
Suppose we have N independent spinors ǫ + (i) in d = 9 all satisfying ∇ + ǫ + (i) = 0. In general, these define a G-structure, where G ⊂ Spin(9) is the stabiliser group of rotations which leave all the spinors invariant. One finds the seven special holonomy groups given in figure 1 as possibilities. Furthermore these embed in Spin(9) in the conventional way following the pattern of the dimensional reduction. That is to say G ⊂ SO(n) ⊂ SO(9) where n is the canonical dimension for the G-structure as given in figure 1.
As usual the structures can also be defined in terms of a set of forms which can be constructed out of the spinors. In general, these are of the type (K₁, . . . , K_{9−n}, Ξ_A) with

i_{K_i} Ξ_A = 0.
Here Ξ_A are the set of forms used to define the structure in its canonical dimension n as described in section 2. The K_i are a set of 9 − n independent one-forms required to define the additional orthogonal dimensions to give a structure in d = 9. Thus for instance a G₂-structure in d = 9 is defined by the set (K₁, K₂, φ) with i_{K_i} φ = 0. In a local orthonormal frame e^m, we can take the form φ to have the standard form (2.16) in terms of e^1, . . . , e^7, while K₁ = e^8 and K₂ = e^9. Thus, at any given point in M₉, the forms K₁ and K₂ define a reduction of ℝ^9 into ℝ^7 ⊕ ℝ^2 and hence define an SO(7) ⊂ SO(9) structure. The three-form φ then describes a G₂ ⊂ SO(7) structure on the ℝ^7 subspace in the usual way. Note that the structure always defines a metric. Using this metric we can also view the K_i as vectors, which we will also denote K_i^#. In addition, as we will see, the inner product K_i · K_j is constant for all i and j and so we normalise the K_i to be orthonormal.
If the flux H is zero, we have ∇K_i = 0 and M₉ is then, after going to the covering space, just a product M₉ = ℝ^{9−n} × M_n where M_n is a G-holonomy manifold in the canonical dimension. From this point of view, G-holonomy extends trivially to nine dimensions. With flux, however, this is no longer the case. We will show that there are new possibilities which are not simply direct products of the geometries given in the previous section with flat space.
We discuss the most general case of G = Spin (7), corresponding to one Killing spinor, in detail and then summarise the analogous results for the other structure groups, corresponding to the existence of more than one Killing spinor.
Single Killing spinor: Spin(7)-structure in d = 9
First assume we have a single Killing spinor ǫ⁺ on M₉; since ∇⁺ǫ⁺ = 0, we can take ǭ⁺ǫ⁺ = 1. It is easy to show that the stability group is Spin(7) ⊂ Spin(9). Equivalently we have the set of Spin(7)-invariant forms (K, Ψ) with i_K Ψ = 0 and K² = 1. In a particular basis e^m, we can take K = e^9 and Ψ given by the standard form (2.11) in terms of e^1, . . . , e^8.
In terms of the spinor ǫ + , we have
K_m = ǭ⁺ γ_m ǫ⁺, Ψ_{mnpq} = −ǭ⁺ γ_{mnpq} ǫ⁺, (4.1)
where γ_m are nine-dimensional gamma matrices with γ^{1···9} = 𝟙. From the Killing spinor conditions (1.1), as in the previous section, one derives a set of necessary and sufficient conditions on (K, Ψ). The condition ∇⁺ǫ⁺ = 0 simply translates into ∇⁺Ψ = ∇⁺K = 0.
From the latter constraint we immediately see, since H is totally antisymmetric, that K is a
Killing vector, and in addition that the norm of K is constant, as claimed above. In addition one finds
dK = G,(4.2)
where we have made the generic SO (8) decomposition
H ≡ H₀ − K ∧ G, (4.3) with i_K H₀ = i_K G = 0.
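The split (4.3) is algebraically unique: given any three-form H and unit one-form K, setting G = −i_K H and H₀ = H + K ∧ G automatically yields i_K H₀ = i_K G = 0. A quick numerical check with random data (illustrative only, not from the paper):

```python
import math
import numpy as np
from itertools import permutations

def antisymmetrize(T):
    p = T.ndim
    out = np.zeros_like(T)
    for perm in permutations(range(p)):
        sign = 1
        for i in range(p):
            for j in range(i + 1, p):
                if perm[i] > perm[j]:
                    sign = -sign
        out = out + sign * np.transpose(T, perm)
    return out / math.factorial(p)

rng = np.random.default_rng(0)
d = 9
H = antisymmetrize(rng.standard_normal((d,) * 3))    # arbitrary three-form
K = rng.standard_normal(d)
K /= np.linalg.norm(K)                               # unit one-form

G = -np.einsum('p,pmn->mn', K, H)                    # G = -i_K H
KG = np.einsum('m,np->mnp', K, G)
KwG = KG + np.transpose(KG, (1, 2, 0)) + np.transpose(KG, (2, 0, 1))   # K ∧ G
H0 = H + KwG                                         # so that H = H0 - K ∧ G

assert np.allclose(np.einsum('m,mn->n', K, G), 0)     # i_K G = 0
assert np.allclose(np.einsum('m,mnp->np', K, H0), 0)  # i_K H0 = 0
assert np.allclose(H, H0 - KwG)
```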
We can now introduce local coordinates such that the metric has the canonical form of a fibration
ds² = ds²(M₀) + (dy + B)², (4.4)
with K = dy + B, while dB = G is a two-form on M 0 and the metric ds 2 (M 0 ) is independent of y and admits a Spin (7)-structure defined by Ψ, which may, however, at this point, depend on y.
Now we turn to the dilatino equation. Following the discussion in [7], given the symmetry properties of the nine-dimensional gamma matrices, one has
∂_m Φ ǭ⁺ [A, γ^m]_∓ ǫ⁺ + (1/12) H_{mnp} ǭ⁺ [A, γ^{mnp}]_± ǫ⁺ = 0, (4.5)
where A is an arbitrary matrix; suitable choices of A then yield the conditions (4.6) and (4.7) on the structure. Note that we have fixed our orientation by vol_{m1...m9} = ǭ⁺ γ_{m1...m9} ǫ⁺.
If we decompose (4.7) into SO(8) representations, consistency with (4.2) requires
G_{mn} = −(1/2) Ψ_{mn}{}^{pq} G_{pq}. (4.8)
In other words, G satisfies the Spin(7) instanton equation on M₀. As a result, K is not only a Killing vector but actually preserves the Spin(7) structure. That is, the Lie derivative of the spinor ǫ⁺ vanishes and hence the Lie derivative of Ψ also vanishes,

L_K Ψ = 0. (4.9)

It is easy to see that these conditions are sufficient for supersymmetry. We should point out that it is straightforward to also define and characterise the intrinsic torsion of the Spin(7) structure directly in d = 9, but as it provides no extra information on how to characterise the geometries we shall not present any details here.
To summarise, the general d = 9 geometry is simply a flat direction fibred over a d = 8
Spin (7) geometry, with the fibration determined by an Abelian Spin (7) instanton in d = 8.
The metric is given by (4.4), the three-form by (4.3), (4.11) and the dilaton by (4.10). In order to obtain a supersymmetric solution to the equations of motion we also need to impose the Bianchi identity for H. Explicitly we get
d₀H₀ − G ∧ G = 0 for type II,
d₀H₀ − G ∧ G = 2α′ (Tr F ∧ F − tr R ∧ R) for heterotic/type I, (4.12)
where F is a Spin (7) instanton.
A number of further comments are in order. First, when the flux is zero, we commented above that, after going to the covering space, the geometry is necessarily a direct product of a d = 8 Spin(7)-holonomy manifold with a flat direction. With non-zero flux one might have expected simply the product of the d = 8 Spin(7) geometry considered in the last section with a S¹. The S¹ is a flat direction on the worldvolume of the fivebrane. The analysis of this section shows that more complicated geometries can arise, leading to the world-volume direction being fibred over the d = 8 manifold. As wrapped branes have holographic duals, it will be interesting to determine the holographic interpretation of this.
Multiple Killing spinors
The case of multiple ǫ + Killing spinors is completely analogous to the Spin(7) case discussed above. As mentioned, the set of spinors ǫ + (i) in general define a G-structure in d = 9 with G being one of the standard special holonomy groups SU (4), Sp(2), SU (2) × SU (2), G 2 , SU (3) or SU (2). One way to view how these groups appear is to see that the stability group of each ǫ + (i) defines a different embedding of Spin (7) in Spin (9). The structure group G is then the common subgroup of this set of embedded Spin(7) groups. From this perspective, each G-structure is equivalent to a set of distinct Spin(7)-structures.
Recall that the structure can be defined in terms of (K_i, Ξ_A), where the Ξ_A are the forms used to define the structure in its canonical dimension n and the K_i are 9 − n one-forms. The condition ∇⁺K_i = 0 implies each K_i is Killing and we can take them to be orthonormal. In addition, as in the Spin(7) case, one can always derive a set of necessary and sufficient conditions on (K_i, Ξ_A) using the dilatino constraint. One always finds the familiar calibration condition for *H. Explicitly, for the cases where n = 8 one has *H =
e^{2Φ} d(e^{−2Φ} ½ J ∧ J ∧ K) for SU(4),
e^{2Φ} d(e^{−2Φ} ½ J^A ∧ J^A ∧ K) for Sp(2), with A = 1, 2, 3,
e^{2Φ} d(e^{−2Φ} vol ∧ K + e^{−2Φ} vol′ ∧ K) for SU(2) × SU(2), (4.13)

where K is the single one-form, while for the n < 8 cases we have *H =
e^{2Φ} d(e^{−2Φ} φ ∧ K₁ ∧ K₂) for G₂,
e^{2Φ} d(e^{−2Φ} J ∧ K₁ ∧ K₂ ∧ K₃) for SU(3),
e^{2Φ} d(e^{−2Φ} K₁ ∧ · · · ∧ K₅) for SU(2). (4.14)
The necessary and sufficient conditions also imply that the Killing vectors K^i all commute and furthermore each preserves the underlying G-structure Ξ^A. This implies that the metric can be put in the canonical fibration form

ds² = ds²(M_0) + Σ_{i=1}^{9−n} (dy^i + B^i)²,      (4.15)
where M_0 is an n-dimensional manifold and K^i = dy^i + B^i. Furthermore, M_0 has a G-structure defined by Ξ^A independent of y^i. The flux H has the related decomposition

H ≡ H_0 − Σ_{i=1}^{9−n} K^i ∧ G^i,      (4.16)
where G i = dB i are two-forms on M 0 . In addition one finds a set of constraints on the G-structure Ξ A on M 0 . As in the Spin (7) case these turn out to be precisely the canonical dimension conditions given in the last section.
The additional freedom in the nine-dimensional geometries is given by the two-forms G^i defining the fibration, again as in the Spin(7) case. Note that there is one possible caveat to this analysis, which is the existence of geometries with exactly five, six or seven Killing spinors. This necessarily defines an SU(2)-structure and would require the existence of a compatible connection ∇+ without the particular fibration structure described in the text. Similar comments apply to the existence of solutions with nine or more supersymmetries (so defining an identity structure) which are not simply flat space.
It is interesting to note that particular examples of these general types of solutions have already appeared in the literature. Examples of SU(2)-structure in d = 6 and SU(2) 2 in d = 9 were considered in [34] using conformally Eguchi-Hanson metrics. Similar solutions related to D3-branes were considered in [35]. Further examples will be presented in the next section.
We should also note that d = 6 geometries of the type discussed here with two flat directions are similar to those studied in [36]. However, the motivation of that work was rather different. Namely, the idea was to exploit the fibration structure in order to construct examples of manifolds with SU(3) structures in six dimensions of the type described in the last section.
Explicit examples I
We now present explicit solutions of the type described in the last section. They take the form (the single-instanton specialisation of (7.2) below)

ds² = e^{2Φ} ds̃² + (dy + B)²,
H_{mnp} = −ǫ̃_{mnp}{}^q ∇̃_q e^{2Φ} + 3B_{[m} G_{np]},
H_{mny} = G_{mn},      (5.1)

where G = dB is an Abelian anti-self-dual instanton and ǫ̃ is the volume form on ds̃².
Generically, these solutions preserve 1/2 of the ǫ + supersymmetries, and none of the ǫ − supersymmetries for the type II theories, corresponding to eight supercharges for both the heterotic and the type II theories. For solutions, we must impose the Bianchi identity for H. This gives
−d*d e^{2Φ} = G ∧ G                                        for type II,
−d*d e^{2Φ} = G ∧ G + 2α′(Tr F ∧ F − tr R ∧ R)             for heterotic/type I.      (5.2)
Recall that supersymmetry implies that F is also an anti-self-dual instanton on the base. In the special case that tr R∧R = 0, satisfying the Bianchi identity then implies that the leading equations of motion are automatically satisfied. Otherwise, one must separately check that one has a solution of the equations of motion, including at this order α ′ corrections.
Taking the base to be flat, a simple anti-self-dual instanton is given for instance by
B = γ (x^1 dx^2 − x^3 dx^4),      (5.4)
corresponding to a constant field-strength. A radial solution for the dilaton is given by
e^{2Φ} = 1 + m/r² − (1/4) γ² r².      (5.5)
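As a quick consistency check (our own, not part of the original text), one can verify symbolically that the profile (5.5) solves the flat-base dilaton equation written as ∇²e^{2Φ} = −(1/2) G_{mn}G^{mn} (the form (7.3) takes below for a single instanton), with G = dB the constant-field-strength instanton built from (5.4):

```python
# Symbolic check that (5.5) solves the flat-base dilaton equation
# nabla^2 e^{2 Phi} = -(1/2) G_{mn} G^{mn}, with G = dB from (5.4).
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
m, gamma = sp.symbols('m gamma', positive=True)
xs = [x1, x2, x3, x4]
r2 = sum(x**2 for x in xs)

# B = gamma (x1 dx2 - x3 dx4)  =>  G = dB has constant components
# G_{12} = gamma, G_{34} = -gamma (antisymmetrised)
G = sp.zeros(4, 4)
G[0, 1], G[1, 0] = gamma, -gamma
G[2, 3], G[3, 2] = -gamma, gamma
Gsq = sum(G[i, j]**2 for i in range(4) for j in range(4))   # G_{mn}G^{mn} = 4 gamma^2

e2phi = 1 + m/r2 - sp.Rational(1, 4)*gamma**2*r2            # the profile (5.5)
lap = sum(sp.diff(e2phi, x, 2) for x in xs)                 # flat Laplacian

assert sp.simplify(lap + Gsq/2) == 0   # holds away from the singularity at r = 0
```

The m/r² term is harmonic in four dimensions, so only the γ² term sources the Laplacian.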
A different radial solution can be obtained by writing the flat metric in terms of left-invariant one-forms on the three-sphere:

ds² = dr² + (1/4) r² [(σ^1_R)² + (σ^2_R)² + (σ^3_R)²]      (5.6)
with positive orientation given by dr ∧ σ 1 R ∧ σ 2 R ∧ σ 3 R (our conventions are as in [11]). A singular anti-self-dual instanton is then given by
B = γ/(4r²) σ^3_R.      (5.7)
A radial solution for the dilaton is
e^{2Φ} = 1 + m/r² − γ²/(12 r⁶).      (5.8)
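This profile can also be checked symbolically (again our own check): a radial function on the flat four-dimensional base has ∇²f = f′′ + 3f′/r, while in an orthonormal frame one finds G_{mn}G^{mn} = 4γ²/r⁸ for the instanton (5.7) — a value we computed directly from dB, so treat it as an assumption — giving the required source −(1/2)G² = −2γ²/r⁸:

```python
# Radial check of (5.8): nabla^2 f = f'' + 3 f'/r in four flat dimensions,
# with source -(1/2) G^2 = -2 gamma^2 / r^8 for the singular instanton (5.7).
import sympy as sp

r, m, gamma = sp.symbols('r m gamma', positive=True)
e2phi = 1 + m/r**2 - gamma**2/(12*r**6)        # the profile (5.8)

lap = sp.diff(e2phi, r, 2) + 3*sp.diff(e2phi, r)/r
assert sp.simplify(lap + 2*gamma**2/r**8) == 0
```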
When the hyper-Kähler metric is Eguchi-Hanson space or Taub-NUT space any of the anti-self-dual harmonic two-forms on these spaces can be used as the Abelian instanton and if they are normalisable they lead to non-singular solutions. These cases have already appeared in the literature [34].
Let us now consider whether we can obtain compact heterotic solutions of the form (5.1).
(Recall that there are no compact solutions with flux for the type II cases [30].) The base space M̃ must admit a hyper-Kähler metric, so is either T⁴ or K3. In addition, we will compactify the fibre direction on a circle S^1 of radius R. By construction such a background preserves eight supersymmetries. For a solution we must also satisfy the Bianchi identity.
The left-hand side of (5.2) is exact, thus the sum of the sources on the right-hand side must be trivial in cohomology. Since the manifold is compact, each of the sources is also quantised, being some multiple of the first Pontrjagin class p_1 ∈ H⁴(M_5, ℤ) (instanton charge) of the corresponding bundle. If E is the bundle describing the S^1 fibration and V the bundle of the heterotic/type I gauge fields we have
R² p_1(E) + 2α′ p_1(V) − 2α′ p_1(TM_5) = 0      (5.9)
in cohomology. Note that given the definition of G the field strength entering p 1 (E) is G/R hence the factor of R 2 in the first term. Since both G and F are anti-self-dual instantons on the base p 1 (V ) cannot cancel against p 1 (E) and we can only satisfy (5.9) by including non-trivial p 1 (T M 5 ). The equation for the dilaton onM then becomes,
∇² e^{2Φ} = −(1/2) G² − α′ (Tr F² − tr R²).      (5.10)
One would then have to check whether such a solution for Φ in fact leads to a background satisfying the full (higher-order) equations of motion. One important point to note is that satisfying (5.9) with non-vanishing p_1(E) requires R² ∼ α′. In other words the size of the S^1 fibre must be of order the string scale. As such the supergravity description of these compactifications is breaking down. (Note, in addition, that R² is constrained to be a rational multiple of α′, so cannot be a modulus.) It would be interesting to find a corresponding conformal field theory description, for instance by taking the orbifold limit of the base K3 manifold. Note that it is trivial to extend these solutions to six-dimensional compactifications with N = 2 supersymmetry simply by including a second fibred direction.

Now let us consider solutions where the base geometry M_0 is in more than four dimensions.
Specifically we consider solutions where M_0 is conformal to a special holonomy manifold. We noted in section 3 that this rules out the SU(n) cases for n ≠ 2. Let us thus consider M_0 to be conformal to a G_2-holonomy manifold. An eight-dimensional geometry preserving two ǫ+ supersymmetries, one of each d = 8 chirality, is given by
ds² = e^Φ ds̃² + (dy + B)²,
H_{mnp} = −(1/2) (*φ)_{mnp}{}^q ∇_q e^Φ − 3B_{[m} G_{np]},
H_{ymn} = −G_{mn},      (5.11)

where G = dB is an Abelian G_2-instanton on the G_2-holonomy manifold M̃. A type II solution is then obtained by solving the Bianchi identity, which reads

(*φ)_{[mnp}{}^q ∇_{l]} ∇_q e^Φ = 3 G_{[mn} G_{pl]}.      (5.12)
Given that G is a G_2-instanton, this is equivalent to

∇_m ∇_n e^Φ = −2 G_m{}^k G_{nk} + (1/4) G² g_{mn}.      (5.13)
To get explicit solutions we need explicit G_2-holonomy metrics ds̃² and explicit Abelian instantons G. One approach is to note that if the G_2-holonomy metric admits a Killing vector v, then the two-form dv is a G_2-instanton if and only if v preserves the G_2-structure: ℒ_v φ = d i_v φ = 0. Since all of the known explicit G_2-manifolds have many isometries, this result allows one in principle to find new solutions, and it would be interesting to investigate this further.
If the G_2-holonomy manifold is flat, solutions with constant flux can be obtained as follows. We take

B = (1/2) C_{mn} x^m dx^n,      (5.14)
giving constant field strength G = C. This is an Abelian G_2-instanton provided C_{mn} = −(1/2) (*φ)_{mn}{}^{pq} C_{pq}. In other words, using a suitable projection, we have in general

C_{mn} = (2/3) [δ^{pq}_{mn} − (1/4) (*φ)_{mn}{}^{pq}] D_{pq},      (5.15)
for an arbitrary constant two-form D_{mn}. We then find that

e^Φ = −(1/2) [2 G_m{}^k G_{nk} − (1/4) G² g_{mn}] x^m x^n + constant      (5.16)

solves (5.13).
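The projection in (5.15) can be checked numerically. The sketch below is our own check, using one standard choice of associative three-form φ on ℝ⁷ (Joyce's conventions, which may differ from those of appendix B by a relabelling of the basis); it verifies that P = (2/3)(δ − (1/4)*φ) is a projector of rank 14 onto the space of two-forms satisfying the G_2-instanton condition C_{mn} = −(1/2)(*φ)_{mn}{}^{pq}C_{pq}:

```python
# Numerical check of the projector (5.15) onto G2 Abelian instanton field strengths.
import numpy as np
from itertools import permutations

def parity(seq):
    s, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

# phi = e123 + e145 + e167 + e246 - e257 - e347 - e356 (indices 1..7 -> 0..6)
phi = np.zeros((7, 7, 7))
for idx, val in [((0,1,2), 1), ((0,3,4), 1), ((0,5,6), 1), ((1,3,5), 1),
                 ((1,4,6), -1), ((2,3,6), -1), ((2,4,5), -1)]:
    for p in permutations(range(3)):
        phi[tuple(idx[i] for i in p)] = parity(p) * val

# Levi-Civita symbol on R^7 and the coassociative four-form *phi
eps = np.zeros((7,) * 7)
for p in permutations(range(7)):
    eps[p] = parity(p)
starphi = np.einsum('abcdefg,efg->abcd', eps, phi) / 6.0

def M(w):   # (1/4) (*phi)_{mn}{}^{pq} w_{pq}
    return 0.25 * np.einsum('mnpq,pq->mn', starphi, w)

def P(w):   # the projector (5.15)
    return (2.0 / 3.0) * (w - M(w))

rng = np.random.default_rng(1)
D = rng.normal(size=(7, 7))
D = D - D.T                                   # a random two-form D_{mn}
C = P(D)

assert np.allclose(P(C), C)                   # projector: P^2 = P
assert np.allclose(C, -0.5 * np.einsum('mnpq,pq->mn', starphi, C))  # instanton condition

# rank of P = dim of the 14 (adjoint of G2) inside the 21 two-forms
tr = 0.0
for a in range(7):
    for b in range(a + 1, 7):
        E = np.zeros((7, 7))
        E[a, b], E[b, a] = 1.0, -1.0
        tr += P(E)[a, b]
assert np.isclose(tr, 14.0)
```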
6 Geometries with both ǫ + and ǫ − Killing spinors
Let us now turn our attention to the type II cases summarised in table 2. These geometries preserve both ǫ + and ǫ − Killing spinors and thus define two different structures, G ± , one for each set of Killing spinors, of the type described in section 3. Taking both sets together defines a G-structure where G is the maximal common subgroup of G + and G − given their particular embeddings in SO(d). One can follow the detailed strategy of [5] to derive the necessary and sufficient conditions on this G-structure in order that the geometry preserves the corresponding supersymmetry. This is based on direct manipulations of the Killing spinor equations and some details of this approach appear in [7].
Equivalently, we can obtain the conditions on the G-structure by writing the G ± -structures in terms of the G-structure and then imposing the conditions on the G ± -structures derived in section 3. In implementing this strategy it is crucial to recall that the signs presented in section 3 assumed that the preserved spinors were of the ǫ + -type and also took, in the relevant cases, the preserved spinors to have a definite chirality. In order to get the results of this section, one needs the appropriate generalisations for ∇ − and sometimes the opposite chirality.
SU (2)-geometries in d = 6: This case arises when both ∇ ± have SU (3) holonomy with a common SU (2) subgroup. The SU (2) structure in d = 6 is specified by a two-form J, a complex two-form Ω and two one-forms K i with i = 1, 2. They satisfy (2.1) for n = 2 and in addition
i_{K^i} Ω = i_{K^i} J = 0.      (6.1)
The corresponding SU (3) structures associated with ∇ ± are given by
J_± = J ± K^1 ∧ K^2,  Ω_± = Ω ∧ (K^1 ± iK^2).      (6.2)
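Before imposing the differential conditions, one can check algebraically that (6.2) indeed defines a pair of SU(3) structures. The sketch below is our own check in a flat model, with J = e^{12} + e^{34}, Ω = (e^1 + ie^2)∧(e^3 + ie^4), K^1 = e^5 and K^2 = e^6 an assumed explicit realisation of the SU(2) structure; it verifies J_± ∧ Ω_± = 0, with Ω_± ∧ Ω̄_± proportional to J_±³ with the same constant for both signs:

```python
# Algebraic check of (6.2) in a flat model on R^6.
import numpy as np
from itertools import permutations

def parity(seq):
    s, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

class Form:
    """Complex exterior form on R^6: {sorted index tuple: coefficient}."""
    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if abs(v) > 1e-14}
    def __add__(self, o):
        t = dict(self.terms)
        for k, v in o.terms.items():
            t[k] = t.get(k, 0) + v
        return Form(t)
    def __sub__(self, o):
        return self + (-1) * o
    def __rmul__(self, c):
        return Form({k: c * v for k, v in self.terms.items()})
    def __xor__(self, o):   # wedge product
        t = {}
        for k1, v1 in self.terms.items():
            for k2, v2 in o.terms.items():
                if set(k1) & set(k2):
                    continue
                idx = k1 + k2
                key = tuple(sorted(idx))
                t[key] = t.get(key, 0) + parity(idx) * v1 * v2
        return Form(t)
    def conj(self):
        return Form({k: np.conj(v) for k, v in self.terms.items()})
    def iszero(self):
        return len(self.terms) == 0

def e(*idx):    # basis p-form e^{i1} ^ ... ^ e^{ip}
    return Form({tuple(idx): 1.0 + 0.0j})

J = e(1, 2) + e(3, 4)
Omega = (e(1) + 1j * e(2)) ^ (e(3) + 1j * e(4))
K1, K2 = e(5), e(6)

ratios = []
for s in (+1, -1):
    Js = J + s * (K1 ^ K2)                    # J_pm of (6.2)
    Os = Omega ^ (K1 + s * 1j * K2)           # Omega_pm of (6.2)
    assert (Js ^ Os).iszero()                 # J_pm ^ Omega_pm = 0
    J3 = Js ^ Js ^ Js
    OO = Os ^ Os.conj()
    ratios.append(OO.terms[(1, 2, 3, 4, 5, 6)] / J3.terms[(1, 2, 3, 4, 5, 6)])
assert np.isclose(ratios[0], ratios[1])       # same normalisation for both signs
```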
Demanding that the SU(3)-structures each satisfy the necessary and sufficient conditions for supersymmetry discussed in (3.1), (3.2) (with appropriate sign changes for ∇−, as mentioned above) leads to necessary and sufficient conditions on the SU(2) structure. Specifically, we find

d(e^{−Φ} K^i) = 0,  d(e^{−Φ} Ω) = 0,  dJ ∧ K^1 ∧ K^2 = 0.      (6.3)

The geometries also possess an almost product structure

Π = 2K^1 ⊗ K^{1#} + 2K^2 ⊗ K^{2#} − 𝟙,      (6.5)

where K^# is the vector field dual to the one-form K, satisfying Π·Π = 𝟙. Since d(e^{−Φ}K^i) = 0 this structure is integrable and hence the metric can be cast in the canonical form

ds² = g^4_{ab}(x, y) dx^a dx^b + e^{2Φ(x,y)} δ_{ij} dy^i dy^j.      (6.6)
The conditions (6.3) then imply that at fixed y i , the SU(2) structure on the four-manifold has W 2 = W 4 = 0 and W 5 = dΦ. Such geometries, which in particular are Kähler, are called almost Calabi-Yau.
This case corresponds to fivebranes wrapping Kähler two-cycles in CY_2. This is mirrored in the expression for the flux (6.4), and also in the structure of the metric (6.6), with the y directions corresponding to the two directions transverse to both the fivebrane and the initial CY_2.
Explicit examples of such solutions were presented in [47,48] and were further explored from the world-sheet point of view in [37].
SU(3)-geometries in d = 7:
This case arises when ∇ ± each have G 2 holonomy and was discussed in [5]. The SU (3) structure in d = 7 is specified by J and Ω satisfying (2.1) for n = 3, and a vector K such that
i K Ω = i K J = 0. (6.7)
The two G_2 structures are given by

φ_± = J ∧ K ∓ Im Ω      (6.8)

and demanding that they satisfy the corresponding necessary and sufficient conditions for supersymmetry leads to conditions on the SU(3) structure. The six-dimensional slices at fixed y have an SU(3) structure with intrinsic torsion lying in W_2 ⊕ W_4 ⊕ W_5, and it is straightforward to see that W_4 = −W_5 = 2dΦ. Recall that for SU(3) the module W_2 splits into two modules W_2^±. The third condition in (6.9) implies that while W_2^+ vanishes, W_2^− does not. These geometries are not Hermitian, as noted in [5]. This case corresponds to fivebranes wrapping SLAG three-cycles and explicit solutions were given in [7,49].

SU(3)-geometries in d = 8: This is one of the cases when ∇_± each have SU(4) holonomy.
It is in fact very similar to the case of an SU(2) structure in d = 6 considered above. The SU(3) structure in d = 8 is specified by J, Ω satisfying (2.1) for n = 3 and two vectors K i satisfying (6.1). The two SU(4) structures are given by
J ± = J ± K 1 ∧ K 2 , Ω ± = Ω ∧ (K 1 ± iK 2 ). (6.12)
Demanding that they satisfy the necessary and sufficient conditions for SU(4) structures leads to conditions on the SU(3) structure. Such geometries preserve two pairs of d = 8 spinors with opposite chirality, two ǫ+ and two ǫ−. Again there is an integrable product structure and the metric can be written in the form

ds² = g^6_{ab}(x, y) dx^a dx^b + e^{2Φ(x,y)} δ_{ij} dy^i dy^j.      (6.14)
At fixed y^i, the SU(3) structure on the six-manifold is almost Calabi-Yau, with the only non-vanishing class being W_5 = dΦ. This case corresponds to fivebranes wrapping Kähler four-cycles in CY_3 and solutions were found in [37,38].

SU(2) × SU(2)-geometries in d = 8: In this case the two SU(4) structures are built from a pair of commuting complex structures, J_± = J^3 ± J′^3, where e.g. vol = (1/2) J^3 ∧ J^3. These geometries preserve four d = 8 spinors with the same chirality, two ǫ+ and two ǫ−. The almost product structure

Π = J_+ · J_− = J^3 · J^3 − J′^3 · J′^3,      (6.17)

is integrable, since ∇_± J_± = 0, the J_± commute and the J_± are integrable (see Appendix C), and it implies the canonical form of the metric

ds² = g^4_{ij}(x, y) dx^i dx^j + g′^4_{ab}(x, y) dy^a dy^b.      (6.18)

These geometries arise when a fivebrane wraps a two-cycle in one Calabi-Yau two-fold and a second two-cycle in a second Calabi-Yau two-fold.
Sp(2)-geometries in d = 8: This case arises when ∇ + has SU(4) holonomy while ∇ − has Spin(7) holonomy. These correspond to fivebranes wrapping C-LAG four-cycles in hyper-Kähler eight manifolds. Recall that these are complex with respect to one complex structure and special Lagrangian with respect to the remaining two. We have a Sp(2) structure given by a triplet of complex structures satisfying (2.25). The SU(4) structure is given by (J 3 , Ω 3 ),
where
Ω^3 = (1/2) J^2 ∧ J^2 − (1/2) J^1 ∧ J^1 + i (J^1 ∧ J^2),      (6.20)
and satisfies (3.1), while the Spin(7) structure is defined by

Ψ = (1/2) J^1 ∧ J^1 + (1/2) J^2 ∧ J^2 − (1/2) J^3 ∧ J^3.      (6.21)

These geometries preserve three d = 8 spinors of the same chirality, two ǫ+ and one ǫ−. Note that the conditions imply that the two 16's in each of dJ^1 and dJ^2 vanish. Moreover, the six independent Lee forms are given by

L^{11} = 3dΦ,  L^{22} = 3dΦ,  L^{33} = 2dΦ,
L^{12} − L^{21} = −2J^3 · dΦ,  L^{31} − L^{13} = −J^2 · dΦ,  L^{23} − L^{32} = −J^1 · dΦ.      (6.24)
It is worth emphasising the structure of the intrinsic torsion of this Sp(2) geometry: the intrinsic torsion of the SU(4) structure lies in W_2 ⊕ W_4 ⊕ W_5, with 2W_4 = W_5 = 6dΦ, and so in particular the geometries are not Hermitian.
G 2 -geometries in d = 8: This is the second case when ∇ ± each have Spin(7) holonomy. It occurs when fivebranes wrap co-associative four-cycles in G 2 manifolds. In this case the two Spin(7) structures give rise to a G 2 structure with φ as in (2.16) and
i K φ = 0. (6.28)
The two Spin (7) structures are given by
Ψ ± = −i K * φ ± φ ∧ K (6.29)
and satisfy (3.5) (with appropriate sign changes for ∇−), leading to the necessary and sufficient conditions

d(e^{−Φ} K) = 0,      (6.30)
d(e^{−Φ} φ) ∧ K = 0,      (6.31)
*d(i_K *φ) ∧ φ ∧ K = 0,      (6.32)
*d(i_K *φ) ∧ i_K *φ = 4 *dΦ.      (6.33)

The second comment is to note that we have only considered structures G_± that are orthogonal, in the sense that the preserved spinors ǫ+ and ǫ− are orthogonal, that is ǭ+ǫ− = 0. In fact, as we now show, this is a necessary condition for a non-trivial solution to be supersymmetric. Take any two Killing spinors ǫ+ and ǫ−. The vanishing of the gravitini variations implies that

∇_m(ǭ+ǫ−) = (1/4) H_{mab} ǭ+ γ^{ab} ǫ−.      (6.36)
The dilatino equation implies that for any gamma-matrix operator A we have
∂_m Φ ǭ+ [A, γ^m]_± ǫ− = (1/12) H_{mnp} ǭ+ [A, γ^{mnp}]_± ǫ−.      (6.37)
Taking A = γ m and using the upper sign, we conclude that
∇_m(ǭ+ǫ−) = ∂_m Φ (ǭ+ǫ−).      (6.38)
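The step from (6.37) to (6.38) is quick but uses two standard gamma-matrix identities; the following is our own filling-in of the algebra, using only the Clifford relation {γ_m, γ_n} = 2δ_{mn}:

```latex
% Setting A = \gamma_m in (6.37) and taking the upper sign (anticommutators), use
%   [\gamma_m , \gamma^n]_+ = 2\,\delta_m{}^n , \qquad
%   [\gamma_m , \gamma^{npq}]_+ = 6\,\delta_m{}^{[n}\gamma^{pq]} .
% Equation (6.37) then becomes
\begin{equation*}
2\,\partial_m\Phi\;\bar\epsilon^+\epsilon^-
  = \tfrac{1}{12} H_{npq}\,\bar\epsilon^+\,6\,\delta_m{}^{[n}\gamma^{pq]}\,\epsilon^-
  = \tfrac{1}{2} H_{mpq}\,\bar\epsilon^+\gamma^{pq}\epsilon^- ,
\end{equation*}
% which is exactly twice the right-hand side of (6.36); dividing by two and
% comparing with (6.36) gives
% \nabla_m(\bar\epsilon^+\epsilon^-) = \partial_m\Phi\,(\bar\epsilon^+\epsilon^-),
% i.e. (6.38).
```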
This is trivially satisfied if the G_±-structures are orthogonal, since then ǭ+ǫ− = 0. If the structures are not orthogonal, we have some point where ǭ+ǫ− is non-zero and then by continuity there will be a neighbourhood in which it is non-zero. In this neighbourhood we have ǭ+ǫ− = e^{Φ+Φ_0}, for some constant Φ_0.
The two spinors ǫ_± define a pair of G_±-structures, both of which are sub-bundles of the same SO(d)-bundle of orthonormal frames defined by the metric g_d.
Together ǫ ± define a common G-structure sub-bundle of the two G ± -structures. Furthermore, there always exists some metric-compatible connection∇ that preserves this G-structure. (Note this connection generically does not have totally antisymmetric torsion.) Necessarily it preserves the G ±structures, so that∇ǫ ± = 0. Thus in fact we have ∇(ǭ + ǫ − ) =∇(ǭ + ǫ − ) = 0 implying Φ is a constant. However, the equations of motion then imply that H is constant. We thus conclude that there are no supersymmetric solutions with non-vanishing flux when the structures G ± are not orthogonal.
Explicit examples II
In this section, we present some further explicit solutions in d = 6, some preserving both ǫ+ and ǫ− supersymmetries, for the type II theories, including a solution that preserves the unusual fraction of 12/32 supersymmetry. The basic solutions have two flat directions fibred over a four-dimensional base-space, with the fibration being specified by two Abelian instantons on the base, and thus generalise those discussed in section 5. We shall also discuss compact heterotic geometries in d = 6 preserving both eight and four supercharges. We now twist the two flat directions, as in section 5, with two Abelian instantons, giving

ds² = e^{2Φ} ds̃² + (dy + B^1)² + (dz + B^2)²,
H_{mnp} = −ǫ̃_{mnp}{}^q ∇̃_q e^{2Φ} + 3B^1_{[m} G^1_{np]} + 3B^2_{[m} G^2_{np]},
H_{mny} = G^1_{mn},  H_{mnz} = G^2_{mn},      (7.2)
giving the dilaton equation

∇² e^{2Φ} = −(1/2) [(G^1)² + (G^2)²],      (7.3)
where m, n = 1, . . . , 4 and now G^i = dB^i are taken to be self-dual instantons on the ℝ⁴ base space ds̃². This twisting still preserves eight ǫ− spinors so that ∇− still has SU(2)_− holonomy. The generic constant flux solution to these equations is given by
G 1 + iG 2 = kΩ (7.6)
for some complex constant k. (Note, as we discuss below, this is the same twisting that appears in the Iwasawa manifold analysed in [16].) The Bianchi identity then implies the equation for the dilaton

∇² e^{2Φ} = −8|k|²,      (7.7)
which can easily be solved. To summarise, the solution (7.2) with flat base space will preserve eight ǫ − and four ǫ + spinors for the specific choice of self-dual instantons (7.6) and dilaton satisfying (7.7).
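One can check directly (our own verification, with the standard orientation ǫ_{1234} = +1) that the twisting (7.6) produces a pair of self-dual instantons for every complex constant k, since the real and imaginary parts of Ω = (dx^1 + i dx^2) ∧ (dx^3 + i dx^4) are themselves self-dual two-forms on flat ℝ⁴:

```python
# Self-duality of G1 = Re(k Omega), G2 = Im(k Omega) on flat R^4 for any k.
import numpy as np
from itertools import permutations

def parity(seq):
    s, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = parity(p)

def hodge(w):   # (*w)_{mn} = (1/2) eps_{mnpq} w_{pq}
    return 0.5 * np.einsum('mnpq,pq->mn', eps, w)

# components of Omega = (dx1 + i dx2) ^ (dx3 + i dx4), 0-indexed
Om = np.zeros((4, 4), dtype=complex)
for (a, b), v in {(0, 2): 1, (0, 3): 1j, (1, 2): 1j, (1, 3): -1}.items():
    Om[a, b], Om[b, a] = v, -v

for k in (1.0, 1j, 0.3 - 0.7j):                 # arbitrary complex constants
    G1, G2 = np.real(k * Om), np.imag(k * Om)
    assert np.allclose(hodge(G1), G1)           # G1 self-dual
    assert np.allclose(hodge(G2), G2)           # G2 self-dual
```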
A number of comments are now in order. First, this special solution corresponds to N = 3 supersymmetry in the remaining four spacetime dimensions. It would be interesting to relate this solution to those discussed in [21].
Secondly, the holonomies of the connections ∇_± for the special solution are SU(3) and SU(2), respectively. This is not a combination appearing in table 2. The form of the solution indicates that this solution is related to fivebranes wrapping two flat directions, but a world-volume interpretation of the twisting and preservation of supersymmetry are obscure to us at present.
Thirdly, this special background is also a heterotic/type I solution. In this case, one loses the ǫ− supersymmetries and the solution preserves only four ǫ+ spinors, and so has N = 1 supersymmetry in four dimensions. Including additional heterotic instantons simply adds to the source |k|² in the dilaton equation (7.7). Note that by taking H → −H and switching the orientation of the base, we switch ǫ+ and ǫ− and hence we can also obtain a heterotic solution from the generic solution (7.2) with an SU(2) structure and N = 2 supersymmetry.
Finally, the metric and three-form obtained by setting the dilaton to constant in (7.2) with G 1 + iG 2 = kΩ, were first considered in the heterotic case (including an additional Abelian instanton embedded in E 8 × E 8 or SO(32)) in [16]. There it was demonstrated that the conditions for the preservation of ǫ + supersymmetry with ∇ + having SU (3) holonomy were satisfied. However, given the analysis here, the background in [16] is problematic for the following somewhat subtle reason. As we have already noted when the dilaton is constant and H = 0, the leading-order type II (or heterotic/type I) equations of motion are not satisfied.
As shown in [5], these equations of motion are a direct consequence of the preservation of supersymmetry once the Bianchi identity (3.23) is imposed (or equivalently (3.24) if tr R ∧ R = 0, as for the geometry considered in [16]). This contradiction is resolved by the fact that the background in [16] actually satisfies a Bianchi identity with the opposite sign to the one arising in type I supergravity. This discrepancy is probably related to the sign discrepancy between the expression (3.3) and the corresponding expression in [2].
The type II solutions we have been discussing can also be generalised by replacing the flat space in (7.2) with a generic Calabi-Yau two-fold CY_2. As usual for type II, the Calabi-Yau two-fold cannot be compact in order to satisfy the Bianchi identity dH = 0. If we take the orientation of the CY_2 to be such that the complex structures are self-dual, we impose the projections γ_{1234}ǫ_± = −ǫ_±. In this case, the solution preserves no ǫ− supersymmetry, and generically no ǫ+ supersymmetry. However, choosing G^1 + iG^2 = kΩ̃, where Ω̃ is the holomorphic (2,0) form on CY_2, we find that ∇+ has SU(3) holonomy. (A novelty here is the fact that one is twisting two flat directions and not just one as considered in [34].)
Similarly, one can obtain heterotic/type I geometries preserving N = 1, 2 supersymmetry.
By taking the flat directions to be a two-torus, and M_0 to be either conformally T⁴ or conformally K3, we get compact and supersymmetric heterotic geometries. It will be interesting to see whether it is possible to solve the heterotic Bianchi identity for these geometries; if it is, as in section 5, the tr R ∧ R contribution will be essential. In addition, one should again find that the radius of the two-torus is required to be of order the string scale and that several of the moduli are fixed.
Discussion
In this paper we have studied the necessary and sufficient conditions for static geometries of type I/heterotic string theory, or type II theories with only non-vanishing NS-NS fields, to preserve supersymmetry and solve the equations of motion. The Killing spinors define G-structures on the geometries and we determined the intrinsic torsion of the G-structure.
We emphasised the universal expression for the three-form flux in terms of generalised calibrations and the connection with wrapped branes, following [7,5]. This universal expression for the flux leads to a very simple proof of a vanishing theorem on compact manifolds.
The geometries always have a connection with totally anti-symmetric torsion, ∇ + (or ∇ − for the type II theories), which has special holonomy. We first discussed the geometries in the canonical dimension for the special holonomy group, d = 8 for Spin (7), d = 7 for G 2 , etc.
We then showed that the most general geometries in d = 9 have a number of flat directions fibred over these geometries in the canonical dimensions, with the fibration being determined by Abelian generalised instantons. We also discussed the physical interpretation of these geometries in terms of wrapped fivebranes. For example, the eight-dimensional geometries with a single flat dimension fibred over a seven-dimensional geometry with G 2 -structure correspond to fivebranes wrapping supersymmetric cycles of the form
S 1 × Σ 3 ⊂ S 1 × M G 2
where Σ_3 ⊂ M_{G_2} is an associative three-cycle in a G_2-holonomy manifold. The fact that the resulting eight-dimensional geometry is not necessarily a direct product of S^1 with a seven-dimensional geometry is worth further investigation. We presented some explicit examples that would be worth studying further and generalising.
These results provide a comprehensive classification of all of the supersymmetric static geometries of the heterotic/type I theory. For the type II theories, we also analysed the geometries that arise when both connections ∇ ± have special holonomy. Our analysis covers all cases of NS fivebranes wrapping calibrated cycles, as listed in tables 1 and 2.
We also presented an explicit solution with a torus T² fibred over an ℝ⁴ base, with ∇+ having SU(3) holonomy, which preserves the unusual fraction of 12/32 supersymmetry. Our discussion of the compact heterotic geometries indicates that many moduli are fixed. We showed that the size of the torus is necessarily of order the string scale, indicating that the supergravity approximation is breaking down.
One would also have to check that the equations of motion are satisfied. To pursue these models further we aim to construct a conformal field theory description. It would also be interesting to relate our compactifications to those of [50,51,52].
We have emphasised that the expression for the three-form flux is easy to understand as a generalised calibration since the geometry should still admit fivebranes wrapping the corresponding cycles. It is very interesting to note that many, and in some cases all, of the other conditions constraining the intrinsic torsion can be interpreted in the same way. An important motivation for this work is that a good understanding of the geometry underlying supergravity configurations might allow us to find new explicit solutions. Indeed for the cases listed in table 1 a co-homogeneity one ansatz is useful for finding solutions [5]. This is a practical alternative to finding solutions describing wrapped fivebranes using the gauge supergravity approach initiated in [22]. For the cases in table 2, on the other hand, a simple generalisation of this technique can lead to co-homogeneity one but also to a co-homogeneity two or more ansatz, and progress in the latter case is much more difficult [5].
At present the gauge supergravity approach is the best available tool to produce solutions for these latter cases. It should be noted, however, that since the configurations in table 2 preserve more supersymmetry than those in table 1, one expects that with new techniques, ultimately, they could be easier to analyse.
Finally, it is natural to generalise this work to also include RR fields in the type II theories, as well as to consider Lorentzian geometries. Such geometries will allow one to describe both wrapped NS and D-branes, as well pp-waves and general non-static backgrounds. Based on this work and on [13] we expect generalised calibrations to play an important role.
Acknowledgments
We would like to thank Atish Dabholkar, Jan Gutowski and James Sparks for useful discussions.
A Equations of motion
The low-energy effective action for heterotic/type I string theory is given by the type I supergravity action
S = (1/2κ²) ∫ d^{10}x √−g e^{−2Φ} [R + 4(∇Φ)² − (1/12) H² − α′ Tr F²],      (A.1)
where F is in the adjoint of SO(32). Including the leading-order string correction from anomaly cancellation we get
dH = 2α ′ (Tr F ∧ F − tr R ∧ R) (A.3)
but to fully consistently implement this one should also include modifications to the action.
The equations of motion coming from (A.1) are given by
R_{MN} − (1/4) H_{MRS} H_N{}^{RS} + 2∇_M∇_N Φ − 2α′ Tr F_{MR} F_N{}^R = 0,      (A.4a)
∇²(e^{−2Φ}) − (1/6) e^{−2Φ} H_{MNR} H^{MNR} − α′ e^{−2Φ} Tr F_{MN} F^{MN} = 0,      (A.4b)
∇^M(e^{−2Φ} H_{MNR}) = 0,      (A.4c)
D^M(e^{−2Φ} F_{MN}) − (1/2) e^{−2Φ} F^{RS} H_{RSN} = 0.      (A.4d)
The action and equations of motion for the type II theories with all RR fields set to zero are obtained by simply setting the gauge field F to zero and using the Bianchi identity dH = 0.
B Spinor and G-structure conventions
In doing calculations it is often useful to have an explicit set of projections defining the Killing spinors and the corresponding G-structures. Here we define one possible set of conventions consistent with the expressions given in the paper. In particular, we will use the same set of projectors (or a subset of them) to define the invariant spinors in all cases. Specifically, the Killing spinors will be defined by their ±1 eigenvalues for the set of commuting gamma matrices

γ_{1234}, γ_{5678}, γ_{1256}, γ_{1357}.      (B.1)
We concentrate on the cases of G-structure in canonical dimension. However, in each case we also give how the structure embeds in the next simplest structure group following figure 1.
Using these embeddings one can obtain conventions for any of the G-structures in arbitrary
dimensions d ≤ 9.
Note that in all dimensions the gamma matrix algebra is taken to be {γ_m, γ_n} = 2δ_{mn}, the adjoint spinor is written as ǭ and the conjugate spinor as ǫ^c. We always normalise the Killing spinors to satisfy ǭǫ = 1. For the SU(4) case the two invariant spinors are distinguished by

γ_{1357} ǫ^{(1)} = +ǫ^{(1)},  γ_{1357} ǫ^{(2)} = −ǫ^{(2)}.      (B.6)
Defining a complex spinor η = (1/√2)(ǫ^{(1)} + iǫ^{(2)}), the forms J and Ω can then be written as

J_{mn} = −i η̄ γ_{mn} η,  Ω_{mnpq} = η̄^c γ_{mnpq} η.      (B.7)
Note in the basis where ǭ = ǫ^T, we have the more familiar expressions J_{mn} = i η† γ_{mn} η and Ω_{mnpq} = η^T γ_{mnpq} η. Given γ_{12} ǫ^{(1)} = −ǫ^{(2)} we get the standard expressions

J = e^{12} + e^{34} + e^{56} + e^{78},  Ω = (e^1 + ie^2)(e^3 + ie^4)(e^5 + ie^6)(e^7 + ie^8).      (B.8)
The corresponding volume form is given by (B.4) as above. Note that each real spinor ǫ^{(a)} also defines a corresponding Spin(7)-structure as in (B.3), given by

Ψ^{(1)} = (1/2) J ∧ J − Re Ω,  Ψ^{(2)} = (1/2) J ∧ J + Re Ω.

Again, the corresponding volume form is given by (B.4) as above. Note there are six SU(4)-structures given by J^A_± = J^A ± J′^A and similarly each spinor ǫ^{(a)} defines a corresponding Spin(7)-structure given by

Ψ^{(1)} = vol + vol′ − J^1 ∧ J′^1 + J^2 ∧ J′^2 + J^3 ∧ J′^3,
Ψ^{(2)} = vol + vol′ + J^1 ∧ J′^1 − J^2 ∧ J′^2 + J^3 ∧ J′^3,
Ψ^{(3)} = vol + vol′ + J^1 ∧ J′^1 + J^2 ∧ J′^2 − J^3 ∧ J′^3,
Ψ^{(4)} = vol + vol′ − J^1 ∧ J′^1 − J^2 ∧ J′^2 − J^3 ∧ J′^3.

Note the relation between φ and vol is slightly non-standard. It is the opposite to the conventions given, for instance, in [53]. To match the expressions in [53], one replaces e^7 with −e^7 and permutes the new basis vol = −e^{1234567} to e^{3254761}. Note that one can choose an imaginary basis for the γ-matrices where ǭ = ǫ^T.
Lifting to d = 8, the G_2-structure defines a pair of real spinors ǫ^{(a)} with a = 1, 2, satisfying (B.21), of opposite chirality. They can be distinguished by

γ_{5678} ǫ^{(1)} = −ǫ^{(1)},  γ_{5678} ǫ^{(2)} = +ǫ^{(2)}.      (B.24)
The G_2-structure is defined by φ and K given by

φ_{mnp} = −ǭ^{(1)} γ_{mnp} ǫ^{(2)},  K_m = ǭ^{(1)} γ_m ǫ^{(2)}.

The two Spin(7)-structures defined by ǫ^{(a)} are given by
Ψ (1) = −i K * φ + φ ∧ K, Ψ (2) = −i K * φ − φ ∧ K. (B.27)
Note with these conventions, i_K *φ = −*_7 φ where *_7 φ is the usual coassociative four-form, that is, the Hodge dual of φ on the seven-dimensional subspace orthogonal to K.

Lifting to d = 7, the SU(3)-structure defines a pair of invariant spinors ǫ^{(a)} with a = 1, 2 satisfying (B.28). Fixing iγ_{1···7} = 𝟙, they can be distinguished by

γ_{1357} ǫ^{(1)} = −ǫ^{(1)},  γ_{1357} ǫ^{(2)} = +ǫ^{(2)}.      (B.32)
The SU(3)-structure is given by

J_{mn} = −ǭ^{(1)} γ_{mn} ǫ^{(2)},
Ω_{mnp} = i ǭ^{(1)} γ_{mnp} ǫ^{(2)} − (1/2) (ǭ^{(1)} γ_{mnp} ǫ^{(1)} − ǭ^{(2)} γ_{mnp} ǫ^{(2)}),
K_m = −i ǭ^{(1)} γ_m ǫ^{(2)}.

Lifting to d = 6, the SU(2)-structure defines a pair of complex invariant spinors ǫ^{(a)} with a = 1, 2 satisfying (B.28). These have opposite chirality and can be distinguished by

γ_{3456} ǫ^{(1)} = −ǫ^{(1)},  γ_{3456} ǫ^{(2)} = +ǫ^{(2)}.      (B.40)
The SU(2)-structure is given by

J_{mn} = −(1/2) i (ǭ^{(1)} γ_{mn} ǫ^{(1)} + ǭ^{(2)} γ_{mn} ǫ^{(2)}),
Ω_{mn} = ǭ^c_{(1)} γ_{mn} ǫ^{(2)},
K^1_m + iK^2_m = ǭ^{(2)} γ_m ǫ^{(1)}.      (B.41)
Given γ_{12} ǫ^{(i)} = ǫ^{(i)} and γ_{135} ǫ^{(i)} = ǫ^c_{(i)}, while γ_5 ǫ^{(1)} = ǫ^{(2)} and γ_6 ǫ^{(1)} = iǫ^{(2)}, we have K^1 = e^5, K^2 = e^6 and J and Ω take the standard form (B.38). The corresponding volume form is vol = e^1 ∧ · · · ∧ e^6.

C Complex and product structures

An almost complex structure on a 2n-dimensional manifold is characterised by a tensor J_m{}^n satisfying J · J = −𝟙. The almost complex structure is integrable if and only if the Nijenhuis tensor vanishes, and in this case one can introduce holomorphic co-ordinates on the manifold. If J is compatible with a metric, namely J_{mq} ≡ J_m{}^n g_{nq} is a two-form, then the metric is called almost Hermitian, and Hermitian if J is integrable.
Similarly, an almost product structure is a GL(P, ℝ) × GL(Q, ℝ)-structure on a P + Q-dimensional manifold, which is characterised by a tensor Π_m{}^n satisfying Π · Π = +𝟙. At any point the tangent space splits accordingly as T_pM = T_pM_P ⊕ T_pM_Q, where P (respectively Q) is the number of +1 (respectively −1) eigenvalues of Π. The Nijenhuis tensor for the almost product structure is defined by the same expression with J replaced by Π (see e.g. [54]). If furthermore the almost product structure is metric compatible, i.e. Π_{mq} ≡ Π_m{}^n g_{nq} is a symmetric tensor, one can introduce "separating co-ordinates" on the manifold such that the metric takes the (P × P, Q × Q) block-diagonal form

ds² = g^P_{ij}(x, y) dx^i dx^j + g^Q_{ab}(x, y) dy^a dy^b.
H mnr = J m p J n q H pqr + J r p J m q H pqn + J n p J r q H pqm . (C.7)
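The type condition (C.7) can be checked numerically. The following small script is our own sanity check, not part of the paper: the constant complex structure on flat ℝ^6 and the sample three-forms are our assumptions. It verifies that (C.7) holds for a real (2,1)+(1,2) form, while a (3,0)+(0,3) form instead comes back with a factor of −3.

```python
import numpy as np

# Complex structure on R^6 acting as J_m^p, chosen so that
# dz^k = e^{2k-1} + i e^{2k} has type (1,0):  J_m^p (dz)_p = i (dz)_m.
Jm = np.zeros((6, 6))
for a in (0, 2, 4):
    Jm[a, a + 1], Jm[a + 1, a] = 1.0, -1.0

def wedge3(a, b, c):
    """Totally antisymmetrised tensor product of three one-forms."""
    T = np.einsum('i,j,k->ijk', a, b, c)
    out = np.zeros_like(T)
    for perm, s in [((0,1,2), 1), ((1,2,0), 1), ((2,0,1), 1),
                    ((0,2,1), -1), ((2,1,0), -1), ((1,0,2), -1)]:
        out = out + s * np.transpose(T, perm)
    return out

e = np.eye(6)
dz = [e[2*k] + 1j * e[2*k + 1] for k in range(3)]

def rhs(H):
    """Right-hand side of (C.7) for a three-form H_{mnr}."""
    t1 = np.einsum('mp,nq,pqr->mnr', Jm, Jm, H)
    t2 = np.einsum('rp,mq,pqn->mnr', Jm, Jm, H)
    t3 = np.einsum('np,rq,pqm->mnr', Jm, Jm, H)
    return t1 + t2 + t3

# A real (2,1)+(1,2) form: Re(dz1 ^ dz2 ^ dzbar3) satisfies (C.7).
H21 = wedge3(dz[0], dz[1], dz[2].conj()).real
assert np.allclose(rhs(H21), H21)

# A (3,0)+(0,3) form violates it, picking up -3 instead.
H30 = wedge3(dz[0], dz[1], dz[2]).real
assert np.allclose(rhs(H30), -3 * H30)
```

The factor −3 on the (3,0)+(0,3) part is just the statement that each of the three J-pairs acts with eigenvalue (i)(i) = −1 there.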
To proceed, write 2Π = J_+ · J_− + J_− · J_+. It is sometimes incorrectly stated in the literature (see for instance [55, 56, 57]) that Π, defined by (C.4), is integrable if and only if the two commuting almost complex structures are integrable. A concrete class of counter-examples is provided by the geometry (7.2) for generic instantons G. This geometry has an SU(2) structure, built from the ǫ_− Killing spinors, which can be specified by two SU(3) structures. The corresponding two almost complex structures, written as two-forms, are given by

J = e^{2Φ}(dx^1 ∧ dx^2 − dx^3 ∧ dx^4) + (dy + B^1) ∧ (dz + B^2),
J′ = e^{2Φ}(dx^1 ∧ dx^2 − dx^3 ∧ dx^4) − (dy + B^1) ∧ (dz + B^2). (C.10)

Both almost complex structures are integrable. A quick way to see this is to note that the geometry is a special example of the canonical SU(3) geometry in d = 6 (preserving twice as much supersymmetry) that was discussed in section 3 (with expressions for ǫ_+ spinors rather than the ǫ_− spinors that we have here), for either SU(3) structure. Computing the corresponding Nijenhuis tensor, we find that it has the non-zero components given by (C.11), with G^1 = G^2 = dx^1 ∧ dx^2 + dx^3 ∧ dx^4. It would be interesting to investigate the consequences of this counter-example, especially in the context of the sigma-model literature.
Figure 1: Special holonomies of manifolds in d dimensions with covariantly constant spinors with respect to either the Levi-Civita connection or a connection with totally anti-symmetric torsion H. Only the minimal "canonical" dimension d is presented. The arrows represent the different ways the groups can be embedded in each other.

Table 2: for each of the different holonomies of ∇^+, in the canonical dimensions, we have listed the corresponding type of calibrated cycle that a NS-fivebrane wraps in order to give the geometry. We have also included the number of minimal Spin(d) spinors preserved in each case. Note that for the d = 4 and d = 8 cases we have listed the six- and two-dimensional chirality of the preserved supersymmetry. Also CY_n corresponds to a Calabi-Yau n-fold and HK_2 to a hyper-Kähler manifold in d = 8. (Columns: dim(M), N, Hol(∇^+), Hol(∇^−), wrapped cycle.)
The possible groups G are precisely the possible special holonomy groups appearing in figure 1. The necessary and sufficient conditions for solutions of the particular supersymmetry constraints (1.1) then translate into the G-structure being of a particular type with certain components of the intrinsic torsion vanishing. Since G ⊂ Spin(d) the metric g d is completely determined by the G-structure. Similarly, one finds expressions for H and Φ in terms of the intrinsic torsion of the G-structure.
If this combination together with W_1, W_2 and W_3 all vanish and W_{4,5} are exact, the manifold is conformally Calabi-Yau.

Spin(7)-structures in d = 8: The structure is specified by a Spin(7)-invariant Cayley four-form, Ψ, which at any given point in M_8 can be written as

Ψ = e^{1234} + e^{1256} + e^{1278} + e^{3456} + e^{3478} + e^{5678} + e^{1357} − e^{1368} − e^{1458} − e^{1467} − e^{2358} − e^{2367} − e^{2457} + e^{2468}. (2.11)
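The Cayley four-form (2.11) is self-dual, *Ψ = Ψ, with respect to the orientation e^1 ∧ ⋯ ∧ e^8. This can be verified term by term with a short script of our own (not from the paper), representing a p-form as a dictionary from sorted index tuples to coefficients:

```python
def sign_of(seq):
    """Parity of a permutation given as a sequence of distinct ints."""
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def hodge(form, dim):
    """Hodge dual of a p-form in flat R^dim with the Euclidean metric."""
    out = {}
    for idx, c in form.items():
        comp = tuple(i for i in range(1, dim + 1) if i not in idx)
        out[comp] = out.get(comp, 0) + c * sign_of(idx + comp)
    return {k: v for k, v in out.items() if v}

# The Cayley four-form of eq. (2.11); e.g. e1234 -> key (1,2,3,4).
Psi = {(1,2,3,4): 1, (1,2,5,6): 1, (1,2,7,8): 1, (3,4,5,6): 1,
       (3,4,7,8): 1, (5,6,7,8): 1, (1,3,5,7): 1, (1,3,6,8): -1,
       (1,4,5,8): -1, (1,4,6,7): -1, (2,3,5,8): -1, (2,3,6,7): -1,
       (2,4,5,7): -1, (2,4,6,8): 1}

assert hodge(Psi, 8) == Psi   # *Psi = Psi: the Cayley form is self-dual
```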
G_2-structures in d = 7: The structure is specified by an associative three-form φ. In a local frame this can be given by

φ = e^{246} − e^{235} − e^{145} − e^{136} + e^{127} + e^{347} + e^{567}. (2.16)

We then have

*φ = e^{1234} + e^{1256} + e^{3456} + e^{1357} − e^{1467} − e^{2367} − e^{2457}. (2.17)
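The coassociative four-form (2.17) can be recovered from (2.16) by a direct computation of the Hodge dual in flat ℝ^7. The following check is our own script, not the paper's:

```python
def sign_of(seq):
    """Parity of a permutation given as a sequence of distinct ints."""
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def hodge(form, dim):
    """Hodge dual of a p-form in flat R^dim with the Euclidean metric."""
    out = {}
    for idx, c in form.items():
        comp = tuple(i for i in range(1, dim + 1) if i not in idx)
        out[comp] = out.get(comp, 0) + c * sign_of(idx + comp)
    return {k: v for k, v in out.items() if v}

# phi = e246 - e235 - e145 - e136 + e127 + e347 + e567   (2.16)
phi = {(2,4,6): 1, (2,3,5): -1, (1,4,5): -1, (1,3,6): -1,
       (1,2,7): 1, (3,4,7): 1, (5,6,7): 1}

# *phi = e1234 + e1256 + e3456 + e1357 - e1467 - e2367 - e2457   (2.17)
star_phi = {(1,2,3,4): 1, (1,2,5,6): 1, (3,4,5,6): 1, (1,3,5,7): 1,
            (1,4,6,7): -1, (2,3,6,7): -1, (2,4,5,7): -1}

assert hodge(phi, 7) == star_phi
```

Note also that applying the dual twice gives back φ, since p(d−p) = 12 is even in this case.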
Here and throughout the paper the Hodge star is defined with respect to the canonical orientation fixed by the structure; for SU(n) this is vol = J^n/n!. In terms of Killing spinors, the geometries preserve two complex chiral d = 2n spinors related by complex conjugation. For n = 2, 4 both spinors have the same chirality, while for n = 3 they have opposite chirality. Our conventions for defining the spinors, J, Ω and vol are given in Appendix B.
Here A is an operator built out of gamma matrices, and [·,·]_± refer to the anti-commutator and commutator respectively. By taking A = γ_{m_1} with the lower sign and A = γ_{m_1…m_6} with the upper sign in (4.5), one finds two constraints on (K, Ψ). First one has the Lee-form condition (4.6); second, one finds the familiar calibration form for the flux,

*H = e^{2Φ} d(e^{−2Φ} Ψ ∧ K). (4.7)
Similarly one finds that L_K H = L_K Φ = 0. The Lee-form condition in (4.6) can then be written as

Ψ ⌟ d_0Ψ = 12 d_0Φ, (4.10)

where d_0 is the exterior derivative on the eight-dimensional space M_0. Similarly the condition (4.7) reduces to

*_0 H_0 = −e^{2Φ} d_0(e^{−2Φ} Ψ), (4.11)

where *_0 is the Hodge star on M_0. In other words, the d = 8 Spin(7)-structure Ψ on M_0 is independent of y and satisfies exactly the same conditions (3.5) and (3.6) as in the last section. In particular, the only constraint on the intrinsic torsion in d = 8 is that the Lee form is given as in (4.6). By substituting back into the supersymmetry conditions (1.1) …
…holonomy manifold with a flat direction. By contrast, when the flux is non-zero, it is only in the special case dK = G = 0, when the fibration is trivial, that the geometries are simply the product of the d = 8 Spin(7) geometries considered in the last section with a flat direction. Secondly, since K generates a symmetry of the full solution, including the spinors, we can dimensionally reduce a type II solution to get a supersymmetric heterotic solution in d = 8 with an Abelian instanton F proportional to G. Similarly, given a heterotic solution (g_0, H_0, Φ, F) in d = 8 with an Abelian Spin(7) instanton F, we can oxidise it to obtain a type II solution in d = 9 with G proportional to F, a metric given by (4.4), and H = H_0 − G ∧ K. Thirdly, the solutions are invariant under a T-duality in the y-direction. Finally, note that the d = 9 expression for the flux (4.7) is again that of a generalised calibration. It corresponds to a NS fivebrane wrapping a supersymmetric five-cycle Σ_4 × S^1 in the product of a Spin(7) manifold M̃ with a circle, M̃ × S^1, with Σ_4 ⊂ M̃ being a Cayley four-cycle. (Note one could equally well replace the circle with a line.) The simplest way of wrapping the fivebrane leads to a d = 9 geometry consisting of the product of a d = 8 Spin(7) geometry with a flat direction.
In each case, consistency between the calibration conditions (4.13) and (4.14) and the expansion (4.16) implies that each G^i satisfies the appropriate Abelian G-instanton equation on M_0. In summary, general supersymmetric geometries in d = 9 are closely related to the supersymmetric geometries in the canonical dimensions discussed in the last section. They all have a fibred structure where the base space M_0 has a G-structure in canonical dimension satisfying one of the sets of conditions given in section 3. The flux is given by a generalised calibration condition (4.13) or (4.14), corresponding to a fivebrane wrapping a five-cycle. The twisting of the fibration is described by two-forms G^i which are all Abelian G-instantons on M_0. If one makes a dimensional reduction on the K^i, the solutions correspond to heterotic solutions in canonical dimension d = n with 9 − n Abelian instantons. In order to obtain a solution to the equations of motion, the flux H_0 on M_0 must also satisfy a Bianchi identity, with dH_0 ∝ (Tr F ∧ F − tr R ∧ R) for heterotic/type I.
These results provide a comprehensive classification of all the possible supersymmetric heterotic/type I or NS-NS type II bosonic geometries of the form ℝ^{1,9−d} × M_d preserving Killing spinors satisfying (1.1) for ǫ_+. Any solution with d < 9 can be obtained simply by setting 9 − d of the B^i twists to zero, so that the fibration becomes, at least partially, a product M_9 = ℝ^{9−d} × M_d.
For illustration we shall consider here just a single flat direction fibred over a base manifold M_0. Additional examples with two flat directions fibred over a four-dimensional base will be considered in section 7. To begin with we consider M_0 to be four-dimensional, and the three complex structures are taken to be self-dual. As noted in section 3, M_0 is necessarily conformally hyper-Kähler. The five-dimensional geometry thus takes the form

ds^2 = e^{2Φ}(ds^2) + (dy + B)^2,  H_{mnp} = −ǫ_{mnp}{}^l ∇_l e^{2Φ} − 3B_{[m}G_{np]},  H_{ymn} = −G_{mn},
…with the flux given by

*H = −e^{2Φ} d(e^{−2Φ} J). (6.4)

These geometries preserve two complex chiral d = 6 spinors, one ǫ_+ and one ǫ_−.
Demanding that the structures satisfy (3.8), (3.9) (and their generalisation for ∇^−) leads to the differential conditions, including d(e^{−Φ}K) = 0, with flux given by

*H = −e^{2Φ} d(e^{−2Φ} Im Ω). (6.10)

These geometries preserve two d = 7 spinors, one ǫ_+ and one ǫ_−. The obvious almost product structure is again integrable and hence the metric can be cast in the canonical form

ds^2 = g^6_{ab}(x, y) dx^a dx^b + e^{2Φ(x,y)} dy^2. (6.11)
Demanding that they satisfy the conditions for the structures given in (3.1), (3.2) (and their generalisation for ∇^−) leads to the differential conditions as in (6.3), with the flux given by

*H = −e^{2Φ} d(e^{−2Φ} (1/2) J ∧ J). (6.13)
SU(2) × SU(2)-geometries in d = 8: The second way that ∇^± both have SU(4) holonomy is when they give a common SU(2) × SU(2) structure. The two orthogonal SU(2) structures J^A and J′^A satisfy the conditions (2.30). The two SU(4)-structures are given by

J_± = J^3 ± J′^3,  Ω_± = Ω ∧ Ω′, Ω ∧ Ω̄′, (6.15)

where e.g. Ω = J^2 + iJ^1. Demanding that they satisfy the necessary and sufficient conditions for SU(4) structures given in (3.1), (3.2) (and their generalisation for ∇^−) leads to the necessary and sufficient conditions on the SU(2) × SU(2) structure.
…with the blocks being four-by-four. The four-dimensional slices each have an SU(2) structure, with W_2 = W_4 = 0 and W_5 = dΦ at any point in their transverse directions. The flux is given by

*H = −e^{2Φ} d(e^{−2Φ} J^3 ∧ J′^3). (6.19)
The flux is given by

*H = −e^{2Φ} d(e^{−2Φ} Re Ω_1) = e^{2Φ} d(e^{−2Φ} Re Ω_2) = −e^{2Φ} d(e^{−2Φ} (1/2) J^3 ∧ J^3). (6.23)
…the torsion of the structure is not totally antisymmetric, and hence the geometry is not HKT. It would be interesting to find explicit examples.

SU(4)-geometries in d = 8: This is the first case when ∇^± each have Spin(7) holonomy. It corresponds to fivebranes wrapping SLAG four-cycles in CY_4. In this case we have an SU(4) structure J, Ω satisfying (2.1) for n = 4. The two Spin(7) structures each satisfy (3.5) (with sign changes for ∇^−), leading to the conditions on the SU(4)-structure

d(e^{−Φ} J) = 0,  *( *d Re Ω ∧ Re Ω) = −6 dΦ, (6.26)

with flux given by

*H = −e^{2Φ} d(e^{−2Φ} Re Ω). (6.27)

These geometries preserve two d = 8 spinors with the same chirality, one ǫ_+ and one ǫ_−.
The flux is given by

*H = e^{2Φ} d(e^{−2Φ} ι_K *φ). (6.34)

These geometries preserve one ǫ_+ and one ǫ_− d = 8 spinor of opposite chirality. The intrinsic torsion of the G_2 structure lies in W_2 ⊕ W_4 with W_4 = −4dΦ. This means one cannot introduce a G_2 Dolbeault cohomology [27].

{1}-geometries: For completeness let us briefly mention the case corresponding to the first entry in table 2. This case has two different SU(2) structures each satisfying (2.1), giving a trivial structure defined by four real vectors K^i. A little calculation reveals that this case can always be put in the canonical form

ds^2 = e^{2Φ} ds^2(ℝ^4),  *H = −e^{2Φ} d(e^{−2Φ}), (6.35)

which is just the transverse space to the simple fivebrane solution.

We conclude this section with two comments. First, considering either set of ǫ_+ or ǫ_− Killing spinors, we see that the geometries of this section are special cases of those appearing in section 3. It is then clear, from the results of section 4, that supersymmetric geometries in d = 9 can be obtained by fibering an appropriate number of flat directions over the geometries in this section. In order that the same amount of supersymmetry is preserved, the fibrations are determined by Abelian instantons that satisfy the generalised self-duality conditions for both of the G_±-structures. In other words they must satisfy the generalised self-duality conditions for the maximal common subgroup G. Note that in general the Bianchi identity for H may further restrict which fibrations are possible. For instance, in the cases where both ∇^+ and ∇^− have SU(n+1) holonomy, one can show that dH has no components transforming as a four-form under SO(2n) ⊃ SU(n) for the common SU(n)-structure. As such, there are in fact no solutions with non-trivial twisting.
It will be convenient in this section to distinguish different six-dimensional solutions by the number of preserved supersymmetries. Let us start with the most supersymmetric case, corresponding to a flat NS fivebrane as discussed at the end of the last section. Recall that the d = 4 solution transverse to a simple fivebrane (6.35) preserves eight ǫ_+ spinors and eight ǫ_− spinors satisfying the projections

γ_{1234}ǫ_+ = −ǫ_+,  γ_{1234}ǫ_− = +ǫ_−. (7.1)

As previously noted, ∇^± have SU(2)_± holonomy in SO(4) = SU(2)_+ × SU(2)_−, with the maximal common subgroup being the identity. We can trivially lift this to a six-dimensional solution by adding two extra flat directions. This still preserves 16 supercharges, corresponding to N = 4 supersymmetry in the remaining four spacetime dimensions.
For non-zero G^i, generically the solution however breaks all of the ǫ_+ supersymmetry. (Note, simply for convenience of later discussion, we have exchanged the roles of ∇^+ and ∇^−, by taking H → −H and changing the orientation on the base, as compared to the discussion in section 5. There we took anti-self-dual instantons so that ǫ_+ spinors were preserved. This accounts for the difference in signs of the terms involving B and G in (7.2) compared to those in (5.1).) Hence, generically these solutions preserve N = 2 supersymmetry in the remaining four spacetime dimensions. Interestingly, it is nonetheless possible to preserve four ǫ_+ Killing spinors, corresponding to ∇^+ having SU(3) holonomy, for suitably chosen non-generic instantons. To see this we define an SU(3) structure by

J = e^{2Φ} J̄ + (dy + B^1) ∧ (dz + B^2),  Ω = e^{2Φ} Ω̄ ∧ [(dy + B^1) + i(dz + B^2)], (7.4)

where J̄ = dx^1 ∧ dx^2 + dx^3 ∧ dx^4 and Ω̄ = (dx^1 + idx^2) ∧ (dx^3 + idx^4) define the SU(2)_+ structure on ℝ^4. Demanding that the SU(3) structure satisfies the conditions for supersymmetry …
…∇^+ then has SU(3) holonomy and the solution still preserves four ǫ_+ supersymmetries, corresponding to N = 1 supersymmetry in four dimensions. Alternatively, if the orientation of the CY_2 is chosen so that the complex structures are anti-self-dual, we impose the projections γ_{1234}ǫ_± = +ǫ_±. These solutions break all of the ǫ_+ supersymmetry, but preserve eight ǫ_− spinors. The latter choice of orientation corresponds (after exchanging ǫ_+ with ǫ_− by taking H → −H and switching the orientation on the base) to a simple generalisation from d = 5 to d = 6 of the solutions discussed in section 5 and explicitly obtained in [34] for the cases of Taub-NUT and Eguchi-Hanson. The former choice of orientation, on the other hand, gives a new kind of supersymmetric solution that exploits …
…∇^+ having SU(3) holonomy and ∇^− having SU(2) holonomy. This solution has four ǫ_+ Killing spinors and eight ǫ_− spinors. The form of the flux suggests that the solution should be interpreted as a flat fivebrane with two of the world-volume directions further wrapped on the two-torus. Naively, one would therefore expect 8 plus 8 Killing spinors, and so it would also be interesting to find a physical interpretation of the twisting which leads to this reduction of supersymmetry. In [21] type II solutions on T^6 orientifolds with non-vanishing R-R and NS-NS fluxes were presented that also preserve 12 Killing spinors, and it would be interesting to see if they are related. Perhaps our solutions provide a local description of blow-ups of geometries around certain fixed points. Candidate heterotic compactifications in d = 6 were also presented, preserving both four and eight supersymmetries. They are based on manifolds which are fibrations of T^2 over a K3 base. The models with four supersymmetries arise for non-generic complex structure on the K3, and there are additional constraints on the radii of the circles of the torus. This …
For example, consider the case of the SU(3) structure with only ǫ_+ Killing spinors. The expression for the flux (3.2) is the general calibration condition for a fivebrane wrapping a Kähler two-cycle in a Calabi-Yau three-fold. In addition the intrinsic torsion is constrained to satisfy (3.1). Suppose we consider the trivial product of our SU(3) manifold M_6 with a torus T^2. Let K^1 = dy^1 and K^2 = dy^2 represent the extra directions. The full set of conditions on the structure can then be written on the eight-dimensional space M_6 × T^2 as

d[e^{−2Φ} J ∧ J] = 0,  d[e^{−2Φ} Ω ∧ (K^1 + iK^2)] = 0,  d[e^{−2Φ} J ∧ K^1 ∧ K^2] = −e^{−2Φ} *H.
Since H lies solely in M_6, we see that all three expressions are calibration conditions of the form *H = e^{2Φ} d(e^{−2Φ} Ξ), just for wrapping different cycles. The first is for a fivebrane wrapping a Kähler four-cycle in the Calabi-Yau, the second for wrapping a special Lagrangian cycle (and one of the K^i directions), while the last is the familiar expression for the wrapping of a Kähler two-cycle in the Calabi-Yau together with the torus T^2. This is physically reasonable, since the geometry M_6 × T^2, corresponding to the full back-reaction solution around a brane wrapping a Kähler two-cycle, should still admit probe branes wrapping the special Lagrangian three- and Kähler four-cycles. Similar arguments extend to the fibration cases in section 4 and the geometries with ǫ_+ and ǫ_− in section 6.
…discussions. D. M. is supported by an EC Marie Curie Individual Fellowship under contract number HPMF-CT-2002-01539. D. W. is supported by a Royal Society University Research Fellowship.
Spin(7): In eight dimensions, a Spin(7)-structure defines a single real chiral invariant spinor ǫ. For definiteness, we choose γ_{1⋯8}ǫ = ǫ. A possible set of independent, commuting projections defining ǫ is

γ_{1234}ǫ = γ_{5678}ǫ = γ_{1256}ǫ = γ_{1357}ǫ = −ǫ. (B.2)

Writing the Cayley four-form Ψ as

Ψ_{mnpq} = −ǭγ_{mnpq}ǫ (B.3)

then matches the expression (2.11). The corresponding volume form is given by

vol_{m_1…m_8} = ǭγ_{m_1…m_8}ǫ. (B.4)

Note one can always choose a real basis for the gamma matrices so that ǭ = ǫ^T. The conventions for lifting a Spin(7)-structure to d = 9 are given in section 4.1.

SU(4): An SU(4)-structure leaves invariant two real orthogonal spinors ǫ_{(a)} with a = 1, 2 of the same chirality in d = 8. These can be defined by

γ_{1234}ǫ_{(a)} = γ_{5678}ǫ_{(a)} = γ_{1256}ǫ_{(a)} = −ǫ_{(a)}. (B.5)
Sp(2): We now have three real orthogonal invariant spinors ǫ_{(a)} with a = 1, 2, 3 of the same chirality in d = 8. These can be defined by

γ_{1234}ǫ_{(a)} = γ_{5678}ǫ_{(a)} = (γ_{1256} + γ_{1357} + γ_{1458})ǫ_{(a)} = −ǫ_{(a)}. (B.9)

SU(2) × SU(2): We now have four orthogonal, real invariant spinors, all of the same chirality in d = 8. They can be defined by

γ_{1234}ǫ_{(a)} = γ_{5678}ǫ_{(a)} = −ǫ_{(a)}, (B.16)

with

γ_{1256}ǫ_{(a)} = −ǫ_{(a)} for a = 2, 3 and +ǫ_{(a)} for a = 1, 4,
γ_{1357}ǫ_{(a)} = −ǫ_{(a)} for a = 1, 2 and +ǫ_{(a)} for a = 3, 4. (B.17)
G_2: A G_2-structure defines a single invariant spinor in d = 7. This can be defined by the projections

γ_{1234}ǫ = γ_{1256}ǫ = γ_{1357}ǫ = −ǫ, (B.21)

where we have taken iγ_{1⋯7} = 1. The associative three-form (2.16) is then given by

φ_{mnp} = −iǭγ_{mnp}ǫ. (B.22)

The corresponding volume form is given by

vol_{m_1…m_7} = iǭγ_{m_1…m_7}ǫ. (B.23)
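That the three commuting projections in (B.21) single out exactly one spinor in the 8-dimensional Spin(7) spinor space can be checked numerically. The script below is our own sanity check with an assumed (not the paper's) explicit gamma-matrix basis, built from Kronecker products of Pauli matrices; γ_7 is fixed by the chirality convention iγ_{1⋯7} = 1. Each projection halves the spinor space, 8 → 4 → 2 → 1.

```python
import numpy as np
from functools import reduce

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
kron = lambda *ms: reduce(np.kron, ms)

# Hermitian 8x8 gammas for Cliff(6); gamma_7 = i gamma_1...gamma_6.
g = [kron(s1, I2, I2), kron(s2, I2, I2),
     kron(s3, s1, I2), kron(s3, s2, I2),
     kron(s3, s3, s1), kron(s3, s3, s2)]
g.append(1j * reduce(np.matmul, g))

Id = np.eye(8)
for a in range(7):                      # Clifford algebra {g_a, g_b} = 2 d_ab
    for b in range(7):
        assert np.allclose(g[a] @ g[b] + g[b] @ g[a], 2 * (a == b) * Id)
assert np.allclose(1j * reduce(np.matmul, g), Id)   # i gamma_1...7 = 1

def gam(*idx):                          # product of distinct gammas
    return reduce(np.matmul, (g[i - 1] for i in idx))

# The three commuting involutions of (B.21).
P = [gam(1, 2, 3, 4), gam(1, 2, 5, 6), gam(1, 3, 5, 7)]
for Pi in P:
    assert np.allclose(Pi @ Pi, Id)
assert np.allclose(P[0] @ P[1], P[1] @ P[0])

proj = reduce(np.matmul, [(Id - Pi) / 2 for Pi in P])
assert abs(np.trace(proj) - 1) < 1e-12  # a unique G2-invariant spinor
```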
Given γ_8 ǫ_{(1)} = ǫ_{(2)}, we have K = e^8 and φ takes the standard form (2.16). The corresponding volume form vol = e^1 ∧ ⋯ ∧ e^8 is given by

vol_{m_1…m_8} = ǭ_{(1)}γ_{m_1…m_8}ǫ_{(1)} = −ǭ_{(2)}γ_{m_1…m_8}ǫ_{(2)}. (B.26)
SU(3): The SU(3)-structure defines a single chiral complex spinor ǫ. This can be defined by the conditions

γ_{1234}ǫ = γ_{1256}ǫ = −ǫ. (B.28)

We choose the chirality iγ_{1…6}ǫ = ǫ so that γ_{12}ǫ = iǫ. The forms J and Ω are then given by

J_{mn} = −iǭγ_{mn}ǫ,  Ω_{mnp} = ǭ^c γ_{mnp}ǫ. (B.29)

Given γ_{135}ǫ = ǫ^c, we get the standard expressions

J = e^{12} + e^{34} + e^{56},  Ω = (e^1 + ie^2)(e^3 + ie^4)(e^5 + ie^6). (B.30)

The corresponding volume form is

vol_{m_1…m_6} = iǭγ_{m_1…m_6}ǫ. (B.31)

Again one can always choose a basis where ǭ = ǫ^† and ǫ^c = ǫ^*.
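The standard forms in (B.30) obey the familiar SU(3) compatibility and normalisation conditions J ∧ Ω = 0, J^3 = 6 vol and Ω ∧ Ω̄ = −8i vol. The following script (our own check, using the index conventions above) verifies all three by direct wedge-product algebra:

```python
def sign_of(seq):
    """Parity of a permutation given as a sequence of distinct ints."""
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def wedge(a, b):
    """Wedge product of forms stored as {sorted index tuple: coeff}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue
            idx = tuple(sorted(ia + ib))
            out[idx] = out.get(idx, 0) + ca * cb * sign_of(ia + ib)
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

J = {(1, 2): 1, (3, 4): 1, (5, 6): 1}

# Expand Omega = (e1 + i e2) ^ (e3 + i e4) ^ (e5 + i e6).
Omega = {(1,): 1, (2,): 1j}
for factor in [{(3,): 1, (4,): 1j}, {(5,): 1, (6,): 1j}]:
    Omega = wedge(Omega, factor)
Omega_bar = {k: v.conjugate() for k, v in Omega.items()}

assert wedge(J, Omega) == {}                          # J ^ Omega = 0
assert wedge(J, wedge(J, J)) == {(1,2,3,4,5,6): 6}    # J^3 = 6 vol
assert wedge(Omega, Omega_bar) == {(1,2,3,4,5,6): -8j}
```

The last assertion is equivalent to (i/8) Ω ∧ Ω̄ = J^3/3!, a common normalisation statement for SU(3)-structures.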
Given γ_{12}ǫ_{(1)} = ǫ_{(2)}, this gives K = e^7 and J and Ω take the standard form (B.30). The corresponding volume form vol = e^1 ∧ ⋯ ∧ e^7 is given by

vol_{m_1…m_7} = iǭ_{(1)}γ_{m_1…m_7}ǫ_{(1)} = iǭ_{(2)}γ_{m_1…m_7}ǫ_{(2)}. (B.34)

The two G_2-structures defined by ǫ_{(a)} are given by

φ^{(1)} = J ∧ K − Im Ω,  φ^{(2)} = J ∧ K + Im Ω. (B.35)

SU(2): Finally, for SU(2) the structure again defines a single complex spinor of definite chirality. We take the negative chirality

γ_{1234}ǫ = −ǫ. (B.36)

The forms J and Ω are then given by

J_{mn} ≡ J^3_{mn} = −iǭγ_{mn}ǫ,  Ω_{mn} ≡ J^2_{mn} + iJ^1_{mn} = ǭ^c γ_{mn}ǫ. (B.37)

Given γ_{12}ǫ = iǫ and γ_{13}ǫ = ǫ^c, we get the self-dual combinations

J^1 = e^{14} + e^{23},  J^2 = e^{13} + e^{42},  J^3 = e^{12} + e^{34}. (B.38)

The corresponding volume form is

vol_{m_1…m_4} = iǭγ_{m_1…m_4}ǫ. (B.39)

Again one can always choose a basis where ǭ = ǫ^† and ǫ^c = ǫ^*.
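Viewed as 4×4 matrices (J^a)_{mn} in flat indices, the three self-dual two-forms of (B.38) obey the quaternion algebra J^a J^b = −δ^{ab} 1 + ε^{abc} J^c. This is our own numerical check, not taken from the paper:

```python
import numpy as np

def two_form(*terms):
    """Antisymmetric 4x4 matrix from (index pair, coefficient) terms."""
    m = np.zeros((4, 4))
    for (a, b), c in terms:
        m[a - 1, b - 1], m[b - 1, a - 1] = c, -c
    return m

J1 = two_form(((1, 4), 1), ((2, 3), 1))   # J^1 = e14 + e23
J2 = two_form(((1, 3), 1), ((4, 2), 1))   # J^2 = e13 + e42
J3 = two_form(((1, 2), 1), ((3, 4), 1))   # J^3 = e12 + e34

eps = np.zeros((3, 3, 3))                 # eps^{123} = +1, totally antisym.
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

Js = [J1, J2, J3]
I4 = np.eye(4)
for a in range(3):
    for b in range(3):
        rhs = -(a == b) * I4 + sum(eps[a, b, c] * Js[c] for c in range(3))
        assert np.allclose(Js[a] @ Js[b], rhs)
```

In particular each J^a squares to −1, i.e. each is an almost complex structure, and J^1 J^2 = J^3 cyclically.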
vol_{m_1…m_6} = iǭ_{(1)}γ_{m_1…m_6}ǫ_{(1)} = −iǭ_{(2)}γ_{m_1…m_6}ǫ_{(2)}. (B.42)

The two SU(3)-structures defined by ǫ_{(a)} are given by

J^{(1)} = J + K^1 ∧ K^2,  Ω^{(1)} = Ω ∧ (K^1 + iK^2),
J^{(2)} = J − K^1 ∧ K^2,  Ω^{(2)} = Ω ∧ (K^1 − iK^2). (B.43)

C Almost product structures

An almost complex structure is a GL(n, ℂ)-structure on a 2n-dimensional manifold, which is characterised by a tensor J_m{}^n satisfying J · J = −1. Using this one can split the tangent space T_pM at any point into the two subspaces T_pM^+ ⊕ T_pM^−, corresponding to the +i and −i eigenvalues of J respectively. The Nijenhuis tensor for the almost complex structure is then defined in the usual way.
The almost product structure is integrable if and only if the Nijenhuis tensor vanishes.
Two commuting almost complex structures J, J′, satisfying J · J′ = J′ · J, give rise to an almost product structure

Π = J · J′. (C.4)

Suppose J and J′ are metric compatible and satisfy ∇^+J = ∇^+J′ = 0 or ∇^−J = ∇^−J′ = 0, where ∇^± is a metric connection with totally anti-symmetric torsion ±(1/2)H; the corresponding identity (C.5) involves torsion terms of the form −Π_r{}^p Π_m{}^q H_{pqn} − Π_n{}^p Π_r{}^q H_{pqm}. Using the tangent space decomposition, one finds that the only non-zero components are given by

2∇_m Π_n{}^p = J_{+n}{}^r J_−{}^{sp} H_{mrs} − J_{−n}{}^r J_+{}^{sp} H_{mrs},

and it easily follows that N(Π) = 0.

In particular, as pointed out in section 3, the almost complex structures are integrable. Moreover, the two complex structures clearly commute and thus define an almost product structure given by Π = J · J′. On the other hand, because ∇^−J = ∇^−J′ = 0 and hence ∇^−Π = 0, from (C.6) we see that there are non-zero components of the associated Nijenhuis tensor, namely those given in (C.11). Let us briefly present a simple example very explicitly. In particular, set the dilaton field to zero and B^1 = B^2 = x^1 dx^2 + x^3 dx^4. Then the two almost complex structures are given by (C.10).
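At a single point, the algebraic content of the product structure Π = J · J′ is easy to verify. The script below is our own pointwise check (evaluated at Φ = 0 and B^1 = B^2 = 0, our simplification, so that indices are raised with the flat metric): the two structures of (C.10) commute, each squares to −1, and Π · Π = +1 with a four/two split of ±1 eigenvalues.

```python
import numpy as np

def two_form(dim, *terms):
    """Antisymmetric matrix from (index pair, coefficient) terms."""
    m = np.zeros((dim, dim))
    for (a, b), c in terms:
        m[a - 1, b - 1], m[b - 1, a - 1] = c, -c
    return m

# Coordinates ordered (x1, x2, x3, x4, y, z); cf. (C.10) at Phi = 0, B = 0.
J  = two_form(6, ((1, 2), 1), ((3, 4), -1), ((5, 6),  1))
Jp = two_form(6, ((1, 2), 1), ((3, 4), -1), ((5, 6), -1))

I6 = np.eye(6)
assert np.allclose(J @ J, -I6) and np.allclose(Jp @ Jp, -I6)
assert np.allclose(J @ Jp, Jp @ J)        # the two structures commute

Pi = J @ Jp
assert np.allclose(Pi @ Pi, I6)           # Pi . Pi = +1
evals = np.linalg.eigvalsh(Pi)            # Pi is symmetric at this point
assert sorted(int(round(e)) for e in evals) == [-1, -1, -1, -1, 1, 1]
```

Of course this says nothing about integrability, which is the point of the counter-example: integrability involves derivatives of the structures, not their pointwise algebra.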
Consider the dilaton equation of motion (A.4b) as given in Appendix A for the type I case, setting F = 0 for the type II case. Suppose M_d is compact; integrating the equation of motion gives

∫_{M_d} e^{−2Φ} H ∧ *H + 2α′ ∫_{M_d} e^{−2Φ} Tr F ∧ *F = 0. (1.7)

Since the integrand in each term is positive semi-definite, we must have H = F = 0 and hence Φ is constant. Thus, we see that there are no compact solutions in type II and type I supergravities with non-zero flux H and dilaton. This vanishing theorem can of course be evaded if one includes leading-order heterotic/type I string corrections, which introduce additional tr R^2 terms in the dilaton equation of motion.
In the case of type I supergravity, one finds that the Bianchi identity together with the conditions on F for supersymmetry (see (3.22) below) imply that the last expression in (1.8) can be rewritten as minus the second term in (1.7), and again we find H = Φ = F = 0.

Until this point the discussion has focused on geometries admitting one or more Killing spinors of the same type, ǫ_+, say. This covers all static cases of the type I/heterotic theories. However, for the type II theories when H and Φ are non-zero, there are solutions to (1.1) for both ǫ_+ and ǫ_−, if both connections ∇^+ and ∇^− have special holonomy. This means that the general classification of supersymmetric geometries indicated in table 1, as well as the generalisations to d = 9, can be refined. In [7] we analysed the different ways in which probe fivebranes can wrap calibrated cycles in manifolds of special holonomy and determined the holonomies of ∇^± that are expected in the corresponding supergravity solutions, after including the back-reaction. The results are summarised in table 2. In these cases there are Killing spinors of both types ǫ_±.
…table 2. We shall present an interesting explicit example in d = 6 which shows that this is not the case. The example is a torus T^2 non-trivially fibred over a flat ℝ^4 base with non-vanishing dilaton. For a particular, carefully chosen fibration we show that ∇^+ has SU(3) holonomy while ∇^− has SU(2) holonomy. This solution thus preserves twelve supercharges, which corresponds to N = 3 supersymmetry in the remaining four spacetime dimensions. It would be interesting to see how it is related to the type IIB solutions preserving the same amount of supersymmetry with both R-R and NS-NS fluxes presented in [21].
These cases are summarised in table 1. In section 5 we present some simple explicit solutions of the type discussed in section 4, including candidate heterotic/type I compactifications based on fibrations over K3 surfaces that preserve eight supersymmetries. Section 6 discusses the cases summarised in table 2 when both ∇^+ and ∇^− have special holonomy. Section 7 presents some further explicit solutions in d = 6, including a type II example preserving 12 supersymmetries corresponding to N = 3 supersymmetry, and candidate heterotic/type I compactifications based on fibrations over K3 surfaces that preserve four supersymmetries. Section 8 concludes with some discussion and a summary of our main results.
A manifold M_d admits a G-structure if its frame bundle admits a sub-bundle with fibre group G. This implies that all tensors and, when appropriate, spinors on M_d can be decomposed globally into representations of G. A G-structure is typically equivalent to the existence of a set of globally defined G-invariant tensors, or alternatively a set of globally defined G-invariant spinors. In particular, when G ⊂ Spin(d), as is the case for G-invariant spinors, the structure defines a metric, since the corresponding sub-bundle of the frame bundle can be viewed as a set of orthonormal frames.

The G-structure is classified by the intrinsic torsion. When G ⊂ Spin(d) this is a measure of the failure of the tensors/spinors to be covariantly constant with respect to the Levi-Civita connection of the metric defined by the structure. As a result, all of the components of the intrinsic torsion are encoded in derivatives of the invariant tensors/spinors. Furthermore, the intrinsic torsion, T, then takes values in Λ^1 ⊗ g^⊥, where Λ^p is the space of p-forms and g^⊥ ⊕ g = spin(d), with g the Lie algebra of G. The intrinsic torsion can then be decomposed into irreducible G-modules, T ∈ ⊕_i W_i. We will denote specific components of T in each module W_i by W_i. Only if the intrinsic torsion completely vanishes does the manifold have G-holonomy.
For a supersymmetric background (M_d, g_d, H, Φ), where g_d is the metric on M_d, we need some non-trivial globally defined spinors satisfying (1.1). Note that the spinors are globally defined.

The combination (2.14) is the Lee form for Ψ, and the W_2 component in the 48 representation is then given by the remaining pieces of dΨ. Note that the Spin(7) manifold has Spin(7) holonomy only when the intrinsic torsion vanishes, which is equivalent to dΨ = 0. In addition, under a conformal transformation we have Ψ → e^{4f}Ψ for some function f, which implies that the metric scales as g → e^{2f}g. Such a transformation leaves the W_2 component of T invariant, while the Lee form W_1 transforms as W_1 → W_1 + 28 df.
The remaining components of dφ and d*φ encode W_3 and W_2 respectively. The G_2 manifold has G_2 holonomy if and only if the intrinsic torsion vanishes, which is equivalent to dφ = d*φ = 0. Note that under a conformal transformation φ → e^{3f}φ the metric transforms as g → e^{2f}g and hence *φ → e^{4f}*φ. Under this transformation W_1, W_2 and W_3 are invariant, while the Lee form transforms as W_4 → W_4 − 12 df. Finally, note that G_2-structures of the type W_1 ⊕ W_3 ⊕ W_4 are called integrable, as one can introduce a G_2 Dolbeault cohomology.
Particular solutions can be found whenever we have an explicit anti-self-dual Abelian instanton G on a hyper-Kähler manifold. The simplest cases are when the hyper-Kähler metric is flat. Let us present some examples just for the type II case, for simplicity, where the Bianchi identity becomes

∇^2 e^{2Φ} = −(1/2) G^2. (5.3)
…or E_8 × E_8. In type I supergravity the three-form H satisfies a modified Bianchi identity

dH = 2α′ Tr F ∧ F. (A.2)
Note that [19] includes results for the SU(n) case when dH = 0.
Note that the existence of a generic pair J ± of integrable complex structures satisfying only [J + , J − ] = 0 does not guarantee that the almost product structure Π = J + · J − is integrable. A concrete counter example is discussed in Appendix C.
Note that this is only true for D ± ǫ ± = 0 with D ± a pair of spin-connections, compatible with the metric g d , and not, for instance, if D ± are general Clifford connections.
Following recent correspondence, the authors of [16] have independently confirmed this discrepancy in [2].
[1] S. Sethi, C. Vafa and E. Witten, "Constraints on low-dimensional string compactifications," Nucl. Phys. B 480 (1996) 213, hep-th/9606122.
[2] A. Strominger, "Superstrings With Torsion," Nucl. Phys. B 274 (1986) 253.
[3] C. M. Hull, "Superstring Compactifications With Torsion And Space-Time Supersymmetry," in Turin 1985, Proceedings, Superunification and Extra Dimensions, 347-375; "Compactifications Of The Heterotic Superstring," Phys. Lett. B 178 (1986) 357.
[4] B. de Wit, D. J. Smit and N. D. Hari Dass, "Residual Supersymmetry Of Compactified D = 10 Supergravity," Nucl. Phys. B 283 (1987) 165.
[5] J. P. Gauntlett, D. Martelli, S. Pakis and D. Waldram, "G-structures and wrapped NS5-branes," hep-th/0205050.
[6] S. Salamon, "Riemannian Geometry and Holonomy Groups," Vol. 201 of Pitman Research Notes in Mathematics, Longman, Harlow, 1989.
[7] J. P. Gauntlett, N. Kim, D. Martelli and D. Waldram, "Fivebranes wrapped on SLAG three-cycles and related geometry," JHEP 0111 (2001) 018, hep-th/0110034.
[8] T. Friedrich and S. Ivanov, "Parallel spinors and connections with skew-symmetric torsion in string theory," math.dg/0102142.
[9] T. Friedrich and S. Ivanov, "Killing spinor equations in dimension 7 and geometry of integrable G_2-manifolds," math.dg/0112201.
[10] S. Ivanov, "Connection with torsion, parallel spinors and geometry of Spin(7) manifolds," math.dg/0111216.
[11] J. P. Gauntlett, J. B. Gutowski, C. M. Hull, S. Pakis and H. S. Reall, "All supersymmetric solutions of minimal supergravity in five dimensions," hep-th/0209114.
[12] S. Gurrieri, J. Louis, A. Micu and D. Waldram, "Mirror Symmetry in Generalized Calabi-Yau Compactifications," hep-th/0211102.
[13] J. P. Gauntlett and S. Pakis, "The Geometry of D=11 Killing Spinors," hep-th/0212008.
[14] P. Kaste, R. Minasian, M. Petrini and A. Tomasiello, "Nontrivial RR two-form field strength and SU(3)-structure," hep-th/0301063; "Kaluza-Klein bundles and manifolds of exceptional holonomy," JHEP 0209 (2002) 033, hep-th/0206213.
[15] K. Behrndt and C. Jeschek, "Fluxes in M-theory on 7-manifolds and G structures," hep-th/0302047.
[16] G. L. Cardoso, G. Curio, G. Dall'Agata, D. Lust, P. Manousselis and G. Zoupanos, "Non-Kaehler String Backgrounds and their Five Torsion Classes," hep-th/0211118.
[17] J. Gutowski and G. Papadopoulos, "AdS calibrations," Phys. Lett. B 462 (1999) 81, hep-th/9902034.
[18] J. Gutowski, G. Papadopoulos and P. K. Townsend, "Supersymmetry and generalized calibrations," Phys. Rev. D 60 (1999) 106006, hep-th/9905156.
[19] S. Ivanov and G. Papadopoulos, "A no-go theorem for string warped compactifications," Phys. Lett. B 497 (2001) 309, hep-th/0008232.
[20] G. Papadopoulos, "Brane solitons and hypercomplex structures," math.dg/0003024.
N = 3 warped compactifications. A R Frey, J Polchinski, hep-th/0201029Phys. Rev. D. 65A. R. Frey and J. Polchinski, "N = 3 warped compactifications," Phys. Rev. D 65 (2002) 126009 hep-th/0201029.
Supergravity description of field theories on curved manifolds and a no go theorem. J M Maldacena, C Nunez, hep-th/0007018Int. J. Mod. Phys. A. 16J. M. Maldacena and C. Nunez, "Supergravity description of field theories on curved manifolds and a no go theorem," Int. J. Mod. Phys. A 16 (2001) 822 hep-th/0007018.
The Sixteen Classes of Almost Hermitian Manifolds and Their Linear Invariants. A Gray, L M Hervella, Ann. Mat. Pura. e Appl. 282A. Gray, L. M. Hervella, "The Sixteen Classes of Almost Hermitian Manifolds and Their Linear Invariants," Ann. Mat. Pura. e Appl. 282 (1980), 1-21.
The intrinsic torsion of SU(3) and G 2 structures. S Chiossi, S Salamon, math.dg/0202282S. Chiossi and S. Salamon, "The intrinsic torsion of SU(3) and G 2 structures," math.dg/0202282
M Fernandez, A classification of Riemannian manifolds with structure group Spin. M. Fernandez, "A classification of Riemannian manifolds with structure group Spin(7),"
. Ann. Mat. Pura. Appl. 143Ann. Mat. Pura. Appl. 143 (1982), 101-122
Riemannian Manifolds with Structure Group G 2. M Fernandez, A Gray, Ann. Mat. Pura. e Appl. 32M. Fernandez, A. Gray, "Riemannian Manifolds with Structure Group G 2 ," Ann. Mat. Pura. e Appl. 32 (1982), 19-45.
Dolbeault cohomology for G 2 -manifolds. M Fernandez, L Ugarte, Geom. Dedicata. 7057M. Fernandez and L. Ugarte, "Dolbeault cohomology for G 2 -manifolds", Geom. Dedi- cata, 70 (1998) 57.
Hyperhermitian metrics with symmetry. P Gauduchon, P Tod, J. Geom. Phys. 25291P. Gauduchon and P.Tod, "Hyperhermitian metrics with symmetry", J. Geom. Phys. 25 (1998) 291.
Ten Into Four Won't Go. D Z Freedman, G W Gibbons, P C West, Phys. Lett. B. 124491D. Z. Freedman, G. W. Gibbons and P. C. West, "Ten Into Four Won't Go," Phys. Lett. B 124, (1983) 491.
Residual Supersymmetry Of Compactified D = 10 Supergravity. B De Wit, D J Smit, N D Hari Dass, Nucl. Phys. B. 283165B. de Wit, D. J. Smit and N. D. Hari Dass, "Residual Supersymmetry Of Compactified D = 10 Supergravity," Nucl. Phys. B 283, (1987) 165.
Geometry of quaternionic Kaehler connections with torsion. S Ivanov, math.dg/0003214J. Geom. Phys. 41S. Ivanov, "Geometry of quaternionic Kaehler connections with torsion," J. Geom. Phys. 41 (2002) 235 math.dg/0003214.
Octonionic Superstring Solitons. J A Harvey, A Strominger, Phys. Rev. Lett. 66549J. A. Harvey and A. Strominger, "Octonionic Superstring Solitons," Phys. Rev. Lett. 66 (1991) 549.
Seven-dimensional octonionic Yang-Mills instanton and its extension to an heterotic string soliton. M Gunaydin, H Nicolai, hep-th/9502009Addendumibid. B. 351169Phys. Lett. BM. Gunaydin and H. Nicolai, "Seven-dimensional octonionic Yang-Mills instanton and its extension to an heterotic string soliton," Phys. Lett. B 351 (1995) 169 [Addendum- ibid. B 376 (1996) 329] hep-th/9502009.
Resolution of overlapping branes. H Lu, J F Vazquez-Poritz, hep-th/0202075Phys. Lett. B. 534H. Lu and J. F. Vazquez-Poritz, "Resolution of overlapping branes," Phys. Lett. B 534 (2002) 155 hep-th/0202075.
S(1)-wrapped D3-branes on conifolds. H Lu, J F Vazquez-Poritz, hep-th/0202175Nucl. Phys. B. 633H. Lu and J. F. Vazquez-Poritz, "S(1)-wrapped D3-branes on conifolds," Nucl. Phys. B 633 (2002) 114 hep-th/0202175.
Geometric model for complex non-Kaehler manifolds with SU(3) structure. E Goldstein, S Prokushkin, hep-th/0212307E. Goldstein and S. Prokushkin, "Geometric model for complex non-Kaehler manifolds with SU(3) structure," hep-th/0212307.
Worldsheet descriptions of wrapped NS five-branes. K Hori, A Kapustin, hep-th/0203147K. Hori and A. Kapustin, "Worldsheet descriptions of wrapped NS five-branes," hep-th/0203147.
Various wrapped branes from gauged supergravities. M Naka, hep-th/0206141M. Naka, "Various wrapped branes from gauged supergravities," hep-th/0206141.
Towards the large N limit of pure N = 1 super Yang Mills. J M Maldacena, C Nunez, hep-th/0008001Phys. Rev. Lett. 86J. M. Maldacena and C. Nunez, "Towards the large N limit of pure N = 1 super Yang Mills," Phys. Rev. Lett. 86 (2001) 588 hep-th/0008001.
Complex geometry of conifolds and 5-brane wrapped on 2-sphere. G Papadopoulos, A A Tseytlin, hep-th/0012034Class. Quant. Grav. 181333G. Papadopoulos and A. A. Tseytlin, "Complex geometry of conifolds and 5-brane wrapped on 2-sphere," Class. Quant. Grav. 18 (2001) 1333 hep-th/0012034.
Fivebranes wrapped on associative threecycles. B S Acharya, J P Gauntlett, N Kim, hep-th/0011190Phys. Rev. D. 63106003B. S. Acharya, J. P. Gauntlett and N. Kim, "Fivebranes wrapped on associative three- cycles," Phys. Rev. D 63 (2001) 106003 hep-th/0011190.
The supergravity dual of a theory with dynamical supersymmetry breaking. J M Maldacena, H Nastase, hep-th/0105049JHEP. 0109J. M. Maldacena and H. Nastase, "The supergravity dual of a theory with dynamical supersymmetry breaking," JHEP 0109 (2001) 024 hep-th/0105049.
Geometry of Hyper-Kähler Connections with Torsion. G Grantcharov, Y S Poon, math.DG/9908015Commun.Math.Phys. 213G. Grantcharov, Y. S. Poon, "Geometry of Hyper-Kähler Connections with Torsion", Commun.Math.Phys. 213 (2000) 19-37, math.DG/9908015.
Calibrations on R 8. J Dadok, F R Harvey, F Morgan, Trans. Am. Math. Soc. 3071J. Dadok, F.R. Harvey and F. Morgan, "Calibrations on R 8 ," Trans. Am. Math. Soc. 307 (1988) 1.
Hyper-Kaehler manifolds and multiply intersecting branes. J P Gauntlett, G W Gibbons, G Papadopoulos, P K Townsend, Nucl. Phys. B. 500133J. P. Gauntlett, G. W. Gibbons, G. Papadopoulos and P. K. Townsend, "Hyper- Kaehler manifolds and multiply intersecting branes," Nucl. Phys. B 500 (1997) 133
Completely Solvable Gauge Field Equations In Dimension Greater Than Four. R S Ward, Nucl. Phys. B. 236381R. S. Ward, "Completely Solvable Gauge Field Equations In Dimension Greater Than Four," Nucl. Phys. B 236 (1984) 381.
Wrapped fivebranes and N = 2 super Yang-Mills theory. J P Gauntlett, N Kim, D Martelli, D Waldram, hep-th/0106117Phys. Rev. D. 64J. P. Gauntlett, N. Kim, D. Martelli and D. Waldram, "Wrapped fivebranes and N = 2 super Yang-Mills theory," Phys. Rev. D 64 (2001) 106008 hep-th/0106117.
N = 2 gauge theories from wrapped five-branes. F Bigazzi, A L Cotrone, A Zaffaroni, hep-th/0106160Phys. Lett. B. 519F. Bigazzi, A. L. Cotrone and A. Zaffaroni, "N = 2 gauge theories from wrapped five-branes," Phys. Lett. B 519 (2001) 269 hep-th/0106160.
D = 2 + 1 N = 2 Yang-Mills theory from wrapped branes. J Gomis, J G Russo, hep-th/0109177JHEP. 0110J. Gomis and J. G. Russo, "D = 2 + 1 N = 2 Yang-Mills theory from wrapped branes," JHEP 0110 (2001) 028 hep-th/0109177.
Heterotic strings with torsion. K Becker, K Dasgupta, hep-th/0209077JHEP. 0211K. Becker and K. Dasgupta, "Heterotic strings with torsion," JHEP 0211 (2002) 006 hep-th/0209077.
Compactification with flux on K3 and tori. P K Tripathy, S P Trivedi, hep-th/0301139P. K. Tripathy and S. P. Trivedi, "Compactification with flux on K3 and tori," hep-th/0301139.
Compactifications of heterotic theory on non-Kaehler complex manifolds. I. K Becker, M Becker, K Dasgupta, P S Green, hep-th/0301161K. Becker, M. Becker, K. Dasgupta and P. S. Green, "Compactifications of heterotic theory on non-Kaehler complex manifolds. I," hep-th/0301161.
D D Joyce, Compact Manifolds with Special Holonomy, Oxford Mathematical Monographs. Oxford University PressD.D. Joyce, Compact Manifolds with Special Holonomy, Oxford Mathematical Mono- graphs, Oxford University Press, 2000.
Differential geometry on complex and almost complex spaces. K Yano, MacmillanNew YorkK. Yano, "Differential geometry on complex and almost complex spaces," Macmillan, New York, 1965.
Twisted Multiplets And New Supersymmetric Nonlinear Sigma Models. S J Gates, C M Hull, M Rocek, Nucl. Phys. B. 248157S. J. Gates, C. M. Hull and M. Rocek, "Twisted Multiplets And New Supersymmetric Nonlinear Sigma Models," Nucl. Phys. B 248 (1984) 157.
Off-shell formulation of N = 2 non-linear sigma-models. A Sevrin, J Troost, arXiv:hep-th/9610102Nucl. Phys. B. 492623A. Sevrin and J. Troost, "Off-shell formulation of N = 2 non-linear sigma-models," Nucl. Phys. B 492 (1997) 623 [arXiv:hep-th/9610102].
N = 2 boundary conditions for non-linear sigma models and Landau-Ginzburg models. U Lindstrom, M Zabzine, arXiv:hep-th/0209098JHEP. 03026U. Lindstrom and M. Zabzine, "N = 2 boundary conditions for non-linear sigma models and Landau-Ginzburg models," JHEP 0302, 006 (2003) [arXiv:hep-th/0209098].
| [] |
[
"Passive Shape Locking for Multi-Bend Growing Inflated Beam Robots",
"Passive Shape Locking for Multi-Bend Growing Inflated Beam Robots"
] | [
"Rianna Jitosho ",
"Sofia Simón-Trench ",
"Allison M Okamura ",
"Brian H Do "
] | [] | [] | Shape change enables new capabilities for robots. One class of robots capable of dramatic shape change is soft growing "vine" robots. These robots usually feature global actuation methods for bending that limit them to simple, constant-curvature shapes. Achieving more complex "multibend" configurations has also been explored but requires choosing the desired configuration ahead of time, exploiting contact with the environment to maintain previous bends, or using pneumatic actuation for shape locking. In this paper, we present a novel design that enables passive, on-demand shape locking. Our design leverages a passive tip mount to apply hook-and-loop fasteners that hold bends without any pneumatic or electrical input. We characterize the robot's kinematics and ability to hold locked bends. We also experimentally evaluate the effect of hook-and-loop fasteners on beam and joint stiffness. Finally, we demonstrate our proof-of-concept prototype in 2D. Our passive shape locking design is a step towards easily reconfigurable robots that are lightweight, lowcost, and low-power. | 10.1109/robosoft55895.2023.10122027 | [
"https://export.arxiv.org/pdf/2303.02335v1.pdf"
] | 257,365,818 | 2303.02335 | c579be6aa1fe7344459312adbae24e0c114664d6 |
Passive Shape Locking for Multi-Bend Growing Inflated Beam Robots
Rianna Jitosho
Sofia Simón-Trench
Allison M Okamura
Brian H Do
Passive Shape Locking for Multi-Bend Growing Inflated Beam Robots
Shape change enables new capabilities for robots. One class of robots capable of dramatic shape change is soft growing "vine" robots. These robots usually feature global actuation methods for bending that limit them to simple, constant-curvature shapes. Achieving more complex "multibend" configurations has also been explored but requires choosing the desired configuration ahead of time, exploiting contact with the environment to maintain previous bends, or using pneumatic actuation for shape locking. In this paper, we present a novel design that enables passive, on-demand shape locking. Our design leverages a passive tip mount to apply hook-and-loop fasteners that hold bends without any pneumatic or electrical input. We characterize the robot's kinematics and ability to hold locked bends. We also experimentally evaluate the effect of hook-and-loop fasteners on beam and joint stiffness. Finally, we demonstrate our proof-of-concept prototype in 2D. Our passive shape locking design is a step towards easily reconfigurable robots that are lightweight, lowcost, and low-power.
I. INTRODUCTION
Robots traditionally feature a fixed morphology incapable of changing after design. However, in many real-world applications it is advantageous to have robots that can change their shape to adapt to tasks rather than being immutable.
Inflatable robots inherently offer some reconfigurability, from a compact, stowed state to a deployed state. In this work, we focus on "vine" robots, a class of growing inflated beam robots previously developed for exploration [1]- [5] and manipulation [6]- [8]. They are capable of significant length change by "growing" via tip eversion. Vine robots are also capable of dramatic shape change [6].
Many implementations of vine robots feature global bending actuators such as cables or pneumatic muscles routed along the length of the robot [4]-[6], [9]. These actuators shorten one side of the vine robot, resulting in bending along the length of the entire robot. However, they are only able to produce a single, constant-curvature bend [4]-[6], [9]. The ability to form multiple bends along the length of the vine body increases its dexterity. In this work, we present a design that achieves this by pairing global bending actuators with hook-and-loop fasteners (commonly known as Velcro). By using hook-and-loop fasteners to enforce strain limits, we can control which regions of the vine robot can be shortened by the bending actuators. While there are other existing methods for achieving multiple bends, which we discuss in Sec. II [10]-[16], our design 1) enables multi-bending on demand, 2) achieves passive shape locking without relying on environmental interactions, and 3) is easy to fabricate and reset during use. To the best of our knowledge, our design is the first to combine all of these features.

*These authors contributed equally to this work.
We see our work as a step towards completely reconfigurable structures. One potential area where our work could be applied is reconfigurable inflated deployable space structures [17], [18]. Another area is construction, where rapid inflated structures could be fabricated on-site without the use of traditional fabrication materials [19].
The contributions of our work are as follows:
1) A design for multi-bend growing robots that allows choosing a deployed shape on demand, achieves passive shape locking without relying on external contact forces, and is easy to fabricate and reset.
2) Models and experimental characterizations that provide guidelines for implementing our design.
3) Demonstrations on a physical prototype validating the performance of our shape locking design integrated with a vine robot.
Fig. 1 shows example deployments of our prototype. Fig. 2 shows an overview of our system.
II. PRIOR WORK ON MULTI-BEND VINE ROBOTS
Previous work on achieving multiple bends for vine robots can be categorized into three general strategies: preformed bending, contact-based bending, and shape locking. Here we provide key points of comparison relevant to our work; Table I summarizes these points. Additional details about these methods can be found in Blumenschein et al. [20].

Fig. 2. Overall system design. Pulling on the cable would cause bending in the blue unlocked region. Growing the vine via tip eversion would push the tip mount forward and apply hooks onto the currently exposed loops.
Preformed bending involves choosing the desired configuration ahead of time. Typically, this is achieved by pinching material at desired bend locations along the vine body such that the inflated shape bends to match a desired configuration. One method for holding these pinches in place is to tape over the pinch [14]. Another option for vine robots made from thermoplastics is to heat, reshape, and cool the body material such that the deployed and desired configurations match [10], [11]. Vine robots that utilize preformed bending are typically easy to fabricate and allow for passive multibending, but require the user to commit to a specific deployed configuration ahead of time.
Contact-based bending leverages contact with the environment to maintain previous bends. Contact-based vine robots use simple, constant-curvature bending actuation [12], or carefully chosen preformed bends [13], and then utilize contact forces to achieve multi-bend configurations. Both examples are efficient in that they use the environment rather than additional actuation to achieve multi-bending. This leads to simpler, lighter, and cheaper systems. However, their ability to create multiple bends is dependent on being deployed in an environment that provides the necessary contact forces. In addition, the work in [13] required knowledge of the environment and desired configuration a priori.
Shape locking involves maintaining multi-bends without requiring external elements. Passive shape locking has previously been achieved by a series of latches around and along the vine robot body [14]. These latches held pinches of material, and when opened due to pneumatic pressure, would cause asymmetric lengthening at the tip and thus steer the tip of the vine robot. While effective and simple during deployment, these vine robots were time consuming to manufacture and reset for subsequent use, making them impractical. There has also been work on active shape locking. These methods are able to "lock" or "unlock" regions of the robot body such that global bending actuators only cause bending in the unlocked regions. Wang et al. used pressurized chambers that can grow along the sides of a vine robot [15]. These chambers grow independently from the main vine body such that the vine robot is shape-locked from its base to the tip of the chambers. Do et al. fabricated a vine with segments that stiffen via layer jamming [16]. In this method, discrete bending occurs at the unstiffened sections of the vine. Both of these active shape locking designs enable environment-independent multi-bending but require additional actuation to maintain their bends.

Our proposed design addresses the drawbacks of each prior strategy for multi-bending. First, our design allows the robot configuration to be chosen at the time of deployment. Second, our design achieves multi-bending without relying on contact from the environment. Third, our design locks its shape passively. Finally, our design is easy to fabricate and simple to reset between deployments.
III. DESIGN AND IMPLEMENTATION
Here, we describe the details of the shape locking concept, passive tip mount, and fabricated proof-of-concept prototype.
A. Utilizing Hook-and-Loop Fasteners for Holding Bends
Our proposed design utilizes hook-and-loop fasteners, and in this paper, we refer to the constituent parts of these fasteners as "hooks" and "loops". Our design applies hooks onto loops on the vine body at a set distance from the tip, leaving the entire length locked except for the most distal region. Leaving the most distal region unlocked enables our bending actuators (cables) to bend only the unlocked region, even though the cables route along the full length of the vine. Fig. 3 illustrates this. In a), the vine grows straight. In b), cable tension T is applied to one side, which causes bending in the unlocked section of the vine (blue). In c), the vine continues growing with the tension maintained, which forms and locks a bend. In d), the cable tension is released and the proximal portion of the previous bend (orange) remains locked. This process can be repeated to form additional bends, and previously locked bends will remain in place.
We leverage three key characteristics of hook-and-loop fasteners to achieve this. First, these fasteners have indeterminate match-up between the hooks and loops. Thus, any part of the hooks can engage with any part of the loops, enabling the freedom to pinch material anywhere along the vine body for forming bends. Second, these fasteners are easily engaged but difficult to disengage, enabling the robot to apply fasteners easily and hold bends passively. Third, the loops are flexible enough such that when attached to the vine body they do not hinder robot eversion.
B. Tip Mount for Passive Fastener Application
We designed an external tip mount that is able to passively apply the hook-and-loop fasteners onto the vine, shown in Fig. 2. Our tip mount is pushed forward by the eversion of the vine. There are three legs corresponding to three lines of loops on the vine. Each leg has a slot to guide the hooks onto the loops. The hooks are then pressed down with the far edge of the leg. A torsion spring pushes the legs inward to maintain contact with the vine. The legs apply the hooks to the loops at a fixed distance away from the tip, which allows the robot to have its most distal region unlocked and able to bend. The length of the legs determines the length of the unlocked region. The rollers on the tip mount reduce the friction between the outside vine material and the tip mount, allowing vine material to evert easily.
C. System Implementation
We fabricated a complete vine robot system based on the components described in the previous sections. The vine body is 40-denier thermoplastic polyurethane (TPU)-coated ripstop nylon that is sealed using an ultrasonic welder. The diameter of the vine is 10.8 cm, which provides sufficient surface area to attach the fasteners and cables. The length is 2.3 m to ensure there is enough length to make more than one bend. The loops (McMaster 9652K167) are attached to the outside of the vine with double sided tape. For bending actuation, there are plastic stoppers and cables (spectra fiber braided fishing line) routed through them. Stoppers are used on the outside of the vine to hold the cable to the body. The stoppers are cut from 5 mm outer diameter Teflon tubing and taped onto the vine. Having more and shorter stoppers creates a smoother curve than fewer and longer stoppers, but this increases the fabrication time. In our implementation, the stoppers are 19 mm long and are spaced 19 mm apart, allowing for a maximum contraction ratio of 0.5.
The tip mount is 3D printed with polylactic acid (PLA) filament. The leg length was chosen based on empirical tests. A minimum length is needed to provide an unlocked region, but longer legs become cumbersome. There is a 270° torsion spring to ensure that the legs have sufficient range of motion to maintain contact with the vine body for all possible bend angles. We chose a spring constant that would ensure the legs applied pressure to the vine without causing noticeable deformation. The depth of the tip mount was chosen empirically such that it was deep enough to capture the tip, but short enough so as not to impede vine bending. The unattached section of each hook strip is coiled (Fig. 9) and is pulled forward by the tip mount during deployment.
IV. MODELING
In this section, we present models that describe capabilities for any general implementation of our proposed shape locking design. First, we describe the relationship between relevant design parameters and the minimum possible radius of curvature for bends created during deployment. These kinematic relationships allow 1) the designer to set parameters according to the curvature of bends they will need to create and 2) the human operator to understand what is required to achieve a desired configuration. Second, we describe a relationship that enables the designer to understand the tradeoffs between higher bend curvature and higher beam stiffness when implementing our design.
A. Kinematics
To describe the kinematics of our multi-bending vine, we first consider the bending of an inflated beam with cables and stoppers. The relevant geometric parameters are illustrated in Fig. 4. To achieve the maximum bend angle θ (and minimum radius of curvature R), we pull on one cable until all its stoppers are in contact. If each stopper has length l_s and the gap between adjacent stoppers is length l_g, then we define the contraction ratio a as:
a := (L_0 − L_S) / L_0 = l_g / (l_s + l_g)    (1)
where L_0 is the original length of the inflated beam and L_S is its shortened length due to pulling on the cable. Using the relationship for arc lengths and the associated subtended angles, we can form the following two relationships:
L_0 = (R + r)θ    (2)
L_S = (R − r)θ    (3)
where R is the bend's radius of curvature, r is the radius of the inflated beam, and θ is the bend angle. By combining these equations with Eq. 1, we have the following model for the bend's radius of curvature:
R = r (2 − a) / a.    (4)
Intuitively, with higher contraction ratios we can make tighter bends, and we can tune a according to the highest-curvature bend we expect the system to make during use. However, we are limited in that l_s, l_g ≥ 0 for fabrication, which means 0 ≤ a ≤ 1 (Eq. 1). This results in R ≥ r (Eq. 4).
To apply this relationship to the full multi-bending vine, consider what happens during deployment. As a simple strategy for achieving a desired configuration, the vine can either be grown with zero tension in the cables or with a cable tension T that brings all the stoppers in the unlocked region into contact. Thus, the overall shape can be composed of a series of bends, each with radius of curvature R, and straight line segments. To achieve some desired bend angle θ, we would grow the vine by length L = (R + r)θ while maintaining the cable tension T.
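As a concrete illustration, Eqs. 1, 2, and 4 can be collected into a few lines of Python. This is our own sketch (the function names are ours, not from any released codebase); the dimensions in the example match our prototype (19 mm stoppers and gaps, 10.8 cm diameter beam).

```python
import math

def contraction_ratio(l_s, l_g):
    # Eq. 1: fraction of side length removed when all stoppers touch.
    return l_g / (l_s + l_g)

def bend_radius(r, a):
    # Eq. 4: radius of curvature of the beam centerline at full contraction.
    return r * (2 - a) / a

def growth_length(r, a, theta):
    # Eq. 2: outer arc length to evert, under tension, for bend angle theta (rad).
    return (bend_radius(r, a) + r) * theta

a = contraction_ratio(0.019, 0.019)       # prototype stoppers: 19 mm / 19 mm
R = bend_radius(0.054, a)                 # prototype beam radius: 5.4 cm
L = growth_length(0.054, a, math.pi / 2)  # growth needed for a 90-degree bend
print(a, R, L)
```

With these parameters, a = 0.5 and the tightest achievable bend has R = 3r, so the operator knows how much length to evert under tension for each planned bend angle.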
B. Bend-Holding Capability
An inflated beam increases in stiffness when its internal pressure increases. However, inflated beams also generate a resistance torque when bent which acts to straighten out the beam. This torque scales with internal pressure. As a result, there exists a design trade-off wherein we would like to operate our vine at higher internal pressures for added stiffness, but we still require the hook-and-loop fasteners to overcome the resistance torque to lock bends.
We consider an inflated beam with a bend locked by hook-and-loop fasteners (Fig. 5) since this is comparable to a short section of our robot. The bend angle is θ, the beam radius is r, and the center of rotation is O. We assume one end of the beam is fixed and find a model that describes the pressure required to cause fastener separation. For this, we consider one half of the beam as shown in Fig. 5, and treat O as a pin joint. There exists a tension T from the hook-and-loop fastener as well as a resistance torque τ that tries to straighten out the bent beam. Before fastener separation, we have moment balance about O:
τ − r T cos α − (r tan(θ/2) + d) T sin α = 0    (5)
where d is the distance shown in Fig. 5 and α is the angle of T with respect to horizontal. Assuming symmetry about the line s in Fig. 5a, α = (π − θ)/2. Nesler et al. provide a closed-form expression for the resistance torque τ that results from bending an inflated beam:
τ = P dV(θ)/dθ = −π r^3 P (tan^2(θ/2) + 1)    (6)
where P is the gauge pressure, V is the volume, r is the radius, and θ is the angular deflection of the beam [21]. We evaluate the maximum possible tension prior to fastener separation by using a strength criterion similar to that presented by Salama et al. [22]:
(σ_n / σ*)^2 + (σ_s / τ*)^2 ≤ 1    (7)
where σ_n and σ_s are the actual normal and shear stresses, respectively, and σ* and τ* are the pure normal and pure shear stresses required for separation, respectively, which are determined experimentally. From Fig. 5b, we see that σ_n = T sin α / A and σ_s = T cos α / A, where A is the area of fastener that experiences stresses. From Salama et al. [22], A = 8wt, where w is the fastener width and t is the fastener thickness.
This yields a two-step process for computing the minimum pressure P for fastener separation. First, compute the maximum tension T within the strength criterion using Eq. 7. Second, use Eq. 5-6 to solve for the P that results in the maximum T .
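The two-step procedure is straightforward to implement. The sketch below is ours, not from the paper's code: the fastener separation stresses σ* and τ* in the example are illustrative placeholders, since in practice they must be measured for the specific fastener.

```python
import math

def max_fastener_tension(w, t, sigma_star, tau_star, alpha):
    # Step 1 (Eq. 7): largest tension satisfying the strength criterion,
    # with sigma_n = T sin(alpha)/A, sigma_s = T cos(alpha)/A, and A = 8wt.
    A = 8.0 * w * t
    return A / math.sqrt((math.sin(alpha) / sigma_star) ** 2 +
                         (math.cos(alpha) / tau_star) ** 2)

def separation_pressure(r, d, theta, w, t, sigma_star, tau_star):
    # Step 2 (Eqs. 5-6): gauge pressure whose resistance torque magnitude
    # equals the moment supplied by the maximum fastener tension about O.
    alpha = (math.pi - theta) / 2.0
    T = max_fastener_tension(w, t, sigma_star, tau_star, alpha)
    moment = r * T * math.cos(alpha) + (r * math.tan(theta / 2) + d) * T * math.sin(alpha)
    return moment / (math.pi * r ** 3 * (math.tan(theta / 2) ** 2 + 1))

# Example: 4 cm beam radius, 90-degree bend, hypothetical fastener properties.
P = separation_pressure(r=0.04, d=0.01, theta=math.pi / 2,
                        w=0.02, t=0.002, sigma_star=1e5, tau_star=1e5)
print(f"minimum separation pressure: {P:.0f} Pa")
```

For the measured validation of this model on our beam, see Sec. V-A.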
V. EXPERIMENTAL CHARACTERIZATION
In this section, we perform multiple experiments to quantify the shape locking performance of our design. Specifically, we consider the system's ability to hold bends in spite of pressurization of the main body, resist body deflections due to external forces, and form new bends while maintaining previously locked bends.
A. Verifying Bend-Holding Capability
We experimentally verify our model presented in Sec. IV-B by measuring the minimum pressure required to initiate separation of hooks and loops on a bend held by the hook-and-loop fastener (Fig. 6). For our experiment, we used an inflated beam with radius 4.0 cm and length 41 cm. The beam also had a strip of loops for locking bends. For a single experiment trial, we first created a bend and locked the bend by applying the hooks onto the loops. We then measured the bend angle. Finally, we increased the pressure until the hooks began to separate from the beam. We repeated this process for 18 different bend angles. Fig. 6 shows the agreement between the measurements and model, and the root mean square error (RMSE) is 2.7 kPa. The RMSE is small relative to the typical body pressure of our vines (about 7 kPa).
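The RMSE is computed in the standard way over the trials. For reference, a minimal helper (ours; the sample pressures below are placeholders, not the measured data from the 18 trials) is:

```python
import math

def rmse(predicted, measured):
    # Root mean square error between model predictions and measurements.
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(measured))

# Hypothetical separation pressures in kPa for three trials.
model_kpa = [5.0, 7.5, 10.0]
data_kpa = [5.5, 7.0, 10.5]
print(f"RMSE = {rmse(model_kpa, data_kpa):.2f} kPa")
```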
B. Beam Stiffness of Unlocked vs. Locked Inflated Beams
Here, we characterize the added stiffness that arises from applying hook-and-loop fasteners onto inflated beams. Fig. 7 shows the experimental setup. We used an inflated beam made out of TPU coated ripstop nylon with length 40 cm, diameter 10.8 cm, and gauge pressure 6.9 kPa. To measure the stiffness, one end of the beam was fixed by a clamp pressed onto an internal ring, and a load on the opposite end of the beam was applied with a force gauge (Mark-10, NY) that moved at constant speed. Stiffness was then computed from this force-displacement data. We executed this process for a beam without fasteners and a beam with a single strip of hook-and-loop fasteners; we refer to these cases as unlocked and locked, respectively. We computed a stiffness of 152 N/m and 199 N/m for the unlocked and locked beams, respectively. The additional 47 N/m in stiffness aids the locked beam in resisting forces from the environment and bending cables. We are interested in increasing this stiffness change in future work.
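The stiffness values above come from a linear fit to the force-displacement data. A minimal version of that computation (our sketch; the sample points are illustrative, not the recorded measurements) is:

```python
def fit_stiffness(displacement, force):
    # Least-squares slope of force (N) vs. displacement (m) gives stiffness (N/m).
    n = len(displacement)
    xm = sum(displacement) / n
    ym = sum(force) / n
    num = sum((x - xm) * (y - ym) for x, y in zip(displacement, force))
    den = sum((x - xm) ** 2 for x in displacement)
    return num / den

disp = [0.00, 0.01, 0.02, 0.03, 0.04]  # m (illustrative)
load = [0.0, 1.5, 3.1, 4.6, 6.0]       # N (illustrative)
print(f"stiffness = {fit_stiffness(disp, load):.0f} N/m")
```

Fitting a single slope is appropriate here because the beam's tip response is approximately linear over the small deflections applied by the force gauge.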
C. Effect of Cable Tension on Unlocked vs. Locked Bends
Typically, applying tension to cables along a vine robot utilizing a cable and stopper implementation results in a constant curvature along the entire length of the vine. However, by having unlocked and locked regions of a vine, we change how the cables influence the robot configuration. Here we quantify the effect of cable tension on bends by considering the two scenarios that appear during deployment of our system. First, we consider applying cable tension to an unlocked beam. On our full system, the most distal region of the vine is unlocked, and we want applied tension to cause this region to bend so that we steer the direction of vine growth. Second, we consider applying cable tension that opposes an existing, locked bend. On our full system, most of the everted vine is locked, and we do not want subsequently applied tension to cause these regions to change shape. The inset illustrations in Fig. 8 show these two cases, and in both, a cable tension T causes a change in tip angle θ . The left, blue beam depicts cable tension forming a new bend, and the right, orange beam depicts cable tension disturbing a previously locked bend.
For the two scenarios previously described, we measured the relationship between cable tension and tip angle deflection. Both measurements were taken with an inflated beam with cables and stoppers for bending as well as loops for the locked bend. The cable tension was measured with a force gauge on a linear rail, and the tip angle was measured with a motion capture system (OptiTrack). The measured data is shown in Fig. 8. Our measurements verify the general trend that locked bends require more cable tension to cause beam deflection. The exact values for required cable tension would depend on vine robot parameters such as the length of the unlocked region. One limitation of our method is that it does not provide idealized "locking": for this specific experiment, forming bends requires up to 10 N, which results in a disturbance of 10° or less for locked bends. In future work, we plan to further reduce this disturbance.
VI. DEMONSTRATIONS
We used our fabricated prototype to demonstrate the ability of our proposed system to achieve multi-bending via passive shape locking. To show this, we used the same prototype to achieve two different multi-bend configurations. Fig. 9a shows the first demonstration, in which the vine i) grows to the right and locks that bend in place, ii) grows straight briefly before bending left, and iii) follows the path straight again. This shows the vine's ability to make bends in different directions. In the second demonstration, a different configuration was achieved in which the vine made two bends in the same direction. In Fig. 9b, the vine i) grows straight and bends right, ii) grows straight again, and iii) turns right again. The curvature of the bends in the desired configuration was set based on the kinematics presented in Sec. IV-A. In practice, we found it difficult to achieve the theoretical maximum curvature due to limitations in manual operation.
To measure the ability of our system to achieve a desired configuration, we analyzed the final configuration for the demonstration in Fig. 9a. We used image processing to extract points along the deployed vine and compared this to the desired configuration (Fig. 10). To quantify the accuracy, we took evenly spaced points along the deployed and desired configurations and evaluated the Euclidean distance between pairs of corresponding points. The average distance (18 mm) is small relative to the diameter of our vine (108 mm).
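The accuracy metric described here can be reproduced by resampling both paths to the same number of evenly spaced points by arc length and averaging the pairwise Euclidean distances. A minimal sketch follows; all coordinates are made-up placeholders, not the extracted data from the demonstration:

```python
import numpy as np

def resample(path, n):
    """Resample a 2-D polyline to n points evenly spaced by arc length."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    t = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(t, s, path[:, 0]),
                     np.interp(t, s, path[:, 1])], axis=1)

def mean_deviation(actual, desired, n=50):
    """Average Euclidean distance between corresponding evenly spaced points."""
    a, d = resample(actual, n), resample(desired, n)
    return np.linalg.norm(a - d, axis=1).mean()

# Hypothetical deployed vs. desired paths (units: mm).
desired = [(0, 0), (500, 0), (500, 500)]
actual = [(0, 10), (495, 15), (510, 500)]
print(mean_deviation(actual, desired))
```

Note that this pairs points by normalized arc length along each path, which matches the "evenly spaced points" pairing described in the text.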
VII. CONCLUSION AND FUTURE WORK
In this work, we presented a passive shape locking system to enable multi-bend vine robots that are configurable on demand without relying on environment contact and are easily reset and manufactured. We described models that aid in choosing design parameters, presented experiments that quantify the performance of our design, and provided demonstrations that validate the vine robot's ability to achieve accurate multi-bending. In the future, we plan to integrate a retracting mechanism that autonomously removes hooks, explore containment or routing options for the unattached section of hooks, and characterize how well this design scales to longer vines. Our system showed consistent behavior in testing, but we would like to formally characterize robustness and repeatability. Finally, we are interested in demonstrating 3D shapes, exploring other ways to lock bends passively, and investigating how to further rigidize the deployed robot. Our work is a step towards lightweight, low-cost, and low-power reconfigurable deployed inflated structures.
Foundation grant 2024247, a National Science Foundation Graduate Research Fellowship, the U.S. Department of Energy, National Nuclear Security Administration, Office of Defense Nuclear Nonproliferation Research and Development (DNN R&D) under subcontract from Lawrence Berkeley National Laboratory; and the United States Federal Bureau of Investigation contract 15F06721C0002306. The authors are with the Dept. of Mechanical Engineering, Stanford University, Stanford, CA 94305, USA. Email: {rjitosho, sofiast, aokamura, brianhdo}@stanford.edu
Fig. 1. Photos showing various final robot configurations possible with our proposed shape locking design.
Fig. 4. Relevant geometric values for solving robot kinematics. For an inflated beam with initial length L0 and radius r, pulling a cable until all stoppers (in red) are in contact yields the contracted length Ls. The resulting bend has angle θ and radius of curvature R.
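A minimal sketch of the kinematics implied by the Fig. 4 caption, under a constant-curvature assumption in which the outer fiber has length L0 = θ(R + r) and the contracted cable side has length Ls = θ(R − r). This reading of the geometry is our own interpretation of the figure, not equations quoted from the paper:

```python
import math

def bend_kinematics(L0, Ls, r):
    """Bend angle theta (rad) and radius of curvature R for a constant-curvature
    inflated beam of radius r whose cable side is contracted from L0 to Ls.
    Assumed geometry: outer fiber L0 = theta*(R + r), inner fiber Ls = theta*(R - r),
    so subtracting gives theta and adding gives R."""
    theta = (L0 - Ls) / (2.0 * r)
    R = r * (L0 + Ls) / (L0 - Ls)
    return theta, R

# Made-up values loosely based on the prototype dimensions (metres).
theta, R = bend_kinematics(L0=0.40, Ls=0.23, r=0.054)
print(math.degrees(theta), R)
```

Under this assumption, fully engaging the stoppers fixes Ls and therefore fixes both the bend angle and its radius of curvature.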
Fig. 5. Relevant parameters for an inflated beam (orange) with a bend held by a hook-and-loop fastener (purple). a) Bent inflated beam fixed at one end. b) Free body diagram for half of an inflated beam. There exists a resistance torque τ due to bending the beam and a tension T from the hooks.
Fig. 6. Minimum pressure to cause hook-and-loop fastener separation. Experimental values are in red, and the modeled relationship is in blue.
Fig. 7. Experimental setup to measure the beam stiffness of locked and unlocked inflated beams.
Fig. 9. Two demonstrations of our prototype deploying to a desired configuration. Excess hooks are coiled and pulled forward by the tip mount.
Fig. 10. Comparison of the final configuration for our physical proof of concept prototype versus the desired configuration. The prototype was grown and steered by a human operator in real time. The average distance between evenly spaced points on the desired configuration and the corresponding points on the actual configuration is 18 mm.
TABLE I. Comparison of soft growing multi-bend robots (rows include preformed designs [10]; the table body is not recoverable from the extraction).
Fig. 8. Change in bend angle versus applied cable tension for unlocked and locked beams. The inset illustration shows the two loading cases. For an unlocked beam, applying cable tension T causes an initially straight beam (light blue) to form a bend (dark blue) with a change in tip angle θ. For a locked beam (light orange), applying cable tension T disturbs a locked bend with a change in tip angle θ (dark orange). The measured data shows that locked bends require more cable tension to cause beam deflection.
ACKNOWLEDGEMENT
The authors thank Alexander Kübler for design discussions.
REFERENCES
P. A. der Maur, B. Djambazi, Y. Haberthür, P. Hörmann, A. Kübler, M. Lustenberger, S. Sigrist, O. Vigen, J. Förster, F. Achermann et al., "Roboa: Construction and evaluation of a steerable vine robot for search and rescue applications," in 2021 IEEE 4th International Conference on Soft Robotics, 2021, pp. 15-20.
J. Luong, P. Glick, A. Ong, M. S. deVries, S. Sandin, E. W. Hawkes, and M. T. Tolley, "Eversion and Retraction of a Soft Robot Towards the Exploration of Coral Reefs," in IEEE International Conference on Soft Robotics, 2019, pp. 801-807.
N. D. Naclerio, C. M. Hubicki, Y. O. Aydin, D. I. Goldman, and E. W. Hawkes, "Soft Robotic Burrowing Device with Tip-Extension and Granular Fluidization," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 5918-5923.
M. M. Coad, L. H. Blumenschein, S. Cutler, J. A. R. Zepeda, N. D. Naclerio, H. El-Hussieny, U. Mehmood, J.-H. Ryu, E. W. Hawkes, and A. M. Okamura, "Vine robots: Design, teleoperation, and deployment for navigation and exploration," IEEE Robotics and Automation Magazine, vol. 27, no. 3, pp. 120-132, 2020.
J. D. Greer, T. K. Morimoto, A. M. Okamura, and E. W. Hawkes, "A soft, steerable continuum robot that grows via tip extension," Soft Robotics, vol. 6, no. 1, pp. 95-108, 2019.
L. H. Blumenschein, N. S. Usevitch, B. Do, E. W. Hawkes, and A. M. Okamura, "Helical actuation on a soft inflated robot body," in IEEE International Conference on Soft Robotics, 2018, pp. 245-252.
F. Stroppa, M. Luo, K. Yoshida, M. M. Coad, L. H. Blumenschein, and A. M. Okamura, "Human interface for teleoperated object manipulation with a soft growing robot," in IEEE International Conference on Robotics and Automation, 2020, pp. 726-732.
S. Jeong, M. M. Coad, L. H. Blumenschein, M. Luo, U. Mehmood, J. Kim, A. M. Okamura, and J. Ryu, "A tip mount for transporting sensors and tools using soft growing robots," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2020, pp. 8781-8788.
J. D. Greer, T. K. Morimoto, A. M. Okamura, and E. W. Hawkes, "Series pneumatic artificial muscles (sPAMs) and application to a soft continuum robot," in IEEE International Conference on Robotics and Automation, 2017, pp. 5503-5510.
N. Agharese, T. Cloyd, L. H. Blumenschein, M. Raitor, E. W. Hawkes, H. Culbertson, and A. M. Okamura, "Hapwrap: Soft growing wearable haptic device," in IEEE International Conference on Robotics and Automation, 2018, pp. 5466-5472.
P. Slade, A. Gruebele, Z. Hammond, M. Raitor, A. M. Okamura, and E. W. Hawkes, "Design of a soft catheter for low-force and constrained surgery," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 174-180.
M. Selvaggio, L. A. Ramirez, N. D. Naclerio, B. Siciliano, and E. W. Hawkes, "An obstacle-interaction planning method for navigation of actuated vine robots," in IEEE International Conference on Robotics and Automation, 2020, pp. 3227-3233.
J. D. Greer, L. H. Blumenschein, R. Alterovitz, E. W. Hawkes, and A. M. Okamura, "Robust navigation of a soft growing robot by exploiting contact with the environment," International Journal of Robotics Research, vol. 39, no. 14, pp. 1724-1738, 2020.
E. W. Hawkes, L. H. Blumenschein, J. D. Greer, and A. M. Okamura, "A soft robot that navigates its environment through growth," Science Robotics, vol. 2, no. 8, p. eaan3028, 2017.
S. Wang, R. Zhang, D. A. Haggerty, N. D. Naclerio, and E. W. Hawkes, "A Dexterous Tip-extending Robot with Variable-length Shape-locking," in IEEE International Conference on Robotics and Automation, May 2020, pp. 9035-9041.
B. H. Do, V. Banashek, and A. M. Okamura, "Dynamically reconfigurable discrete distributed stiffness for inflated beam robots," in IEEE International Conference on Robotics and Automation, 2020, pp. 9050-9056.
A. Viquerat, M. Schenk, V. Lappas, and B. Sanders, "Functional and Qualification Testing of the InflateSail Technology Demonstrator," in AIAA Spacecraft Structures Conference, 2015, pp. 1627-1638.
V. Peypoudat, B. Defoort, D. Lacour, P. Brassier, O. Le Couls, S. Langlois, S. Liénard, M. Bernasconi, and M. Götz, "Development of a 3.2m-long Inflatable and Rigidizable Solar Array Breadboard," in AIAA Structures, Structural Dynamics and Materials Conference, 2005, pp. 1881-1886.
S. Van Dessel, A. Chini, and A. Messac, "Feasibility of Rigidified Inflatable Structures for Housing," Journal of Architectural Engineering, vol. 9, pp. 1-10, 2003.
L. H. Blumenschein, M. M. Coad, D. A. Haggerty, A. M. Okamura, and E. W. Hawkes, "Design, modeling, control, and application of everting vine robots," Frontiers in Robotics and AI, vol. 7, pp. 90-113, 2020.
C. R. Nesler, T. A. Swift, and E. J. Rouse, "Initial Design and Experimental Evaluation of a Pneumatic Interference Actuator," Soft Robotics, vol. 5, no. 2, pp. 138-148, 2018.
M. Salama, H. Fang, and M. Lou, "Resistive Deployment of Inflatable Structures Using Velcro," Journal of Spacecraft and Rockets, vol. 39, no. 5, pp. 711-716, 2002.
| [] |
[
"Controlled Diversity with Preference : Towards Learning a Diverse Set of Desired Skills",
"Controlled Diversity with Preference : Towards Learning a Diverse Set of Desired Skills"
] | [
"Maxence Hussonnois [email protected] \nDeakin University Geelong\n2 2Australia\n",
"Thommen George Karimpanal [email protected] \nDeakin University Geelong\n2 2Australia\n",
"Santu Rana [email protected] \nDeakin University Geelong\nAustralia\n"
] | [
"Deakin University Geelong\n2 2Australia",
"Deakin University Geelong\n2 2Australia",
"Deakin University Geelong\nAustralia"
] | [
"ACM Reference Format"
] | Autonomously learning diverse behaviors without an extrinsic reward signal has been a problem of interest in reinforcement learning. However, the nature of learning in such mechanisms is unconstrained, often resulting in the accumulation of several unusable, unsafe or misaligned skills. In order to avoid such issues and ensure the discovery of safe and human-aligned skills, it is necessary to incorporate humans into the unsupervised training process, which remains a largely unexplored research area. In this work, we propose Controlled Diversity with Preference (CDP) 1 , a novel, collaborative human-guided mechanism for an agent to learn a set of skills that is diverse as well as desirable. The key principle is to restrict the discovery of skills to those regions that are deemed to be desirable as per a preference model trained using human preference labels on trajectory pairs. We evaluate our approach on 2D navigation and Mujoco environments and demonstrate the ability to discover diverse, yet desirable skills. | 10.5555/3545946.3598755 | [
"https://export.arxiv.org/pdf/2303.04592v1.pdf"
] | 257,405,433 | 2303.04592 | 28544cb4c409bde8239f03b595bb82219a35aecd |
Controlled Diversity with Preference : Towards Learning a Diverse Set of Desired Skills
2023
Maxence Hussonnois [email protected]
Deakin University Geelong
2 2Australia
Thommen George Karimpanal [email protected]
Deakin University Geelong
2 2Australia
Santu Rana [email protected]
Deakin University Geelong
Australia
Controlled Diversity with Preference : Towards Learning a Diverse Set of Desired Skills
ACM Reference Format
Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), London, United Kingdom, 2023, IFAAMAS, 9 pages. Keywords: Skill Diversity, Human Preferences, Reinforcement Learning.
Autonomously learning diverse behaviors without an extrinsic reward signal has been a problem of interest in reinforcement learning. However, the nature of learning in such mechanisms is unconstrained, often resulting in the accumulation of several unusable, unsafe or misaligned skills. In order to avoid such issues and ensure the discovery of safe and human-aligned skills, it is necessary to incorporate humans into the unsupervised training process, which remains a largely unexplored research area. In this work, we propose Controlled Diversity with Preference (CDP) 1 , a novel, collaborative human-guided mechanism for an agent to learn a set of skills that is diverse as well as desirable. The key principle is to restrict the discovery of skills to those regions that are deemed to be desirable as per a preference model trained using human preference labels on trajectory pairs. We evaluate our approach on 2D navigation and Mujoco environments and demonstrate the ability to discover diverse, yet desirable skills.
INTRODUCTION
Deep reinforcement learning [17] is a powerful computational approach for solving sequential decision-making tasks by maximizing prespecified rewards over time. Despite its proven success in a number of applications ranging from Atari games to robotics [15,17], the framework is typically task-specific, and the effectiveness of the learned policy is contingent on a carefully designed extrinsic reward function.
However, in the real world, an agent is likely to come across complex and unstructured tasks, for which it may need to learn several sub-behaviors or skills, possibly, without access to any extrinsic rewards. In order to autonomously discover and learn these skills, prior works have proposed information theory-based diversity objectives as an intrinsic reward to explore and learn diverse task-agnostic skills without a reward function [4,6,21]. While such unsupervised methods of skill discovery can produce promising results, their unconstrained nature may lead to the acquisition of useless, dangerous, or misaligned skills. For example, as depicted in Figure 1, a robot tasked with learning diverse skills with a kitchen knife may learn undesirable skills such as harming a human. This type of behavior can occur because the agent lacks context about the real world. Without context, the agent views all aspects of the environment as equally relevant, and learns to correlate its skills with any part of the environment regardless of its importance or safety.
In order to address this issue, Eysenbach et al. [6] attempted to limit the diversity of skills by manually selecting features for the agent to be diverse in. However, the effectiveness of this approach is limited, as it is still possible for agents to learn undesirable skills while being diverse about a specific feature. Recent works [11] have suggested relying on expert demonstrations to guide the agent towards expert-visited regions. Such demonstrations are generally expensive and thus would not be available in large quantities, thereby negatively impacting skill diversity. As such, designing online approaches for learning simultaneously diverse and desirable behaviors remains an important and challenging open problem.
In contrast to the approaches mentioned above, in this work, we contend that the agent can learn more desirable skills through guidance provided by humans in the loop during the learning of skills. Using human feedback to infer context allows much greater flexibility, adaptability and less engineering than relying on predefined extrinsic rewards. The key idea behind our approach is that we frame the problem of controlling skill diversity as finding regions of the environment where skill discovery will more likely produce desirable skills. Due to the difficulty of identifying such regions without human-provided context, we propose leveraging recent work in learning from human preferences [5,12,26] to infer preferred regions in the environment. Intuitively, these are regions of the environment which are generally associated with favorable agent behaviors. We posit that such regions also correspond to suitable regions for learning a diverse set of skills. Once such regions are identified, we adapt recent exploration methods to direct the agent's exploration towards those preferred regions. Furthermore, by learning a representation of the state space from human preferences, we show that our approach scales to higher dimensional problems and learns skills that are discernibly diverse to human eyes. Thus, by restricting the diversity in skill discovery to humanpreferred regions of the environment, we are capable of learning skills that are both diverse and desirable.
In summary, the main contributions of this work are:
• Controlled Diversity with Preference (CDP), a novel method to control diversity in skill discovery using human preferences.
• Demonstration that our proposed approach guides the agent's exploration towards preferred state regions.
• Learning a representation of the state space for skill discovery that contains features relevant to human preferences.
• Qualitative and quantitative evaluation of the proposed framework, with suitable comparisons with existing baselines for learning diverse skills.
RELATED WORK
Human in the loop and Preference based RL: Human in the loop reinforcement learning (HIL-RL) aims to improve reinforcement learning (RL) agents by using human knowledge. In contrast to imitation learning and inverse RL, HIL-RL uses human knowledge during the training process rather than prior to it.
To enable the use of human feedback for more complex and challenging tasks, Christiano et al. [5] learned a reward model from human preference labels over trajectories. Such preference-based frameworks offer the advantage of relatively easy/intuitive supervision, while being sample efficient enough to quickly learn a reward function. PEBBLE [12] was an approach that further developed this framework to design a more sample- and feedback-efficient preference-based RL algorithm without any additional supervision. This was achieved by leveraging off-policy learning and utilizing unsupervised pre-training to collect data to substantially improve efficiency. Although we follow a PEBBLE-like approach to learn a reward function from preference labels, our approach differs from PEBBLE in that, in addition to learning reward functions from preferences, we use this learned reward function to determine a distribution of states for guiding the agent's exploration. We essentially utilize the learned preference-based rewards as a means for determining the dynamic space that humans prefer.
In the context of our proposed approach, Skill Preferences (SkiP) [25] is a related approach that combines skill learning and human preferences. SkiP was shown to learn a reward model over skills with human preferences and used that model to extract humanaligned skills from offline data. In contrast to SkiP, our approach targets the online learning setting, where skills are still under development when we obtain preferences.
Unsupervised Reinforcement learning and Skill discovery: Unsupervised RL is an approach for autonomously learning relevant behavior in any environment based on task-agnostic intrinsic rewards. Intrinsic rewards form the basis for many agent concepts such as curiosity [19], novelty [16], and empowerment [20]. In contrast to curiosity [19], which guides exploration towards regions where predictive models perform poorly, novelty [16] guides exploration toward areas that are less frequently visited. In order to maximize the agent's future potential, empowerment approaches [20] direct the agent to explore regions that offer it more possible states to visit. DIAYN [6], VIC [9], and VALOR [1] suggested an empowerment objective based on mutual information. This objective was shown to enable the discovery and acquisition of a variety of skills relevant to complex locomotion. To add predictability to the set of diverse skills, DADS [21] formulated a variation of an objective based on mutual information. EDL [4] showed that such skill discovery methods suffer from poor exploration, and proposed to split the process into three independent phases: exploration, discovery, and learning. In our work, we use the EDL framework, thereby separating the discovery and learning of skills. However, we integrate the discovery of skills within the exploration process by using them to gather more data from the preferred regions.
Despite various advances in the area of autonomous skill discovery, it remains challenging to learn and discover meaningful skills in high-dimensional state spaces due to the curse of dimensionality. Many works have mitigated this problem by learning representations of the state space, to distinguish skills based on more relevant features. Nieto et al. [18] leverage self-supervised state-representation learning techniques, such as contrastive methods, to learn a compact latent representation of the states. IBOL [10] proposes a linearization of environments that promotes more diverse and distant state transitions. Unlike these works, we do not learn or change the representation of the state. Instead, we redirect diversity to specific regions of the state space likely associated with meaningful, desirable skills. We note that the aforementioned methods for dealing with high dimensionality remain orthogonal to our work, and could possibly be combined with our proposed framework to realise more scalable solutions.
As far as controlling diversity using human data is concerned, the work by Klemsdal et al. [11] is most closely related to ours. By leveraging prior expert data, they obtain a state projection that makes expert-visited states recognizable and, consequently, encourages skills to visit them. However, in contrast to this approach, our proposed framework does not require access to expert trajectories. It instead only assumes a finite number of human-generated preference labels based on the agent's trajectories. We contend that this type of feedback is relatively easier to collect, with minimal cognitive load on the human collaborator.
Restraining behavior: Several works have aimed at controlling the behavior of agents. For example, Giacomo et al. [7] introduced restraining bolts to restrain agents' behavior by offering additional rewards when logical specifications of desired actions are satisfied. In another direction, Alizadeh Alamdari et al. [2] also augments the reward of the agent to consider the future wellbeing of others and thus restraining its behavior to reduce negative side effects. Our work differs from these in that, through interaction using human preferences, we learn how to regulate the diversity of skills.
In this paper, we consider the problem of controlling diverse skill discovery by combining the EDL framework with human guidance in the form of preference-based RL. Here, we briefly present related concepts, before describing our method in detail in Section 4.
3.1 Skill Discovery
Consistent with prior work [4], the skill discovery problem is formalized as a Markov Decision Process (MDP) M = (S, A, P) without external rewards, where S and A respectively denote the state and action spaces, and P is the transition function. Skills, introduced by Sutton et al. [22], are temporally extended actions (sub-behaviors) consisting of a sequence of primitive actions. We define skills as policies π(a | s, z) conditioned on a fixed latent variable z ∈ Z.
Skill discovery methods aim to learn these latent-conditioned policies by maximising the mutual information between S and Z. Due to symmetry, the corresponding mutual information can be expressed in two forms:

I(S; Z) = H(Z) − H(Z | S)   (reverse)
        = H(S) − H(S | Z)   (forward)    (1)
where I(· ; ·) and H(·) are respectively the mutual information and the Shannon entropy. Following prior work, we refer to these as the reverse and forward forms. By using either of the two forms of the objective, prior works [4,6,21] have demonstrated the learning of latent-conditioned policies that execute diverse skills. Our method uses the forward form in Equation (1) to learn the latent-conditioned policies π(a | s, z).
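The symmetry in Equation (1) can be checked numerically on a toy discrete joint distribution, using the identities H(Z|S) = H(S,Z) − H(S) and H(S|Z) = H(S,Z) − H(Z). The joint distribution below is an arbitrary illustration, not data from the paper:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a (possibly multi-dimensional) probability table."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Arbitrary toy joint distribution p(s, z) over 4 states and 2 skills.
p_sz = np.array([[0.30, 0.05],
                 [0.10, 0.05],
                 [0.05, 0.20],
                 [0.05, 0.20]])
p_s, p_z = p_sz.sum(axis=1), p_sz.sum(axis=0)

H_S, H_Z, H_SZ = entropy(p_s), entropy(p_z), entropy(p_sz)
reverse = H_Z - (H_SZ - H_S)   # H(Z) - H(Z|S), with H(Z|S) = H(S,Z) - H(S)
forward = H_S - (H_SZ - H_Z)   # H(S) - H(S|Z)
print(reverse, forward)        # both equal I(S; Z)
```

Both forms yield the same mutual information; skill discovery methods simply pick whichever decomposition is easier to estimate.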
3.2 EDL Framework
EDL optimizes the same information-theoretic objective in Equation (1), but separates skill discovery into three distinct stages: exploration, discovery, and learning of the skill.
3.2.1 Exploration stage.
The Exploration stage aims to collect environment transitions; it can be achieved via any exploration method [14,16,19].
3.2.2 Skill Discovery stage. Given a distribution over states p(s), the Skill Discovery stage trains a Vector-Quantized VAE (VQ-VAE) to model the posterior p(z | s) with an encoder q(z | s), and p(s | z) with the decoder. The VQ-VAE has the advantage of having a discrete bottleneck, which in our case is the categorical distribution p(z). Typically, VQ-VAEs are trained to optimize the objective:

L_VQ-VAE = E_{s ∼ p(s)} [ log p(s | q(z | s)) + ‖sg[z_e(s)] − e(k)‖² + ‖z_e(s) − sg[e(k)]‖² ]    (2)

where e(k) and k are respectively the codebook vector and the codebook index, and sg[·] is the 'stop gradient' operation. For more details, we refer the reader to van den Oord et al. [24].
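A minimal numpy sketch of the vector-quantization step behind Eq. (2): each encoder output snaps to its nearest codebook vector, and the quantization gap is what the codebook and commitment terms penalize. The encoder outputs and codebook values below are made up, and sg[·] is a no-op here since there is no autodiff:

```python
import numpy as np

# Hypothetical continuous encoder outputs z_e(s) for a batch of 3 states,
# and a codebook of K = 3 embedding vectors (the discrete skills z).
z_e = np.array([[ 0.9,  1.1],
                [-1.2, -0.8],
                [-0.9,  0.7]])
codebook = np.array([[-1.0, -1.0],
                     [ 1.0,  1.0],
                     [-1.0,  1.0]])

# Vector quantization: each encoder output snaps to its nearest code e(k).
dists = np.linalg.norm(z_e[:, None, :] - codebook[None, :, :], axis=-1)
k = dists.argmin(axis=1)        # codebook indices (one discrete skill per state)
e_k = codebook[k]               # quantized latents e(k)

# Both penalty terms of Eq. (2) measure this same squared gap; they differ only
# in where sg[.] blocks gradients, which has no effect in this numpy sketch.
quant_gap = ((z_e - e_k) ** 2).sum(axis=1).mean()
print(k, quant_gap)
```

In a real implementation the two terms would be split so that one updates the codebook and the other (the commitment term) updates the encoder.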
3.2.3 Skill Learning stage.
Finally, the Skill Learning stage consists of training the latent-conditioned policies π(a | s, z) that maximize the forward form of the mutual information (Equation (1)) between states and latent variables. The corresponding reward function is then defined by:

r(s, z) = log p(s | z),   z ∼ p(z)    (3)
where p(s | z) is given by the decoder of the VQ-VAE at the discovery stage. This reward function reinforces the policy to visit states that the decoder generates for each latent variable z. Our proposed method builds upon the EDL framework, although we enhance it via two novel contributions: (1) an exploration phase guided by preferences, which integrates with the skill discovery phase to improve coverage relevance, and (2) a way to transform p(s) into a more suitable distribution for discovering desirable skills.
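As an illustration of the Eq. (3) reward, if one assumes a unit-variance Gaussian decoder p(s | z) = N(s; μ_z, I) (our simplifying assumption, not the paper's stated decoder), the reward reduces to a negative squared distance between the state and the skill's decoded mean:

```python
import numpy as np

def skill_reward(s, mu_z):
    """Eq. (3) reward r(s, z) = log p(s|z), under the illustrative assumption
    that the decoder is a unit-variance Gaussian p(s|z) = N(s; mu_z, I)."""
    diff = np.asarray(s, dtype=float) - np.asarray(mu_z, dtype=float)
    return -0.5 * diff @ diff - 0.5 * len(diff) * np.log(2.0 * np.pi)

mu = np.array([1.0, 0.0])             # hypothetical decoder mean for a skill z
print(skill_reward([1.0, 0.0], mu))   # maximal: state matches the decoded mean
print(skill_reward([3.0, 0.0], mu))   # lower: state far from this skill's states
```

This makes concrete why the reward "reinforces the policy to visit states that the decoder generates" for each z: the reward falls off with distance from the decoded states.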
3.3 Reward Learning from Preferences
In this work, we use preference-based RL to identify preferred regions, which are then used to constrain the diversity of learned skills. In preference-based RL, a human is presented with two trajectory segments (state-action sequences) σ0 and σ1, and is asked to indicate their preference for one over the other. For instance, the label y = (1, 0) would imply that the first segment is preferred over the second. We follow the same framework as prior works in preference-based RL [5,12,26], where the aim is to model the human's internal reward function responsible for the indicated preferences. This is usually done via the Bradley-Terry model [3], which models a preference predictor using the reward function r̂ as follows:
P[σ^i ≻ σ^j] = exp(Σ_t r̂(s^i_t, a^i_t)) / Σ_{k∈{0,1}} exp(Σ_t r̂(s^k_t, a^k_t))    (4)
where σ^i ≻ σ^j denotes the event that segment i is preferable to segment j. As in Lee et al. [12], we model the reward function r̂_ψ as a neural network with parameters ψ, which is updated by minimizing the following loss:
L_Reward = −E_{(σ⁰, σ¹, y)∼D} [ y(0) log P[σ⁰ ≻ σ¹] + y(1) log P[σ¹ ≻ σ⁰] ]    (5)
In the current work, the above framework is used to infer context regarding the importance of each region of the environment by estimating the human's reward function r̂ from preference labels y. Specifically, we use these rewards to identify regions associated with favorable agent behaviors. For simplicity, we work with trajectories σ represented as state sequences rather than state-action sequences as introduced above.
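Equations (4)-(5) can be sketched in a few lines of pure Python for a single labelled pair. Segment returns are assumed to be pre-summed over timesteps, and the numerically stabilised softmax is our choice; none of these helper names come from the paper:

```python
import math

def pref_prob(ret0, ret1):
    """P[segment 0 ≻ segment 1] under the Bradley-Terry model of
    Equation (4), given summed predicted rewards over each segment."""
    m = max(ret0, ret1)  # subtract the max for numerical stability
    e0, e1 = math.exp(ret0 - m), math.exp(ret1 - m)
    return e0 / (e0 + e1)

def pref_loss(ret0, ret1, y):
    """Cross-entropy loss of Equation (5) for one labelled pair;
    y = (1, 0) means segment 0 was preferred."""
    p0 = pref_prob(ret0, ret1)
    return -(y[0] * math.log(p0) + y[1] * math.log(1.0 - p0))

# A pair where segment 0 has the higher predicted return:
loss_agree = pref_loss(2.0, 1.0, (1, 0))
loss_disagree = pref_loss(2.0, 1.0, (0, 1))
# The loss is lower when the reward model agrees with the label.
```

Minimising this loss over many labelled pairs drives r̂ towards explaining the human's choices.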
4 METHODS
In this section, we present CDP (Controlled Diversity with Preference), a skill discovery method that utilizes preference-based RL to control diversity and discover more preferred skills based on human feedback. Our main idea is that, with a reward learned from human preference feedback, we can estimate a region of the state space where the agent is more likely to discover desirable skills, and subsequently learn them. To this end, we introduce the concepts of controlled diversity and preferred regions. Then, we present how to integrate them with EDL for a more efficient exploration of the preferred region.

Figure 2: Illustration of the guided exploration process. The agent iterates through four steps to explore. First, it learns a reward from human preferences (b) so that it can update its belief over the preferred region from the existing data in the buffer (c); then it discovers skills in this region (d); and finally it collects experience regarding the beliefs of the preferred region.
4.1 Influencing Skill Discovery with Human Feedback
4.1.1 How to control diversity? We define controlled diversity as limiting diversity to a certain region of the state space. It differs from the standard setting for skill-discovery problems, where diversity is applied to the entire state space in an unconstrained manner. To achieve this, we follow the EDL framework, where we encourage skill discovery towards targeted behaviors by modifying the prior through a distribution p*(s) over a target region of the state space. Performing skill discovery on p*(s) assigns latent variables z to regions within the target region. In other words, a carefully designed target region containing only desirable skills will enable us to correlate z only with desirable skills.
It is however, difficult to design such a region of the state space or to gain direct access to it. Thus, we formulate our problem of 'controlled diversity' as finding an approximation of this target region. In this work, we identify such regions through their high preference rewards, learned from human preferences.
4.1.2 Preferred regions. We define a preferred region as a region associated with high estimated preference rewards r̂, where r̂ is learnt using the preference-based RL framework described in Section 3.3. Concretely, a preferred state region Ŝ ⊆ S is a region of the state space where r̂(s) ≥ δ, where δ ∈ [0, 1] is a preference reward threshold and r̂(s) is normalised to the range [0, 1]. That is,
Ŝ = {∀s ∈ S | r̂(s) ≥ δ}    (6)
Ideally, the preferred region would be aligned with the intended skills if the state space was fully explored. However, the assumption of full state coverage may not be realistic. Alternatively, we iteratively build a more accurate preference model by first using the current estimate of the preference model to sample more trajectories from the highly-preferred regions, and then updating the preference model with human labels on those trajectories. This directed sampling makes our method more query-efficient.
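A minimal sketch of Equation (6), assuming r̂ is min-max normalised over the states seen so far (the normalisation scheme and all names are our assumptions):

```python
def preferred_region(states, r_hat, delta):
    """Return Ŝ = {s ∈ S | r̂(s) ≥ δ} (Equation (6)), with r̂ min-max
    normalised to [0, 1] over the supplied states."""
    raw = [r_hat(s) for s in states]
    lo, hi = min(raw), max(raw)
    span = (hi - lo) or 1.0  # avoid division by zero if all equal
    return [s for s, r in zip(states, raw) if (r - lo) / span >= delta]

# Toy 1-D example: the reward grows with the coordinate, so a high
# threshold keeps only the right-most states.
states = [0.0, 0.2, 0.5, 0.8, 1.0]
region = preferred_region(states, lambda s: s, delta=0.75)
# region == [0.8, 1.0]
```

In practice the states would come from the replay buffer, and the region is re-estimated each time r̂ is updated.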
4.2 Exploration Towards a Preferred Region
In this section, we adapt the exploration phase of EDL to explore preferred regions more effectively. To this end, we add three components to the exploration phase: we learn a reward from human preferences, we estimate the potential preferred regions, and we discover skills based on the potential preferred regions. Formally, we consider latent-conditioned policies π(a|s, z), a reward function r, a preferred state region Ŝ and a discriminator q(s|z), which are updated by the following processes, as illustrated in Figure 2. In the following sections, we explain how these components can be integrated into existing exploration methods to guide exploration towards preferred regions.
4.2.1 State Marginal Matching. We base the exploration phase of our work on SMM (State Marginal Matching [13]), although our approach is not limited to this method. SMM learns a state marginal distribution ρ_π(s) to match a given target distribution p*(s) by minimising their Kullback-Leibler (KL) divergence. Additionally, to explore more efficiently, Lee et al. [13] proposed to learn latent-conditioned policies π(a|s, z) by adding the diversity objective from Eysenbach et al. [6]. Thus, the reward function is defined as:
r(s) = r_exploration(s) + r_diversity(s)    (7)
where:

r_exploration(s) = log p*(s) (a) − log ρ_π(s) (b)    (8)
r_diversity(s) = log p(z|s) (c) − log p(z) (d)    (9)
Intuitively, according to Lee et al. [13], the above equations imply that the agent should go to states (a) with high probability under the target state distribution, (b) where the agent has not been before, and (c) where its skill is clearly distinguishable from other skills. The last term (d) encourages exploration in the space of mixture components z.
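For intuition, the decomposition of Equations (7)-(9) can be evaluated per state given the four log-terms. The numbers below are toy values of our choosing; how ρ_π and the skill posterior are estimated in practice is out of scope here:

```python
import math

def smm_reward(log_p_star, log_rho, log_p_z_given_s, log_p_z):
    """Per-state SMM reward of Equations (7)-(9): the exploration term
    log p*(s) - log ρ_π(s) plus the diversity term
    log p(z|s) - log p(z)."""
    r_exploration = log_p_star - log_rho
    r_diversity = log_p_z_given_s - log_p_z
    return r_exploration + r_diversity

# A rarely visited state (very negative log ρ) that the target favours
# and whose skill is easy to identify gets a large reward:
r_novel = smm_reward(log_p_star=-1.0, log_rho=-5.0,
                     log_p_z_given_s=math.log(0.9), log_p_z=math.log(0.25))
# A well-visited state with an ambiguous skill does not:
r_stale = smm_reward(log_p_star=-1.0, log_rho=-0.5,
                     log_p_z_given_s=math.log(0.25), log_p_z=math.log(0.25))
```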
4.2.2 Adding reward from preferences. In order to direct the exploration towards preferred regions, we use r̂ as the target distribution p*(s) in Equation (8), motivating the agent to explore regions with high preference-based rewards. Therefore, Equation (8) can be rewritten as:
r_exploration(s, z) = r̂(s) − log ρ_π(s)    (10)
4.2.3 Adding preferred regions and skill discovery. Following the definitions of preferred regions and skill discovery described in Sections 4.1.2 and 3.2.2, we define a potential preferred region Ŝ with threshold δ according to Equation (6) on states collected online, and use it to train a discriminator as presented in Section 3.2.2. The discriminator q encourages each skill to explore distinct regions related to the potential preferred region. In other words, we incentivize the agent to learn diverse skills within the preferred region. Therefore, we define the diversity reward as:

r_diversity = log q(ŝ|z),  with ŝ ∈ Ŝ    (11)

4.2.4 Overall objective. By combining the different reward components mentioned above, the overall reward function enabling exploration towards preferred regions is given by:
r(s, z) = r̂(s) (a) − log ρ_π(s) (b) + log q(ŝ|z) (c)    (12)
Intuitively, Equation (12) implies that the agent should go to (a) states with high preference rewards (b) states where the agent has not been before, and (c) to distinct regions within potential preferred regions. Our overall guided exploration method is described in Algorithm 1.
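Putting Equations (10)-(12) together, the sketch below composes the guided-exploration reward per state. Gating the diversity term on membership in Ŝ is our reading of Equation (11); the numbers and names are illustrative:

```python
import math

def cdp_reward(r_hat_s, log_rho_s, log_q_s_given_z, in_preferred_region):
    """Guided-exploration reward of Equation (12): preference term (a),
    novelty term (b), and a diversity term (c) that only applies to
    states inside the current estimate of the preferred region Ŝ."""
    r = r_hat_s - log_rho_s
    if in_preferred_region:
        r += log_q_s_given_z
    return r

# A preferred, rarely visited, skill-identifiable state...
r_in = cdp_reward(r_hat_s=0.9, log_rho_s=-3.0,
                  log_q_s_given_z=math.log(0.8), in_preferred_region=True)
# ...scores higher than an equally novel but unpreferred state.
r_out = cdp_reward(r_hat_s=0.1, log_rho_s=-3.0,
                   log_q_s_given_z=math.log(0.8), in_preferred_region=False)
```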
4.2.5 Learning skills. By following the objective in Equation (12), we explore the preferred region and train a discriminator that assigns diverse regions of the preferred region to skills. We then use the discriminator to learn skills in the skill learning phase, as described in Section 3.2.3.
4.3 Preferred Latent Representation
Despite being able to restrict diversity to specific regions of the environment, skills discovered in the state space might not appear diverse from a human point of view. The state space in MuJoCo [23] environments, for example, is a concatenation of joint positions and velocities. Discovering skills in this space often results in static positions; even though these are easily distinguishable by the discriminator, they may seem similar to the human eye. Hence, as recommended by Eysenbach et al. [6], we examine using prior knowledge to identify discernably diverse skills.
This prior can be represented as any function of the state space and used to condition the discriminator. In this case, the discriminator is defined as q(f(s)|z), with f(s) being the prior.
Although it can be useful to encourage the learning of specific types of skills by specifying a prior, relying on specifically designed priors may be limiting. Thus, we present an alternative to manually specifying this prior for learning skills that are more discernably diverse to human eyes. Specifically, we simply use the representation in the last hidden layer of the reward model r̂ learnt from human preferences as the prior. The intuition is that the last hidden layer of the neural network that models the internal reward function of a human should learn a latent state representation that captures the features that matter for human preferences. We refer to this as the preferred latent representation.
Formally, we can write r̂(s) as:

r̂(s) = h(f(s))    (13)
where f represents all layers of the reward model except the output layer, and h is the output layer of the neural network. Hence, we define the discriminator in Equation (11) as q(f(ŝ)|z).
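A toy illustration of Equation (13) with a two-unit ReLU reward network (the weights, shapes, and helper names are arbitrary choices of ours): the penultimate activation f(s) doubles as the preferred latent representation fed to the discriminator:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def reward_and_features(state, W_hidden, w_out):
    """Tiny reward MLP r̂(s) = h(f(s)) (Equation (13)): f is every
    layer but the last, h is the scalar output layer. f(s) is reused
    as the preferred latent representation."""
    f_s = relu(matvec(W_hidden, state))              # f(s): hidden features
    r_hat = sum(w * x for w, x in zip(w_out, f_s))   # h(f(s)): scalar reward
    return r_hat, f_s

W_hidden = [[1.0, 0.0], [0.0, -1.0]]
w_out = [0.5, 0.5]
r_hat, f_s = reward_and_features([2.0, 3.0], W_hidden, w_out)
# f(s) = relu([2, -3]) = [2, 0]; r̂ = 0.5*2 + 0.5*0 = 1.0
```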
As depicted later in the experiments, this general approach for specifying priors achieves discernably diverse behaviors, while obviating the need for any additional training.

    Step environment s_{t+1} ∼ p(s_{t+1} | s_t, a_t);
    Set reward r(s) as in (12);
    Optimize L_VQ-VAE in (2) with respect to q
    end foreach
    end if
end foreach
5 EXPERIMENTS
In this section, we examine our proposed method's ability to control diversity with preferences and to guide the agent's exploration towards preferred regions. We first demonstrate our approach on a 2D navigation environment, and then show the performance of our method in higher-dimensional environments such as MuJoCo in Sections 5.3 and 5.4.

The 2D environment consists of a two-dimensional room enclosed by walls that restrain the agent. The agent begins each episode in the middle of the room; episodes terminate after 100 steps. The agent only has access to its horizontal and vertical coordinates (X, Y). It can deterministically change the direction and amplitude of its steps to move freely in the environment. Both the state space and the action space are continuous. Following previous work on preference-based RL, we simulate human preferences with an oracle 'true' reward function. The true reward function is designed as a Gaussian distribution centered on a goal position, and the reward is computed as the negative distance to the goal.
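The scripted teacher described above can be sketched as follows. The goal position and segment data are illustrative; the paper's oracle is a Gaussian around the goal, reduced here to its negative-distance form:

```python
import math

GOAL = (0.8, 0.8)  # illustrative goal position of the simulated instructor

def oracle_reward(state):
    """Simulated 'true' reward: negative distance to the goal."""
    return -math.dist(state, GOAL)

def oracle_label(segment0, segment1):
    """Return the preference label y for a pair of state segments, as
    a scripted teacher would: prefer the higher summed true reward."""
    r0 = sum(oracle_reward(s) for s in segment0)
    r1 = sum(oracle_reward(s) for s in segment1)
    return (1, 0) if r0 > r1 else (0, 1)

seg_towards_goal = [(0.5, 0.5), (0.7, 0.7)]
seg_away = [(0.0, 0.0), (-0.2, -0.2)]
label = oracle_label(seg_towards_goal, seg_away)
# The segment moving towards the goal is preferred, so label == (1, 0).
```

Labels produced this way feed the loss of Equation (5) in place of a human annotator.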
5.1 Results in 2D navigation
In the 2D navigation environment, we intend to demonstrate that a preferred region can be used to define a relevant area of interest. To study the effectiveness of preferred regions for discovering skills, in this section we assume ideal state coverage and an oracle reward function. We show results for both EDL and CDP to demonstrate the full impact of the preferred region.
By applying the definition of the preferred region described in Section 4.1.2 to the assumed state coverage in Figure 3a, we identify the preferred region in the top right corner, as indicated in Figure 3b. We then discover and learn skills in those proposed regions. As illustrated in Figure 3c and 3e, EDL discovers and learns skills uniformly across the environment. In our case (CDP), the discriminator concentrates all skills' assigned regions in the top right corner, as shown in Figure 3d. Additionally, in Figure 3d, centroids (the most likely state under the discriminator for each skill) are located in the top corner, resulting in skills moving to the top right corner as illustrated in Figure 3f.
5.2 Exploration of the Preferred Region
This section aims to demonstrate that the modifications we made to the SMM method in Section 4.2 offer significant advantages with regard to exploring the preferred region. We place ourselves in more realistic settings where we have neither full state coverage nor access to the oracle reward. We compare our method with SMM as described in EDL, and with SMM+prior, which uses the same prior as ours. The prior is a reward function learned from preferences, used as described in Section 4.2.2.
As illustrated in Figure 4, we compared each method in terms of their average returns as per the target reward function. Intuitively, exploring more of the preferred region should result in a higher return. Results in Figure 4 suggest that our proposed method visits more states with higher rewards than the other methods, which implies that it explores the preferred region more efficiently.
From a qualitative perspective, Figure 5 depicts the states visited by each method and shows that our method visits more states in the top right corner (the preferred region). Further, the presence of darker shades (indicating the later stages of interaction) in the top right corner indicates that the skills from our method tend to end near or in the preferred region. This can be explained by the discriminator incentivizing the agent to learn diverse skills within the preferred region. This is in contrast to the other methods, in which the discriminator only encourages agents to acquire diverse skills, as indicated by the darker points in Figures 5a and 5b being relatively more evenly distributed across state regions rather than concentrated in the preferred region. The comparison with the first method (SMM) is arguably unfair, since it has no access to any information about the preferred region; in spite of this, we still believe the comparison is relevant to emphasize the choice of using human preferences to control diversity.
5.3 Results using Preferred Latent Representations
In this section, we demonstrate that preferred latent representations facilitate the acquisition of appropriate skills in a general manner, capable of scaling to larger state and action spaces. To this end, we performed experiments on a MuJoCo-based modified Half Cheetah agent, in which moving backwards was preferred. This was specified by using a modified version of the original Half Cheetah reward function (multiplying the original reward by -1) to encourage the agent to move backwards as far as possible along the horizontal axis. In other words, we aim to achieve diverse velocities corresponding to the desired behavior of moving backwards. As shown in Figure 7a, without additional prior knowledge about the state space, the agent does not learn any relevant skills. However, when using the preferred latent representation (Figure 7b), the agent is able to learn diverse skills that go backwards at varying speeds, similar to the skills learned with a manually specified prior over velocity (Figure 7c).
Additionally, we repeat the 2D navigation experiments from Section 5.1, but using preferred latent representations to learn diverse and desirable skills. As seen in Figure 6, the agent learns skills comparable to those in Section 5.1. We note that the trajectories in Figure 6 are relatively noisier than those in Figure 3f, probably due to the inherent noise associated with learning the preferred latent representations. However, the fact that preferred latent representations also enable the agent to learn the intended skills implies that they do indeed capture relevant features of the state space, be it in the navigation task or the more complex backwards Half Cheetah environment.

Figure 6: Skills learned using the preferred latent representation as a prior to discover skills.
Figure 7 panels: (a) Modified Half Cheetah skills learned using the state space to discover skills; (b) skills learned using the preferred latent representation; (c) skills learned using a manually specified prior over velocity.
5.4 Effect of δ

Here, we examine the effect of varying δ (used in Equation (6)) on the resulting skills. We show results for both the MuJoCo-based modified Half Cheetah and the 2D navigation experiments. δ determines how much emphasis is placed on skill discovery centered around high rewards; it can be viewed as a parameter that controls how much we exploit the reward function to constrain skill discovery. In Figure 8, we use the setting described in Section 5.1 to show that a low δ produces skills that may be far from the goal, while a high δ yields skills around the goal. Similarly, in Figure 11, a low δ results in skills that only cover shorter distances, as these are easier to learn. On the other hand, an agent that exploits the reward (high δ) learns skills that cover larger distances. However, high δ values may cause the agent to be overly exploitative, leading to a lower diversity of learned skills. This phenomenon is illustrated in Figures 9 and 10, which show that the variance of velocity across skills is relatively low for both high and low values of δ, while it is highest for the intermediate value of δ = 0.5. Hence, a user favoring a more uniform distribution of skills might choose a balanced δ of 0.5, while one favoring skills more relevant to the task should select a relatively high δ.
6 CONCLUSION
We introduced a novel approach for addressing the issue of under-constrained skill discovery. Our proposed approach, Controlled Diversity with Preference (CDP), was designed to leverage human feedback to identify human-preferred regions and then discover diverse skills within those regions, thereby ensuring the learning of diverse and desirable skills. In addition, we showed that our method can be used to guide exploration towards potential preferred regions. We validated our proposed approach in 2D navigation and MuJoCo environments. Empirically, our agents demonstrated the ability to favor the exploration of the preferred regions and to learn diverse skills in these regions. We also empirically studied the effect of the user-controlled hyperparameter δ on the diversity of learned skills. As such, we believe that our approach presents a way to control the autonomous discovery of skills while still obtaining safe, aligned and desirable skills.
Figure 1: With unconstrained skill discovery, a cooking robot may discover undesirable skills (such as harming humans) using a kitchen knife.
• Step (a): Reward Learning - We query the human for preferences over trajectories and update a reward function r̂ from the preferences.
• Step (b): Preferred region estimation - We update our belief about the preferred subset Ŝ as described in Section 4.1.2.
• Step (c): Discovery - We train the discriminator q(s|z), following VQ-VAE training, based on the most recent belief about the preferred subset.
• Step (d): Exploration - We train latent-conditioned policies π(a|s, z) using guided intrinsic motivation to explore and collect diverse experiences.
Algorithm 1: Guided exploration with preferences

Initialize B, D, r̂, π;
foreach timestep do
    Sample z ∼ p(z); // Collect data
    foreach timestep do
        Sample action a ∼ π(a|s, z);
        Update policy π(z) to maximise r with SAC [8];
        Store transitions B ← B ∪ {(s, a, s_{t+1}, z)};
    end for
    if it's time to update the preference then
        // Query instructor
        foreach query to instructor do
            Sample (σ⁰, σ¹) ∼ B;
            Collect preference from instructor y = σ⁰ ≻ σ¹;
            Store transitions D ← D ∪ {(σ⁰, σ¹, y)};
        end foreach
        // Update reward model
        foreach gradient step do
            Sample minibatch {(σ⁰, σ¹, y)} ∼ D;
            Optimize L_Reward in (5) with respect to ψ;
        end foreach
        // Estimate the preferred region
        Ŝ = {∀s ∈ B | r̂(s) ≥ δ};
        // Skill Discovery phase
        foreach query to instructor do
            Sample minibatch {ŝ} ∼ Ŝ;
Figure 3: (a), (c) and (e) are respectively the full state space, the regions assigned to each skill by the discriminator trained on the full state space, and the skills learned with this discriminator. (b), (d) and (f) are respectively the preferred region of the state space obtained by our method, the regions assigned to each skill by the discriminator trained on the preferred region, and the skills learned with this discriminator.
Figure 4: Average return achieved by each method.
Figure 5: States visited by each of the methods: SMM (a), SMM+prior (b), ours (c).
Figure 7: Modified Half Cheetah skills learned using different representations of the state space to discover skills.
Figure 8: Regions assigned to each skill by the discriminator trained on preferred regions set by different values of δ.
Figure 9: Average velocity over time for each skill.
Figure 10: Comparison of the variance between skills' velocities over time for each of the δ values.

Figure 11: Modified Half Cheetah skills learned with different δ values.
See code here: https://github.com/HussonnoisMaxence/CDP

Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), A. Ricci, W. Yeoh, N. Agmon, B. An (eds.), May 29 - June 2, 2023, London, United Kingdom. © 2023 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
REFERENCES
[1] Joshua Achiam, Harrison Edwards, Dario Amodei, and Pieter Abbeel. 2018. Variational option discovery algorithms. arXiv preprint arXiv:1807.10299.
[2] Parand Alizadeh Alamdari, Toryn Q. Klassen, Rodrigo Toro Icarte, and Sheila A. McIlraith. 2022. Be Considerate: Avoiding Negative Side Effects in Reinforcement Learning. In AAMAS '22. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 18-26.
[3] Ralph Allan Bradley and Milton E. Terry. 1952. Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons. Biometrika 39, 324.
[4] Víctor Campos, Alexander Trott, Caiming Xiong, Richard Socher, Xavier Giro-i-Nieto, and Jordi Torres. 2020. Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills. In ICML.
[5] Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep Reinforcement Learning from Human Preferences. In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc.
[6] Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. 2018. Diversity is All You Need: Learning Diverse Skills without a Reward Function.
[7] Giuseppe De Giacomo, Luca Iocchi, Marco Favorito, and Fabio Patrizi. 2018. Foundations for Restraining Bolts: Reinforcement Learning with LTLf/LDLf Restraining Specifications. In International Conference on Automated Planning and Scheduling.
[8] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 2018. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. In Proceedings of the 35th International Conference on Machine Learning (PMLR, Vol. 80), 1861-1870.
[9] Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. 2016. Variational Intrinsic Control.
[10] Jaekyeom Kim, Seohong Park, and Gunhee Kim. 2021. Unsupervised Skill Discovery with Bottleneck Option Learning. In ICML.
[11] Even Klemsdal, Sverre Herland, and Abdulmajid Murad. 2021. Learning Task Agnostic Skills with Data-driven Guidance. arXiv preprint arXiv:2108.01869.
[12] Kimin Lee, Laura Smith, and Pieter Abbeel. 2021. PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training. In International Conference on Machine Learning.
[13] Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric Xing, Sergey Levine, and Ruslan Salakhutdinov. 2019. Efficient Exploration via State Marginal Matching.
[14] Youngwoon Lee, Jingyun Yang, and Joseph J. Lim. 2020. Learning to Coordinate Manipulation Skills via Skill Behavior Diversification. In International Conference on Learning Representations.
[15] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2016. Continuous control with deep reinforcement learning. CoRR abs/1509.02971.
[16] Hao Liu and Pieter Abbeel. 2021. Behavior From the Void: Unsupervised Active Pre-Training. In Advances in Neural Information Processing Systems, Vol. 34, 18459-18473.
[17] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charlie Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature 518, 529-533.
[18] Juan José Nieto, Roger Creus, and Xavier Giro-i-Nieto. 2021. Unsupervised Skill-Discovery and Skill-Learning in Minecraft. arXiv preprint arXiv:2107.08398.
[19] Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. 2017. Curiosity-Driven Exploration by Self-Supervised Prediction. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 488-489.
[20] Christoph Salge, Cornelius Glackin, and Daniel Polani. 2013. Empowerment - an Introduction. ArXiv abs/1310.1863.
[21] Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, and Karol Hausman. 2020. Dynamics-Aware Unsupervised Discovery of Skills. In International Conference on Learning Representations.
[22] Richard S. Sutton, Doina Precup, and Satinder Singh. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112, 1, 181-211.
[23] Emanuel Todorov, Tom Erez, and Yuval Tassa. 2012. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 5026-5033.
[24] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural Discrete Representation Learning. In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc.
[25] Xiaofei Wang, Kimin Lee, Kourosh Hakhamaneshi, Pieter Abbeel, and Michael Laskin. 2022. Skill preferences: Learning to extract and execute robotic skills from human feedback. In Conference on Robot Learning. PMLR, 1259-1268.
[26] Aaron Wilson, Alan Fern, and Prasad Tadepalli. 2012. A Bayesian Approach for Policy Learning from Trajectory Preference Queries. In NIPS.
| [
"https://github.com/HussonnoisMaxence/CDP)"
] |
[
"Quantification of flexibility from the thermal mass of residential buildings in",
"Quantification of flexibility from the thermal mass of residential buildings in"
] | [
"Wales \nDr Alexandre Canet -Cardiff University\nCF24 3AACardiffUK\n\nProf. Meysam Qadrdan -Cardiff University\nCF24 3AACardiffUK\n"
] | [
"Dr Alexandre Canet -Cardiff University\nCF24 3AACardiffUK",
"Prof. Meysam Qadrdan -Cardiff University\nCF24 3AACardiffUK"
] | [] | The increased integration of variable renewable generation into the power systems, along with the phase-out of fossil-based power stations, necessitate procuring more flexibility from the demand sectors. The electrification of the residential heat sector is an option to decarbonise the heat sector in the United Kingdom. The inherent flexibility that is available in the residential heat sector, in the form of the thermal inertia of buildings, is expected to play an important role in supporting the critical task of short-term balancing of electricity supply and demand. This paper proposes a method for characterising the locally aggregated flexibility envelope from the electrified residential heat sector, considering the most influential factors including outdoor and indoor temperature, thermal mass and heat loss of dwellings. Applying the method to England and Wales as a case study, demonstrated a significant potential for a temporary reduction of electricity demand for heating even during cold weather. Total electricity demand reductions of approximately 25 GW to 85 GW were shown to be achievable for the outdoor temperature of 10 o C and -5 o C, respectively. Improving the energy performance of the housing stock in England and Wales was shown to reduce the magnitude of available flexibility to approximately 18 GW to 60 GW for the outdoor temperature of 10 o C and -5 o C, respectively. This is due to the use of smaller size heat pumps in the more efficient housing stock. However, the impact of the buildings' retrofit on their thermal mass and consequently on the duration of the flexibility provision is uncertain. | null | [
"https://export.arxiv.org/pdf/2304.07881v1.pdf"
] | 258,179,780 | 2304.07881 | 8a247b8b02e2ff78d8ac0d3e821736bb5068286a |
Quantification of flexibility from the thermal mass of residential buildings in Wales

Dr Alexandre Canet, Cardiff University, Cardiff, CF24 3AA, UK
Prof. Meysam Qadrdan, Cardiff University, Cardiff, CF24 3AA, UK

Abstract

The increased integration of variable renewable generation into power systems, along with the phase-out of fossil-based power stations, necessitates procuring more flexibility from the demand sectors. The electrification of the residential heat sector is an option to decarbonise the heat sector in the United Kingdom. The inherent flexibility that is available in the residential heat sector, in the form of the thermal inertia of buildings, is expected to play an important role in supporting the critical task of short-term balancing of electricity supply and demand. This paper proposes a method for characterising the locally aggregated flexibility envelope of the electrified residential heat sector, considering the most influential factors including outdoor and indoor temperature, thermal mass and heat loss of dwellings. Applying the method to England and Wales as a case study demonstrated a significant potential for a temporary reduction of electricity demand for heating, even during cold weather. Total electricity demand reductions of approximately 25 GW to 85 GW were shown to be achievable for outdoor temperatures of 10°C and -5°C, respectively. Improving the energy performance of the housing stock in England and Wales was shown to reduce the magnitude of available flexibility to approximately 18 GW to 60 GW for outdoor temperatures of 10°C and -5°C, respectively. This is due to the use of smaller heat pumps in the more efficient housing stock. However, the impact of building retrofits on thermal mass, and consequently on the duration of the flexibility provision, is uncertain.
Introduction
In the United Kingdom (UK), the decarbonisation of the economy is planned to be supported by the uptake of low carbon electricity generation and the electrification of services such as heating and transport. According to Great Britain's electricity system operator, National Grid [1], by 2050 the electricity demand is expected to increase by 50% to 100%, whilst between 78% and 87% of the total electricity will be supplied by variable sources such as wind turbines and photovoltaic panels.
To compensate for the variable electricity generation and to address the challenge of balancing electricity supply and demand, the magnitude of flexibility required by the future power system will increase [2,3]. In the context of the power system operation, the term flexibility refers to the ability of the system to always balance electricity supply and demand in response to any changes in the expected generation and consumption. The continuous balancing of supply and demand can be achieved by modifying electricity production and/or consumption. In 2020, 45 GW of the 60 GW of flexibility required came from thermal generating units [1]. However, significantly higher flexibility required by the power system in 2050 (approximately two to four times higher [1]) is expected to be procured from low carbon means such as low carbon controllable generation, electrolysis, electricity storage, vehicle to grid and demand side response (DSR). The UK electricity system operator also recognises the potential for shifting the demand by using smarter controls for electricity based heating systems in combination with thermal storage [4].
Several studies investigated the provision of flexibility from the thermal mass of dwellings using heat pumps. A review of power-to-heat options to integrate renewable energy identified heat pumps and using thermal mass of buildings as the most favourable options [5]. A framework identified the energy generation capability of a building and its thermal mass as two major contributors to account for when quantifying DSR for residential and commercial buildings [6]. Authors in [7] used the EnergyPLAN software and showed the thermal mass of buildings is the most cost effective storage system when using heat pumps. Another study, in which the authors used OpenIDEAS/Modelica for their analysis [8], demonstrated that for a single-family dwelling equipped with an Air Source Heat Pump (ASHP), the use of the thermal mass of the dwelling decreases the electricity consumption of the ASHP by 75% to 94% during peak times [9]. The impacts on the level of comfort of providing flexibility with heat pumps in dwellings without thermal storage tanks was studied for two houses with different level of thermal mass using the thermal-dynamic simulation software TRNSYS [10]. It was shown that the house with higher thermal mass provided more thermal comfort and higher available power for flexibility services. The relation between thermal mass and amount of flexibility was also demonstrated for residential building located in a cold climate [11].
For the state of California, the aggregated magnitude of demand response that could be provided by heat pumps over a 15 minutes duration was estimated to be 9 GW, whilst the peak power capacity was ~40 GW in 2014 [12]. The impacts of insulation on the magnitude of flexibility available were investigated by comparing poorly insulated and well-insulated buildings in Denmark. The results showed that the heat load modulation could be large for poorly insulated buildings but only done for a short period of time (2 to 5 hours) whereas for well-insulated buildings the magnitude of flexibility will be low but could be provided for a longer timeframe. The provision of frequency response services using domestic heat pumps was explored in the literature [13,14].
A virtual thermal storage approach was used to represent the building stock and study the potential of heat pumps for demand side management in Germany [15]. However, this representation of the building stock does not seem to capture the non-homogeneity of the building stock such as differences in thermal losses and thermal mass between buildings, and thus overestimates the potential for flexibility provision. A generic quantification method was implemented using OpenIDEAS/Modelica to estimate the storage capacity of the thermal mass of buildings without compromising the comfort of the occupants [16]. This study showed that the insulation level, the type of heating systems and the duration of the demand side response event were key parameters when providing flexibility to the grid through the modulation of the heating system.
The majority of the literature looks at the level of flexibility that can be provided by single buildings or a small group of buildings or focuses on flexibility services with specific duration. There is a wide range of modelling techniques for quantifying the potential of inherent flexibility of buildings used in the literature including energy hub models [17], commercial and custom transient models but no specific technique appears to be prevalent. In this paper, we aim to use a transient model to characterise the magnitude and duration of technically available flexibility that can be provided by the thermal mass of the residential buildings using Air Source Heat Pumps (ASHPs) in England and Wales. The rationale behind focusing on ASHPs in this paper was based on the decarbonisation pathways for Great Britain (GB) published by the GB electricity system operator [4]. In the three pathways leading to net-zero by 2050, ASHPs are a dominant technology in comparison to ground source heat pumps (GSHPs) and direct electric heaters. Furthermore, our analysis attempts to investigate the scale of flexibility available from thermal inertia of buildings which is an understudied research area, for this reason we excluded hot water tank and other thermal energy storage technologies. Finally, detailed input data and outputs from our analysis are published and available online, which will allow other researchers and users to produce new sets of results for different scenarios such as more efficient ASHPs, combination of ASHPs/GSHPs or only direct electric heaters.
Key contributions of this paper are:
• A methodology was developed to characterise the thermal parameters of the dwelling stock from Energy Performance Certificates.
• A methodology was developed to quantify the magnitude and duration of flexibility from the residential heat sector for local areas (known as Lower Layer Super Output Areas: LSOAs) across England and Wales.
• An investigation of the impacts of building retrofits on the amount of flexibility available from the dwelling stock was carried out.

Methods

Figure 1 shows the methodology used for this study. The first step was to create a database of thermal characteristics of dwellings for each LSOA in England and Wales, using Energy Performance Certificates collected from the open data communities platform [18] and the information in the Standard Assessment Procedure (SAP) 2012 [19] guidelines, which are used for building regulation compliance in the UK. Using the thermal characteristics of dwellings in a thermal model of buildings, the magnitude and duration of two DSR flexibility services were characterised for a case study where all the dwellings in England and Wales have ASHPs installed:

1. Positive flexibility - an increase in the electricity consumption of heat pumps when all heat pumps increase their outputs to their maximum capacity. This provides a demand increase service to the public electricity network.
2. Negative flexibility - a decrease in the electricity consumption of heat pumps when all heat pumps are switched off. This provides a demand reduction service to the public electricity network.
The above naming convention was chosen for the sake of simplicity. Different names for such services might be used in the flexibility market. The methods described in the following were implemented in Python [20]. The code is available (see details in the Code availability section).
The dwelling stock of England and Wales used in this study is distributed over 34,753 LSOAs and 16 dwelling categories. A dwelling category is the combination of a dwelling type (i.e., detached, semidetached, terraced, or flat) and a heating system (i.e., natural gas boiler, resistance heater, biomass boiler or oil boiler). Figure 2 shows an overview of the steps used to calculate the magnitude and duration of the flexibility services for each dwelling category. After the indoor and outdoor air temperatures are set, they were used to calculate the current heating output of the dwelling to maintain the indoor air temperature constant. The current heating output of the ASHP at the set outdoor air temperature were used to derive the magnitude of the positive and negative flexibility services. In the last step, the duration for which the flexibility services can be provided are calculated.
Figure 2: Overview of the method to calculate the magnitude and duration of the flexibility services that can be provided to the main electricity network (steps: 1. set indoor and outdoor air temperatures; 2. estimate the current heating output of the dwelling category from its thermal characteristics; 3. estimate the magnitude of positive/negative flexibility that can be provided; 4. estimate the duration for which the flexibility can be provided).
Thermal characteristics of dwellings
For each LSOA, the average thermal losses, average size of the heating systems and the average thermal capacity were calculated for the 16 dwelling categories defined previously.
Calculating the thermal losses of dwellings and sizing air source heat pumps
The thermal losses of each dwelling category in each LSOA were derived using Equation 1:

HL_{c,l} = Q_{c,l} / HDH_r    (1)

where HL_{c,l} [kW/°C] is the thermal losses, c is the dwelling category, l is the target LSOA, Q_{c,l} [kWh] is the average annual heat demand and HDH_r [°C·h] is the number of heating degree hours in the region r of the LSOA (see Table 1).
Calculating the thermal capacity of dwellings
The thermal capacity was calculated using Equation 2.
C_{c,l} = A_{c,l} × c_m    (2)

where C_{c,l} [kJ/°C] is the thermal capacity, A_{c,l} [m²] is the average floor area of the dwelling category in the LSOA and c_m [kJ/m²/°C] is the specific thermal capacity value.
Input data
An input dataset was created to calculate the thermal losses and thermal capacity of the dwelling stock. It includes for each LSOA and each dwelling category:
• the average annual heat demand before energy efficiency measures,
• the average annual heat demand after energy efficiency measures,
• the average total floor area,
• the specific thermal capacity.

The average annual heat demand before and after energy efficiency measures for each dwelling category in each LSOA in England and Wales, published on the UKERC Energy Data Centre, was used [21]. Using the same approach, the average floor area for each building archetype in each LSOA was calculated using data available in the Energy Performance Certificates of buildings.
The outliers for the average annual heat demand and floor areas for each dwelling category and each LSOA were dealt with by capping values above the 99th percentile and below the 1st percentile at the respective percentile values.

The thermal capacity level called "medium", with a specific thermal capacity of 250 kJ/m²/°C published in SAP 2012 [19], was used in this study. The design temperatures of heating systems were derived from the Microgeneration Installation Standard 3005 [22].
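The percentile capping described above can be sketched with NumPy. This is a minimal illustration; the function name is an assumption, not taken from the paper's codebase.

```python
import numpy as np

def cap_outliers(values, low_pct=1, high_pct=99):
    """Replace values outside the [low_pct, high_pct] percentile range
    with the corresponding percentile values."""
    low, high = np.percentile(values, [low_pct, high_pct])
    return np.clip(values, low, high)
```

In the study's setting this would be applied independently to the heat demand and floor area values of each dwelling category.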
Sizing of air source heat pumps
The sizing of the ASHPs in the dwelling stock was derived using Equation 3.
P_{c,l} = ΔT_design × HL_{c,l}    (3)

where P_{c,l} [kW] is the size of the ASHPs and ΔT_design [°C] is the temperature difference between the indoor design temperature, which was set at 21°C, and the outdoor design temperature of the heating system (see Table 1).
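Equations 1-3 chain together as sketched below. The example inputs (12,000 kWh annual heat demand, 60,000 °C·h of heating degree hours, 90 m² floor area, -3°C outdoor design temperature) are illustrative assumptions, not values from the paper.

```python
def thermal_losses_kw_per_degc(annual_heat_demand_kwh, heating_degree_hours):
    """Equation 1: average heat-loss rate [kW/degC] of a dwelling category."""
    return annual_heat_demand_kwh / heating_degree_hours

def thermal_capacity_kj_per_degc(floor_area_m2, specific_capacity=250.0):
    """Equation 2: lumped thermal capacity [kJ/degC].
    250 kJ/m2/degC is the SAP 2012 'medium' level used in the study."""
    return floor_area_m2 * specific_capacity

def ashp_size_kw(losses_kw_per_degc, indoor_design=21.0, outdoor_design=-3.0):
    """Equation 3: thermal output needed at the design temperature difference."""
    return (indoor_design - outdoor_design) * losses_kw_per_degc

# Illustrative dwelling (numbers are assumptions):
hl = thermal_losses_kw_per_degc(12_000, 60_000)   # 0.2 kW/degC
cap = thermal_capacity_kj_per_degc(90)            # 22,500 kJ/degC
size = ashp_size_kw(hl)                           # 4.8 kW thermal
```

The heat pump is sized to cover the steady-state heat loss at the design temperature difference, which is why, in the results below, running it at full output at milder outdoor temperatures can raise the indoor temperature well above the set point.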
Estimating the magnitude and duration of flexibility services
A lumped parameter model (1R 1C) [23] was used to create a thermal model of a dwelling which was used to:
1. Calculate the magnitude of the flexibility service that can be provided, and
2. Calculate the duration for which the flexibility service can be provided.

Figure 3 shows a diagram of the RC model used. Equation 4 describes the heat balance equation of this model [23]:

Φ(t) − (1/R_th) (T_in(t) − T_out) = C_th dT_in(t)/dt    (4)

where Φ(t) [kW] is the heating output, R_th [°C/kW] is the thermal resistance of the dwelling (the inverse of the thermal losses HL), T_in(t) [°C] is the indoor air temperature, T_out [°C] is the outdoor air temperature and C_th [kJ/°C] is the thermal capacity.
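The heat balance of the 1R1C model can be stepped forward with a minimal explicit-Euler discretisation; the parameter values used in the example are illustrative assumptions.

```python
def simulate_indoor_temp(t_in0, t_out, heat_output_kw,
                         losses_kw_per_degc, capacity_kj_per_degc,
                         dt_s=60.0, steps=60):
    """Explicit Euler integration of the 1R1C heat balance (Equation 4):
    C_th * dT_in/dt = Phi(t) - (T_in - T_out) / R_th,
    with 1/R_th expressed as the heat-loss rate HL [kW/degC]."""
    t_in = t_in0
    for _ in range(steps):
        # kW * s / (kJ/degC) -> degC per step
        dT = (heat_output_kw - losses_kw_per_degc * (t_in - t_out)) \
             * dt_s / capacity_kj_per_degc
        t_in += dT
    return t_in
```

With the heating output matched to the losses (Φ = HL·(T_in − T_out)) the indoor temperature holds constant; with the heat pump switched off it decays towards the outdoor temperature, which is the behaviour exploited by the negative flexibility service.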
Magnitude of the flexibility service
The magnitude of positive and negative flexibility that can be provided were calculated using Equations 5 and 6:

P_flex+ = (Φ_max − Φ_0) / COP_T    (5)

P_flex− = −Φ_0 / COP_T    (6)

where Φ_0 [kW] is the initial heating output, Φ_max [kW] is the maximal heating output of the heating system, P_flex+ [kW] is the magnitude of positive flexibility, P_flex− [kW] is the magnitude of negative flexibility and COP_T is the coefficient of performance of the ASHPs at the outdoor air temperature T_out.

The initial heating output is the output of the heating system required to maintain the indoor air temperature constant. It is defined by Equation 7:

Φ_0 = (T_in − T_out) / R_th    (7)

Table 2 shows the average COP of ASHPs for the outdoor air temperatures used in this study. The lower and higher limits for the indoor air temperature T_in were based on literature data and are summarised in Equation 8. The low indoor air temperature threshold was fixed at +18°C following recommendations from Public Health England [26]. The high indoor air temperature threshold was fixed at +24°C, as temperatures above this could cause discomfort and potential harm [27]:

18°C ≤ T_in ≤ 24°C    (8)
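Equations 5-7 for a single dwelling can be sketched as below. The COP value in the usage example is an illustrative assumption (the paper's Table 2 values are not reproduced here).

```python
def flexibility_kw(t_in, t_out, losses_kw_per_degc, max_output_kw, cop):
    """Electrical flexibility of one ASHP (Equations 5-7).
    phi0 is the thermal output that keeps T_in constant (Eq. 7)."""
    phi0 = losses_kw_per_degc * (t_in - t_out)      # Eq. 7, with 1/R_th = HL
    positive = (max_output_kw - phi0) / cop         # Eq. 5: demand increase
    negative = -phi0 / cop                          # Eq. 6: demand reduction
    return positive, negative
```

For example, an illustrative dwelling with HL = 0.2 kW/°C and a 4.8 kW ASHP, at T_in = +19°C, T_out = 0°C and an assumed COP of 2.4, could raise its electrical demand by about 0.42 kW or shed about 1.58 kW.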
The heat balance Equation 4 was used to derive Equation 9, which is used to calculate the duration of a flexibility service based on the parameters of the thermal building model and the heating output of the ASHP:

t_flex = −R_th C_th ln( (T_limit − T_∞) / (T_in(0) − T_∞) )    (9)

where T_limit is the relevant indoor air temperature threshold (+18°C or +24°C), T_in(0) is the initial indoor air temperature and T_∞ = T_out + R_th Φ is the steady-state indoor air temperature for a constant heating output Φ. Figure 4 shows the process followed to calculate the duration of a positive flexibility service for a dwelling, covering the specific cases when Equation 9 is not valid, which include:
• When the initial indoor temperature of the dwelling is above the maximum limits of the indoor air temperature, the duration of the service will be 0s, and, • When based on the magnitude of the positive flexibility service provided, the minimum or maximum indoor air temperature will never be reached, the duration of the service will be infinite.
A similar process is conducted when providing a negative flexibility service.
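The decision logic of Figure 4, and its counterpart for the negative service, follows from the closed-form solution of the 1R1C heat balance. The helpers below are a sketch with illustrative parameter values, not the paper's implementation.

```python
import math

def positive_service_duration_s(t_in0, t_out, phi_max_kw,
                                losses_kw_per_degc, capacity_kj_per_degc,
                                t_max=24.0):
    """Time for which the ASHP can run at full output before T_in hits t_max."""
    if t_in0 >= t_max:
        return 0.0                                   # comfort limit already exceeded
    tau = capacity_kj_per_degc / losses_kw_per_degc  # time constant [s]
    t_inf = t_out + phi_max_kw / losses_kw_per_degc  # settling temperature
    if t_inf <= t_max:
        return math.inf                              # "infinite" branch of Figure 4
    return -tau * math.log((t_max - t_inf) / (t_in0 - t_inf))

def negative_service_duration_s(t_in0, t_out,
                                losses_kw_per_degc, capacity_kj_per_degc,
                                t_min=18.0):
    """Time for which the ASHP can stay off before T_in falls to t_min."""
    if t_in0 <= t_min:
        return 0.0
    tau = capacity_kj_per_degc / losses_kw_per_degc
    if t_out >= t_min:
        return math.inf                              # never cools below t_min
    return -tau * math.log((t_min - t_out) / (t_in0 - t_out))
```

For the illustrative dwelling used earlier (HL = 0.2 kW/°C, C = 22,500 kJ/°C, 4.8 kW ASHP) at T_out = 0°C and T_in = +19°C, switching the heat pump off gives roughly 1.7 hours before +18°C is reached, while running at full output never reaches +24°C, mirroring the behaviour described for the 0°C curve in the Results.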
Results
The methods described in this paper were demonstrated on the dwelling stock of England and Wales in 2018 which comprises 23.4 million dwellings with a total heat demand of 350 TWh per annum [21]. Figure 5 shows the number of dwellings by dwelling forms and heating systems in the dwelling stock. More than 85% of the dwellings have gas boilers, 9% resistance heating and the rest oil and biomass boilers. Semi-detached houses represent 31% of the dwellings and the rest is distributed almost equally into detached houses, terraced houses and flats. The flexibility from dwellings was calculated for a scenario where the heating systems in 100% of the dwellings in the dwelling stock were converted to ASHPs.
Thermal characteristics of dwellings in England and Wales
For each dwelling category in each LSOA, the thermal losses and the thermal capacity were calculated using the methods described in Sections 2.1.1 and 2.1.2. Due to a lack of accurate data, three levels of thermal capacity were calculated, named low, medium and high. In the rest of this paper, if not stated otherwise, the medium thermal capacity values were used to produce the results. The estimated thermal losses of the dwellings and the indoor and outdoor design temperatures in each geographical region were used to calculate the installed capacity of the ASHPs (see Section 2.2). The use of backup heating solutions and/or thermal storage could affect the sizing of ASHPs but was not considered in this study. Figure 7 shows the total capacity of ASHPs for different dwelling forms, assuming all buildings will install ASHPs. In total, it was estimated that 176 GW (thermal) of ASHP capacity would be installed.
Flexibility from dwellings in England and Wales
The magnitude and duration of providing flexibility (i.e. adjusting the electricity consumption of heat pumps) to the power system were calculated for the England and Wales dwelling stock assuming that all dwellings have ASHPs. Figure 8 shows the magnitude and duration of the positive and negative flexibility services for four outdoor air temperatures -5, 0, +5 and +10˚C, considering that all dwellings had the same initial indoor air temperature of +19˚C.
The orange line with circle marker shows the results for an outdoor air temperature of 0˚C. The positive flexibility can be provided for "unlimited" duration as even if we are increasing the outputs of the heat pumps to their maximum, the maximum indoor temperature of +24˚C will never be reached. This is because the size of the heat pump was selected to compensate for heat losses of the buildings for a temperature gradient of almost +24˚C. Depending on the region, the outdoor design temperature of the heating systems varies from -1˚C to -5˚C (see Section Methods). The demand decrease can be provided for less than an hour before the indoor air temperature of the dwellings reaches the minimum indoor temperature of +18˚C.
At an outdoor temperature of -5°C, the heating systems in all the dwellings are working at almost maximum capacity to maintain the indoor air temperature of +19°C. Hence, close to 100% (ca. 87 GW, considering a COP of 2) of the installed capacity is available to provide negative flexibility.
The magnitude of positive flexibility increases with the outdoor air temperature, but the duration for which it can be provided decreases. This is because:
• At higher outdoor temperature, heat pumps operate at reduced capacity to maintain the desired indoor temperature. This means larger spare capacity is available to ramp up. • At higher outdoor temperature, running the heating systems at maximum capacity makes the indoor temperature to reach the maximum set limit faster.
The opposite trend is observed for the magnitude of negative flexibility and its duration as the outdoor air temperature varies.
Parametric sensitivity analysis
In the following, the sensitivity of the results to the choice of the indoor air temperature of the dwellings, and the impacts of the thermal losses and the thermal capacity, were assessed. Figure 10 shows the results when setting the indoor air temperature of all dwellings to +20°C. The magnitude of positive flexibility is decreased, and the magnitude of negative flexibility is increased, compared to when the indoor air temperature was set at +19°C (Figure 9).
Impact of accounting for diversity of the indoor air temperature of dwellings
The previous results represent scenarios where all dwellings have the same indoor air temperature; however, this is unlikely in practice. To estimate the impact of having a different indoor air temperature in each dwelling in England and Wales on the magnitude of flexibility that can be provided, the probability density function (PDF) shown in Figure 11 was used to assign an indoor air temperature to each dwelling.

Figure 11: Probability density function used to assign an indoor air temperature to every dwelling in England and Wales. Truncated normal distribution at +14 and +24°C with a mean value of +19°C and a standard deviation of 2.5°C.

The PDF parameters were derived from measured indoor temperatures in social housing located in England [28]. Figure 12 shows the magnitude and duration of positive and negative flexibility obtained when using the PDF to assign the indoor air temperature of each dwelling (because the PDF affects the initial indoor temperature, and consequently the operating level of the heat pumps, each run of the model may produce slightly different results).
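The truncated normal distribution of Figure 11 can be sampled by simple rejection, as sketched below with NumPy (the function name and sample size are illustrative). With these parameters, roughly a third of dwellings start below the +18°C threshold, consistent with the 34% probability the paper reports.

```python
import numpy as np

def sample_indoor_temps(n, mean=19.0, sd=2.5, low=14.0, high=24.0, seed=0):
    """Rejection-sample a normal(mean, sd) truncated to [low, high],
    matching the PDF of Figure 11."""
    rng = np.random.default_rng(seed)
    samples = np.empty(0)
    while samples.size < n:
        draw = rng.normal(mean, sd, size=2 * n)
        samples = np.concatenate([samples, draw[(draw >= low) & (draw <= high)]])
    return samples[:n]

temps = sample_indoor_temps(10_000)
# Share of dwellings unable to provide any demand reduction (T_in < +18 degC):
share_below_18 = float(np.mean(temps < 18.0))
```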
It can be observed that:
• The magnitude of negative flexibility is smaller than shown in Figure 9. This is due to a 34% probability that a dwelling is assigned an initial air temperature below the threshold of +18°C, and is thus unable to provide any demand reduction service. This is not the case for the magnitude of positive flexibility, as the maximum temperature of the PDF is +24°C, which is also the model's threshold for the maximum indoor air temperature.
• The duration for which the flexibility services can be provided is also affected, because the dwellings do not all have the same initial indoor air temperature.

Figure 12: Estimated magnitude and duration of flexibility services provided when the initial indoor air temperature in dwellings is based on a probability density function. The initial indoor air temperature of each dwelling was assigned using the PDF shown in Figure 11.
Impact of a decrease in thermal losses compared to the current configuration
In the future, energy efficiency measures are expected to be implemented and impact the thermal losses of the dwelling stock. To represent this scenario, the annual heat demand after energy efficiency measures [25] was used to model a dwelling stock with lower thermal losses. Additionally, the size of heat pumps was re-calculated for each dwelling to compensate for the heat loss at the design outdoor and indoor temperature, considering their new reduced heat loss rate. Figure 13 compares the magnitude and duration of positive and negative flexibility services for the dwelling stock in England & Wales before and after implementing energy efficiency measures for two outdoor air temperatures of -5˚C and +10˚C. A decrease in the magnitude of flexibility that can be provided is observed due to a decrease in the size of the heating systems installed. Furthermore, as the thermal losses are lower, but the thermal capacity remained the same, the duration of providing flexibility is longer.
Sensitivity analysis of the thermal mass
Different types of insulation techniques could have different impacts on the thermal capacity of a building. For example, while internal insulation could reduce the usable thermal capacity of a building, external insulation could increase it. Therefore, there is uncertainty regarding the thermal capacity of a future dwelling stock that has implemented energy efficiency measures [29]. To investigate the impacts of this uncertainty on the available flexibility from the residential heat sector, flexibility envelopes of the future housing stock were produced for three levels of thermal capacity: medium, medium + 10% and medium - 10%. Figure 14 compares the magnitude and duration of positive and negative flexibility services for a dwelling stock which implemented energy efficiency measures, for three levels of thermal capacity and two outdoor air temperatures. The thermal capacity influences the duration for which flexibility can be provided: a higher thermal capacity results in a longer duration, and vice versa.
A 10% increase/decrease in the thermal capacity of the dwellings results in a 10% increase/decrease in the magnitude of energy that can be provided for negative flexibility at -5˚C and +10˚C.
A 10% increase/decrease in the thermal capacity of the dwellings results in a 9.6% increase/decrease in the magnitude of energy that can be provided for positive flexibility at +10˚C.
Discussion
There are several aspects of the results which can be discussed including the relation between the available flexibility and the outside air temperature, the impact of energy efficiency measures on the available flexibility and the uncertainties surrounding the approach.
The difference in the magnitude and duration of flexibility that can be provided when the outdoor air temperature and the indoor air temperature vary was highlighted in the results. As the outdoor air temperature decreases, the magnitude of positive flexibility decreases but the duration for which it can be provided increases. The opposite was observed with negative flexibility. A decrease in the initial indoor temperature of the dwellings would increase the magnitude of the positive flexibility but decrease the magnitude of the negative flexibility that could be provided. Furthermore, the flexibility services provided by the ASHPs would only be available on heating days, thus an alternative source of flexibility would be required in summer.
The potential uptakes of energy efficiency measures described in the sensitivity section (Section 3.3.3) highlighted that a decrease in the thermal losses of the buildings will lead to a lower magnitude in the flexibility services provided but an increase in their duration. However, the impact of energy efficiency measures on the thermal mass of the buildings is difficult to assess with certainty. For instance, external wall insulation may increase the thermal mass of the dwelling whereas internal wall insulation decreases it. Previous studies demonstrated the challenges of optimising thermal mass and insulation and their impacts on operational energy consumption [29,30].
Options to maintain or improve the duration of flexibility services which do not depend on insulation measures, would include installing thermal storage systems [6] and switch to district heating supply systems that could embed large thermal storage and leverage the thermal energy stored within the district heating network [31].
The approach used to quantify the flexibility from buildings has some limitations, due to the uncertainties around the accuracy of the thermal parameters of the dwelling stock and the model used.
The EPC dataset used to derive the thermal losses and thermal capacitances of the residential dwelling stock is known to have limitations [32], but it currently offers the most accurate source of dwelling data in the UK. A difference of less than 10% was found between the average annual heat demand of dwelling categories calculated from EPC registers and the results from a study by the Centre for Sustainable Energy [25]. A similar difference was found when comparing to the heat demand estimated using residential gas demand data with sub-national gas demand statistics.
A lumped parameter thermal model of building (1R 1C) was used. It is acknowledged that the accuracy of the results could be improved by utilising more complex models which consider variables such as the number of heated rooms, occupancy, schedule of appliances, solar irradiation and wind.
Further improvements of the modelling approach could be achieved by modelling additional dwelling categories and using more accurate data for the indoor air temperature of the dwellings, the thermal characteristics of dwellings, the sizing of the heat-pumps and their controls. The type of controls used could have a significant impact on the availability of the flexibility services.
Conclusions
This study aimed at quantifying the potential of the electrification of the residential heat sector and the inherent thermal energy storage of dwellings to provide demand-side flexibility services to the electric power system.
The thermal parameters of the England and Wales dwelling stock were derived from EPCs and the SAP 2012. They were used as input data to the transient model developed to quantify the flexibility from the thermal mass of dwellings. The magnitude and duration of the flexibility services available were quantified for a scenario in which 100% of the dwellings in England and Wales are equipped with ASHPs.
With a predicted uptake of residential heat pumps in the FES 2021 pathways of between 28% and 80% by 2050 in the UK [1], the residential heat sector could significantly help to provide flexibility to the public electricity network. Based on these heat pump uptakes, our analysis showed that for England and Wales, at +5°C, between 8.5 and 24.3 GW of positive flexibility and 12 to 34 GW of negative flexibility can be provided for several minutes to a few hours. As a comparison, the total DSR requirements were estimated to be between 19.2 and 44 GW, and the total flexibility requirements between 120 GW and 232 GW, in the FES 2021 pathways.
At local levels, GB electricity distribution network operators (DNOs) have launched services for customers to provide DSR to help manage the constraints of the electricity distribution network. For instance, the DNO that supplies electricity to Cornwall, is currently looking for flexibility providers for the green-shaded areas in Figure 15. For November 2022, they are looking to have 678 MW available for negative flexibility at 18:00 for every day of the week. Our results showed that if all dwellings in Cornwall were using ASHPs, 457 MW of negative flexibility could be provided when the outdoor air temperature is +5˚C. This represents 70% of the requirements considering our modelling assumptions. There are current market and technical barriers for households to access the flexibility market. To enable and increase the provision of flexibility services from the residential heat sector, a number of measures and changes in the electricity market will need to take place [33]. Furthermore, the infrastructure to tap into this resource will need to be installed including remote control of heating systems and data metering solutions.
Code availability
The source code of this study is available on Github at: https://github.com/AlexandreLab/flexibilitydwellings.
Data availability
The results of this study are available to download on the UKERC Energy Data Centre website at https://ukerc.rl.ac.uk/DC/cgi-bin/edc_search.pl?GoButton=Detail&WantComp=282&&RELATED=1
Acknowledgements
This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) through UKERC (EP/S029575/1) and MISSION project (EP/S001492/1).
References
Figure 1: Overview of the methodology.
Outliers in heat demand per floor area were capped at the 1st and 99th percentiles:
• Replacing values where heat demand/floor area is above the 99th percentile such that:
  o the heat demand value is the heat demand value of the 99th percentile record;
  o the floor area value is the floor area value of the 99th percentile record.
• Replacing values where heat demand/floor area is below the 1st percentile such that:
  o the heat demand value is the heat demand value of the 1st percentile record;
  o the floor area value is the floor area value of the 1st percentile record.
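The percentile capping described above can be sketched as follows; the function and variable names are illustrative, and the percentile-record selection is one plausible reading of the procedure:

```python
import numpy as np

def cap_intensity_outliers(heat_demand, floor_area):
    """Replace records whose heat-demand intensity (demand / floor area)
    lies outside the 1st-99th percentile band with the values of the
    record sitting at the corresponding percentile (sketch of the
    outlier treatment described above)."""
    heat_demand = np.asarray(heat_demand, dtype=float).copy()
    floor_area = np.asarray(floor_area, dtype=float).copy()
    intensity = heat_demand / floor_area
    order = np.argsort(intensity)
    n = len(intensity)
    idx_lo = order[int(round(0.01 * (n - 1)))]  # ~1st percentile record
    idx_hi = order[int(round(0.99 * (n - 1)))]  # ~99th percentile record
    low_band, high_band = intensity[idx_lo], intensity[idx_hi]
    above = intensity > high_band
    below = intensity < low_band
    heat_demand[above], floor_area[above] = heat_demand[idx_hi], floor_area[idx_hi]
    heat_demand[below], floor_area[below] = heat_demand[idx_lo], floor_area[idx_lo]
    return heat_demand, floor_area
```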
Figure 4: Process to calculate the duration of a positive flexibility service for a dwelling.
Figure 5: Number of dwellings in England and Wales in 2018 [21].
Figure 6 shows the distribution of the average thermal capacity and thermal losses of dwellings per dwelling form. Flats have, on average, lower thermal capacity and lower thermal losses. Detached houses have higher thermal losses and higher thermal capacity. Semi-detached and terraced houses have similar characteristics.
Figure 6: Distribution of the thermal characteristics of four dwelling forms in England and Wales. The average thermal capacity is based on the medium thermal capacity level. The figures were smoothed for visualization purposes by grouping the values into 50 bins.
Figure 7: Capacity installed of residential heating systems in England and Wales for different dwelling forms.
Figure 8: Estimated magnitude and duration of flexibility services provided when the initial indoor air temperature in dwellings is +19˚C.
Figure 9 shows a map of the aggregate magnitude of positive and negative flexibility services for an outdoor air temperature of +5˚C by local authority. The local authorities with darker colours are the areas with the higher capacity of heating systems installed. The number of dwellings and their characteristics are the main factors explaining the differences between local authorities.
Figure 9: Maps of the positive and negative flexibility at local authority level in England and Wales. These maps are based on an indoor air temperature for all dwellings of +19˚C and an outdoor air temperature of +5˚C. The distribution of the dwellings in England and Wales was extracted from the dwelling stock dataset [21].
Figure 10: Estimated magnitude and duration of flexibility services provided when the initial indoor air temperature in dwellings is +20˚C.
Figure 13: Impact of energy efficiency measures on the provision of flexibility services. Comparison of the magnitude and duration of flexibility services provided for the England and Wales dwelling stock before and after implementing energy efficiency measures, for two outdoor air temperatures.
Figure 14: Impact of the thermal capacity of dwellings on the provision of flexibility services. Comparison of the magnitude and duration of flexibility services provided for the England and Wales dwelling stock with different levels of thermal capacity, for two outdoor air temperatures.
Figure 15: Areas with flexibility requirements in Cornwall, UK. Extracted from a map showing the areas that are procuring flexibility (August 2022). Source: Western Power Distribution (Distribution Network Operator of the South West of England).
Table 1 shows the number of heating degree days and the design temperature of heating systems in each region of England and Wales. A lookup table linking each LSOA to a region was used to assign the number of heating degree days and the design temperature.
Table 1: Number of heating degree days and design temperature of heating systems in regions of England and Wales. The heating degree days were calculated based on a 15.5˚C base temperature and the monthly average temperatures from SAP 2012.
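The LSOA-to-region assignment described above amounts to a two-stage dictionary lookup. The region names and parameter values below are illustrative placeholders, not the actual Table 1 entries:

```python
# Sketch: assigning regional heating-degree-day (HDD) counts and design
# temperatures to LSOAs via a lookup table. All values are placeholders.
LSOA_TO_REGION = {"E01000001": "London", "W01000001": "Wales"}
REGION_PARAMS = {
    "London": {"hdd": 1835, "design_temp": -1.8},
    "Wales": {"hdd": 2076, "design_temp": -3.1},
}

def params_for_lsoa(lsoa_code: str) -> dict:
    """Return the regional heating parameters assigned to an LSOA."""
    return REGION_PARAMS[LSOA_TO_REGION[lsoa_code]]
```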
Figure 3: Resistor-capacitor (RC) model with a single R and a single C used as a thermal building model of a dwelling. R_th [˚C/W] is the thermal resistance of the dwelling (the inverse of its thermal losses), C_th [J/˚C] is the thermal capacitance of the dwelling, T_indoor [˚C] is the indoor air temperature, T_outdoor [˚C] is the outdoor air temperature and P [W] is the heating output from the ASHP.
Table 2: Outdoor air temperature and average coefficient of performance (COP) of ASHPs used in this study.

Outdoor air temperature in England and Wales | Average COP used in this study based on ASHPs
-5˚C  | 2 [24]
0˚C   | 2.3 [25]
+5˚C  | 2.4 [25]
+10˚C | 2.6 [25]

To calculate the duration of a flexibility service, it was determined how long the heating output [W] can be kept at a given value before the higher or lower limit of indoor air temperature [˚C] is reached. The outdoor air temperature [˚C] was assumed to stay constant.
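For the single-R, single-C model of Figure 3 with constant heating output P and constant outdoor temperature, the indoor temperature relaxes exponentially towards the steady state T_out + R_th·P, so the duration of a flexibility service has a closed form. The parameter values in the test are illustrative, not stock averages:

```python
import math

def flexibility_duration(T0, T_limit, T_out, P, R_th, C_th):
    """Time [s] for the indoor temperature to go from T0 to T_limit under
    constant heating output P [W], for the RC model C dT/dt = (T_out - T)/R + P.
    Returns inf if T_limit is never reached (e.g. beyond the steady state)."""
    T_ss = T_out + R_th * P          # steady-state indoor temperature
    num, den = T0 - T_ss, T_limit - T_ss
    if num == 0 or den / num <= 0 or abs(den) > abs(num):
        return math.inf              # limit unreachable from T0 with this P
    # T(t) = T_ss + (T0 - T_ss) * exp(-t / (R_th * C_th)), solved for T = T_limit
    return R_th * C_th * math.log(num / den)
```

For example, a dwelling with R_th = 0.005 ˚C/W (200 W/˚C losses) and C_th = 2·10^7 J/˚C starting at 19˚C with the heating off (a positive flexibility service) at T_out = +5˚C reaches an 18˚C lower limit after roughly two hours.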
[1] National Grid ESO, FES 2021 data workbook, 2021. https://www.nationalgrideso.com/future-energy/future-energy-scenarios/documents (accessed February 26, 2021).
[2] H. Kondziella, T. Bruckner, Flexibility requirements of renewable energy based electricity systems - a review of research results and methodologies, Renewable and Sustainable Energy Reviews 53 (2016) 10-22. https://doi.org/10.1016/j.rser.2015.07.199.
[3] P. Denholm, M. Hand, Grid flexibility and storage required to achieve very high penetration of variable renewable electricity, Energy Policy 39 (2011) 1817-1830. https://doi.org/10.1016/j.enpol.2011.01.019.
[4] National Grid ESO, Future Energy Scenarios 2022. https://www.nationalgrideso.com/future-energy/future-energy-scenarios#fullsuite (accessed March 17, 2023).
[5] A. Bloess, W.-P. Schill, A. Zerrahn, Power-to-heat for renewable energy integration: A review of technologies, modeling approaches, and flexibility potentials, Applied Energy 212 (2018) 1611-1626. https://doi.org/10.1016/j.apenergy.2017.12.073.
[6] Y. Chen, P. Xu, J. Gu, F. Schmidt, W. Li, Measures to improve energy demand flexibility in buildings for demand response (DR): A review, Energy and Buildings 177 (2018) 125-139. https://doi.org/10.1016/j.enbuild.2018.08.003.
[7] K. Hedegaard, B.V. Mathiesen, H. Lund, P. Heiselberg, Wind power integration using individual heat pumps - Analysis of different heat storage options, Energy 47 (2012) 284-293. https://doi.org/10.1016/j.energy.2012.09.030.
[8] R. Baetens, R. De Coninck, F. Jorissen, D. Picard, L. Helsen, D. Saelens, OpenIDEAS - An Open Framework for Integrated District Energy Simulations, in: Proceedings of Building Simulation 2015, 2015. https://lirias.kuleuven.be/1565321 (accessed March 17, 2023).
[9] G. Reynders, T. Nuytten, D. Saelens, Potential of structural thermal mass for demand-side management in dwellings, Building and Environment 64 (2013) 187-199. https://doi.org/10.1016/j.buildenv.2013.03.010.
[10] C. Ellerbrok, Potentials of Demand Side Management Using Heat Pumps with Building Mass as a Thermal Storage, Energy Procedia 46 (2014) 214-219. https://doi.org/10.1016/j.egypro.2014.01.175.
[11] J. Le Dréau, P. Heiselberg, Energy flexibility of residential buildings using short term heat storage in the thermal mass, Energy 111 (2016) 991-1002. https://doi.org/10.1016/j.energy.2016.05.076.
[12] J.L. Mathieu, M.E.H. Dyson, D.S. Callaway, Resource and revenue potential of California residential load participation in ancillary services, Energy Policy 80 (2015) 76-87. https://doi.org/10.1016/j.enpol.2015.01.033.
[13] M.T. Muhssin, L.M. Cipcigan, N. Jenkins, S. Slater, M. Cheng, Z.A. Obaid, Dynamic Frequency Response From Controlled Domestic Heat Pumps, IEEE Transactions on Power Systems 33 (2018) 4948-4957. https://doi.org/10.1109/TPWRS.2017.2789205.
[14] Y.-J. Kim, E. Fuentes, L.K. Norford, Experimental Study of Grid Frequency Regulation Ancillary Service of a Variable Speed Heat Pump, IEEE Transactions on Power Systems 31 (2016) 3090-3099. https://doi.org/10.1109/TPWRS.2015.2472497.
[15] G. Papaefthymiou, B. Hasche, C. Nabe, Potential of Heat Pumps for Demand Side Management and Wind Power Integration in the German Electricity Market, IEEE Transactions on Sustainable Energy 3 (2012) 636-642. https://doi.org/10.1109/TSTE.2012.2202132.
[16] G. Reynders, J. Diriken, D. Saelens, Generic characterization method for energy flexibility: Applied to structural thermal storage in residential buildings, Applied Energy 198 (2017) 192-202. https://doi.org/10.1016/j.apenergy.2017.04.061.
[17] M. Geidl, G. Andersson, Operational and structural optimization of multi-carrier energy systems, European Transactions on Electrical Power 16 (2006) 463-477. https://doi.org/10.1002/etep.112.
[18] Ministry of Housing, Communities and Local Government, Domestic Energy Performance Certificate Register, EPC Register, 2020. www.epcregister.com (accessed November 13, 2020).
[19] BRE, The Government's Standard Assessment Procedure for Energy Rating of Dwellings - 2012 edition, 2014.
[20] Python Software Foundation, Python Language Reference, version 3.7.9, 2022. http://www.python.org (accessed June 30, 2022).
[21] A. Canet, Spatio-temporal heat demand for LSOAs in England and Wales, UK Energy Research Centre, 2021. https://doi.org/10.5286/UKERC.EDC.000944.
[22] M. Gibson, Microgeneration Installation Standard: MIS 3005, Department of Energy & Climate Change, 2013.
[23] H. Park, M. Ruellan, A. Bouvet, E. Monmasson, R. Bennacer, Thermal parameter identification of simplified building model with electric appliance, in: 11th International Conference on Electrical Power Quality and Utilisation, 2011, pp. 1-6. https://doi.org/10.1109/EPQU.2011.6128822.
[24] N.J. Kelly, J. Cockroft, Analysis of retrofit air source heat pump performance: Results from detailed simulations and comparison to field trial data, Energy and Buildings 43 (2011) 239-245. https://doi.org/10.1016/j.enbuild.2010.09.018.
[25] A. Canet, M. Qadrdan, N. Jenkins, J. Wu, Spatial and temporal data to study residential heat decarbonisation pathways in England and Wales, Sci Data 9 (2022) 246. https://doi.org/10.1038/s41597-022-01356-9.
[26] UK Health Security Agency, NHS England, Met Office, Cold weather plan for England, 2015. https://www.gov.uk/government/publications/cold-weather-plan-cwp-for-england (accessed June 29, 2022).
[27] D. Ormandy, V. Ezratty, Health and thermal comfort: From WHO guidance to housing strategies, Energy Policy 49 (2012) 116-121. https://doi.org/10.1016/j.enpol.2011.09.003.
[28] A. Beizaee, J. Morey, A. Badiei, Wintertime indoor temperatures in social housing dwellings in England and the impact of dwelling characteristics, Energy and Buildings 238 (2021) 110837. https://doi.org/10.1016/j.enbuild.2021.110837.
[29] E. Zilberberg, P. Trapper, I.A. Meir, S. Isaac, The impact of thermal mass and insulation of building structure on energy efficiency, Energy and Buildings 241 (2021) 110954. https://doi.org/10.1016/j.enbuild.2021.110954.
[30] L. Zhu, R. Hurt, D. Correia, R. Boehm, Detailed energy saving performance analyses on thermal mass walls demonstrated in a zero energy house, Energy and Buildings 41 (2009) 303-310. https://doi.org/10.1016/j.enbuild.2008.10.003.
[31] A. Vandermeulen, B. van der Heijde, L. Helsen, Controlling district heating and cooling networks to unlock flexibility: A review, Energy 151 (2018) 103-115. https://doi.org/10.1016/j.energy.2018.03.034.
[32] O. Pasichnyi, J. Wallin, F. Levihn, H. Shahrokni, O. Kordas, Energy performance certificates - New opportunities for data-enabled urban energy policy instruments?, Energy Policy 127 (2019) 486-499. https://doi.org/10.1016/j.enpol.2018.11.051.
[33] P.D. Lund, J. Lindgren, J. Mikkola, J. Salpakari, Review of energy system flexibility measures to enable high levels of variable renewable electricity, Renewable and Sustainable Energy Reviews 45 (2015) 785-807. https://doi.org/10.1016/j.rser.2015.01.057.
Strong Gravitational Lensing in Horndeski theory of Gravity

Pedro Bessa*

PPGCosmo, CCE - Federal University of Espírito Santo, 29075-910 Vitória, ES, Brazil
Department of Theoretical Physics, Université de Genève, Quai E. Ansermet 24, 1211 Genève, Switzerland

arXiv:2304.08141, 17 Apr 2023 (Dated: April 18, 2023)
* [email protected]

In this paper we build the general formalism of gravitational lensing in luminal Horndeski models, deriving the Jacobi matrix equation and the general angular diameter distance in Horndeski theories, using the screen space formalism. We generalize the focusing and multiple lensing theorems to include Scalar Tensor theories belonging to the luminal Horndeski class and derive constraints they must satisfy to exhibit the same gravitational lensing behavior as in General Relativity. This provides a way to test theories through Strong Lensing effects, as well as a full theoretical framework for testing lensing in these theories. We find that for some theories, like metric f(R) and unified k-essence, the conditions are satisfied in general physical cases, while for others, like Galileon Condensate models, the conditions impose constraints on the parameter space of the theory.
I. INTRODUCTION
Gravitational Lensing promises to be a powerful probe of Gravitation on large scales, with weak lensing by clusters and large scale structure providing tests of the concordance cosmological model [1,2] and strong lensing by Black Holes and compact objects providing tests of Gravity on small scales beyond Solar System constraints [3,4].
The search for a solution to the nature of Dark Energy has led to intense research in Scalar-Tensor theories and their behavior in the cosmological setting [5]. Since these theories in general modify the gravitational coupling and energy content of gravity, one would expect deviations from the behavior predicted by General Relativity. Beyond the usual PPN formalisms [6], the deviation from GR should be derived from first principles, starting from the Modified Theory.
Developing a rigorous approach to the behaviour of gravitational lensing in Modified Gravity is important as new lensing regimes become accessible through advances in observational capabilities, with both the current and next generation of surveys expected to increase strong gravitational lensing statistics roughly 10^5-fold [7]. Ever-growing precision in observations requires a full theory to distinguish the pure relativistic effects arising from GR from the possible effects of modifications of gravity.
The study of imprints of Modified Gravity on gravitational lensing dates back to Bekenstein [8], who predicted the expected light bending for nonminimally coupled theories and their underestimation of the mass in galaxy clusters. Research on TeVeS and MOND-like theories and their effects on both weak and strong gravitational lensing has been extensive [9][10][11][12][13], while theories of the Jordan-Brans-Dicke type have been explored in [6,14] using the PPN formalism; in [15,16] for spacetimes in the weak field limit and perturbed cosmologies; and in [17,18] in general spherically symmetric spacetimes for specific theories. Observational tests and constraints of modified gravity through weak lensing, mainly using parametrized perturbations, can be found in [19,20], and recently, using the EHT observations, in [21].
While the aforementioned studies deal with specific theories and regimes, there's been a lack of a systematic and rigorous treatment of lensing in general Modified Gravity theories. The present paper attempts to fill that gap, developing the mathematical formalism necessary to deal with Gravitational Lensing in the class of Luminal Horndeski theories, the most general 2nd order Scalar Tensor theories with non-degenerate Lagrangian and luminal tensor propagation speed, which include theories such as quintessence, f (R), Brans-Dicke, k-essence and cubic galileons [22,23].
We develop our formalism from the top down, first describing the general behavior of light bundles in modified gravity theories using an effective geometrical stress-energy tensor T^eff_μν. We then derive the Jacobi equation and its immediate consequences, the focusing and lensing equations, which dictate the behavior of light rays in the general lensing regime [24]: their stretching, magnification and deflection. We then prove a couple of theorems that extend the focusing and multiple image theorems of General Relativity, under general weak energy and averaged energy condition assumptions [25]. Finally, we discuss how the detection of lensing effects that depart from the General Relativity predictions can be used to constrain the parameter space of certain theories.
The paper is structured as follows. In section II, we review the Horndeski theory of Gravity, its field equations and luminal limit; in section III we review the basic mathematical formalism of gravitational lensing in General Relativity; in section IV we adapt this formalism to Horndeski theories and obtain the focusing and lensing equations in arbitrary spacetimes. We also obtain the main theorems of the paper and test their assumptions against four classes of theories in the Horndeski family. Finally, in section V we discuss possible uses of the formalism and how the results can put constraints on Horndeski theories and test Modified Gravity using lensing.
II. HORNDESKI GRAVITY AND FIELD EQUATIONS
In the paper [26] the most general stable Scalar-Tensor Lagrangian with second order equations of motion was obtained. In [27], this Lagrangian was rediscovered in the context of Inflation and in connection with the so-called Generalized Galileon theories [28]. The generality and stability of the theory provided the basis of the Effective Field Theory of Dark Energy [29,30] and other effective approaches, which have been developed as a standard way to treat deviations from GR in the cosmological setting [31].
In this paper, we'll use the Lagrangian formulation of the theory using the so called Horndeski functions. The other approaches, such as the EFT of DE, while useful in certain settings, are not suited for the generality that we require in this paper; for instance, these approaches often require that the space-time has a well defined ADM decomposition [30]. Using the convention of [31], the Horndeski Lagrangian can be written in the form
$$
S = \int d^4x \sqrt{-g} \sum_{n=1}^{5} \mathcal{L}^{(n)}, \qquad (1)
$$
$$
\mathcal{L}^{(1)} = \frac{1}{2}R, \qquad \mathcal{L}^{(2)} = G_2(X,\phi), \qquad \mathcal{L}^{(3)} = -G_3(X,\phi)\,\Box\phi, \qquad (2)
$$
$$
\mathcal{L}^{(4)} = G_4(X,\phi)\,R + G_{4X}(X,\phi)\left[(\Box\phi)^2 - (\nabla_\mu\nabla_\nu\phi)^2\right], \qquad (3)
$$
$$
\mathcal{L}^{(5)} = G_5(X,\phi)\,G_{ab}\nabla^a\nabla^b\phi - \frac{G_{5X}(X,\phi)}{6}\left[(\Box\phi)^3 - 3\,\Box\phi\,(\nabla_a\nabla_b\phi)^2 + 2(\nabla_a\nabla_b\phi)^3\right], \qquad (4)
$$
where we have explicitly separated the pure GR density R/2 from the Horndeski density L (4) , against convention. This will be useful when defining effective tensors. We define X ≡ −∇ µ φ∇ µ φ/2, and G iX = ∂ X G i . One also has, in general, the matter field lagrangian, which is coupled only to gravity through the metric
$$
\mathcal{L}^{(m)} = \mathcal{L}(g_{\mu\nu}, \Psi), \qquad (5)
$$
where Ψ denotes the matter fields of, e.g., perfect fluids, the standard model or radiation. The G_5 and G_4 terms are related to the propagation of gravitational waves [31], and the recent detection of the gravitational event GW170817 and its electromagnetic counterpart has put tight constraints on the deviation of the propagation speed of gravitational waves from the speed of light [32,33]. [34] and [32] argue that the most natural way to avoid fine-tuning while still demanding that the theories have luminal speed of gravitational waves is to set
$$
G_{4X} = G_{5X} = G_{5\phi} = 0,
$$
which means no kinetic coupling to the curvature, and no tuning in the coupling of the Einstein tensor. From these constraints, the most general Horndeski Lagrangian with propagation speed of tensor modes c T = c is the one given by the Lagrangian
$$
\mathcal{L} = \frac{R}{2} + G_2(X,\phi) - G_3(X,\phi)\,\Box\phi + G_4(\phi)\,R + G_5\, G^{\alpha\beta}\nabla_\alpha\nabla_\beta\phi. \qquad (6)
$$
This will be the general kind of theory on which we develop our formalism. From here on, when we refer to "Horndeski theories" this is to be understood as those described by the Lagrangian (6).
A. Field Equations
The dynamics of the fields φ and g_μν are obtained by variation of (6). We first write the metric field equations out explicitly, and then separate the parts related to each coupling term into effective stress-energy tensors T^(i)_μν:
$$
\begin{aligned}
G_{\mu\nu} ={}& G_2\, g_{\mu\nu} + G_{2X}\,\nabla_\mu\phi\nabla_\nu\phi \\
&+ G_{3X}\left(\nabla_\alpha\phi\nabla^\alpha X\, g_{\mu\nu} - \Box\phi\,\nabla_\mu\phi\nabla_\nu\phi - 2\nabla_{(\mu}\phi\nabla_{\nu)}X\right) \\
&- 2G_{3\phi}\left(X g_{\mu\nu} + \nabla_\mu\nabla_\nu\phi\right) \\
&- 2G_4\, G_{\mu\nu} + 2G_{4\phi}\left(-\Box\phi\, g_{\mu\nu} + \nabla_\mu\nabla_\nu\phi\right) \\
&+ 2G_{4\phi\phi}\left(2X g_{\mu\nu} + \nabla_\mu\phi\nabla_\nu\phi\right). \qquad (7)
\end{aligned}
$$
We define the right hand side of equation (7) as a sum of effective stress energy tensors T (i) µν , defined by variation of each term in (6) containing the coupling G i in terms of the metric:
$$
T^{(i)}_{\mu\nu} \equiv -\frac{2}{\sqrt{-g}}\, \frac{\delta\left(\sqrt{-g}\,\mathcal{L}^{(i)}\right)}{\delta g^{\mu\nu}}. \qquad (8)
$$
Equation (7) is then written as
$$
G_{\mu\nu} = \left(1 + 2G_4\right)^{-1}\left[T^{(2)}_{\mu\nu} + T^{(3)}_{\mu\nu} + T^{(4)}_{\mu\nu} + T^{(5)}_{\mu\nu} + T^{(m)}_{\mu\nu}\right], \qquad (9)
$$
where the last term is the stress-energy tensor of ordinary matter, uncoupled to the scalar field.
III. LENSING FORMALISM IN GENERAL RELATIVITY
The lensing formalism for arbitrary spacetimes in the case of General Relativity has been thoroughly studied, with classic texts such as [35], and modern reviews and treatments [36][37][38]. In this section, we'll briefly review the basic tools of gravitational lensing formalism in General Relativity in order to extend it to the Horndeski theories.
A. Jacobi map and null geodesics
For a given geodesic γ defined on a spacetime (M, g_μν), with affine parameter s and tangent vector field k ≡ ∇_s γ, we define its geodesic neighbourhood, parameterized by an infinitesimal vector ξ^μ and a parameter ε, as the set of curves x(s, ε) satisfying
$$
\nabla_s x(0,0) = k, \qquad \nabla_\xi x(0,\epsilon) = \epsilon, \qquad x(s,0) = \gamma(s). \qquad (10)
$$
This defines a map R² → M, whose image is called the screen space S [24]. The deviation vector ξ^μ is parallel-transported along the geodesic bundle and satisfies the relation
$$
\mathcal{L}_k \xi = [k, \xi] = 0.
$$
From the above relations, one can obtain the Geodesic Deviation Equation
$$
\frac{D^2 \xi^\mu}{ds^2} = R^\mu_{\ \alpha\beta\nu}\, k^\alpha k^\beta \xi^\nu. \qquad (11)
$$
We now define a frame basis for the screen space, which is commonly called the Sachs Basis [36], satisfying
$$
E^A_\mu \in S, \qquad E^A_\mu E^{\mu B} = \delta^{AB}, \qquad k^\mu E^A_\mu = 0. \qquad (12)
$$
It is clear that this basis is orthonormal and tangent to the geodesic bundle defined by (10). The indices A ∈ {1, 2} label the two real dimensions of the parametrization, while the Greek indices label the spacetime coordinates. This is the basis in which distortion by gravitational lenses is measured, providing unit vectors against which the lensing angles can be measured.
In relation to the basis (12), we write a vector y µ A in the screen space S as
$$
y^\mu_A = D^B_{\ A}\, E_B + Y_A\, k^\mu. \qquad (13)
$$
Rewriting the vector ξ^μ in the Sachs basis as in (13) and using (11), we obtain that the matrix D^B_A satisfies the Jacobi matrix equation:
$$
\nabla_k \nabla_k D^A_{\ B} = R^B_{\ \alpha\beta C}\, k^\alpha k^\beta D^C_{\ A}, \qquad (14)
$$
where $\nabla_k = k^\nu \nabla_\nu$.
This equation describes the evolution of the Jacobi matrix on the manifold. Setting initial conditions at the source plane S_S, it defines the mapping between the separation angle θ of two points, or objects, at the source plane and the observed angle β at the observer plane S_O. We omit the screen space indices A, B and write D_SO for a Jacobi matrix that maps a vector in S_S to a vector in S_O. It can be shown that D_SO = −D^T_OS, that is, the Jacobi matrix is anti-Hermitian, and therefore diagonalizable with orthogonal eigenvectors.
For a given observer O with 4-velocity u^μ, we define the measured energy of a null ray in the bundle as
$$
E_O = -k_\mu u^\mu, \qquad (15)
$$
and the redshift z as the ratio
$$
1 + z_S \equiv E_S / E_O \qquad (16)
$$
between the energy measured at the event S and the observer O in the worldline of the null ray.
We now consider a thin lens, meaning a space-like hypersurface which is pierced at the lens plane S_L by the null ray bundle of geodesics defined in (10). If two rays separated by an angle θ at the source plane S_S cross the lens plane S_L and are deflected by an angle α, then the lens map, which takes the separation θ at the source plane to the observed separation β at the observer, is given by [37]
β(θ) = θ − (1 + z L )D −1 OS [D LS [α]] (D OL [θ]),(17)
where we note that the $D$ are matrices on the respective vector spaces spanned by $\theta$ and $\alpha$. The deflection angle $\alpha$ is defined in terms of the surface mass density $\Sigma$ of the lens, which gives the mass profile of the lens at the lens plane in the thin-lens approximation.
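As a concrete illustration, for an axisymmetric point-mass lens the matrices in (17) reduce to scalars and the lens map takes the familiar one-dimensional form $\beta = \theta - \theta_E^2/\theta$, with $\theta_E$ the Einstein radius. The sketch below uses arbitrary illustrative numbers for $\theta_E$ and the source position; it only demonstrates the map and its two image solutions.

```python
import math

# Scalar (axisymmetric) reduction of the lens map (17) for a point mass:
# beta = theta - theta_E**2 / theta, theta_E the Einstein radius.
theta_E = 1.0  # illustrative value, arbitrary angular units

def lens_map(theta):
    """Observed source position beta for an image at angle theta."""
    return theta - theta_E**2 / theta

# A source at beta = 0.5 produces two images, the roots of
# theta**2 - beta*theta - theta_E**2 = 0.
beta = 0.5
disc = math.sqrt(beta**2 + 4*theta_E**2)
theta_plus, theta_minus = (beta + disc)/2, (beta - disc)/2

# Both roots map back to the same source position:
assert abs(lens_map(theta_plus) - beta) < 1e-12
assert abs(lens_map(theta_minus) - beta) < 1e-12
```

The two roots correspond to the pair of images on opposite sides of the lens, the simplest instance of the multiple imaging discussed in Sec. IV.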
IV. GRAVITATIONAL LENSING IN HORNDESKI GRAVITY
A. Strong Gravitational Lensing
In order to derive the observed angle β of the lens map in Horndeski gravity, we need to obtain the Jacobi matrix (13) and the deflection angle α. These should be modified by the new couplings and interactions in the gravitational sector, which were rewritten as effective stress-energy tensors related to the Einstein tensor using (9).
It is useful to write the Riemann tensor $R^\alpha_{\ \mu\beta\nu}$ in terms of the effective stress-energy tensors $T^{(i)}_{\mu\nu}$ using the field equations (9) and its decomposition into trace and trace-free parts,
$$R_{\alpha\mu\beta\nu} = C_{\alpha\mu\beta\nu} + \frac{1}{2}\left(g_{\alpha\beta}R_{\mu\nu} - g_{\alpha\nu}R_{\beta\mu} + g_{\mu\nu}R_{\alpha\beta} - g_{\mu\beta}R_{\nu\alpha}\right) - \frac{R}{6}\left(g_{\alpha\beta}g_{\mu\nu} - g_{\alpha\nu}g_{\beta\mu}\right). \tag{18}$$
Using the definition of the Einstein tensor $G_{\mu\nu}$, the previous equation can be written as
$$R_{\alpha\mu\beta\nu} = C_{\alpha\mu\beta\nu} + \frac{1}{2}\left(g_{\alpha\beta}G_{\mu\nu} - g_{\alpha\nu}G_{\beta\mu} + g_{\mu\nu}G_{\alpha\beta} - g_{\mu\beta}G_{\nu\alpha}\right) + \frac{R}{3}\left(g_{\alpha\beta}g_{\mu\nu} - g_{\alpha\nu}g_{\beta\mu}\right).$$
In this way, we can finally write (18) using the effective stress-energy tensors,
$$R_{\alpha\mu\beta\nu} = C_{\alpha\mu\beta\nu} + \sum_{i=2}^{5}\frac{g_{\alpha\beta}T^{(i)}_{\mu\nu} - g_{\alpha\nu}T^{(i)}_{\beta\mu} + g_{\mu\nu}T^{(i)}_{\alpha\beta} - g_{\mu\beta}T^{(i)}_{\nu\alpha}}{2(1+2G_4)} + \sum_{i=2}^{5}\frac{T^{(i)}}{3(1+2G_4)}\left(g_{\alpha\beta}g_{\mu\nu} - g_{\alpha\nu}g_{\beta\mu}\right), \tag{19}$$
where the $T^{(i)}$ are the traces of the $T^{(i)}_{\mu\nu}$. From the Riemann tensor (19), we obtain a modified solution of the Jacobi matrix equation (14), with the new terms involving the scalar field. We thus define the solution of this modified GDE, with the Riemann tensor given by (19),
$$\nabla_k \nabla_k D^{A(\mathrm{eff})}_{\ B}(\phi, X) = R^B_{\ \alpha\beta C}\, k^\alpha k^\beta D^C_{\ A}, \tag{20}$$
as the effective Jacobi matrix $D^{A(\mathrm{eff})}_{\ B}(\phi, X)$. This Jacobi matrix naturally defines the maps between lens, observer and source, as well as the angular diameter distance $d_A(z, \phi, X)$ as a function of the redshift $z$ and the new kinetic and scalar couplings, for the Horndeski theories (6).
B. Distances and caustics
Through the solution of equation (20), one obtains the angular diameter distances for the spacetime given by the solution of the field equations (9). As in GR, one can define the luminosity distance [35] at the observer as
$$d_L(z, \phi, X) = \det\!\left(D^{A(\mathrm{eff})}_{\ B}(\phi, X)\right), \tag{21}$$
which is equivalent to the definition derived from the comoving distance $\chi(z)$ for spherically symmetric metrics [36],
$$d_L = (1+z)\chi(z) = (1+z)^2 d_A, \tag{22}$$
with $\chi$ the comoving distance of the spacetime, reparametrized by $z$, and $d_A$ the angular diameter distance. This relation is commonly known as the Etherington reciprocity relation; its derivation can be found in, e.g., [35]. One should note that in Horndeski theories this relation does not change, as the photon number remains conserved and the geodesics are uniquely defined. From (22), one can see that distances become singular when the determinant of $D^B_{\ A}$ vanishes. Points $O$ and $S$ in the manifold joined by the distance $\det D^B_{\ A}$, at which the map $D^B_{\ A}$ vanishes non-trivially, are called conjugate points [25]. For a given source $S$, the light rays defined as in the previous section and mapped to the observer $O$, for which the distance is given by $d_L(z)$, may have conjugate points on their path to the observer. The set of all points conjugate to $S$ is called the caustic [35].
In particular, we can write equation (14) as a matrix equation
$$\ddot{D} = R\, D, \tag{23}$$
where
$$R = -\frac{1}{2}\begin{pmatrix} R_{\alpha\beta}k^\alpha k^\beta & 0 \\ 0 & R_{\alpha\beta}k^\alpha k^\beta \end{pmatrix} + \begin{pmatrix} -\mathrm{Re}(\psi) & \mathrm{Im}(\psi) \\ \mathrm{Im}(\psi) & \mathrm{Re}(\psi) \end{pmatrix}, \tag{24}$$
and $\psi$ is defined as
$$\psi \equiv -\frac{1}{2}\, C_{\alpha\beta\gamma\delta}\,(E_1^\alpha - iE_2^\alpha)\, k^\beta k^\gamma\, (E_1^\delta - iE_2^\delta).$$
For the Horndeski terms (19), we can expand this so as to make the modified-gravity terms explicit. Equation (23) then becomes
$$\begin{aligned} R = {} & -\frac{1}{2}\begin{pmatrix} T^{(m)}_{\alpha\beta}k^\alpha k^\beta & 0 \\ 0 & T^{(m)}_{\alpha\beta}k^\alpha k^\beta \end{pmatrix} - \frac{G_{2X} + \Box\phi\, G_{3X}}{4(1+2G_4)}\begin{pmatrix} \nabla_\alpha\phi\nabla_\beta\phi\, k^\alpha k^\beta & 0 \\ 0 & \nabla_\alpha\phi\nabla_\beta\phi\, k^\alpha k^\beta \end{pmatrix} \\ & + \frac{2\left(G_{3\phi} - G_{4\phi\phi}\right)}{4(1+2G_4)}\begin{pmatrix} \nabla_\alpha\phi\nabla_\beta\phi\, k^\alpha k^\beta & 0 \\ 0 & \nabla_\alpha\phi\nabla_\beta\phi\, k^\alpha k^\beta \end{pmatrix} - \frac{2G_{4\phi}}{4(1+2G_4)}\begin{pmatrix} \nabla_\alpha\nabla_\beta\phi\, k^\alpha k^\beta & 0 \\ 0 & \nabla_\alpha\nabla_\beta\phi\, k^\alpha k^\beta \end{pmatrix} \\ & + \frac{2G_{3X}}{4(1+2G_4)}\begin{pmatrix} \nabla_{(\alpha}\phi\nabla_{\beta)}X\, k^\alpha k^\beta & 0 \\ 0 & \nabla_{(\alpha}\phi\nabla_{\beta)}X\, k^\alpha k^\beta \end{pmatrix} + \begin{pmatrix} -\mathrm{Re}(\psi) & \mathrm{Im}(\psi) \\ \mathrm{Im}(\psi) & \mathrm{Re}(\psi) \end{pmatrix}. \end{aligned} \tag{25}$$
In the next subsection, we discuss how the new terms coming from the Horndeski modifications are related to the optical scalars and the focusing and distortion of light beams.
C. Optical scalars and multiple imaging
To uniquely solve the Jacobi equation, one needs two initial conditions, for the values of $D$ and $\dot{D}$ at the source or at the observer. Conventionally, one imposes the conditions at the source [36], so that the evolution of the quantities is understood as that of a past-oriented light ray starting at the observer, and therefore inside the light cone of the observer. In this way, we impose the conditions at the observer, which we call from here on the vertex, and assume that the affine parameter is $s = 0$ at $O$:
$$D(0) = 0, \qquad \dot{D}(0) = \mathbb{1}. \tag{26}$$
From the Jacobi matrix relation (13) one can define the optical scalars via [36]
$$\dot{D} = S\, D, \tag{27}$$
where the matrix $S$ is given by
$$S = \begin{pmatrix} \theta + \sigma_1 & \sigma_2 \\ \sigma_2 & \theta - \sigma_1 \end{pmatrix}. \tag{28}$$
Here $\theta$ is the so-called expansion of the light bundle, and $\sigma = \sigma_1 + i\sigma_2$ is its shear. The geometrical interpretation of these quantities is that the expansion measures the stretching of the bundle, whereas the shear measures its distortion along the eigendirections $E_i$ of the Sachs basis [35]. These quantities can be equivalently defined, in a way that makes their geometrical interpretation more manifest, as
$$\theta = \frac{1}{2}\, k^\alpha_{\ ;\alpha}, \qquad \sigma = \frac{1}{2}\, k_{\alpha;\beta}\, (E_1^\alpha + iE_2^\alpha)(E_1^\beta + iE_2^\beta). \tag{29}$$
From the geodesic deviation equation (20) and the definition of the optical scalars, one obtains the Sachs equations in Horndeski gravity:
$$\dot{\theta} = -\theta^2 - |\sigma|^2 - \frac{1}{2} T^{(m)}_{\alpha\beta}k^\alpha k^\beta - \frac{B_1}{2}(\nabla_k\phi)^2 - \frac{B_2}{2}\frac{D^2\phi}{ds^2} - \frac{B_3}{2}\left(2\nabla_k\phi\nabla_k X\right), \tag{30}$$
$$\dot{\sigma} = -2\theta\sigma - \frac{1}{2}\psi, \tag{31}$$
where the $B_i$ are given by
$$B_1 = \frac{G_{2X} + \Box\phi\, G_{3X} - 2G_{3\phi} + 2G_{4\phi\phi}}{2(1+2G_4)}, \qquad B_2 = \frac{G_{4\phi}}{1+2G_4}, \qquad B_3 = \frac{-G_{3X}}{1+2G_4}. \tag{32}$$
The second equation, (31), is noteworthy: it shows that modified gravity does not affect the shear of the bundle, since the Weyl tensor is not modified. Images are therefore stretched in the same way as in General Relativity. One should also note that this is not frame dependent, as the Weyl tensor is preserved under conformal transformations to the Jordan frame. Equation (30), however, is modified by the extra terms arising from the effective stress-energy tensors. One can impose stability and energy conditions on the Horndeski functions as a restriction on their effect on the expansion and distortion of the light beams. A discussion of energy conditions in modified gravity, using an effective stress-energy tensor treatment similar to the one in this paper, can be found in [39, 40].
Here we prove a first theorem on the properties of multiple lensing and the effect of the modification of gravity. We follow closely the arguments presented in [41] and [42], and use the results on conjugate points presented in [25], Section 4.4.

Theorem 1. Suppose that the matter stress-energy tensor satisfies the null energy condition, and that the Horndeski functions satisfy
$$B_1(\nabla\phi)^2 + B_2\frac{D^2\phi}{ds^2} + 2B_3\left(\nabla_k\phi\nabla_k X\right) \geq 0$$
on the light bundle generated by $k^\mu$. Then the following statements are true:

• The lens produces multiple images.

• If the scalar field is smooth and bounded at the lens, then the number of images is the same as in General Relativity.
Proof. First we note that there is no loss of generality in redefining $(1+2G_4)$ as $G_{\mathrm{eff}}$ and requiring it to be positive. Thus, assuming that the Horndeski functions satisfy the stated conditions, the right-hand side of (30) is strictly negative. Note, from the definition of $\theta$ and the luminosity distance (22), that
$$\theta = \frac{\dot{d}_L}{d_L},$$
so that $|\theta| \to \infty$ implies $d_L \to 0$. From the negativity of $\dot{\theta}$ and the initial condition $\dot{d}_L(0) = 1$, there must be a point where $\theta < 0$. Then there is a conjugate point to the observer, by the mean value theorem for integrals and Proposition 4.4.1 of [25].
The existence of a conjugate point to the observer guarantees that there are multiple images from the effect of the lens, following the main theorem of [41]. This proves the first item.
From the assumption that the scalar field is bounded at the lens, the total energy density of the lens must be bounded, since the effect of the scalar field is limited. The lensing angle is then bounded [35]. Therefore, as argued in [42], there is not only multiple imaging: the number of images is odd, exactly as in GR, by Burke's theorem.
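The mechanism behind the proof can be illustrated numerically with a toy version of the expansion part of the Sachs equation (30): $\theta' = -\theta^2 - F$, with a constant $F \geq 0$ standing in for the matter and Horndeski source terms. This is an illustrative sketch under that stand-in assumption, not an integration of the actual field equations; near the vertex $\theta \sim 1/s$, and for $F > 0$ the expansion must cross zero and run to $-\infty$ at finite affine parameter, signalling a conjugate point.

```python
import math

def first_zero_of_theta(F, s0=1e-3, h=1e-4, s_max=50.0):
    """Forward-Euler integration of theta' = -theta**2 - F from the
    near-vertex value theta(s0) = 1/s0.  Returns the affine parameter at
    which theta first becomes negative, or None if it never does."""
    theta, s = 1.0 / s0, s0
    while s < s_max:
        theta += h * (-theta**2 - F)
        s += h
        if theta < 0:
            return s
    return None

# Vacuum (F = 0): theta = 1/s stays positive, no conjugate point.
assert first_zero_of_theta(0.0) is None

# F = 1: theta = cot(s + const) crosses zero near s = pi/2, so a
# conjugate point to the vertex must follow (Proposition 4.4.1 of [25]).
s_star = first_zero_of_theta(1.0)
assert s_star is not None and s_star < math.pi
```

The qualitative behavior matches the proof: any strictly positive effective source forces $\theta$ through zero, which is the step that guarantees a conjugate point and hence multiple imaging.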
The previous theorem shows that, for a spacetime under the same energy conditions as in General Relativity, we do not expect different behavior in modified gravity as long as the coefficients (32) obey certain inequalities. We proceed to apply the theorem to some of the theories described in [34]. Note that, in the notation of this paper, $L^{(3)} = -G_3(\phi, X)\Box\phi$ and $L^{(4)} = (G_4(\phi, X) - 1/2)R$, and we take these factors into account in the Lagrangians of the models described below.
1. f(R) and Brans-Dicke theories
$f(R)$ theories can be mapped, in both the metric and the Palatini formalism, to Brans-Dicke theories with Brans-Dicke parameter $\omega \geq 0$. For this kind of theory, one has the Horndeski functions
$$G_2 = \omega\frac{X}{\phi}, \qquad G_3 = 0, \qquad G_4 = \frac{\phi}{2} - \frac{1}{2}, \tag{33}$$
$$\Longrightarrow \quad B_1 = \frac{\omega}{2\phi^2}, \qquad B_2 = \frac{1}{2\phi}, \qquad B_3 = 0, \tag{34}$$
so that, in order to satisfy the theorem, one needs the condition
$$\frac{\omega}{\phi^2}(\nabla\phi)^2 \geq -\frac{1}{\phi}\frac{D^2\phi}{ds^2}. \tag{35}$$
One can take $\phi$ positive, which guarantees stability of the solutions and non-degeneracy of the equations of motion. The previous equation then simplifies to
$$\omega\,\frac{(\nabla\phi)^2}{\phi} \geq -\frac{D^2\phi}{ds^2}. \tag{36}$$
For metric $f(R)$ theories, the Brans-Dicke parameter is $\omega = 0$, and the condition is satisfied if the second derivative of the scalar field is non-negative along the geodesic. For Palatini $f(R)$, which corresponds to $\omega = -3/2$ with a potential term, one needs $(\nabla\phi)^2/\phi \leq \frac{2}{3}\frac{D^2\phi}{ds^2}$. For arbitrary Brans-Dicke theories with $\omega \gg 1$, one can guarantee that the condition is satisfied; this limit is usually regarded as the GR limit of the theory. For cosmological models, which are our main interest, this range of parameters is currently allowed by observations [5].
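Condition (36) can be checked pointwise along a ray once a scalar profile on the geodesic is assumed. The sketch below uses a made-up exponential profile $\phi(s) = \phi_0 e^{as}$, chosen only for illustration, and verifies the two cases discussed in the text: metric $f(R)$ ($\omega = 0$, needs $D^2\phi/ds^2 \geq 0$) and the large-$\omega$ GR limit.

```python
import math

def bd_condition_holds(omega, phi0, a, s_grid):
    """Check condition (36), omega*(phi')^2/phi >= -phi'', on a grid of
    affine-parameter values, for the toy profile phi(s) = phi0*exp(a*s)."""
    for s in s_grid:
        phi = phi0 * math.exp(a * s)
        dphi = a * phi        # d phi / ds
        d2phi = a * a * phi   # d^2 phi / ds^2 (non-negative for this profile)
        if omega * dphi**2 / phi < -d2phi:
            return False
    return True

grid = [0.01 * i for i in range(100)]

# Metric f(R): omega = 0, condition reduces to phi'' >= 0, which the
# exponential profile satisfies for any a.
assert bd_condition_holds(omega=0.0, phi0=1.0, a=0.3, s_grid=grid)

# Large omega (GR limit): the (phi')^2 term dominates for either sign of a.
assert bd_condition_holds(omega=500.0, phi0=1.0, a=-0.3, s_grid=grid)
```

For realistic applications one would replace the toy profile by the scalar field evaluated along the null geodesic of an actual solution of (9).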
Galileon Ghost Condensate
The Horndeski functions for the Galileon ghost condensate model, which allows for phantom crossing in the dark energy equation of state through a nonlinear kinetic term [43], are given by
$$G_2 = a_1 X + a_2 X^2, \qquad G_3 = -3a_3 X, \qquad G_4 = M^2_{\mathrm{Pl}} - \frac{1}{2}, \tag{37}$$
$$\Longrightarrow \quad B_1 = \frac{a_2 X + 2a_1 + 3a_3\,\Box\phi}{M^2_{\mathrm{Pl}}}, \qquad B_2 = 0, \qquad B_3 = \frac{a_3}{M^2_{\mathrm{Pl}}}. \tag{38}$$
This theory has a nontrivial coupling through the $G_3$ part of the action, which is related to the cubic interaction. The condition for this theory to satisfy the theorem is then
$$\left(a_2 X + 2a_1 + 3a_3\,\Box\phi\right)(\nabla\phi)^2 + \frac{a_3}{M^2_{\mathrm{Pl}}}\left(\nabla_k\phi\nabla_k X\right) \geq 0. \tag{39}$$
The parameter values $a_1 \geq a_3 \geq 0$ are allowed by the cosmological constraints obtained in [43], in which case the condition only requires $\nabla_k\phi\nabla_k X \geq 0$. This condition, however, may be too strong, as the scalar field can have first and second derivatives that change sign under a standard cosmological evolution [5].
Unified k-essence
Unified k-essence was first proposed in [44] as a scalar field model unifying dark energy and dark matter through a single scalar field with a quadratic kinetic term, with Horndeski functions given by [34]
$$G_2 = -b_0 + b_2(X - X_0)^2, \qquad G_3 = 0, \qquad G_4 = M^2_{\mathrm{Pl}} - \frac{1}{2}, \tag{40}$$
$$\Longrightarrow \quad B_1 = \frac{2b_2(X - X_0)}{M^2_{\mathrm{Pl}}}, \qquad B_2 = B_3 = 0. \tag{41}$$
Here $X_0$ is a positive constant characteristic kinetic scale, the extremum of the function $G_2$ [44]. The requirement on the functions for the theorem to hold is then
$$\frac{2b_2(X - X_0)}{M^2_{\mathrm{Pl}}}(\nabla\phi)^2 \geq 0. \tag{42}$$
In order for this theory to reproduce the matter epochs in a cosmological setting, one requires $X - X_0 \approx X_0(1 + \epsilon(t)) > 0$ [23, 44], so the constraint (42) is satisfied. This is in agreement with Bekenstein and Sanders' result that gravitational lensing in scalar-tensor theories which try to account for the dark matter effect cannot significantly modify the results derived in General Relativity [8].
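Since $B_2 = B_3 = 0$ here, condition (42) reduces to a sign check on $b_2(X - X_0)$, which the cosmological branch quoted above satisfies identically. A trivial numerical confirmation, with illustrative parameter values:

```python
def k_essence_condition(b2, X, X0):
    """Condition (42) for unified k-essence, up to the positive factor
    (nabla phi)^2 / M_Pl^2: requires 2*b2*(X - X0) >= 0."""
    return 2.0 * b2 * (X - X0) >= 0.0

# On the cosmological branch X = X0*(1 + eps) with eps > 0 (taken from the
# text) and b2 > 0, the condition holds for any eps:
X0 = 1.0
assert all(k_essence_condition(b2=0.5, X=X0 * (1 + eps), X0=X0)
           for eps in (1e-3, 0.1, 0.5))
```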
Generalized Brans-Dicke
In this model with nontrivial cubic and nonminimal couplings, introduced in [28], the cosmological and stable solutions possess the Horndeski functions [28, 34]
$$G_2 = \omega\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{1-n} X, \qquad G_3 = -\frac{\lambda}{\mu^3}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-n} X, \qquad G_4 = \frac{M^2_{\mathrm{Pl}}}{2}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{3-n} - \frac{1}{2}, \tag{43}$$
$$\Longrightarrow \quad B_1 = \left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-2}\left[\frac{\omega}{M_{\mathrm{Pl}}} + \frac{\lambda\,\Box\phi}{\mu^3 M_{\mathrm{Pl}}}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-1} - \frac{4n\,\Box\phi}{\mu^3}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-1} + (3-n)(2-n)\,\phi^{-2}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-1}\right],$$
$$B_2 = \frac{3-n}{2}\,\phi^{-1}, \qquad B_3 = \frac{\lambda}{\mu^3 M^2_{\mathrm{Pl}}}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-3}, \tag{44}$$
with the parameter $n$ satisfying $2 \leq n \leq 3$ and the couplings satisfying $\omega < 0$, $\lambda > 0$ and $\mu > 0$ [28]. For this theory, the condition is not necessarily satisfied, as its validity is highly dependent on the parameter values. In the case $n = 3$, the condition becomes
$$\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-2}\left[\frac{\omega}{M_{\mathrm{Pl}}} + \frac{\lambda\,\Box\phi}{\mu^3 M_{\mathrm{Pl}}}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-1} - \frac{12\,\Box\phi}{\mu^3}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-1}\right](\nabla\phi)^2 + \frac{\lambda}{\mu^3 M^2_{\mathrm{Pl}}}\left(\frac{\phi}{M_{\mathrm{Pl}}}\right)^{-3}\left(\nabla_k\phi\nabla_k X\right) \geq 0, \tag{45}$$
which is more tractable, although still dependent on the theory's parameter space. In particular, since the parameters also depend on the late-time behavior of the cosmological solutions, one could in principle test the behavior of the theory through numerical solutions of the field equations with given cosmological parameters, as done in [28].
D. Focusing and magnification
In General Relativity, the focusing theorem guarantees that, for the most general spacetimes satisfying the weak energy condition, the gravitational potential has a focusing effect: null rays forming an infinitesimal bundle converge when passing through a gravitational lens [36]. Equivalently, the cross section with angular size $\delta\theta$ of the image generated by a source $S$ gets smaller as the light passes through the gravitational lens. Since the angular size of the cross section is related to the luminosity distance $d_L(z)$ through the Etherington relation for the angular diameter distance, $\delta\theta = \delta l/d_A(z) = (1+z)^2\,\delta l/d_L(z)$, where $\delta l$ is the true observed size of the object, the evolution of the luminosity distance modifies the cross section.
One can define, for a light bundle with cross section $\delta\theta$ at the source, the magnification factor $\mu$ at the observer, which is given by [36]
$$\mu = \frac{\delta\theta}{d_L^2} = \frac{s^2}{d_L(s)^2}, \tag{46}$$
where $s$ is the affine parameter of the bundle and we have used $d_L(s) \approx s$ near geodesic vertexes. The infinitesimal area of the bundle is $\delta\theta \approx s^2$, as one can check from the definition of the geodesic bundle in (10). From the Sachs equations (30) and the definition of the optical scalars, one can write the focusing equation
$$\ddot{d}_L = -\left(|\sigma|^2 + \frac{1}{2}R_{\alpha\beta}k^\alpha k^\beta\right) d_L, \tag{47}$$
and the focusing theorem is the statement that $d_L(s) \leq s$. It follows from integrating both sides of the previous equation with the initial conditions defined in (26). In General Relativity, one just needs the weak energy condition for the right-hand side of (47) to be strictly non-positive. The immediate consequence is that
$$\mu = \frac{s^2}{d_L^2} \geq 1,$$
so that light beams are focused when passing through the lens, or, equivalently, areas are magnified. For Horndeski theories, one obtains the modified focusing equation
$$\ddot{d}_L = -\left[|\sigma|^2 + \frac{1}{2}T^{(m)}_{\alpha\beta}k^\alpha k^\beta + \frac{B_1}{2}(\nabla_k\phi)^2 + \frac{B_2}{2}\frac{D^2\phi}{ds^2} + \frac{B_3}{2}\left(2\nabla_k\phi\nabla_k X\right)\right] d_L. \tag{48}$$
Under the conditions of the previous theorem, one sees that the focusing theorem is easily satisfied, since the right-hand side of (48) is strictly non-positive; integrating both sides twice over the affine parameter $s$ of the geodesic yields $d_L(s) \leq s$.
The previous condition, however, is sufficient but not necessary. A weaker condition on the Horndeski functions is the one in the following theorem.

Theorem 2. Suppose that the matter stress-energy tensor satisfies the null energy condition, that the initial conditions (26) hold, and that the Horndeski functions satisfy
$$\int_0^s \left[B_1(\nabla\phi)^2 + B_2\frac{D^2\phi}{ds'^2} + 2B_3\left(\nabla_k\phi\nabla_k X\right)\right] ds' \geq 0$$
on the light bundle generated by $k^\mu$. Then any image that passes through that lens is magnified, that is,
$$\mu \geq 1.$$
Proof. Integrating both sides of (48), one gets
$$\int_0^s \ddot{d}_L(s')\,ds' \leq -\int_0^s \left[|\sigma|^2 + \frac{1}{2}T^{(m)}_{\alpha\beta}k^\alpha k^\beta + \frac{B_1}{2}(\nabla_k\phi)^2 + \frac{B_2}{2}\frac{D^2\phi}{ds'^2} + \frac{B_3}{2}\left(2\nabla_k\phi\nabla_k X\right)\right] d_L(s')\,ds' \leq 0$$
$$\Rightarrow \quad \dot{d}_L(s) - 1 \leq 0 \quad \Rightarrow \quad d_L(s) \leq s. \tag{49}$$
The condition of this theorem is sometimes called the averaged energy condition [45], here applied to the effective stress-energy tensor. The conditions of Theorem 2 are much weaker than those of Theorem 1, since one does not need the functions on the right-hand side of (48) to be pointwise non-negative, only their integral. Trivially, if a class of theories satisfies the conditions of Theorem 1, it also satisfies those of Theorem 2.
For the theories discussed in Sec. IV C for which Theorem 1 holds, nothing changes with respect to Theorem 2. The interesting cases are those where the dependence on the parameters prevented the validity of the theorem. Since the condition now bears on the average of the scalar field dynamics along the null geodesics, as long as the dynamics preserves the sign of the left-hand side of (48), one does not need to impose that the functions $B_i$ do not change sign.
For the generalized Brans-Dicke and ghost condensate theories discussed in the previous subsection, a numerical analysis of the cosmological dynamics could give a range of parameters where the theorems are valid. One could also use the observation of lenses as a test of the parameter range of the theories. Once cluster and galaxy lensing statistics can be used to constrain the magnification effect, this could put a constraint on the allowed parameters of theories that violate the averaged conditions, although this would require numerical evaluation of the focusing and magnification equations.
In the case of unified k-essence, the fact that the theory does not predict a deviation from the magnification derived in GR is in accordance with the results obtained in [8] for scalar field dark matter models. Although the case was made for a non-accelerating cosmological model, it supports the understanding that lensing is not quantitatively modified by the inclusion of minimally coupled scalar fields. Definitive results would require a quantitative solution of equation (48) for cosmological models, which we leave for future work.
V. DISCUSSION AND REMARKS
In this paper we have developed a general set of rules and results one can use to test and understand gravitational lensing in theories of the luminal Horndeski type. Theorems 1 and 2 impose sufficient conditions that these theories must satisfy for the effect of strong gravitational lensing to be the same as in General Relativity. We examined these conditions and obtained inequalities the theory parameters need to satisfy, sometimes trivially in physical cases, such as metric f(R) [46] and unified k-essence [44], which shows that some classes of theories should not modify the qualitative behavior of lensing at all.
From this formalism, one could in principle derive numerical results, using the equations of Sec. IV together with (48), to further constrain the region of the theory parameter space where the gravitational lensing behavior does not deviate from General Relativity. Together with the calculation of the bending angle, found for instance in [8], one could derive statistics from multiple strong lensing systems and constrain the deviation from General Relativity without phenomenological or effective approaches. The observation of gravitational lensing in this regime is then able to directly constrain the parameter space of scalar-tensor theories.
The formalism in Sec. IV is general and applies not only to strong gravitational lensing but to any lensing regime. Another useful application of this formalism is the study of weak gravitational lensing in cosmological settings, which is of particular interest in the EFT of DE [30], where one can relate the Horndeski functions of Sec. II to observable cosmological parameters obtained from perturbation theory, separating the effects arising from the modified gravity models from pure shear and convergence effects. The formulation of the Horndeski interactions as effective stress-energy tensors allows testing phenomenological descriptions of dark energy with little modification to the equations. The effect of a cosmological constant on strong gravitational lensing can then be tested with approaches different from the one found in, e.g., [47].
We find that the gravitational lensing effect in modified gravity is qualitatively identical to the one in General Relativity for popular models of modified gravity such as metric f(R) and unified k-essence. For other theories, we have shown that requiring the validity of the theorems constrains their parameter space through the Horndeski functions of the theory. Precise constraints can be obtained by assuming a given lens model and observations, and imposing the condition that lensing should not deviate from GR predictions.
Using the bending angle predictions for these theories [6, 8], together with the constraints obtained in this paper, one can use strong lensing systems to test modified gravity models. In the next decade, the amount of cluster and galaxy lensing data is expected to increase by orders of magnitude [48, 49]. This new batch of data can provide new statistics once we are able to constrain the lens models precisely enough to separate relativistic effects from the modified gravity ones. We leave analyses of this kind for future work.
ACKNOWLEDGMENTS

Pedro Bessa would like to thank FAPES and CAPES for the PhD scholarship, as well as CBPF and Université de Genève for providing office space and computational power. He would also like to thank Marcela Campista and Alexsandre Ferreira for comments and reviews on the manuscript.
Weak lensing for precision cosmology. Rachel Mandelbaum, Annual Review of Astronomy and Astrophysics. 561Rachel Mandelbaum. Weak lensing for precision cos- mology. Annual Review of Astronomy and Astrophysics, 56(1):393-433, sep 2018.
Cluster-galaxy weak lensing. The Astronomy and Astrophysics Review. Keiichi Umetsu, 28Keiichi Umetsu. Cluster-galaxy weak lensing. The As- tronomy and Astrophysics Review, 28(1), nov 2020.
The Event Horizon Telescope Collaboration et al. First m87 event horizon telescope results. i. the shadow of the supermassive black hole. The Astrophysical Journal Letters. 87511The Event Horizon Telescope Collaboration et al. First m87 event horizon telescope results. i. the shadow of the supermassive black hole. The Astrophysical Journal Let- ters, 875(1):L1, apr 2019.
First sagittarius a* event horizon telescope results. i. the shadow of the supermassive black hole in the center of the milky way. The Astrophysical Journal Letters. 930212Event Horizon Telescope Collaboration. First sagittarius a* event horizon telescope results. i. the shadow of the supermassive black hole in the center of the milky way. The Astrophysical Journal Letters, 930(2):L12, may 2022.
Modified gravity and cosmology. Timothy Clifton, Pedro G Ferreira, Antonio Padilla, Constantinos Skordis, Physics Reports. 5131-3Timothy Clifton, Pedro G. Ferreira, Antonio Padilla, and Constantinos Skordis. Modified gravity and cosmology. Physics Reports, 513(1-3):1-189, mar 2012.
Formalism for testing theories of gravity using lensing by compact objects. i: Static, spherically symmetric case. R Charles, A O Keeton, Petters, Fields, Gravitation and Cosmology. 72Physical Review D -ParticlesCharles R. Keeton and A. O. Petters. Formalism for test- ing theories of gravity using lensing by compact objects. i: Static, spherically symmetric case. Physical Review D -Particles, Fields, Gravitation and Cosmology, 72, 11 2005.
An extended catalog of galaxy-galaxy strong gravitational lenses discovered in DES using convolutional neural networks. C Jacobs, The Astrophysical Journal Supplement Series. 243117C. Jacobs et al. An extended catalog of galaxy-galaxy strong gravitational lenses discovered in DES using con- volutional neural networks. The Astrophysical Journal Supplement Series, 243(1):17, jul 2019.
Gravitational lenses and unconventional gravity theories. Jacob D Bekenstein, Robert H Sanders, The Astrophysical Journal. 429480Jacob D. Bekenstein and Robert H. Sanders. Gravita- tional lenses and unconventional gravity theories. The Astrophysical Journal, 429:480, jul 1994.
Gravitational lensing in modified Newtonian dynamics. Monthly Notices of the Royal Astronomical Society. J Daniel, Edwin L Mortlock, Turner, 327Daniel J. Mortlock and Edwin L. Turner. Gravitational lensing in modified Newtonian dynamics. Monthly No- tices of the Royal Astronomical Society, 327(2):557-566, 10 2001.
The bending of light and lensing in modified gravity. J W Moffat, V T Toth, Monthly Notices of the Royal Astronomical Society. 3974J. W. Moffat and V. T. Toth. The bending of light and lensing in modified gravity. Monthly Notices of the Royal Astronomical Society, 397(4):1885-1892, 08 2009.
Applying modified gravity to the lensing and einstein ring in abell 3827. J W Moffat, V T Toth, Physical Review D. 1034J. W. Moffat and V.T. Toth. Applying modified gravity to the lensing and einstein ring in abell 3827. Physical Review D, 103(4), feb 2021.
Applying MOG to lensing: Einstein rings, abell 520 and the bullet cluster. John Moffat, Sohrab Rahvar, Viktor Toth, Galaxies. 6243John Moffat, Sohrab Rahvar, and Viktor Toth. Applying MOG to lensing: Einstein rings, abell 520 and the bullet cluster. Galaxies, 6(2):43, apr 2018.
Propagation of electromagnetic waves in MOG: gravitational lensing. S Rahvar, J W Moffat, Monthly Notices of the Royal Astronomical Society. 4824S Rahvar and J W Moffat. Propagation of electromag- netic waves in MOG: gravitational lensing. Monthly Notices of the Royal Astronomical Society, 482(4):4514- 4518, nov 2018.
Formalism for testing theories of gravity using lensing by compact objects. ii: Probing post-post-newtonian metrics. Physical Review D -Particles, Fields. R Charles, A O Keeton, Petters, Gravitation and Cosmology. 73Charles R. Keeton and A. O. Petters. Formalism for test- ing theories of gravity using lensing by compact objects. ii: Probing post-post-newtonian metrics. Physical Re- view D -Particles, Fields, Gravitation and Cosmology, 73, 1 2006.
Weak lensing in scalar-tensor theories of gravity. Carlo Schimd, Jean-Philippe Uzan, Alain Riazuelo, Physical Review D. 718Carlo Schimd, Jean-Philippe Uzan, and Alain Riazuelo. Weak lensing in scalar-tensor theories of gravity. Physical Review D, 71(8), apr 2005.
Light bending and gravitational lensing in brans-dicke theory. Xiaojun Gao, Shupeng Song, Jinsong Yang, Physics Letters B. 795Xiaojun Gao, Shupeng Song, and Jinsong Yang. Light bending and gravitational lensing in brans-dicke theory. Physics Letters B, 795:144-151, aug 2019.
Fatibene. Strong gravitational lensing in f (ξ) = ξ 3/2 gravity. M C Campigotto, A Diaferio, X Hernandez, L , Journal of Cosmology and Astroparticle Physics. 201706M.C. Campigotto, A. Diaferio, X. Hernandez, and L. Fat- ibene. Strong gravitational lensing in f (ξ) = ξ 3/2 grav- ity. Journal of Cosmology and Astroparticle Physics, 2017(06):057-057, jun 2017.
Gravitational lensing by f(r,t) gravity. Ahmed Alhamzawi, Rahim Alhamzawi, International Journal of Modern Physics D. 25021650020Ahmed Alhamzawi and Rahim Alhamzawi. Gravitational lensing by f(r,t) gravity. International Journal of Modern Physics D, 25(02):1650020, 2016.
Weak lensing probes of modified gravity. Fabian Schmidt, Physical Review D. 784Fabian Schmidt. Weak lensing probes of modified gravity. Physical Review D, 78(4), aug 2008.
Cosmos weak-lensing constraints on modified gravity. I Tereno, E Semboloni, T Schrabback, A&A. 53068Tereno, I., Semboloni, E., and Schrabback, T. Cos- mos weak-lensing constraints on modified gravity. A&A, 530:A68, 2011.
Testing horndeski gravity from EHT observational results for rotating black holes. Misba Afrin, G Sushant, Ghosh, The Astrophysical Journal. 932151Misba Afrin and Sushant G. Ghosh. Testing horndeski gravity from EHT observational results for rotating black holes. The Astrophysical Journal, 932(1):51, jun 2022.
Horndeski theory and beyond: a review. Tsutomu Kobayashi, Reports on Progress in Physics. 82886901Tsutomu Kobayashi. Horndeski theory and beyond: a review. Reports on Progress in Physics, 82(8):086901, jul 2019.
Screening the fifth force in the horndeski's most general scalar-tensor theories. Ryotaro Kase, Shinji Tsujikawa, Journal of Cosmology and Astroparticle Physics. 08Ryotaro Kase and Shinji Tsujikawa. Screening the fifth force in the horndeski's most general scalar-tensor the- ories. Journal of Cosmology and Astroparticle Physics, 2013(08):054-054, aug 2013.
Optical drift effects in general relativity. Korzyń Miko Laj, Jaros Ski, Law Kopiński, Journal of Cosmology and Astroparticle Physics. 03Miko laj Korzyń ski and Jaros law Kopiński. Optical drift effects in general relativity. Journal of Cosmology and Astroparticle Physics, 2018(03):012-012, mar 2018.
S W Hawking, G F R Ellis, The Large Scale Structure of Space-Time. Cambridge Monographs on Mathematical Physics. Cambridge University PressS. W. Hawking and G. F. R. Ellis. The Large Scale Struc- ture of Space-Time. Cambridge Monographs on Mathe- matical Physics. Cambridge University Press, 1973.
Second-order scalar-tensor field equations in a four-dimensional space. Gregory Walter, Horndeski , International Journal of Theoretical Physics. 106Gregory Walter Horndeski. Second-order scalar-tensor field equations in a four-dimensional space. International Journal of Theoretical Physics, 10(6):363-384, Septem- ber 1974.
Generalized g-inflation: -inflation with the most general second-order field equations. T Kobayashi, M Yamaguchi, J Yokoyama, Progress of Theoretical Physics. 1263T. Kobayashi, M. Yamaguchi, and J. Yokoyama. Gen- eralized g-inflation: -inflation with the most general second-order field equations-. Progress of Theoretical Physics, 126(3):511-529, sep 2011.
Generalized galileon cosmology. Antonio De, Felice , Shinji Tsujikawa, Physical Review D. 8412Antonio De Felice and Shinji Tsujikawa. Generalized galileon cosmology. Physical Review D, 84(12), dec 2011.
The effective field theory of dark energy. Giulia Gubitosi, Federico Piazza, Filippo Vernizzi, Journal of Cosmology and Astroparticle Physics. 02Giulia Gubitosi, Federico Piazza, and Filippo Vernizzi. The effective field theory of dark energy. Journal of Cos- mology and Astroparticle Physics, 2013(02):032-032, feb 2013.
Effective field theory of dark energy: A review. Noemi Frusciante, Louis Perenon, Physics Reports. 857Noemi Frusciante and Louis Perenon. Effective field the- ory of dark energy: A review. Physics Reports, 857:1-63, may 2020.
Maximal freedom at minimum cost: linear large-scale structure in general modifications of gravity. Emilio Bellini, Ignacy Sawicki, Journal of Cosmology and Astroparticle Physics. 07Emilio Bellini and Ignacy Sawicki. Maximal freedom at minimum cost: linear large-scale structure in general modifications of gravity. Journal of Cosmology and As- troparticle Physics, 2014(07):050-050, jul 2014.
Dark energy after GW170817: Dead ends and the road ahead. Jose Marí A Ezquiaga, Miguel Zumalacárregui, Physical Review Letters. 11925Jose Marí a Ezquiaga and Miguel Zumalacárregui. Dark energy after GW170817: Dead ends and the road ahead. Physical Review Letters, 119(25), dec 2017.
Dark energy after GW170817 and GRB170817a. Paolo Creminelli, Filippo Vernizzi, Physical Review Letters. 11925Paolo Creminelli and Filippo Vernizzi. Dark energy after GW170817 and GRB170817a. Physical Review Letters, 119(25), dec 2017.
Dark energy in horndeski theories after GW170817: A review. Ryotaro Kase, Shinji Tsujikawa, International Journal of Modern Physics D. 28051942005Ryotaro Kase and Shinji Tsujikawa. Dark energy in horn- deski theories after GW170817: A review. International Journal of Modern Physics D, 28(05):1942005, apr 2019.
P. Schneider, J. Ehlers, and E. E. Falco. Gravitational Lenses. Astronomy and Astrophysics Library. Springer, Berlin, Germany, 1992 edition, June 2013.
Volker Perlick. Gravitational lensing from a spacetime perspective, 2010.
Pierre Fleury, Julien Larena, and Jean-Philippe Uzan. Line-of-sight effects in strong gravitational lensing. Journal of Cosmology and Astroparticle Physics, 2021(08):024, aug 2021.
Peter Schneider, Christopher S. Kochanek, and Joachim Wambsganss. Gravitational Lensing: Strong, Weak and Micro. Springer Berlin Heidelberg, 2006.
Salvatore Capozziello, Francisco S. N. Lobo, and José P. Mimoso. Energy conditions in modified gravity. Physics Letters B, 730:280-283, mar 2014.
Salvatore Capozziello, Francisco S. N. Lobo, and José P. Mimoso. Generalized energy conditions in extended theories of gravity. Physical Review D, 91(12), jun 2015.
T. Padmanabhan and Kandaswamy Subramanian. The focusing equation, caustics and the condition for multiple imaging by thick gravitational lenses. Monthly Notices of the Royal Astronomical Society, 233(2):265-284, jul 1988.
K. Subramanian and S. A. Cowling. On local conditions for multiple imaging by bounded, smooth gravitational lenses. MNRAS, 219:333-346, March 1986.
Simone Peirone, Giampaolo Benevento, Noemi Frusciante, and Shinji Tsujikawa. Cosmological data favor galileon ghost condensate over ΛCDM. Physical Review D, 100(6), sep 2019.
Robert J. Scherrer. Purely kinetic k-essence as unified dark matter. Physical Review Letters, 93(1), jun 2004.
Christopher J. Fewster, Ken D. Olum, and Michael J. Pfenning. Averaged null energy condition in spacetimes with boundaries. Phys. Rev. D, 75:025007, Jan 2007.
Thomas P. Sotiriou and Valerio Faraoni. f(R) theories of gravity. Reviews of Modern Physics, 82(1):451-497, mar 2010.
Pedro Bessa and Oliver F. Piattella. Gravitational lensing in a universe with matter and cosmological constant, 2022.
C. R. Bom, B. M. O. Fraga, L. O. Dias, P. Schubert, M. Blanco Valentin, C. Furlanetto, M. Makler, K. Teles, M. Portes de Albuquerque, and R. Benton Metcalf. Developing a victorious strategy to the second strong gravitational lensing data challenge. Monthly Notices of the Royal Astronomical Society, 515(4):5121-5134, jul 2022.
Thomas E. Collett. The population of galaxy-galaxy strong lenses in forthcoming optical imaging surveys. The Astrophysical Journal, 811(1):20, sep 2015.
C. Gomes and O. Bertolami. Stability conditions for the Horndeski scalar field gravity model. Journal of Cosmology and Astroparticle Physics, 2022(04):008, apr 2022.
Antonio De Felice and Shinji Tsujikawa. Generalized Brans-Dicke theories. Journal of Cosmology and Astroparticle Physics, 2010(07):024-024, jul 2010.
F. W. Dyson, A. S. Eddington, and C. Davidson. A determination of the deflection of light by the sun's gravitational field, from observations made at the total eclipse of May 29, 1919. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 220:291-333, 1920.
R. Sachs. Gravitational waves in general relativity. VI. The outgoing radiation condition. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 264(1318):309-338, November 1961.
Jürgen Ehlers. Republication of: On the transition from wave optics to geometric optics in general relativity. General Relativity and Gravitation, 54(4):40, April 2022.
S. Refsdal. The gravitational lens effect. MNRAS, 128:295, January 1964.
Masud Chaichian, Josef Klusoň, Markku Oksanen, and Anca Tureanu. Can TeVeS be a viable theory of gravity? Physics Letters B, 735:322-326, jul 2014.
Manuel Hohmann. Parametrized post-Newtonian limit of Horndeski's gravity theory. Physical Review D, 92(6), sep 2015.
Cédric Deffayet, Oriol Pujolàs, Ignacy Sawicki, and Alexander Vikman. Imperfect dark energy from kinetic gravity braiding. Journal of Cosmology and Astroparticle Physics, 2010(10):026-026, oct 2010.
Fulvio Sbisà. Classical and quantum ghosts. European Journal of Physics, 36(1):015009, nov 2014.
Antonio De Felice, Noemi Frusciante, and Georgios Papadomanolakis. On the stability conditions for theories of modified gravity in the presence of matter fields. Journal of Cosmology and Astroparticle Physics, 2017(03):027-027, mar 2017.
Clare Burrage and Jeremy Sakstein. Tests of chameleon gravity. Living Reviews in Relativity, 21(1), mar 2018.
A. Einstein and R. W. Lawson. Relativity: The Special and General Theory. H. Holt, 1920.
Yujie Lian, Shuo Cao, Tonghua Liu, Marek Biesiada, and Zong-Hong Zhu. Direct tests of general relativity under screening effect with galaxy-scale strong lensing systems, 2022.
Júnior D. Toniato and Davi C. Rodrigues. Post-Newtonian γ-like parameters and the gravitational slip in scalar-tensor and f(R) theories. Physical Review D, 104(4), aug 2021.
E. Zaborowski et al. Identification of galaxy-galaxy strong lens candidates in the DECam Local Volume Exploration survey using machine learning, 2022.
Philippe Brax, Santiago Casas, Harry Desmond, and Benjamin Elder. Testing screened modified gravity. Universe, 8(1):11, dec 2021.
Miguel Zumalacárregui, Emilio Bellini, Ignacy Sawicki, Julien Lesgourgues, and Pedro G. Ferreira. hi_class: Horndeski in the cosmic linear anisotropy solving system. Journal of Cosmology and Astroparticle Physics, 2017(08):019-019, aug 2017.
R. B. Metcalf et al. The strong gravitational lens finding challenge. Astronomy & Astrophysics, 625:A119, may 2019.
David Brizuela, José M. Martín-García, and Guillermo A. Mena Marugán. xPert: computer algebra for metric perturbation theory. General Relativity and Gravitation, 41(10):2415-2431, feb 2009.
| [] |